Icarus and His Wings
The ancient Greek myth of the flight of Icarus brings with it hard lessons even in modern times. If you already know this story, feel free to skip down to the bottom to view my takeaways on what we can glean from it. If you don’t know this story, below is a short synopsis of what happens in the myth.
Icarus and his father, Daedalus, were trapped in a tall tower on the island of Crete. Daedalus, a master craftsman, devised a plan for himself and Icarus to escape by flight. Daedalus constructed wings made of feathers and wax and trained himself to fly. Before their planned escape, Daedalus warned Icarus not to fly too high, lest he be scorched by the sun, nor too low, lest his wings be dampened by the sea, for only by following his father’s flight path through the middle of the horizon would their escape and ensuing salvation be attained. Icarus failed to heed his father’s warning, and in the excitement of his first flight, he flew too close to the sun. His wings melted, and he drowned in what is now known as the Icarian Sea.
The story of Icarus seems to tell us that our desire to ascend, though it may be overwhelming, will end in ultimate destruction, despite our having every opportunity to fly safely. Though Icarus was imprisoned for many years of his life, it was not prison that took his life from him; it was the burden of responsibility that led him to his final demise. There are two key moral lessons to learn from Icarus: that our attempts to become God lead to catastrophic failure, and that the “middle road” is paradoxically the highest mode of being for man.
God in Flight
The first takeaway seems to be that Icarus craved a higher mode of being, evidenced by his flying too close to the Sun. The Sun very obviously has many ancient meanings, but in this story it seems to represent a greater point of view. With this greater point of view comes more knowledge than a Man can bear, for only God can possess such a point of view and knowledge. Despite the gift of the wings, his only chance for salvation, the responsibility that came with flight proved to be too much for him. Icarus teaches us that Man must know his place in the horizon. Thus we can learn from Icarus that salvation is a problem not only of altitude but also of trajectory. Man is designed to fly in the middle of the horizon.
The Middle Road
Daedalus tells Icarus to follow his flight path through the middle of the horizon; not too close to the sun and not too close to the sea. Remember, it was only if Icarus flew in the middle of the horizon that he would reach deliverance. This seems to tell us that our flight path should be fine-tuned to a certain balance that keeps Man in between the god-like region of the sun and the lower, godless region of the sea. This middle flight path represents the tensions of the human experience: the balance between pleasure and pain, independence and dependence, grace and holiness, etc. Perhaps the middle flight path represents any and every spectrum of opposites. We can learn from the middle flight path that the redeemed human life is a disciplined and intentional choice to live below God but above uninspired Nature. As such, we can say that any imbalance in our flight pattern is a veering toward a destination to which we do not truly belong; however, as Icarus learned, any trajectory besides the middle of the horizon will eventually lead us to the Sea.
Daedalus: Daedalus, a master craftsman and Icarus’ father, is understood to represent wisdom in our lives. He represents competence, which is to know what to do.
Icarus: Icarus obviously represents the normal Man. As such, he must have both the chance for freedom and the wisdom to follow his father’s flight path of salvation.
The Wings: This is a tricky one, but I believe the Wings represent a fighting chance at salvation. They are not salvation itself, because salvation comes only by following the middle of the horizon. However, without the wings, Icarus stays imprisoned.
The Sun: Represents the highest mode of being: a fully aware, high-visibility, high-resolution existence. This is how most people understand God to be.
The Sea: Serves as a low-resolution mode of being: depression, darkness, and the failure to be God. One might say that the sea is unredeemed Nature.
The flight path: This is perhaps the most important symbol in the story because it is the only one that acts as a standard to follow. Daedalus’ flight path in the middle of the horizon is clearly the right way to live. For Christians, the right way to live can be found in following Jesus (John 8:12) which is not all that different from the middle road.
Quantum Physics, Faith, and Observation
Dr. Stephen Barr, a physicist at the University of Delaware, wrote a profound article titled “Faith and Quantum Theory” that has been on my mind as of late for a few reasons. The first is that, as a theist, much of my time is spent thinking about how to reconcile what can be seen with what cannot be seen, and what to believe about both. I have found Dr. Barr’s piece to be a fantastic justification for belief in the metaphysical. His argument has some moving parts, but it fundamentally goes something like this: the most widely accepted interpretation of quantum physics depends on probabilities, and if probabilities are to have any meaning at all, they must eventually culminate in a real event. Real events must be realized by a mind, and therefore, for the natural world to have any meaning at all, a Mind must have existed. This Mind is what we call God.
In order to explore Dr. Barr’s complex argument more deeply, I am going to write a little bit about quantum theory, only enough to get our feet wet with the idea. If you are already familiar with the fundamental theories about quantum physics, feel free to skip down to Barr’s solution.
Disclaimer: I am not a physicist or a mathematician. Though this is a topic of interest to me, I don’t claim to be an authority on this subject. Read at your own risk…. 🙂
Wave-Particle Duality Paradox
Though there have been several wave-particle experiments, perhaps the most famous is the double-slit experiment, first performed with light by Thomas Young in 1801 and later repeated with electrons (Davisson and Germer demonstrated the wave behavior of electrons in 1927). It essentially demonstrated that electrons can behave as both waves (like sound waves or waves moving in water) and particles (pieces of matter). Adding further to the mystery of wave-particle duality is that once the electrons were studied more closely with measurement devices, they behaved differently than when they were not observed. The key to this experiment was that the observer of the electron made a difference in the outcome of the experiment.
Schrödinger’s Equations
Schrödinger developed equations that essentially predict how likely things are to behave like particles versus waves. His equations were remarkably accurate, and the short and sweet of his findings is that the bigger things get, the more likely they are to behave like particles, and the smaller things get, the more likely they are to act like waves.
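For reference, here is the textbook form of the time-dependent Schrödinger equation together with the Born rule that turns the wave function into probabilities (this is standard material, not something specific to Barr’s article):

i\hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, \qquad P = |\psi|^2

The equation itself evolves the wave function deterministically; the probabilities enter only when |ψ|² is used to predict the outcome of a measurement, which is precisely the gap that Barr’s argument occupies.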
If Schrödinger’s equations interest you, here is a good video to watch.
The important thing to grasp about Schrödinger’s equations is that they yield probabilities, not definite realities. If Schrödinger’s equations are accurate, then they have, in Barr’s eyes, a profound philosophical implication for the very nature of Being, and would therefore be of ontological interest to us. Barr says,
“It starts with the fact that for any physical system, however simple or complex, there is a master equation—called the Schrödinger equation—that describes its behavior. And the crucial point on which everything hinges is that the Schrödinger equation yields only probabilities. (Only in special cases are these exactly 0, or 100 percent.) But this immediately leads to a difficulty: There cannot always remain just probabilities; eventually there must be definite outcomes, for probabilities must be the probabilities of definite outcomes.”
Barr goes on to explain that because physical systems are described in probabilistic terms, there must be a Mind to make any sense of anything at all:
“As long as only physical structures and mechanisms are involved, however complex, their behavior is described by equations that yield only probabilities—and once a mind is involved that can make a rational judgment of fact, and thus come to knowledge, there is certainty. Therefore, such a mind cannot be just a physical structure or mechanism completely describable by the equations of physics…A probability is a measure of someone’s state of knowledge or lack of it…”
Barr goes on to explain that there are differing views of quantum theory that may circumvent the dependence on probability to explain the micro-world (Einstein, for example, believed that there were missing variables that would lead to definite outcomes rather than probabilities). However, if the Copenhagen Interpretation of Quantum Theory is correct, I think Barr is correct in his reasoning that a Mind is required to make relevant and meaningful ontological statements about reality.
In conclusion, Barr believes that a Mind is necessary to make sense of the most widely accepted Copenhagen interpretation of quantum theory because Schrödinger’s equations do not yield definite results, only probabilities. For those who have read this far, you may conclude two things: quantum physics can be used as a compelling argument both for free will, as probabilities best explain quantum reality, and for a Mind (God) who observes these realities.
Hey Now, You’re an All-Star
Shrek is the story of a lovable but grouchy ogre who, seeking to protect his swamp, mistakenly stumbles upon heroism and true love. There is a special charm about this surly green ogre, so much so that he has become a well-known meme on the internet these days. Though the story of Shrek has much humor, it seems to me that this story has much to say about human nature when it comes to love, economy, and true fulfillment. Aside from the irony of DreamWorks’ intentional lampooning of Disney and the musical accents of Smash Mouth, Shrek catches our interest because it contains the themes of at least two common archetypal stories: love outside of social class (in particular, the woman of the higher class and the man of the lower) and the princess/dragon-quest romantic hero theme.
The story of the rich princess and the pauper boy isn’t a new one. Whether it be from classical literature such as Charles Dickens’ Great Expectations or lighthearted Disney adaptations like Aladdin, this particular theme tells us that the love between a man and a woman is powerful enough to bridge economic gaps. Yet, in the real world, this phenomenon is actually extremely rare. (Look Here To See What I Mean) It is a well-established pattern in the social sciences that women tend to choose mates based upon hypergamy, a natural tendency to improve or at least preserve one’s status within socioeconomic strata. In other words, most women will not marry men who aren’t at least their economic equals. (And yes, this is a major cause of income inequality. Rich women only marry rich men, which means money tends to stay in the same families.)
The story of Shrek and Fiona is so captivating because it breaks the pattern of hypergamy in the woman’s choice. Think about it: Shrek is interesting because, on paper, Lord Farquaad offers Fiona the whole world: a magnificent castle, a royal lineage, and an easy, responsibility-free life. Fiona could have had the high life and flirted with the divine, but instead she chooses to live out her days as a cursed ogre. This is a powerful idea, because in the culture’s eyes she doesn’t stay in the same social caste. She actually chooses “down.” Fiona’s decision to fight the urges of an easy life absent of true love is a break in the pattern; it is a glitch in the matrix.
Shrek himself follows the archetypal storyline of slaying the dragon, not because of his heroism, but out of a selfish desire to preserve his beloved swamp. The irony here is that he is only made responsible when he has to be; it is mere circumstance that motivates him to save the princess. In doing so, he falls in love with a human, Fiona, whom he feels he is not good enough for. Ironically, as mentioned before, she winds up holding a curse that turns her into an ogre at night, and through this curse, she comes to love Shrek. The fact is that most of us are much like Shrek. We don’t become heroes until we have to. Shrek goes on to save Fiona, and he never even has to slay the dragon, because the dragon becomes infatuated with Donkey. In some ways, Shrek’s greatest fear wasn’t the dragon itself. It seems to me that the feminine manifestation in Dragon points to the idea that Shrek’s real dragon was a devalued sense of self, and therefore an inability to love and give love. Shrek’s pressures were internal, but Fiona inspired him to embrace his strength and uniqueness.
So what does this all mean? I think it means that, despite the numbers, love can transcend the gaps of class and heroism (or the lack thereof), and that deep down, that is what we all hope for. This is why we love Shrek and Titanic and Sleeping Beauty and all of the other romance stories that break the normal pattern of finding love within your own social strata. In regards to heroism, Jordan Peterson has said, “Slay the dragon in his lair before he comes to your village.” This is wise even if our original motivations to visit the dragon where he lives are driven by our own self-interests. In the end, the dragon has been slain, the village has been saved, and the lesson has been learned.
Simulation Theory: We’re Really Here
Has anybody watched the Joe Rogan/Elon Musk Podcast?
In it, Musk discusses in pretty deep detail an Origins Theory that I had never considered before. He believes that it is nearly a 100% sure thing that we are part of an advanced computer simulation. The Simulation idea goes something like this:
Advanced Beings far far superior to us have sent out computer code that simulates their past(s) to learn from them in the same way that we learn from the simulations that we create on earth. Somewhere in the relative past, some sort of immensely powerful Beings wanted to experiment with the universe and we are part of that experiment.
At first glance, this idea sounds absolutely insane; however, the more I think about it and study it, the less crazy it seems. Here is where I think it has some compelling arguments:
1) As Musk discusses, humans have the ability to create with only limited bandwidth, constrained by brain chemistry. Our neurons can only fire so fast, and so our computation speed is limited. Computers don’t have this problem, because they can make decisions at basically the speed of light. Whatever created us would have to have nearly unlimited bandwidth; advanced super-human beings would meet this requirement easily.
2) Our advancements in neural networks and artificial intelligence are impressive to the point that we can now begin to comprehend a computer that functions much like a human can. Musk discusses how all businesses in the modern world are “cybernetic collectives” in which humans and machines form the entire economy. In his view, we’re already connected, just not physically. It is no longer science fiction to imagine a computer that is smart enough to make another computer.
3) Stories of origins are inherently theological ideas, and so it is interesting to ponder the theory’s implications. Simulation Theory offers an explanation for how we got DNA, which is essentially a digital code (something like 1s and 0s) that tells us how to build our biological systems. That’s really powerful. In addition, the other constants of the universe would have been defined by these advanced beings as well.
Given the speed at which computers can operate, and the massive, near-perfect memories they can draw on to make decisions, tech leaders are legitimately terrified of what these things can do. Such powerful creations are feared to be advanced enough to create their own simulations in order to constantly improve themselves.
I really itched to research what the great Christian minds think about this, but I resisted the urge so that I could think through it myself. With a little more thought, I do have a few philosophical objections to the idea.
1) The simulations that we know of are limited by their own internal reference points, which are defined by superior, more capable agents. It doesn’t seem logically possible for those in a simulation to become aware that they are in one. In other words, because simulations are only representations, it makes no sense that the representation could “figure out” that it wasn’t *really* what it was representing.
Imagine the Sims characters figuring out on their own that they weren’t real, and that they were just a series of 1s and 0s. How about a series of 1s and 0s on your TV screen figuring out that it in fact was not Shrek the Ogre and only a video data stream?
I don’t see any good reason to think that we’d be able to figure out that we were simulated beings. If we ever did, it would seem that we merely stumbled upon the right answer. That is hardly a rational position.
2) The problem of the first cause still applies in the case of Advanced Beings/Simulation Theory. This, of course, is subverted in the Christian apologetic by making a distinction between existing contingently and existing by necessity. There would have to be a model to explain why Advanced Beings exist by necessity. The counter-objection to this point would most likely be something along the lines of questioning the very Laws of Nature as we know them. It could go something like, “Well, perhaps our reality was just developed with specific laws and constants, but why would all of reality have to be?” Such a question assumes that reality is bigger than what we know, an assumption that cannot itself be rationally affirmed, since the logic presupposed in the question depends upon our own reality. It is a dangerous game to question the Laws of Nature; once this is done, rationality is gone with the wind.
In conclusion, I don’t think the Simulation Theory is as crazy as it sounds, given the trajectory of how quickly our technology is moving today. However, there seem to be at least a few philosophical problems with the idea.
Let me know what you guys think.
“It Was Here Already, Long Ago”
Social media gives us intoxicating power to rearrange our deepest desires and our most secret insecurities in the exact order that we want them. It isn’t deeply introspective to say that media now is just a collection of all the perfect things about our lives – only the things that we want others to see. If you want attention, there is plenty of it to go around. If you want to come across as wealthy and successful when you’re broke in reality, you can do that too. If you want to compete with your friends, then on your mark, get set, go. We can create our own world by revealing only bits and pieces of ourselves. Ironically, though, no one seems to be fooled by this game that’s being played. Ask your friends and they’ll all say the same thing: “that darn social media, man. It’s just toxic.” As much as we all use it, there seems to be a negative connotation to the phrase. Though social media gives us this great power that, quite frankly, I’m not sure we fully understand, I am not sure that the invention and use of media is the real problem. Rather, media seems to be only a new way to release what is already inside of us. I am not sure if it makes us worse than who we would have been without it, but social media definitely makes us more calculated and organized in how we execute our desires.
Has there ever been any other invention that allowed as much freedom as social media? The automobile comes to mind. The invention of the automobile was something like the social media of the 20th century. Unlike ever before, one could be wherever he wanted to be, so long as he had the time. This freedom gave rise to all kinds of opportunities, but one movement in particular could not have happened without this newfound liberty. Without automobiles, the “Free Love” movement of the 1960s never could have happened, because four wheels gave us the ability to escape and act out our carnal desires on a magnitude we couldn’t have before. Men who knew better couldn’t act out the worst things because they didn’t have a car yet. The freedom that came with cars quickly diminished accountability, and with it some of our shared traditional cultural norms. It was the freedom of the automobile that showed us what was inside us all along.
Similarly, social media gives rise to not just a new place but a new life entirely. Travel, food, sex, fashion, etc. are not immune to the wireless game that is played by so many. You can be your own favorite supermodel, traveler, surfer, gypsy, or athlete. The individual twists that we put on our profiles come from needs inside of us that are never quenched by likes and comments. Because these habits come from our own moral agency and not from our profiles, Instagram isn’t the real problem, folks. Social media seems to be not only a revelation of what exists inside of us but also an amplification of how often and how deeply we act on the worst parts of ourselves.
As cultures and technologies change, there are some things about us that stay the same. New inventions don’t seem to bring about new morals in people; they only reveal the morals that were there all along. Our flesh seeks opportunity, and like the automobile, our phones give it to us. The reality is that social media has become so deeply rooted in our culture that we have to live with it, and so it takes an intentional effort not to get caught up in the pressures of playing dress-up and flirting with our lusts. As Solomon says in Ecclesiastes, there is nothing new under the sun. Social media doesn’t bring out the worst in people – we’re just already the worst.
Ecclesiastes Chapter 1:
“Meaningless! Meaningless!”
says the Teacher.
“Utterly meaningless!
Everything is meaningless.”
What do people gain from all their labors
at which they toil under the sun?
Generations come and generations go,
but the earth remains forever.
The sun rises and the sun sets,
and hurries back to where it rises.
The wind blows to the south
and turns to the north;
round and round it goes,
ever returning on its course.
All streams flow into the sea,
yet the sea is never full.
To the place the streams come from,
there they return again.
All things are wearisome,
more than one can say.
The eye never has enough of seeing,
nor the ear its fill of hearing.
What has been will be again,
what has been done will be done again;
there is nothing new under the sun.
Is there anything of which one can say,
“Look! This is something new”?
It was here already, long ago;
it was here before our time.
No one remembers the former generations,
and even those yet to come
will not be remembered
by those who follow them.
The Political Middle Class
Running parallel to the economic middle class is a majority “political middle class” of Americans who believe in reason, decency, hard work, and moderate policy solutions to our nation’s greatest challenges. It seems to me that this particular group of people carries the most weight within society simply because it holds the power of everyday conversation, unlike the Fake News media, which the middle class secretly despises. The political middle class clocks into work every day just like the rest of us.
This social middle class is primarily based on Judeo-Christian principles, whether by belief or by commitment to the nearly universal Christian moral axioms embedded within Western culture. Christian values generally seem to be the bedrock that informs the political middle class. As such, it is no surprise that these people are sandwiched between the radical communist Left and the fascist hard Right (which is nearly fundamentally racist and unsurprisingly lacks charity for those in need). Whether by Christian orthodoxy or even a secular set of ethics, middle class people come to the ugly conclusion that we ourselves are part of the problem. This is why the majority of people can accept that there are problems within society that we simply will never completely solve – like poverty, hunger, unemployment, depression, and other painful afflictions that we suffer from. The social middle class is notorious for venting its frustrations only to end them with, “it is what it is.”
Do not mistake the acceptance of an imperfect world for a lack of action or a lack of commitment to improving society. There is something inside all of us that keeps us going, keeps us waking up the next day, and gives us hope that someday society will be set straight. I think this speaks to the resiliency of the human spirit and inspiration from our Creator. The political middle class carries the weight of society’s beliefs and ideas and, by God, we need Help.
If you’ll excuse me, I have to go clock in.
I often find myself thinking of my hikes through the hills and mountains of Georgia and Alabama. In the Fall the weather is cool, the leaves are bright, and the trails are empty. The only sounds are the winds. But I know the reason why I go. I go for the Lookout. A clear Lookout pushes you to finish your journey and dream of the next. What you see is what you take with you, and so with new trails come the hope and expectation of experiencing a new landscape. Word of mouth can yield realistic expectations for your journey, but you might not really know what it will take to get there until you don’t have a choice. You see, on the trail, there are only two choices. You go forward, or you turn around. If you turn around, then you miss the Lookout. You miss what you came for.
When I daydream of hiking, it is hard to remove the mental challenge and strain from my memories. The burning in your legs and the sweat on your brow can come back at any instant. It is just the price one pays. I vividly remember a recent hike that was particularly grueling. My brother, cousin, and I decided to hike a 7-mile loop without our packs. What us Florida boys did not understand was the impact of elevation on our trek. We hiked for what seemed like many hours to finally come to a fork in the trail. What we quickly realized was that we were nowhere near as far as we had believed. The landscape was unfamiliar and did not fit what we were expecting to see. It was the ups and the downs that really took their toll on us, and we were literally miles off from where we thought we were. In reality, we had only traveled the first three miles of the trail, to the point where the loop would start. Now, we had done so quite quickly, for we were moving at an aggressive pace even without packs. I remember how frustrated I was that we failed to reach what we thought was worth the sweat. I remember feeling vulnerable as we walked through the crevices, completely exposed to the elements.
We never made it to our Lookout, but we did come across a small river with a crossing and an overhang. It was so different from what we have in Florida that it was worth it, but I think the lesson I learned was that sometimes the journey just isn’t what we come to expect. That doesn’t mean it isn’t worth taking. I think Life is actually a lot like these trails. Someone gives you a map and says, “go here and you will find what you are looking for.” Along the way, you might come across Lookouts, valleys, violent waters, and danger. Maybe the worst of those is disappointment. That doesn’t mean the next Lookout isn’t what you waited for. With hope in mind, you trust in the map and move forward.
What other choice do you have?
Soundness and Validity of an Argument
A logical argument can be examined in two ways: soundness and validity. You may have heard these two words used interchangeably, but for anyone interested in logic and philosophy, it is important to know the difference between the two terms. Let’s look at how we can determine whether an argument is valid, and then see how the soundness of an argument determines how debatable the argument will be.
The validity of a deductive argument is simple: if the premises are true, then the conclusion must be true. Notice that this is a specific definition concerning the relationship between the premises and the conclusion. This rule doesn’t consider whether or not the premises, or the conclusion for that matter, are really true in real life. The validity of an argument isn’t concerned about whether the premises reflect reality- it is only concerned with what would be true if those premises reflected what was real. For example, we could come up with ridiculous premises and still yield a valid argument:
1. If I flew to school on Monday, then you own 10 unicorns.
2. I did fly to school on Monday.
3. Therefore, you own 10 unicorns.
As ludicrous an argument as this is, it is a valid argument (because it follows the Modus Ponens structure). If the argument is valid, then what is so silly about it? Well, we all know that unicorns don’t exist in the real world. We also know that since I can’t fly in real life, I didn’t fly to school on Monday. So, we can reasonably say that the conditional premise 1) in and of itself isn’t really true, and premise 2) doesn’t match reality either. This means that the argument is not a sound argument, even though it is logically valid.
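Since validity depends only on form, it can even be checked mechanically. Here is a small Python sketch (my own illustration, not from any particular logic library) that brute-forces every truth assignment: a form is valid exactly when no assignment makes all the premises true while the conclusion is false.

from itertools import product

def is_valid(premises, conclusion):
    # Valid iff no assignment of truth values makes every premise
    # true while the conclusion is false.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus Ponens: premises (P -> Q) and P, conclusion Q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))  # True

Note what the checker cannot do: it says nothing about whether I really flew to school or whether unicorns exist. Validity is mechanical; soundness is about the world.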
In contrast to the validity of an argument, the soundness of an argument considers whether the premises really reflect the way things are in our own world. We can see by inspection that the soundness of a premise is usually the main focus of debate in any given argument. It is usually easier to determine whether an argument is invalid than it is to determine whether an argument is sound. For example, consider the argument below:
1. If objective moral goodness exists, then God exists.
2. Objective moral goodness does exist.
3. Therefore, God does exist.
The argument above is simple and valid; however, both premise 1) and premise 2) can be and have been debated. Since the argument is valid, the conclusion is fixed and must be true if 1) and 2) are true. Therefore, 1) and 2) must be shown unsound to attack the argument. For example, someone who doesn’t believe this argument is sound may attack premise 1) by saying that just because objective moral goodness exists doesn’t mean that it must come from God. Further, one may object to premise 2) by disagreeing that objective moral goodness really exists.
In closing, it is crucial to remember the distinction between a sound argument and a valid argument. Just because an argument is valid doesn’t mean that it is necessarily sound. When valid arguments are challenged, they are typically attacked by arguing against the soundness of their premises. Therefore, when constructing a valid argument, it is critical to anticipate challenges to the soundness of one’s premises.
The Modus Ponens Argument
The Modus Ponens argumentative form is so common to us as thinkers that it is oftentimes easy to overlook its vital importance to our everyday reasoning. The name of this argumentative form, in its original Latin, means to “affirm by affirming.” Simply put, this means that we can affirm some kind of conclusion (announce that it is true) by affirming something else. Today let’s look at this structure and one of its associated fallacies.
The Modus Ponens structure looks something like this:
1. P -> Q
2. P
3. Therefore, Q
Premise 1) is what is known as a conditional premise. A conditional premise has two parts: an antecedent and a consequent. Premise 2) is a simple truth assertion. In this case, premise 2) affirms the antecedent of the conditional, leading us to conclude that Q is necessarily true. If we affirm the antecedent P, we then affirm Q. Assuming that premises 1) and 2) are really true in real life, let’s look at how this logic plays out:
1. If you are reading this blog post, then you are on the computer.
2. You are reading this blog post.
3. Therefore, you must be on the computer.
A common misapplication of this structure is what is known as affirming the consequent. Let’s examine a similar structure below:
1. P -> Q
2. Q
3. Therefore, P
Notice the difference in premise 2). Premise 2) affirms the consequent, not the antecedent. Using the same real life scenarios as above, let’s insert them into this structure and we will see that it is indeed a fallacy.
1. If you are reading this blog post, then you are on the computer.
2. You are on the computer.
3. Therefore, you must be reading this blog post.
Notice that in this case, all we know is that you are on the computer; we don’t know that you are reading this blog specifically. You could be checking the weather, buying plane tickets, or listening to Rick Astley. We simply can’t know what you are doing on the computer. All we know is that you are on it. In this case, it is easy to see that it does not logically follow that you must be reading this blog post, although it is possible. This kind of conclusion is a non-sequitur, which means that the conclusion “does not follow” from the premises.
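Continuing the brute-force idea from the soundness and validity post (again, just an illustrative sketch of my own), we can ask Python to exhibit the exact truth assignment that breaks the affirming-the-consequent form:

from itertools import product

implies = lambda a, b: (not a) or b

# Affirming the consequent: premises (P -> Q) and Q, conclusion P.
for p, q in product([True, False], repeat=2):
    if implies(p, q) and q and not p:
        print(f"Counterexample: P={p}, Q={q}")
# Prints: Counterexample: P=False, Q=True

The assignment P = False, Q = True is exactly the weather-checking case: the conditional and the consequent both hold, yet the conclusion fails, so the form is invalid.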
Deductive Reasoning
Reason can be casually defined as the making sense of things. For all the bad ideas out there, bouncing around from generation to generation in an attempt to make sense of this world, it is comforting to know that there are some basic rules that can help us understand the way things really are. There is a referee of sorts that keeps ideas in check, if we are willing to listen. Reason is governed by the rules of logic. Fundamentally, there are several types of reasoning: deductive, inductive, and abductive, to name a few; but arguably the most powerful is deductive reasoning. All minds, both great and small, use deductive arguments as we go about our daily lives. Let’s look at the deductive form of reasoning so that we can better understand the structures that we use to think and solve problems.
Deductive reasoning is a kind of thinking that starts with a set of premises and moves towards a concrete conclusion. A set of premises that leads to a conclusion is also known as a syllogism. Deductive syllogisms are arguments that are logically airtight. This means that if the premises of the argument are true, then the conclusion must be necessarily true and there is no amount of reasoning that can change it. The most effective way to argue against a deductive argument is to question the premises of the argument.
For example, a deductive argument might be:
1. If you are reading this post, then you are on the computer.
2. You are reading this post.
3. Therefore, you must be on the computer.
or symbolically,
1. A -> B
2. A
3. Therefore, B.
Notice that this particular structure is a common valid argumentative form. (We will delve more into this form in the coming days). With this form, thinkers can mix and match any set of premises and, so long as they are true in the real world, the conclusions must be concretely true and can’t be false. In other words, if we accept that premise 1) and premise 2) are really true, then the conclusion must be true. As mentioned earlier, we can only debate the soundness (the real truthfulness) of premise 1) and 2). I would encourage you to think of arguments of your own following this form!
In conclusion, deductive reasoning has immense power and is one of the most commonly used types of reasoning. Because of this power, the usual course of debate shifts to the soundness of the argument’s premises. Therefore, when constructing a deductive argument, one must carefully consider and piece together his premises in such a way that they can withstand the test of a rational challenge. One need not be a Roman philosopher to think deductively!
Emergent Locality in Quantum Systems with Long Range Interactions
Principal Investigator:
Fabien Alet (1) and David J. Luitz (2)
(1) Centre national de la recherche scientifique (CNRS), Toulouse University, France, (2) Max Planck Institute for the Physics of Complex Systems (MPIPKS), Dresden, Germany
HPC Platform used: Hazel Hen of HLRS
How fast can information travel in a quantum system? While special relativity yields the speed of light as a strict upper limit, many quantum systems at low energies are in fact described by nonrelativistic quantum theory, which does not contain any fundamental speed limit. Interestingly enough, there is an emergent speed limit in quantum systems with short-ranged interactions which is far slower than the speed of light. Fundamental interactions between particles are, however, often long-ranged, such as dipolar or Coulomb interactions. A very large-scale computational study performed on Hazel Hen revealed that there is no instantaneous information propagation even in the presence of extremely long-ranged interactions, and that most signals are contained in a spatio-temporal light cone for dipolar interactions.
Full Report
The best quantum theory for high-energy particles, such as those appearing in cosmic radiation or in particle accelerators like CERN’s Large Hadron Collider, is based on Einstein’s theory of special relativity and includes the speed of light as a strict speed limit for the transmission of information. However, most quantum systems of many particles which can be produced in laboratories are at much lower energies and can therefore exhibit different physics. It turns out that there is an additional quantum speed limit for quantum systems with short-ranged interactions between many particles, for example the interaction between electrons in solids, which is screened by the presence of many other electrons. For such systems, Lieb and Robinson proved in a seminal work in 1972 that there is an emergent speed limit slower than the speed of light, which limits the maximal information transport in quantum many-body systems. This plays a crucial role for the buildup of correlations between particles, for how fast a quantum system can reach thermal equilibrium, and for practical implementations of quantum computers, as this bound limits the loss of quantum information.
Today there is increasing interest in quantum systems which exhibit long-range interactions between their constituents, since they can be manufactured in experiments with ultracold quantum gases of atoms with dipolar interactions. One recent example is experiments with exotic dysprosium atoms, which have a large magnetic moment and exhibit long-ranged dipolar interactions. For such systems, we currently have limited knowledge of how fast quantum information can travel. The present computational study addressed this issue by an exact numerical simulation of two generic models of strongly interacting quantum systems with long-ranged interactions.
Models of quantum matter with many-body interactions represent a formidable challenge, since they are not analytically solvable and experiments are currently not precise and universal enough to provide a definitive answer. It is therefore of crucial importance to solve these models numerically with state-of-the-art computational techniques. This is the aim of the STIDS project. The main numerical challenge is that the complexity of the calculation grows exponentially with the number of particles in the system: in a nutshell, the complexity is (at best) doubled when adding one more particle.
In the precise study of information propagation, the long-range nature of the interactions leads to very fast dynamics and requires simulating systems as large as possible. The work of the STIDS team has pushed calculations on HLRS’s Hazel Hen supercomputer, located in Stuttgart, to the limit of what is currently feasible, reaching 15 quantum particles on 31 lattice sites. These simulations are converged in terms of system size, meaning that the results do not change if the system size is further increased.
The complete description of the problem is encoded in the wave function of the quantum many-body system, whose time evolution is obtained through the solution of the Schrödinger equation. Storing and computing this wave function requires a massive amount of computer time and memory (RAM) for these large systems. The largest calculations for this project required more than 10 TB of memory and 100 nodes of the Hazel Hen supercomputer in parallel for a single calculation. These resources were crucial for reaching the largest system sizes to prove that the findings are converged with the number of particles.
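To make the scale of the problem concrete, here is a toy sketch of the same kind of calculation at laptop scale (this is an illustration only, not the STIDS production code; the model, couplings, and parameters are invented for demonstration): build a long-range spin-chain Hamiltonian, evolve the full wave function under the Schrödinger equation, and note that the state vector doubles in size with every added spin.

import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import expm_multiply

L, alpha = 8, 3.0                           # 8 spins, dipolar-like 1/r^3 couplings
sz = csr_matrix([[0.5, 0.0], [0.0, -0.5]])
sx = csr_matrix([[0.0, 0.5], [0.5, 0.0]])

def site_op(op, i):
    # Embed a single-site operator at site i into the full 2^L-dim space.
    mats = [identity(2, format="csr")] * L
    mats[i] = op
    full = mats[0]
    for m in mats[1:]:
        full = kron(full, m, format="csr")
    return full

# H = sum_{i<j} Sz_i Sz_j / |i-j|^alpha  +  sum_i Sx_i
H = sum(site_op(sz, i) @ site_op(sz, j) / abs(i - j) ** alpha
        for i in range(L) for j in range(i + 1, L))
H = H + sum(site_op(sx, i) for i in range(L))

# The wave function lives in a 2^L-dimensional space: adding one spin
# doubles the memory needed, which is why 15 particles on 31 sites
# already calls for a supercomputer.
psi0 = np.zeros(2 ** L, dtype=complex)
psi0[0] = 1.0                               # all spins up
psi_t = expm_multiply(-1j * H.tocsc(), psi0)    # evolve to time t = 1
print("norm after evolution:", np.abs(np.vdot(psi_t, psi_t)))  # ~1.0 (unitary)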
The main findings for the one-dimensional systems of this study are:
1. There is an emergent speed limit for systems with long-range interactions which decay with distance r faster than 1/r. This leads to a “light cone”, a region of causality in space-time outside of which no quantum communication can occur.
2. For interactions which decay more slowly than 1/r with distance, there is a causal region with a power-law envelope, which excludes instantaneous quantum communication even for very long-range interactions.
3. None of the quantum speed limits known so far for long-range interactions is tight, i.e., the actual limits are even slower than previous work suggested.
In conclusion, the present numerically exact study represents considerable progress on the question of how fast quantum information can travel in solids or ultracold atomic gases. These results are of fundamental importance for a deeper understanding of thermalization and of the time scales for the generation of quantum entanglement, the resource of quantum computation.
Reference for the present research:
Emergent locality in systems with power-law interactions. David J. Luitz and Yevgeny Bar Lev. Phys. Rev. A 99, 010105(R) – Published 30 January 2019 -- DOI: https://doi.org/10.1103/PhysRevA.99.010105
Contact for the present research:
David J. Luitz
Max Planck Institute for the Physics of Complex Systems
Nöthnitzer Straße 38, D-01187 Dresden (Germany)
e-mail: dluitz [at] pks.mpg.de
NOTE: This project was made possible by PRACE (Partnership for Advanced Computing in Europe) allocating a computing time grant on GCS HPC system Hazel Hen of the High Performance Computing Center Stuttgart (HLRS), Germany.
HLRS project ID: PP16153659
February 2019
A Motivation for Quantum Computing
Quantum mechanics is one of the leading scientific theories describing the rules that govern the universe. Its discovery and formulation was one of the most important revolutions in the history of mankind, contributing in no small part to the invention of the transistor and the laser.
Here at Math ∩ Programming we don’t put too much emphasis on physics or engineering, so it might seem curious to study quantum physics. But as the reader is likely aware, quantum mechanics forms the basis of one of the most interesting models of computing since the Turing machine: the quantum circuit. My goal with this series is to elucidate the algorithmic insights in quantum algorithms, and explain the mathematical formalisms while minimizing the amount of “interpreting” and “debating” and “experimenting” that dominates so much of the discourse by physicists.
Indeed, the more I learn about quantum computing the more it’s become clear that the shroud of mystery surrounding quantum topics has a lot to do with their presentation. The people teaching quantum (writing the textbooks, giving the lectures, writing the Wikipedia pages) are almost all purely physicists, and they almost unanimously follow the same path of teaching it.
Scott Aaronson (one of the few people who explains quantum in a way I understand) describes the situation superbly.
There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis that these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.
Indeed, the sequence of experiments and debate has historical value. But the mathematics needed to have a basic understanding of quantum mechanics is quite simple, and it is often blurred by physicists in favor of discussing interpretations. To start thinking about quantum mechanics you only need a healthy dose of linear algebra, most of which we’ve covered in the three linear algebra primers on this blog. More importantly for computing-minded folks, one only needs a basic understanding of quantum mechanics to understand quantum computing.
The position I want to assume on this blog is that we don’t care about whether quantum mechanics is an accurate description of the real world. The real world gave an invaluable inspiration, but at the end of the day the mathematics stands on its own merits. The really interesting question to me is how the quantum computing model compares to classical computing. Most people believe it is strictly stronger in terms of efficiency. And so the murky depths of the quantum swamp must be hiding some fascinating algorithmic ideas. I want to understand those ideas, and explain them up to my own standards of mathematical rigor and lucidity.
So let’s begin this process with a discussion of an experiment that motivates most of the ideas we’ll need for quantum computing. Hopefully this will be the last experiment we discuss.
Shooting Photons and The Question of Randomness
Does the world around us have inherent randomness in it? This is a deep question open to a lot of philosophical debate, but what evidence do we have that there is randomness?
Here’s the experiment. You set up a contraption that shoots photons in a straight line, aimed at what’s called a “beam splitter.” A beam splitter seems to have the property that when photons are shot at it, they will either be reflected at a 90-degree angle or stay in a straight line, each with probability 1/2. Indeed, if you put little photon receptors at the end of each possible route (straight or up, as below) to measure the number of photons that end at each receptor, you’ll find that on average half of the photons went up and half went straight.
The triangle is the photon shooter, and the camera-looking things are receptors.
If you accept that the photon shooter is sufficiently good and the beam splitter is not tricking us somehow, then this is evidence that universe has some inherent randomness in it! Moreover, the probability that a photon goes up or straight seems to be independent of what other photons do, so this is evidence that whatever randomness we’re seeing follows the classical laws of probability. Now let’s augment the experiment as follows. First, put two beam splitters on the corners of a square, and mirrors at the other two corners, as below.
The thicker black lines are mirrors which always reflect the photons.
This is where things get really weird. If you assume that the beam splitter splits photons randomly (as in, according to an independent coin flip), then after the first beam splitter half go up and half go straight, and the same thing would happen after the second beam splitter. So the two receptors should measure half the total number of photons on average.
But that’s not what happens. Rather, all the photons go to the top receptor! Somehow the “probability” that the photon goes straight or up at the first beam splitter is connected to the probability that it goes straight or up at the second. This seems to be a counterexample to the claim that the universe behaves on the principles of independent probability. Obviously there is some deeper mystery at work.
Complex Probabilities
One interesting explanation is that the beam splitter modifies something intrinsic to the photon, something it carries with it until the next beam splitter. You can imagine the photon carrying information as it shambles along, but regardless of the interpretation, it can’t follow the laws of classical probability.
The simplest classical probability explanation would go something like this:
There are two states, RIGHT and UP, and we model the state of a photon by a probability distribution (p, q) such that the photon has a probability p of being in state RIGHT and a probability q of being in state UP, and like any probability distribution, p + q = 1. A photon hence starts in state (1,0), and the process of traveling through the beam splitter is the random choice to switch states. This is modeled by multiplication by a particular so-called stochastic matrix (which just means the rows sum to 1)
\displaystyle A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}
Of course, we chose this matrix because when we apply it to (1,0) and (0,1) we get (1/2, 1/2) for both outcomes. By doing the algebra, applying it twice to (1,0) will give the state (1/2, 1/2), and so the chance of ending up in the top receptor is the same as for the right receptor.
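A quick numerical check of this classical model (a NumPy sketch of exactly the matrix arithmetic above, nothing more):

import numpy as np

A = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # the stochastic matrix above
x = np.array([1.0, 0.0])     # photon starts in state RIGHT

print(A @ x)      # [0.5 0.5] after one beam splitter
print(A @ A @ x)  # [0.5 0.5] after two: the classical model predicts 50/50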
But as we already know this isn’t what happens in real life, so something is amiss. Here is an alternative explanation that gives a nice preview of quantum mechanics.
The idea is that, rather than have the state of the traveling photon be a probability distribution over RIGHT and UP, we have it be a unit vector in a vector space (over \mathbb{C}). That is, now RIGHT and UP are the (basis) unit vectors e_1 = (1,0), e_2 = (0,1), respectively, and a state x is a linear combination c_1 e_1 + c_2 e_2, where we require \left \| x \right \|^2 = |c_1|^2 + |c_2|^2 = 1. And now the “probability” that the photon is in the RIGHT state is the squared magnitude of the coefficient for that basis vector, p_{\text{right}} = |c_1|^2. Likewise, the probability of being in the UP state is p_{\text{up}} = |c_2|^2.
This might seem like an innocuous modification — even a pointless one! — but changing the sum (or 1-norm) to the Euclidean sum-of-squares (or the 2-norm) is at the heart of why quantum mechanics is so different. Now rather than stochastic matrices for state transitions, which are defined the way they are because they preserve probability distributions, we use unitary matrices, which are those complex-valued matrices that preserve the 2-norm. In both cases, we want “valid states” to be transformed into “valid states,” but we just change precisely what we mean by a state, and pick the transformations that preserve that.
In fact, as we’ll see later in this series using complex numbers is totally unnecessary. Everything that can be done with complex numbers can be done without them (up to a good enough approximation for computing), but using complex numbers just happens to make things more elegant mathematically. It’s the kind of situation where there are more and better theorems in linear algebra about complex-valued matrices than real valued matrices.
But back to our experiment. Now we can hypothesize that the beam splitter corresponds to the following transformation of states:
\displaystyle A = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
We’ll talk a lot more about unitary matrices later, so for now the reader can rest assured that this is one. And then how does it transform the initial state x =(1,0)?
\displaystyle y = Ax = \frac{1}{\sqrt{2}}(1, i)
So at this stage the probability of being in the RIGHT state is 1/2 = (1/\sqrt{2})^2 and the probability of being in state UP is also 1/2 = |i/\sqrt{2}|^2. So far it matches the first experiment. Applying A again,
\displaystyle Ay = A^2x = \frac{1}{2}(0, 2i) = (0, i)
And the photon is in state UP with probability 1. Stunning. This time Science is impressed by mathematics.
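The same check carried out numerically (a NumPy sketch mirroring the computation above) confirms both that A is unitary and that the final state is UP with certainty:

import numpy as np

A = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)
x = np.array([1.0, 0.0])

print(np.allclose(A.conj().T @ A, np.eye(2)))  # True: A is unitary
y = A @ x
print(np.abs(y) ** 2)        # [0.5 0.5] after one beam splitter
print(np.abs(A @ y) ** 2)    # [0. 1.]  after two: every photon ends up UP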
Next time we’ll continue this train of thought by generalizing the situation to the appropriate mathematical setting. Then we’ll dive into the quantum circuit model, and start churning out some algorithms.
Until then!
[Edit: Actually, if you make the model complicated enough, then you can achieve the result using classical probability. The experiment I described above, while it does give evidence that something more complicated is going on, does not fully rule out classical probability. Mathematically, you can lay out the axioms of quantum mechanics (as we will from the perspective of computing), and mathematically this forces non-classical probability. But to the best of my knowledge there is no experiment or set of experiments that gives decisive proof that all of the axioms are necessary. In my search for such an experiment I asked this question on stackexchange and didn’t understand any of the answers well enough to paraphrase them here. Moreover, if you leave out the axiom that quantum circuit operations are reversible, you can do everything with classical probability. I read this somewhere but now I can’t find the source 😦
One consequence is that I am more firmly entrenched in my view that I only care about quantum mechanics in how it produced quantum computing as a new paradigm in computer science. This paradigm doesn’t need physics at all, and apparently the motivations for the models are still unclear, so we just won’t discuss them any more. Sorry, physics lovers.]
14 thoughts on “A Motivation for Quantum Computing”
1. Terrific! There’s always a lot of interesting stuff to dive into, and while figuring everything out, step by step and book by book, can be useful and illuminating, there is simply not enough spare time to learn about everything.
Such series that go straight to the point are great to get some basic insight into a subject. Also, your writing is very clear. I’ll be keeping an eye on this one. Thanks!
2. Cool stuff. Very clear. Is there a paper that describes the experiment with the photons? I wonder how the possibility that the beam splitter changes something about the information the photon is carrying is addressed in the paper. If it is addressed at all.
3. Wouldn’t the simplest explanation of the data be that half of the photons are such as to always bounce off beam splitters and half of them are such as to always pass through beam splitters? Then the beam splitter doesn’t even have to modify the state of the photon. These also seem like more natural intrinsic properties to give the photon than RIGHT and UP because they don’t raise questions like “What happens if you rotate the experiment?”
4. I guess you are assuming that all the photons are intrinsically identical to begin with. That would rule out my suggestion, although it seems an unwarranted assumption. It also seems you are assuming that the photon’s state can only be in two states (at least before you introduce the complex number stuff). If the photons are all in the same state initially then
two states are not enough to generate the results of the experiment, but you can do it with three. Call the states A, B, and C, and let A be the initial state. When a beam splitter gets an A photon, half the time it reflects it and makes it a B photon, and half the time it passes it and makes it a C photon. B photons are always reflected and C photons are always passed.
5. Here are two things I found confusing about this presentation. You suggest thinking of the state of a photon as a probability distribution when it seems clear that (before you introduce the complex number stuff) it is meant to be a binary variable. And when you are describing how the beam splitters process photons you don’t separately consider how they change their states and whether they reflect or pass them. (I think the terms RIGHT and UP were meant to be somehow suggestive of how the reflecting/passing works, but I don’t really understand what these terms are supposed to indicate. Probably my earlier comment about rotating the experiment is off base, based on a wrong understanding of these terms.)
• Good comments. I’ll try to address them one by one.
> all the photons are intrinsically identical to begin with…seems an unwarranted assumption.
Really? I have never heard of any theory that distinguishes, say, one standard Helium atom from another. Why would one photon be different from another?
I believe in reality the mirrors are polarized, and a photon passing through the mirror will correspond to a change in spin of the photon (the binary states being “polarized” or “not polarized”). I used the terms RIGHT and UP because then I don’t have to talk about spin and polarization, and the fact is it doesn’t matter what you call the states. What matters is the behavior. I like to think about quantum mechanics as an algorithmic mechanism for manipulating abstract states, not a physical process for manipulating objects.
> You suggest thinking of the state of a photon as a probability distribution when it seems clear that…it is meant to be a binary variable… [and also] you can do it with three [states instead of two].
You’re right, you can get the behavior by adding more states. I don’t have a good example that I can use to replace it (I will look for one), but I do know that once you add in the axiom that quantum transformations are linear, continuous, and reversible operators, you suddenly lose the ability to model it with classical probability. But it sounds like you already know this?
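Here is the kind of example I mean, as a quick numpy sketch (using the Hadamard matrix as a stand-in for a lossless 50/50 beam splitter is my illustrative choice, not the only convention):

```python
import numpy as np

# Unitary "beam splitter" (Hadamard) versus a classical 50/50 stochastic
# matrix. Amplitudes interfere; probabilities only average.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # unitary, and H @ H = I
S = np.array([[0.5, 0.5], [0.5, 0.5]])        # classical 50/50 splitter

amp = np.array([1.0, 0.0])    # definite input port, as an amplitude vector
prob = np.array([1.0, 0.0])   # same input, as a probability vector

print(np.abs(H @ amp) ** 2)       # [0.5 0.5]  one splitter: looks random
print(np.abs(H @ H @ amp) ** 2)   # [1.  0. ]  two splitters: deterministic
print(S @ prob)                   # [0.5 0.5]
print(S @ S @ prob)               # [0.5 0.5]  still random: no interference
```

A 50/50 stochastic matrix applied twice is still 50/50; only with amplitudes can two "random-looking" steps compose to a deterministic one.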
Too bad the answers on stackexchange pretty much went off on a tangent. Irreversible dynamics in quantum mechanics is possible even though reversible dynamics via the Schrödinger equation is fundamental. This is because the restriction of a reversible dynamics on a larger system may not give a reversible dynamics on your subsystem of interest.
Reversibility is not important for quantum computing. In measurement-based quantum computing (https://en.wikipedia.org/wiki/One-way_quantum_computer), one performs computation equivalent to the circuit model by preparing a multipartite “resource” state and then measuring each qubit, one by one, in a specific pattern and in an adaptive way, with subsequent measurements depending on previous results. And there is more work out there on dissipative quantum computing that I’m not familiar with.
7. The original positivist interpretation of quantum mechanics forces QC people to admit that certain non-empirically-verified assumptions hold, such as unitary and psi-ontic wave functions. The “negative probabilities” belong to a quasiprobability distribution where the anomaly is created at scales below hbar, and there are no final negative probabilities. By the way, the wave function of an electron is a complex-coefficient spinor function, which isn’t just a simple amplitude. The wave function can be positive, negative, complex, spinor or vector. Note that mathematicians have found quantum probability to be useless for modeling anything but atomic particles.
Just about every field of science attempts to quantify counterfactuals, and probability in QM is no more needed than in any other branch of science. Everything in QM follows from the uncertainty principle, and that only supports the idea that we use probabilities to measure OUR uncertainty! The system could even be chaotic. To get probabilities, you are assuming classical particles, but there are no classical particles. Quantum mechanics really predicts the expected values of observables; it does not measure probabilities.
Just because simulating quantum states requires high Turing complexity does not mean the argument runs backwards, because the exponential computation on the wave function may well have nothing to do with the physical system of something like a dumb electron. Many in the QC field subscribe to the Schrödinger’s cat fallacy. The fallacies here are as persistent as those about EPR. QCs may not be possible at all, given that they go beyond the currently accepted theory of QM. People need to be honest about that. People making money building them are not honest, however.
8. Hi Jeremy.
I haven’t read all of your articles yet, but since I was a Physics undergrad many years ago, the phrase “complex probabilities” was a bit too much for me.
I can even now remember my dear prof. saying, “Guys, even if we’re dealing with complex probability amplitudes, if you end up getting complex probabilities when calculating, say, the mean value of the total energy, you’ve obviously done something wrong.” I’m thinking you might have meant something different, but I just wanted to give some sincere feedback.
Oh yes, also I haven’t lived many years in an English-speaking country, so perhaps I’m not well aware of the technical jargon and the colloquialisms used in the States to describe the wavefunction or the Dirac states.
Cheers and keep it up. You’re a great inspiration for a physicist turned aspiring applied mathematician.
ps: now that I think of it, the relationship of quantum mechanics as a method of using complex linear algebra to find results about the real world feels analogous to the use of complex analysis to answer questions of real analysis.
ps2: I’d really like to read one of your posts in the future explaining the idea behind Hugh Everett III’s PhD thesis [http://goo.gl/hRHxnf (PDF, 4.2MB)]. I don’t think it’s easily linked with quantum computing, so it might not interest you. But from skimming it, I got the idea that it is closer to a mathematician’s way of thinking than to a physicist’s. Maybe that’s why it was so unpopular among physicists back then, and of course because N. Bohr was still alive. Oh yes, and it seems to me that he was deeply influenced by the work of C. Shannon on information theory.
9. As a physics graduate, what I want to say is that the historical approach to QM is only taught in some undergraduate introductory classes. Many talented students just skip it and learn QM from the mathematical assumptions, with the experimental picture in mind.
Reference: Modern Quantum Mechanics by Sakurai
10. This is related to your Edit at the end.
Bell’s theorem showed that if QM works the way they think it does, then the correlations you would see in certain experiments couldn’t be explained by a “local” hidden-variable theory. “Local” roughly means that information has to travel at the speed of light or less. (That includes not going backwards in time.)
The example was a variation on the original Einstein–Podolsky–Rosen (EPR) paradox: a correlated-particle pair is produced in a middle place, and the two particles go to two distant detectors, say to the East and West. I think the detectors need to be oriented by true random noise at each end just before detection. The correlation between the two detections follows a curve that’s a trig function of the difference between the two detectors’ orientations at the moment(s) of detection.
The way I like to summarize it is: the shape of that curve *can’t be explained by any encoding of any amount of information* traveling along with the two particles. The key is that neither particle knows the relative orientation of the two detectors, each one acts locally as if it doesn’t know it, but the correlation between what they do *does* depend on it.
Later experiments, especially Aspect’s, showed those curves (as predicted by QM) are the curves that show up.
The place I finally found a decent explanation of amplitudes, EPR, Bell, and Aspect was in _Quantum Reality_ by Nick Herbert.
Anyway, this all means ours can’t be local physics plus classical probability; I’m pretty sure that’s what you meant by ruling out classical probability. I would also guess that it would prevent you from modeling quantum circuits without (the equivalent of) amplitudes, but like you said, that would be getting into physics, and I probably know less physics than you do.
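A small numerical sketch of the CHSH form of this argument (the angles and the enumeration of local strategies are my own illustrative choices, not necessarily the ones in Herbert's book):

```python
import numpy as np
from itertools import product

# CHSH sketch. Quantum prediction for a singlet pair: E(x, y) = -cos(x - y).
# A local hidden-variable strategy is just a fixed +/-1 answer per setting.

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4  # common test angles

def E(x, y):
    return -np.cos(x - y)

S_quantum = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S_quantum))  # 2*sqrt(2) ~ 2.83

# Best any deterministic local strategy can do on the same combination:
best = max(A * B - A * B2 + A2 * B + A2 * B2
           for (A, A2, B, B2) in product([-1, 1], repeat=4))
print(best)  # 2, the classical bound -- the trig curve can't be local
```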
Wave function collapse
In quantum mechanics, wave function collapse is said to occur when a wave function—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate (by "observation"). It is the essence of measurement in quantum mechanics, and connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is continuous evolution via the Schrödinger equation.[1] However, in this role, collapse is merely a black box for thermodynamically irreversible interaction with a classical environment.[2] Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system's states and the environment's states. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation.[3]
In 1927, Werner Heisenberg used the idea of wave function reduction to explain quantum measurement.[4] Nevertheless it was debated, for if collapse were a fundamental physical phenomenon, rather than just the epiphenomenon of some other process, it would mean nature was fundamentally stochastic, i.e. nondeterministic, an undesirable property for a theory.[2][5] This issue remained until quantum decoherence entered mainstream opinion after its reformulation in the 1980s.[2][3][6] Decoherence explains the perception of wave function collapse in terms of interacting large- and small-scale quantum systems, and is commonly taught at the graduate level (e.g. the Cohen–Tannoudji textbook).[7] The quantum filtering approach[8][9][10] and the introduction of the quantum causality non-demolition principle[11] allow for a classical-environment derivation of wave function collapse from the stochastic Schrödinger equation.
Mathematical description
Mathematical background
The quantum state of a physical system is described by a wave function (in turn, an element of a projective Hilbert space). This can be expressed in Dirac or bra–ket notation as a vector:

$$ |\psi\rangle = \sum_i c_i |\phi_i\rangle. $$

The kets $|\phi_i\rangle$ specify the different quantum "alternatives" available, each a particular quantum state. They form an orthonormal eigenvector basis, formally

$$ \langle \phi_i | \phi_j \rangle = \delta_{ij}, $$

where $\delta_{ij}$ represents the Kronecker delta.

The coefficients $c_1, c_2, c_3, \ldots$ are the probability amplitudes corresponding to each basis state $|\phi_i\rangle$. These are complex numbers. The modulus squared of $c_i$, that is $|c_i|^2 = c_i^* c_i$ (where $^*$ denotes the complex conjugate), is the probability of measuring the system to be in the state $|\phi_i\rangle$.
The process of collapse
With these definitions it is easy to describe the process of collapse. For any observable, the wave function is initially some linear combination of the eigenbasis $\{|\phi_i\rangle\}$ of that observable. When an external agency (an observer, experimenter) measures the observable associated with the eigenbasis $\{|\phi_i\rangle\}$, the wave function collapses from the full $|\psi\rangle$ to just one of the basis eigenstates, $|\phi_i\rangle$, that is:

$$ |\psi\rangle \to |\phi_i\rangle. $$

The probability of collapsing to a given eigenstate $|\phi_k\rangle$ is the Born probability, $P_k = |c_k|^2$. Post-measurement, the other elements of the wave function vector, $c_{i \ne k}$, have "collapsed" to zero, and $|c_k|^2 = 1$.
More generally, collapse is defined for an operator $\hat{Q}$ with eigenbasis $\{|\phi_i\rangle\}$. If the system is in state $|\psi\rangle$ and $\hat{Q}$ is measured, the probability of collapsing the system to the eigenstate $|\phi_i\rangle$ (and measuring the eigenvalue $q_i$) is $|\langle \phi_i | \psi \rangle|^2$. Note that this is not the probability that the particle is in state $|\phi_i\rangle$; it is in state $|\psi\rangle$ until cast to an eigenstate of $\hat{Q}$.
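A short worked example (the numbers are illustrative, not drawn from a particular source): for a two-component state

$$ |\psi\rangle = \tfrac{3}{5}|\phi_1\rangle + \tfrac{4}{5}|\phi_2\rangle, $$

measuring the observable collapses the state to $|\phi_1\rangle$ with probability $|3/5|^2 = 9/25$ and to $|\phi_2\rangle$ with probability $|4/5|^2 = 16/25$. After a measurement yielding $|\phi_1\rangle$, an immediate repetition of the same measurement returns $|\phi_1\rangle$ with probability 1, since the other amplitude has collapsed to zero.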
The determination of the preferred basis
The complete set of orthogonal functions to which a wave function can collapse is also called the preferred basis.[2] There is no theoretical foundation for the preferred basis to be the eigenstates of observables such as position, momentum, etc. In fact the eigenstates of position are not even physical, due to the infinite energy associated with them. A better approach is to derive the preferred basis from basic principles. It has been shown that only a special dynamic equation can collapse the wave function.[14] By applying one axiom of quantum mechanics and the assumption that the preferred basis depends on the total Hamiltonian, a unique set of equations is obtained from the collapse equation, which determines the preferred basis for general situations. Depending on the system Hamiltonian and wave function, the determination equations may yield a preferred basis of energy eigenfunctions, quasi-position eigenfunctions, or mixed energy and quasi-position eigenfunctions, i.e., energy eigenfunctions for the interior of a macroscopic object and quasi-position eigenfunctions for the particles on its surface, and so on.
Quantum decoherence
Main Article: Quantum decoherence: Mathematical details
Wave function collapse is not fundamental from the perspective of quantum decoherence.[15] There are several equivalent approaches to deriving collapse, such as the density matrix approach, but each has the same effect: decoherence irreversibly converts the "averaged" or "environmentally traced-over" density matrix from a pure state to a reduced mixture, giving the appearance of wave function collapse.
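A minimal sketch of the traced-over density matrix, using a single qubit as a stand-in for the environment (a standard textbook computation): if the system starts in the superposition $(|0\rangle + |1\rangle)/\sqrt{2}$ and the environment records the branch, the combined state is

$$ |\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_S |E_0\rangle + |1\rangle_S |E_1\rangle\right), \qquad \langle E_0 | E_1 \rangle = 0. $$

Tracing over the environment gives

$$ \rho_S = \mathrm{Tr}_E\, |\Psi\rangle\langle\Psi| = \tfrac{1}{2}\left(|0\rangle\langle 0| + |1\rangle\langle 1|\right), $$

a mixture with no off-diagonal (interference) terms: the appearance of collapse, even though the combined system evolved unitarily throughout.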
History and context
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik.[16] Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change: the probabilistic, non-unitary, discontinuous change brought about by measurement, and the deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation.
The existence of wave function collapse as a fundamental process is required in some interpretations of quantum mechanics, such as the Copenhagen interpretation and the objective collapse theories, while in others, such as the many-worlds interpretation, it is regarded as only apparent.
References
15. Wojciech H. Zurek, "Decoherence, einselection, and the quantum origins of the classical", Reviews of Modern Physics 75, 715 (2003); http://arxiv.org/abs/quant-ph/0105127
19. Discussions of measurements of the second kind can be found in most treatments of the foundations of quantum mechanics.
Towards an Ontology for Unified Knowledge: The Hypothesis of Logical Quanta
The weirdness of quantum mechanics
From the very beginning of quantum mechanics it was seen that there was something absurd about it. A hundred years later, we are still speaking about quantum paradoxes.1 There is a difference, though: we now know that these paradoxes govern the way things are at the most fundamental level. The quantum paradox can be described in one phrase: things in the quantum world behave in a strikingly different way than things in our everyday world.2 There is a gap in our understanding of the (real) nature of the physical world.
This is not the only one. There are also gaps in our understanding of the nature of life and, even more, the nature of consciousness. There is also more weirdness concerning the special abilities of the inner world of human beings. Confronting this situation, we usually avoid the problem by dividing it into incompatible sections and dismissing all evidence that does not fit our bias.3 This vein of thinking has helped a lot in solving many scientific problems of the everyday world,4 but there are strong indications that it ends in a deadlock.
Working in the opposite direction, there is a strong temptation to follow a confused way of thinking, like this: quantum mechanics is weird, spiritual life is also unusual, so the two are similar in this respect, or it is possible to interpret the second through the first.5 This paper presents a synthesis of quantum mechanics and the ontology based on the notion of logos, as this has been developed in ancient and Medieval Greek philosophy and theology. We acknowledge the danger just mentioned and we try to overcome it by clarifying our method as much as possible.
On the other hand, the philosophy of logos was developed in a theological context, and theology nowadays is strongly ideological. An ontological proposal for unifying knowledge, the spiritual and the scientific, has to be acceptable both to believers and to non-believers. This restriction demands a special interpretation of theology. In fact, there is a conceptual tool useful in both cases. It is the distinction that should be made between empirical data connected with physical or spiritual facts, their explanation, and, finally, the ontology that one can construct by using them; in other words, the metaphysics that one could attach to these data.
Empirical data, explanation and ontology
In physics we usually attach a set of empirical data to a theory that explains them. Such a theory is a conceptual construction that explains the causes of these data and predicts their evolution in time and/or space. The data are correlated with entities and, usually, when we know the data and the theory, we think that we know the entities and their ontological state. In our everyday life, the theory that predicts the evolution of the entities and the theory, i.e. the ontology, that describes their ontological state coincide.
In the philosophy of science, it is well known that adopting the right theory to describe the evolution of an entity, or a phenomenon, is very complicated. However, the distinction between the theory and the description of the ontological state is less obvious, and in our everyday life it is elided. The interconnection between facts and the theory that explains them is well studied in the philosophy of science, and we know well that a data set can be explained by more than one theory; we can find related examples in many scientific fields.6 In certain scientific fields, we have different theories that describe the same ontological states of an entity. A scientific theory is expressed by a mathematical formalism.7 In the physics of our everyday world, i.e. classical physics, the entities that the formalism describes are well defined. There is a rigid connection between data, formalism and entity. This is not the case in quantum physics, as we will clarify below.
The same distinction is very useful when we work at interpreting theology. Theology starts by determining the ontological status of entities, then develops a theological theory about them and connects them with empirical data. Traditional theologies work like classical physics: the interconnections between the three stages are very rigid. I have in mind traditional monotheistic theologies. In our globalized world, this attitude of mainstream monotheistic theologies has proved insufficient. The problem is that the same or very similar data, like religious and mystical experiences or miracles, are explained by different theologies in various ways, all of them claiming the same credibility with reference to the same ontological state of a fundamental conceptual entity, named God. This situation is probably hard for traditional theologies, but it allows us a very fertile approach to any theological system. We can accept the truthfulness, even the objectivity, of spiritual or mystical empirical data and distinguish them from any subjective theological system and the almost arbitrary ontology that this system produces. It is possible to accept certain parts of a theological system and introduce them into our interpretation of physical phenomena. The result could be a synthesis of an old tradition with contemporary philosophical or scientific research. Through this procedure, there could be a great gain: a unification of our understanding of the spiritual and physical worlds.
Interpreting Quantum Mechanics
It is quite important to clarify the conceptual framework of the interpretative problem of quantum mechanics. It concerns the behavior of a quantum entity in a very special condition: a superposition state.8 Such a quantum entity is a microscopic particle that we study per se, when it is not correlated with a macroscopic environment. This happens, generally speaking, when such an entity exists between two successive measurements.9 Quantum weirdness appears when such an entity interacts with a macroscopic environment at the end of the second measurement.
In common English, a quantum entity appears to be either a particle or a strange kind of wave. It appears with a different “personality”, which is supposed to depend on the structure of the measurement apparatus we use.11 It responds instantly to any change we make to the apparatus, sometimes even before we make our decision, as if it knows what we (will) have in mind.12 Somehow, it changes its condition and is transformed into a regular particle. This transformation obeys strictly defined rules that are statistical: when such a transformation occurs, we by no means know exactly what will happen. A quantum entity appears to communicate instantly with the whole universe.13 After all, there is the famous Uncertainty Principle of Heisenberg: “According to quantum mechanics, the more precisely the position (momentum) of a particle is given, the less precisely can one say what its momentum (position) is. This is (a simplistic and preliminary formulation of) the quantum mechanical uncertainty principle for position and momentum.”14 It is obvious that no entity of our world behaves in such a way.
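To make the quoted principle concrete, here is the standard textbook illustration (it is not part of this paper's argument): for a Gaussian wave packet $\psi(x) \propto e^{-x^2/4\sigma^2}$, the position and momentum spreads are

$$ \Delta x = \sigma, \qquad \Delta p = \frac{\hbar}{2\sigma}, \qquad \Delta x\,\Delta p = \frac{\hbar}{2}, $$

which saturates the general bound $\Delta x\,\Delta p \geq \hbar/2$: squeezing the packet in position ($\sigma \to 0$) necessarily spreads it in momentum, and vice versa.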
Using the distinction we have made: when we study a quantum phenomenon, we have a well-defined set of empirical data and a set of explanations for them, but we have no ontology, acceptable to physicists and philosophers alike, describing what a quantum entity is. We have behavior that is well observed, well explained and calculated by the mathematical formalism of quantum mechanics, but we cannot identify the nature of a quantum entity with any entity of our everyday world. There are various interpretations of quantum mechanics aiming to reconcile our observations of the quantum world with our observations of our everyday world.15
From our point of view, these interpretations follow two main paths. The first is to avoid, somehow, the ontological problem and focus on the explanatory part of quantum theory. These interpretations are based on what we call the Copenhagen interpretation. There are various alternatives but, in fact, it is still impossible to avoid ontology. They introduce a number of principles aiming to explain the experimental data. The most famous among them is the Complementarity Principle, proposed by Niels Bohr,16 or the Projection Postulate proposed by von Neumann.17 Modal interpretations reject the rigid ‘eigenstate–eigenvalue link’,18 and so on. In any case, those principles express an ontology which is radically different from our everyday understanding of our world.
The second path is more radical and develops a new ontology describing the whole physical world. This path is followed by Bohmian mechanics,19 Many Worlds interpretations,20 and collapse theories.21 They explicitly introduce new ontology, either at the quantum level or at a cosmic level. Physicists do not like the concept of metaphysics; any quantum interpretation is strictly and necessarily metaphysical, but this is not how physicists like to think. They approach the problem through mathematics and develop their ontology by giving ontological meaning to certain parts of the quantum mechanical formalism. They achieve, more or less, a self-consistent explanation, but none of them is clearly preferable, because their ontologies are not integrated with the rest of the experience of human civilization.
Confronting this problem, we propose an alternative approach. Our point of departure is not the formalism, but an already developed ontology. This is an ontology still based on observation of the physical world, but one that uses different methods than contemporary science. Its method is not completely analytical; it is based on a combination of intuitive, conceptual and analytical approaches to problems. It is the way a basketball player computes and makes a shot, the way the ancient Greeks built the Acropolis. Ancient and Medieval Greeks developed the ontology of logos to communicate their understanding of how the physical world works. This usage of the notion of logos usually passes unnoticed, as it is overshadowed by the intense use of the divine Logos in Christian theology. But in the background of theological conflicts, the ontology of the logos of a natural being was developed into a complete system that was able to describe both the spiritual world and the physical world of the senses.
The ontology of logos
It is usual to say that the father of the concept of logos (in Greek λόγος, translated into English as “word”, but we will prefer to use the form logos and its plural logoi) is the Greek philosopher Heraclitus of Ephesus (535–475 BC). It is hard to believe that a single person could conceive such a revolutionary thought, for those times, as the following:
“This world-order [cosmos], the same of all, no neither god nor man did create, but it ever was and is and will be: ever living fire, kindling in measures and being quenched in measures.” 22
Heraclitus and other pre-Socratic philosophers expelled divine action from the world and formulated for the universe a natural way of being and evolving. Heraclitus went a step beyond his predecessors. Among other things, he first made a basic distinction between the “stuff” the universe is made of and the principle that controls the way this stuff evolves and beings come into existence. This stuff was fire, and the principle was Logos.23 Everything comes to be according to Logos and, if we speak in contemporary terms, Logos includes all the information that controls the life and evolution of all beings. We can call this information “active information” because it is strongly connected with beings and constitutes them; it makes them exist. As far as Heraclitus is concerned, fire and logos are not divine; they are somehow material.24
Soon after the stuff of the universe was separated from the formatting principle, the latter became divine, immaterial, and even constituted a completely separate world: Plato’s world of ideas. Ideas were not only separated from beings; they had a more analytical structure. There was not one abstract idea controlling the world; there were many ideas, each one controlling all the similar beings. Plato’s system was more complicated than Heraclitus’s, but still not complicated enough. The emphasis was put on the separation and superiority of the divine world of ideas over the physical world, the separation of the principle that controls the universe from the universe itself.25
The Stoics rejoined the controlling principle with the physical world, reusing the concept of logos for their ontology. Logos is inside beings, and it is divine even though it is material. Beings and God are completely united; this was typical pantheism.
“In accord with this ontology, the Stoics, like the Epicureans, make God material. But while the Epicureans think the gods are too busy being blessed and happy to be bothered with the governance of the universe, the Stoic God is immanent throughout the whole of creation and directs its development down to the smallest detail. God is identical with one of the two ungenerated and indestructible first principles (archai) of the universe. One principle is matter which they regard as utterly unqualified and inert. It is that which is acted upon. God is identified with an eternal reason (logos, Diog. Laert. 44B) or intelligent designing fire (Aetius, 46A) which structures matter in accordance with Its plan.” 26
The major contribution to the evolution of the concept of logos was made by Philo of Alexandria (20 BC – 50 AD). He and his contemporary Jewish theologians tried to harmonize Jewish theology with Greek philosophy. He combined the concept of the Jewish God with the concepts of logos and ideas. He joined logos with ideas and distinguished Logos, as the principle of all beings, from the idea-logos, the ontological principle of every separate being. Logos was connected with God and became the ultimate power of God, the Son of God. Ideas were renamed logoi and were the ontological background of every being. Logoi were pictures of beings, established in the mind of God, and Logos created beings according to these logoi.27
“For the world has been created, and has by all means derived its existence from some extraneous cause. But the word (logos) itself of the Creator is the seal by which each of existing things is invested with form. In accordance with which fact perfect species also does from the very beginning follow things when created, as being an impression and image of the perfect word.” 28
Logos is expressed through logoi, and logoi are unified in Logos. From then on, the ontology of logos follows this scheme: Logos is the ontological background of logoi, which are the ontological background of beings. The dissociation of Logos into logoi was developed by Christian theology. Logos became the second person of the Holy Trinity and monopolized theologians’ interest. Nevertheless, they used the concept of logos quite often when they tried to describe God’s connection with beings. That was a major problem for ancient Christian theology, which confronted the problem of evil as a result of the tight connection of Creator with creation.
Origen (185–254 AD) did not use logoi to solve the problem of evil; he preferred the concept of souls,30 but he definitely affirmed that, for every being, there is its logos, and he associated the logoi of beings with epistemology. He taught that the human mind can “see” the logos of a being through “φυσική θεωρία”, which can be translated as natural contemplation. Heraclitus first associated logos with a certain state of the human mind, but it was Origen and his pupil Evagrios Pontikos who developed in detail the interaction between the state of the human mind and the “vision” of the logoi of beings.
The theory of logoi of Maximus the Confessor
Maximus the Confessor (580–662 AD) was the Christian theologian who made the most extensive use of the concept of logos in his work. We owe him the detailed and subtle record of the use of the logos of a natural being. He did not make any radical contribution to it, but he pushed to their limits the various properties of the logoi of beings that had previously been introduced, as he worked to develop his theological framework. He uses the concept of the logos of a natural being for two major goals: the first was to correct the theology of Origen,31 and the second, to express the ascetical and mystical experience of religious life.32
Origen’s problem is connected with the problem of evil. Origen taught that the Logos-Creator created a spiritual world that consisted of souls. This world was (almost) perfect, but somehow the souls got bored and tried to rebel against the Creator, who punished them by imprisoning them in bodies and matter, and so all the beings we see were produced. Logos was embodied in Jesus Christ to give mankind a second chance and, finally, at the end of time, all beings will recover their spiritual nature.
There were many problems in this scheme of Origen’s. The most important was a confusion between Creator and creation, because in this scheme God and the souls are co-eternal. The radical distinction between God and World is the strongest characteristic of Judeo-Christian theology. Another characteristic is that God is perfect and everything He does (must be) perfect. The world we observe is not perfect, so there is a problem. Origen tried to solve this problem with the teaching of the fall of souls, but confused Creator with creation. To avoid these problems, Maximus uses the ancient distinction between the stuff that beings are made of and the principle or pattern that shapes this stuff. So he used the concept of the logoi that govern the way beings are made and evolve.33
If logoi constitute a world outside God, there must have been a time when they did not exist, so we must assume that there was a change in the state of God: before the creation of the logoi God was not a Creator, and after that time He became one. That was unacceptable for Maximus and his contemporaries’ vision of God. So he declared that logoi are God’s wills, which are co-eternal in God’s mind.34 At a point that is timeless, God created the beginning of time, and logoi started to be expressed as beings. With this scheme, God is always a Creator and the material creation is not co-eternal. But the problem of evil remains.
Logoi and beings are very strongly correlated, and logoi are very strongly joined with Logos. Logoi and beings interact and continuously evolve, and the whole creation is moving toward a certain point, which is Logos.35 So Logos is simultaneously the beginning and the end of the motion and evolution of all beings. Logos, as the end of evolution, offers a kind of restoration of everything, and Maximus believed that the problem of evil was thereby solved.36 However, it is not, because there is still a lot of suffering that cannot be explained. Maximus offered an explanation: all that we suffer is given by God to make the spiritual world necessary for us.37 A medieval person could accept that, but such an image of how God acts is hardly acceptable to a contemporary person.
Maximus supported his theological scheme by taking advantage of the ontology of logos as it had been developed by previous philosophers and theologians. In doing so, he gave us many details about it. He declared that the logos of every being is the ontological background of all of its physical properties.38 He described the hierarchical levels that exist in every logos, a scheme that we call the tree structure of the logoi of beings. More specifically, a logos that is the result of a synthesis of other, partial logoi is the ontological background of the synthesis of those partial logoi; it controls them as they evolve to constitute it.39 This property of logoi was very important for him, because he believed that the power that drives the evolution and motion of beings is not at the beginning of history, but at the end. For Maximus it is God-Logos who attracts beings to Him and makes them move.
Maximus understood logoi as God’s wills inside His mind, but he also believed that the human mind is capable of “viewing” them through natural contemplation.40 Ascetical life refines the human mind, and it passes from natural contemplation to mystical contemplation,41 which assures that logoi have a real existence. Maximus grounded his “logical” realism in ascetical experience. This is quite important for us, because it allows us to use the distinction between facts, explanation and ontology that we mentioned previously. We can accept the empirical core of Maximus’s theory and interpret the explanation and the ontology differently.
Such an interpretation of Maximus’s teachings leads us to summarize as follows: logos is a hidden pattern that controls beings and reality; logos, in the original meaning introduced by Heraclitus, is information that is active, that is expressed as a being. Logoi are not concepts; they are real; information has self-existence. Logoi (information) have inner structure; they are organized at hierarchical levels, and these levels make up the tree of logoi, the ontological tree of our universe, which is constructed from bottom to top. Its top is at the bottom; it is its foundation. The top of this tree supports the whole tree; it is an inverted tree. For Maximus, the top of the tree, which supports it, is God-Logos, the basis, the beginning and the end of everything.42
This property has important physical consequences. The logos of a being that is constituted by other beings controls the logoi of those beings and makes them constitute it. The cause of a fact can lie in the future; in theology we call this eschatology. This can be understood only if we interpret Maximus’s doctrine that logoi are situated in God’s mind. Orthodox theology characterizes logoi as “aktistoi” (uncreated) because they co-exist with God. That means that they are not simply eternal. Eternal is something that remains the same as time passes; logoi do not remain the same, they evolve, but they are outside time and space. For logoi, “before” and “after” have no meaning. A composite logos controls the logoi which constitute it. It is the cause of their evolution, but when it is expressed in space-time, the (composite) entity that it controls appears in time after its components. Causality is independent of the arrow of time.
Every being is attached to its logos. It is more accurate to say that a being is composite: it is logos-information expressed as a (material) being in space-time. A logos interacts with other logoi, but this life of logoi takes place outside time and space. Logoi have an inner structure, which appears inverted from a point of view inside space-time. The life of logoi gives beings special properties that are revealed to the human mind under special conditions. A human mind that is properly exercised can feel all this. Throughout human civilization, there is evidence of a deep feeling of an inner side to all beings. This experience is interpreted in Medieval Greek philosophy by an ontology based on logos. This ontology was strongly correlated with Christian theology, but the ontology of logos pre-exists Christianity. It is a common denominator of the whole of Ancient and Medieval Greek philosophy. If it is necessary to introduce metaphysics into physics, the ontology of logos is an appropriate candidate.
The Hypothesis of logical quanta
To visualize the ontology of logos, we used the scheme of an inverted ontological tree. As we go up, we find the logoi of the fundamental elements of our world. We find the logoi of elementary particles, so we can speak of the logos of a quantum particle. Such a particle is an entity that is not correlated with a macroscopic environment. The Hypothesis of Logical Quanta (HLQ) says that a quantum particle is a logos disconnected from the ontological tree; it is a pure logos not connected with an entity that exists in space-time; it is pure information which has not yet been expressed in space-time. Such a pure logos is a potential entity. HLQ answers the basic question of any interpretation of quantum mechanics, namely what a quantum particle is, and the answer is that it is a logos: a quantum particle is pure, as yet unexpressed, information.
Quantum entities, as logoi, have the properties of logoi. They “exist” in a special space, which we can call “logical space”, with no spatial or temporal coordinates. Even so, they evolve and interact with other logoi, both with pure logoi (other quantum entities) and with logoi connected with beings (macroscopic entities). The projection of a pure logos onto space-time is expressed by the Schrödinger equation.43 The Schrödinger equation does not describe the evolution of a “real” entity, but the projection onto the “real” world of the timeless evolution of a logical entity. It is important to emphasize that logical space and space-time are rigidly connected and that the ontological cause lies in logical space. The ontological background of every physical entity is its logos. Every entity has its own logos, and as every entity is constituted by other entities, every logos is a synthesis of other logoi. We can say that beings float on a sea of logoi; they are the visible tip of an “iceberg”.
With the conceptual equipment that HLQ gives us, we can interpret various quantum mechanical issues. First, we can explain the collapse of the wave function. This is equivalent to the question of what happens, and why, when a quantum entity ceases to be in superposition and changes into a classical entity. HLQ explains that this happens when a quantum entity-logos is connected to the ontological tree. A composite logos controls the logoi that compose it. When a “free” logos is connected to the ontological tree, it is no longer free and comes under the action of a composite logos. This action causes the collapse of the wave function. Because of this action, a pure logos is expressed as an entity and is correlated with the composite macroscopic entity, the measurement apparatus. This statement entails a phase transition that happens as a quantum entity is correlated with a macroscopic entity.
Wave-particle duality is well understood if we consider that the logoi of every quantum entity (more accurately, of every elementary particle, however massive it may be) are all together within the same non-dimensional space, a space without spatial and temporal coordinates, and constitute a “logical fluid” with definite wave properties. In the two-slit experiment, there may be only one entity present at a time, but the logoi of all the particles are all together, so the entity behaves like a wave. Our interpretation contradicts the complementarity principle: in our case an electron is neither wave nor particle; it is a logos that interacts with the logoi of the apparatus and appears to be either a particle, or a wave, or even both wave and particle.
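The standard amplitude arithmetic behind this paragraph, given here only as an illustration of the formalism the paper refers to (it is not a derivation from HLQ): if $a_1 = e^{i\varphi_1}$ and $a_2 = e^{i\varphi_2}$ are the complex amplitudes for passing through the two slits, the detection probability on the screen is

$$ P \propto |a_1 + a_2|^2 = 2 + 2\cos(\varphi_1 - \varphi_2), $$

which oscillates between 0 and 4 as the phase difference varies across the screen, whereas adding probabilities classically would give the flat $|a_1|^2 + |a_2|^2 = 2$. The interference pattern persists even when only one particle is in the apparatus at a time.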
Non-locality and delayed choice, or non-destructive measurements, are easily understood given that logoi have no spatial or temporal coordinates. At every instant, a quantum entity, through its logos, communicates with every single part of the experimental apparatus and responds instantly to anything that happens in it. From the point of view of an observer standing in space-time, it looks as if the quantum entity knows what will happen, or as if the observer’s action changes the past. Entangled particles are particles that have the same logos or, better, particles whose logoi are tightly connected.
As far as we can see, HLQ cannot explain the values of the probabilities that we obtain from the solution of the Schrödinger equation. But no other interpretation does either. We can only comment that, if the wave function ψ were a real function and quantum entities were localized in phase space, it would be hard to see how the diversity and complexity of our world could arise from the quantum world. The probabilities of the Schrödinger equation and the Uncertainty Principle loosen the connection between a quantum entity and the information included in its logos. All elementary particles are indistinguishable and their logoi include the same information; it is necessary, for our world to exist, that this information be expressed in various ways.
HLQ arises from a metaphysical background, but it is no more metaphysical than other interpretations. One may notice that every particular element of HLQ exists in similar form in other interpretations. Bohm’s dynamics and quantum potential have properties in common with logoi, but there is a very important difference: logoi are a characteristic of every being, not only of quantum entities. Logoi are not a set of hidden variables; they include every variable. Modal interpretations give the primary role to the apparatus, even if they offer no ontology for their claims. Other interpretations suggest actions which reverse the arrow of time,44 and so on.
There are strong indications supporting HLQ from other scientific fields. Many physicists suggest that information is crucial for the structure of the Universe.45 There is also the Holographic Principle, which potentially gives a mathematical meaning to “logical dimensions”.46 The creative role that Ilya Prigogine gives to the arrow of time, and the concept of emergence,47 which is very popular nowadays, have a lot in common with the action of logos.
Most supportive of our hypothesis is the work of Roland Omnès, who concludes his analysis with the necessity of a distinction between reality and logos, the formatting principle, though he says: “The notion of logos is obviously insufficiently developed and is rather questionable. We shall see however that it offers a possible way out of several problems.”48 I think that Omnès was not familiar with the complete ontology of logos as it was developed by Medieval Greek philosophy.
Human mind and Unification of Knowledge
The greatest merit of HLQ, as developed by ancient philosophers and finally stated by Maximus, is the claim that the human mind is created, or has evolved, with the ability to “see” the logoi of beings. Reality has many levels of organization and many points of view. HLQ suggests that all levels and all perspectives of reality are based on information. This information is not the kind that the contemporary science of information studies. As Anton Zeilinger proved, there is information that cannot be expressed in bits.49 This is a strong indication that there is information of a different kind than the usual kind we know in our everyday life. The information we are talking about has inner structure and is self-existent. These points drive us to our next step of understanding.
The information of a composite logos is more than the sum of the partial information included in the logoi that compose it. This is a result of the quantum mechanical formalism, but it can be extended to the logoi of macroscopic entities.50 A composite logos has new functionality and new relations to other logoi of beings. As we move down the ontological tree, from one level to the next, an excess of active information is always produced. The more complicated a being is, the more information excess it includes. Speaking of the human brain, the most complicated structure in the known universe, we can imagine the information excess it possesses. This excess could be the cause of whatever we call free will.
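The quantum mechanical result alluded to here can be stated precisely; the following is the standard illustration, not a new claim of this paper: for an entangled pair in the Bell state

$$ |\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), $$

the von Neumann entropy $S(\rho) = -\mathrm{Tr}(\rho \log_2 \rho)$ of the whole is $S(\rho_{AB}) = 0$ (the pair is in a definite pure state), while each part alone has $S(\rho_A) = S(\rho_B) = 1$ bit (maximal uncertainty). The state of the whole is thus not reducible to a list of the states of its parts.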
Considering the above, it seems very tenable to suppose that the mind is the result of the “logical structure” of the brain, the logoi of the entities of the physical structure of the brain. Memories could be stored and processed in it. It is not the biochemical structure of the brain that stores and processes information and produces the mind, but the structure of logoi beneath it. Connections and interactions between the logoi of neurons are more stable than connections between neurons. Processing of information could be carried out by the logoi of whole regions of the brain. This model is flexible enough to explain the way mind arises from brain. HLQ shows us new directions for research in this field: directions grounded in the physical structure of the brain, but not restricted by it.
If this model is valid, it follows that the mind has access to “logical space”. Whatever we call spiritual life or activity takes place in it. This scheme, if developed, can give us answers about the nature of mathematics, intuition, art, and every phenomenon we characterize as spiritual. We can develop a unified approach to various aspects of human civilization based on a certain interpretation of quantum mechanics. That does not mean that spiritual phenomena have a quantum mechanical structure or explanation, as is often said. HLQ gives a special role to information. This role opens new ways of understanding the way the brain works. These ways need to be explored with the scientific method, to find out what is really going on.
Science and religion
HLQ offers us an opportunity to understand religion scientifically, without denying its experiential reality. It allows us to distinguish the experiential reality of God from His ontological reality. Traditional theologies interpret God in terms of Creation. God is a concept that explains the existence of the world and the deep feelings and facts of communication with Him. Every civilization develops an explanatory model of God, based on its knowledge of the world and of man. This model is taken to be an ontological reality and evolves into a doctrine believed by the particular civilization.
Nowadays, the reality of the world and of human nature has proved very complicated and contradictory. All these models of God transfer these contradictions to God’s nature. This is the well-known problem of evil. Religions cannot overcome it in a rational way, and they are driven to a logical deadlock. This deadlock drives contemporary people to reject the religious edifice, leaving a serious psychological emptiness. HLQ helps us construct a model of God that is logically consistent and includes religious experiences, accepting them as real.
As we noted previously, causality lies in logical space and is outside time. Causality follows the arrow of time only phenomenologically and has nothing to do with it. The necessity of a Creator is due only to human perception; the question of who created the world is pointless. Medieval theologians thought that logoi are inside God’s mind. We can unite God with His mind, the logical space. This is not a complete answer to the question of what or who God is, but it is flexible enough and gives us the possibility of understanding religious phenomena, like prayer and mystical experience.
The human mind has the ability to access logical space, in other words, God’s mind. This ability, like all human abilities, can be cultivated and developed, and can produce strong feelings in the person who practices it. These feelings produce the mystical vein of every religion. The act of accessing logical space is understood as a special kind of communication, and it is described as prayer. Every religion and civilization expresses all these empirical data with its own theological and philosophical concepts. It is not hard to understand that a person with a special gift can develop the ability to communicate or interact through logical space with other persons, or with past or future facts. These and many other quite unusual facts can be explained with the aid of HLQ, without denying our naturalistic view of the world.
HLQ is a proposal that is strictly defined within the field of quantum mechanics. It is indisputably a metaphysical one, but there is no self-consistent way to avoid metaphysics if one aims to face the question of what a quantum entity is. By accepting HLQ, we grasp a powerful tool to explain the emergence of life and mind. We give information the status of matter and energy, but we need a formalism to describe its inner structure. If we achieve this, we could construct a proper model of the connection of mind with brain. At this time, HLQ is a path that needs to be explored in the various directions that lie before us.
Afshar, S. S., “Sharp complementary wave and particle behaviours in the same welcher weg experiment”, IRIMS preprint, May 2003.
Albert, David, Quantum mechanics and Experience, Harvard, 1994
Atmanspacher, Harald and Primas, Hans (2002), “Epistemic and Ontic Quantum Realities”. PhilSci Archive, ID cod 938
Barrett, Jeffrey, “Everett’s Relative-State Formulation of Quantum Mechanics”, Stanford Encyclopedia of Philosophy
Barrow, J., P. Davies, Jr. Harper, Science Ultimate Reality, Quantum Theory, Cosmology and Complexity, Cambridge University Press, 2004
von Baeyer, Hans Christian, Information: The New Language of Science, Phoenix, Orion Books, 2004
Bohr, Niels (1935), “Quantum Mechanics and physical reality”, Quantum Theory and Measurement, Princeton Series in Physics, Princeton, New Jersey 1983, p. 144
Borgen, Peder, Philo of Alexandria, Brill Academic Publishers, Brill 1996
Cartwright, Nancy, How the Laws of Physics Lie, Clarendon Press, 1983
Clauser, John F., “De Broglie wave interference of small rocks and live viruses”, Experimental Metaphysics: Quantum Mechanical Studies for Abner Shimony, Kluwer Academic Publishers, Vol. 193, p. 1
Cramer, John G., “An Overview of the Transactional Interpretation of Quantum Mechanics”, International Journal of Theoretical Physics 27, 227 (1988)
Cramer, John G., “Velocity Reversal and the Arrow of Time”, published in Foundations of Physics 18, 1205 (1988)
Cushing, James T. Philosophical concepts in Physics, The historical relation between philosophy and scientific theories, Cambridge University Press, 1998
Durr, D. S. Goldstein and N. Zanghi, “Bohmian mechanics and the wave Function”, Experimental metaphysics, Quantum Mechanical Studies for Abner Shimony, Kluwer Academic Publishers, p. 32
Egan, Harvey D., An Anthology of Christian Mysticism, Pueblo Book, Liturgical Press, 1991
Einstein, Podolsky, Rosen (1935), “Can quantum-mechanical description of physical reality be considered complete?”, Quantum Theory and Measurement, Princeton Series in Physics, Princeton, New Jersey 1983, p. 138
Everett, H., 1957a, On the Foundations of Quantum Mechanics, thesis submitted to Princeton University, March 1, 1957
Feynman, Richard, QED, Princeton University Press, 1985, translated in Greek, Trochalia 1988
Ghirardi, Giancarlo, “Collapse Theories”, Stanford Encyclopedia of Philosophy
Ghose, Partha, Testing Quantum Mechanics on New Ground, Cambridge University Press, 1999
Hawking, Stephen, Roger Penrose, The nature of Space and Time, Princeton University Press, 1996
Heisenberg, Werner, Encounters with Einstein, and other essays on people, places and particles, Seabury Press, 1983
Heisenberg, Werner, Physic and Philosophy, Penguin Classic3, Great Britain 2000
Hey, Tony and Patrick Walters, The Quantum Universe, Cambridge University Press, 1987
Jammer, M., The Philosophy of Quantum Mechanics, New York, Wiley 1974
Kraut, Richard, The Cambridge Companion to Plato, Cambridge University Press
Nadeau, Robert and Menas Kafatos, The Non-local Universe, Oxford University Press, 1999
Laughlin, Robert B., A Different Universe, Reinventing Physics from the Bottom Down, Basic Books, New York, 2005
Long, A.A., The Cambridge Companion to Early Greek Philosophy, Cambridge University Press, 1999
Louth, Andrew, Maximus the Confessor, Routledge 1996
Messiah, Albert, Quantum Mechanics, Dover Publications, Inc. Mineola, New York, 1999
Migne, Patrologia Graeca volume PG 90 and 91
Minkel, J. R., “The hollow universe”, New Scientist, issue 2340, 27 April 2002, page 22
Minkel, J.R., “The top-down Universe”, New Scientist, issue 2355, 10 August 2002, page 28
Moore, Edward, Origen of Alexandria and St. Maximus the Confessor, Boca Raton, Florida 2005
Mouraviev, Serge, “The Hidden Patterns of the Logos”, The Philosophy of Logos, Volume 1 Athens 1996, p. 148
Murchu, Diarmuid o’, Quantum Theology, Spiritual Implications of the New Physics, The Crossroad Publishing Company, New York, 2004
Omnès, Roland, Interpretation of Quantum Mechanics, Princeton University Press, 1994,
Omnès, Roland, Quantum Philosophy: Understanding and Interpreting Contemporary Science, Princeton University Press, 1999
Penrose, Roger, Abner Shimony, Nancy Cartwright, Stephen Hawking, The Large, the Small and the Human Mind, Cambridge University Press, 1997
Penrose, Roger, The Road to Reality, Jonathan Cape London, 2004
Penrose, Roger, The Shadows of the Mind, Oxford University Press, 1994
Pierris, A. L., “Logos as ontological principle of reality”, The Philosophy of Logos, International Center for Greek Philosophy and Culture, Athens 1996, Volume II
Powers, Jonathan, Philosophy and New Physics, Routledge 1991
Redhead, Michel, From Physics to Metaphysics, Cambridge University Press, 1995
Runia, D.T., Philo and the Church Fathers: A Collection of Papers, Brill Academic Publishers, Brill 1995
Runia, David T., “Philo, Alexandrian and Jew,” Idem, Exegesis and Philosophy: Studies on Philo of Alexandria (Variorum, Aldershot, 1990)
Sandywell, Barry, Presocratic Reflexivity: Logological Investigations, Routledge, 1996
Selleri, Franco, Die Debatte um die Quantentheorie (Facetten der Physik), F. Vieweg (1983), Translated in Greek, Gutenberg, 1986
Sherwood, Polycarp, St. Maximus the Confessor, The ascetic life, the four centuries on charity, Newman Press, N.Y.
Stonier, Tom, Information and the Internal Structure of the Universe, An Exploration into Information Physics, Springer-Verlag, 1990
Talbot, O Michael, The Holographic Universe, Harper Collins Publishers, London 1996
Tanona, Scott, “Idealization and Formalism in Bohr’s Approach to Quantum Mechanics”, Philosophy of Science, Vol. 71, number 5 p. 683
Thunberg, Lars, Microcosm and Mediator, The Theological Anthropology of Maximus the Confessor, second edition, Open Court, Chicago and La Salle, Illinois, 1995
von Balthasar, Hans Urs, Cosmic Liturgy: The Universe According to Maximus the Confessor, Communio/Ignatius Press, translated from the German by Brian E. Daley, S.J., 2003
Wheeler, John Archibald and Wojciech Zurek, Quantum Theory and Measurement, Princeton Series in Physics, Princeton, New Jersey 1983
1 Roger Penrose 1994, p. 305, ( g.e.)
2 Roland Omnès 1999, p. 161
3 Alison George, Lone voices special: Take nobody’s word for it, An interview with the Nobel winner Brian Josephson, issue 2581 of New Scientist magazine, 09 December 2006, page 56-57
4 Physics and the Real World, Ellis, George F R. This paper was prepared for “Science and Religion: Global Perspectives”, June 4-8, 2005, in Philadelphia, PA, USA,
5 There are many such examples, like Diarmuid o’ Murchu, 2004
6 In quantum mechanics we talk about and distinguish formalism and interpretation. James T. Cushing, 1998 p. 439 (g.e.)
7 Roland Omnès 1999, p. 124.
8 David Albert, 1992, p. 30
9 Werner Heisenberg, 2000, p. 14
10 Jenann Ismael, “Quantum Mechanics”, Online Stanford Encyclopedia of Philosophy,
11 This effect is well shown in the two-slit experiment, Penrose, 2004, p. 504, and a very good description at Tony Hey and Patrick Walters 1987 p. 22 (g.e.).
12 It is the known Wheeler’s delayed-choice experiment, Robert Nadeau and Menas Kafatos 1999, 50 and Penrose 2004, p. 512.
13 Detailed description Penrose 1994, p.309 (g.e.)
14 Jan Hilgevoord and Jos Uffink, The Uncertainty Principle, First published Mon Oct 8, 2001; substantive revision Mon Jul 3, 2006, SEP,
15 Roland Omnès 1999, p. 149.
16 Jonathan Powers, 1995, p 177 (g.e.)
17 David Albert, 1994, p. 80.
18 Michael Dickson, 1998, p. 88
19 David Albert, 1994, p.135, Franco Selleri, 1983 p. 59 (g.e.)
20 Lev Vaidman, "Many-Worlds Interpretation", Stanford Encyclopedia of Philosophy, First published Sun 24 Mar, 2002
21 Giancarlo Ghirardi, “Collapse Theories”, Stanford Encyclopedia of Philosophy, First published Thu 7 Mar, 2002,
22 Greek text and English translation can be found at B30
23 Edward Hussey, Heraclitus, A. A. Long 1999, p. 154 (g. e.) The most characteristic text is B1; "Though this Word (logos) is true evermore, yet men are as unable to understand it when they hear it for the first time as before they have heard it at all. For, though all things come to pass in accordance with this Word (logos), men seem as if they had no experience of them, when they make trial of words and deeds such as I set forth, dividing each thing according to its kind and showing how it is what it is. But other men know not what they are doing when awake, even as they forget what they do in sleep".
24 Edward Hussey, Heraclitus, A. A. Long 1999. p. 164 (g.e.)
25 Richard Kraut, "Plato", Stanford Encyclopedia of Philosophy, first published Sat 20 Mar, 2004. It deserves mention that nowadays it is controversial whether Plato endorses this understanding of his writings. Mark Balaguer, "Platonism in Metaphysics", first published Wed 12 May, 2004, Stanford Encyclopedia of Philosophy
26 Dirk Baltzly, “Stoicism”, Stanford Encyclopedia of Philosophy, first published Mon Apr 15, 1996; substantive revision Mon Dec 13, 2004
27 David T. Runia, “Philo, Alexandrian and Jew,” Idem, Exegesis and Philosophy: Studies on Philo of Alexandria (Variorum, Aldershot, 1990),
28 ON FLIGHT AND FINDING 12.1, translated by Charles Duke Yonge London, H. G. Bohn, 1854-1890.
30 Edward Moore, “Origen of Alexandria” (185 – 254 A.D.), The Internet Encyclopedia of Philosophy,
31 Hans Urs von Balthasar, 2003, p. 127.
32 Polycarp Sherwood, p. 81.
33 This scheme is known as “double creation”, Lars Thunberg, 1995, p. 151.
34 Lars Thunberg, 1995, p. 64.
35 Hans Urs von Balthasar, 2003, p. 135.
36 Lars Thunberg, 1995, p 145.
37 PG 91, 1104A
38 PG 91 1229
39 PG 90, 447
40 PG 91, 1228
41 PG 90, 1133 A
42 It is a reverse of Origen's myth; Hans Urs von Balthasar, 2003, p. 133 and p. 154.
43 Roger Penrose 2004, p. 498
44 John G. Cramer, 1988
45 Tom Stonier, 1990
46 By Amanda Gefter, “The elephant and the event horizon”, From issue 2575 of New Scientist magazine, 26 October 2006, page 36-39,
47 Robert Nadeau and Menas Kafatos, 1999, p. 113.
48 Roland Omnès, 1994, p. 527.
49 Anton Zeilinger, University of Vienna, Why The Quantum? “It” from “Bit”? A participatory universe? Three far-reaching challenges from John Archibald Wheeler and their relation to experiment, Science Ultimate Reality, Quantum Theory, Cosmology and Complexity, p. 211.
50 Michael Redhead, 1995, p. 51. (g.e.)
Klein–Gordon Equation
Klein–Gordon Equation - Derivation
... The non-relativistic equation for the energy of a free particle is E = p²/2m. By quantizing this, we get the non-relativistic Schrödinger equation for a free particle, where p̂ = −iħ∇ is the momentum operator (∇ being the del operator) ... The Schrödinger equation suffers from not being relativistically covariant, meaning it does not take into account Einstein's special relativity ... Inserting the quantum-mechanical operators for momentum and energy into the relativistic relation E = √(p²c² + m²c⁴) yields an equation that, however, is cumbersome to work with, because the differential operator cannot be evaluated while under the square root ...
Klein–Gordon Equation
... The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic version of the Schrödinger equation ... It is the equation of motion of a quantum scalar or pseudoscalar field, a field whose quanta are spinless particles ... It cannot be straightforwardly interpreted as a Schrödinger equation for a quantum state, because it is second order in time and because it does not admit a positive definite conserved probability density ...
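For reference, the standard free-field form of the Klein–Gordon equation is

\frac{1}{c^2}\frac{\partial^2\psi}{\partial t^2} - \nabla^2\psi + \frac{m^2 c^2}{\hbar^2}\,\psi = 0

which, under the usual operator substitutions, is just the relativistic energy-momentum relation E² = p²c² + m²c⁴.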
Timeline of Structural Theory
"The nature of the chemical bond is the problem at the heart of all chemistry"
Bryce L. Crawford Jr.
In large part the science of chemistry is concerned with modeling the chemical structure of matter and understanding the nature of the chemical bond. However, the chemical bond sits right on the boundary between the classical and quantum mechanical worlds and academics, professional chemists and teachers liberally pick ideas, concepts and models from both. To understand the nature of chemical bonding it is necessary to see how the various ideas have developed over the past 200 years. This page introduces the main theoretical approaches to understanding chemical structure and reactivity, places them in historical context and considers them with reference to each other.
When chemists think about the structure and behaviour of a substance like ammonia, NH3, they are influenced by experimental (empirical) evidence, physical theory, the history of science, educational dogma and philosophical position.
There are three general approaches to understanding a substance such as ammonia, NH3:
Empirical study is concerned with observation and experiment rather than explanation and theory, but theory may guide experiment and experiments often lead to the development of new theory.
Classical Theory & Newtonian Mechanics model the chemical world in terms of scaled down, ideal macroscopic objects: colliding spheres, bending rods, rotating joints, etc.
Quantum Mechanical theories are concerned with low mass, fast moving photons and electrons that exhibit wave-particle duality.
Chemistry sits right on the boundary between the quantum and classical worlds: the chemical bond is a quantum mechanical construct but the behaviour of molecular entities can often and conveniently be described in classical terms. As a result we have two entirely different types of model of chemical structure, bonding and reactivity.
The diagram below shows a timeline of the development of the main theoretical chemical structure and reactivity ideas over the last 200 years:
The red arrows in the diagram are used to represent the act of "conversion from quantum to classical". They are used twice, between the Bohr atom and Lewis theory, and between VB theory and VSEPR theory. On both occasions there is a theoretical leap of faith because the impression is given that Lewis theory and VSEPR theory are based upon quantum theory, when they are not.
Timelines: A Proviso
One reader of this page has pointed out that the irresistible temptation to show the logical development of a subject using timelines does not represent the actual history, which is never neat & tidy.
Read more on John Denker's website.
1803: John Dalton argued – on the 21st of October 1803 to the Manchester Literary and Philosophical Society in Manchester, and I am writing this text on the 200th anniversary and less than a mile away – for the atomic theory originally proposed by the Greeks. Dalton defined an atom as the smallest part of a substance that can participate in a chemical reaction. He proposed that elements are composed of identical atoms and that elements combine in definite proportions, stoichiometry. Dalton also produced an early table of atomic weights.
1869: Mendeleev's Tabelle I, the first plausible periodic table, was published. It was constructed using element-discovery, stoichiometric and periodicity data: some 35 new elements had been discovered since 1800.
The success of this first version can be attributed to the gaps which Mendeleev correctly predicted would contain undiscovered elements, and he predicted their properties.
To the modern eye, the biggest omissions are the Group 18 rare gases (He, Ne, Ar, Kr & Xe) and that only a small number of f-block elements are shown.
Of more importance, however, is that the elements are arranged by mass rather than atomic number, a concept that had yet to be discovered, so Mendeleev can be forgiven.
The 1869 Tabelle I is a quite remarkable construct:
1896: Radioactivity is a window into atomic structure.
In the year 1904, JJ Thomson proposed that the atom had a "plum pudding" structure, with the negative electrons in circular arrays – the plums – embedded in a spherical pudding of positive charge with mass evenly distributed. It was the study of radioactivity, in particular alpha radiation, that enabled Rutherford to develop his experiments to probe atomic structure.
In the March 1904 edition of the Philosophical Magazine, JJ Thomson writes: "... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ..."
Diagram from Wikipedia & Charles Baily's presentation: here. The lower depictions are incorrect, in the sense that they are not true to the model proposed by Thomson in 1904.
Elements and Atoms: Case Studies in the Development of Chemistry
Carmen Giunta of Le Moyne College Department of Chemistry has collected many of the original papers plus commentary dealing with eighteenth and nineteenth century science in a web book called Elements and Atoms: Case Studies in the Development of Chemistry. This web resource is highly recommended:
1900: Planck's Quantum Idea was developed as a way of explaining black body radiation and the associated ultraviolet catastrophe, by proposing that energy comes in small packets, or quanta.
Planck's constant is: h = 6.626 × 10^-34 J s
1905: Einstein and the Photoelectric Effect It was known from experiment that metals emit electrons when exposed to light (the system has to be in a vacuum); however, it could not be explained why the emission of electrons depended upon the wavelength of the light in the way that it did. In 1905, Einstein showed that if light consisted of particle-like quantised photons, where the energy of the photon depended upon its wavelength, the photoelectric effect could be explained. This work led to a revolution in the understanding of both the electron and light. Light could behave as both a wave and a particle, depending upon the experiment. It was for this work that Einstein received the Nobel prize.
1911: Rutherford, already a Nobel prize winner (1908), interpreted the results of the Geiger-Marsden experiment involving a beam of alpha particles fired at a thin foil of gold designed to measure the deflection as the alpha particles interacted with the "plum-pudding" gold atoms.
Rutherford was astonished by the results, which showed that most of the alpha particles passed straight through the gold foil unaffected, but a small minority were deflected by large angles. Rutherford commented on the result: "It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you".
Rutherford proposed a model of the atom with a very small, dense, positively charged nucleus surrounded by electrons.
The HyperPhysics website has a nice diagram comparing the tiny size of the nucleus and the size of a gold atom with the sizes of the Sun and the solar system. This is our version of the diagram, converted to metric units:
The "Rutherford Atom" has developed into a model that is still widely used, although the artists impressions hugely distort the relative sizes of the various atomic components (much to this author's annoyance!):
Our story now moves on from atomic structure to how the negative electrons 'associate' with positive nucleus.
1913: The Bohr Atom In 1913 Niels Bohr - while working in Rutherford's laboratory - constructed a model of the atom that had small, light, fast, particle-like, negatively charged electrons "orbiting" a small, massive positively charged nucleus... although the reason why the electron did not spiral into the nucleus could not be explained. The electron shells were quantised, and as the electrons moved from shell to shell they emitted or absorbed photons the energy of which was equal to the energy difference between the shells.
The Bohr model is the first plausible model of the atom and it is still widely used in education, particularly in illustrations because it is so easy to draw and understand.
Diagram from Wikipedia: "The Rutherford-Bohr model of the hydrogen atom (Z = 1) or a hydrogen-like ion (Z > 1), where the negatively charged electron confined to an atomic shell encircles a small positively charged atomic nucleus, and an electron jump between orbits is accompanied by an emitted or absorbed amount of electromagnetic energy (hν). The orbits that the electron may travel in are shown as grey circles; their radius increases as n
2, where n is the principal quantum number. The 3 → 2 transition depicted here produces the first line of the Balmer series, and for hydrogen (Z = 1) results in a photon of wavelength 656 nm (red)."
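The quoted 656 nm figure is easy to check numerically; a minimal sketch (the Rydberg constant is the standard value, everything else is chosen here):

R_INF = 1.0973731568e7  # Rydberg constant, m^-1

def balmer_wavelength_nm(n_upper, n_lower=2):
    # photon wavelength for an n_upper -> n_lower transition in hydrogen
    inv_wavelength = R_INF * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_wavelength  # metres -> nanometres

print(balmer_wavelength_nm(3))  # ~656.1 nm: the red Balmer-alpha line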
Atomic Modeling in the Early 20th Century: 1904-1913
Charles Baily, University of Colorado, Boulder
This excellent Power Point presentation – now as a .pdf file – discusses:
• J.J. Thomson (1904): “Plum Pudding” Model
• Hantaro Nagaoka (1904): “Saturnian” Model
• Ernest Rutherford (1911): Nuclear Model
• Niels Bohr (1913): Quantum Model
Personal communication: There is an associated paper by Charles Baily with the same title presented at the 24th Regional Conference on the History and Philosophy of Science, Boulder, CO (10/12/08).
1916: Lewis Theory was developed at UC Berkeley by the active research group led by G.N. Lewis. The theory was first published in 1916 and was expanded in book form in 1923.
Lewis used the new ideas about atomic structure that were widely discussed in his labs:
Lewis proposed the two electron chemical bond, later named the covalent bond by Langmuir. Linus Pauling supported the Lewis analysis, here.
Students of chemistry first learn about Lewis through the well known (some would say too well known) Lewis Octet Rule. This extraordinarily useful rule of thumb states that atoms like to have a full octet of eight electrons in their outer or valence shell. The argument is that fluorine, with seven valence electrons, 'wants' another electron to give the stable 'full octet' fluoride ion, F-. Likewise, sodium loses an electron to give the sodium ion, Na+, another species with a full octet. Students soon realise that 8 is not the only special number. Phosphorus pentachloride, PCl5, has 10 electrons about the central phosphorus. Benzene has 6 electrons in its aromatic π-system. Transition metal complexes often follow the 18 electron rule. Etc.
The Lewis model in its modern form is widely taught in schools and at university level, even though Lewis's electrons are entirely classical:
Negative electron dots are simply assigned to atomic shells, covalent bonds or lone pairs, where they are counted against positive nuclear charge.
In essence, Lewis 'octet' theory is electron accountancy with magic numbers:
The 'theory' gives absolutely no explanation as to why the numbers of electrons about an atomic centre: 2, 8, 8, 18, 18, 32, should exhibit such special stability.
Yet, Lewis theory and electron accountancy are the key tools used by most chemists most of the time to help understand structure and reactivity.
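As a minimal sketch of this electron accountancy (the molecule choices and the Python below are illustrative, not taken from this page):

VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5, "Cl": 7}

def valence_electrons(formula, charge=0):
    # total valence electrons to distribute as bonds and lone pairs
    return sum(VALENCE[el] * n for el, n in formula.items()) - charge

print(valence_electrons({"N": 1, "H": 3}))     # 8: ammonia, three bonds + one lone pair
print(valence_electrons({"F": 1}, charge=-1))  # 8: fluoride reaches the octet
print(valence_electrons({"P": 1, "Cl": 5}))    # 40 in total; 10 shared about P in five bonds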
Some examples of Lewis theory in action:
Valence Shell Electron Pair Repulsion, below and in detail here, is an extension of Lewis theory.
Lewis theory is good at describing:
Much atomic, ionic and molecular structure
Electron accountancy during chemical reactions
Reaction mechanisms with curly arrows
Maps to VSEPR theory (below)
Lewis theory is very accommodating and is able to 'add-on' those bits of chemical structure and reactivity that it is not very good at explaining itself. Consider the mechanism of electrophilic aromatic substitution, SEAr:
The diagram above is pure Lewis theory:
In Lewis theory, benzene's six π-electrons have exactly the same status as neon's eight electrons. Both are magic numbers associated with stability, but no explanation is given as to why this should be so.
Lewis theory is bad at explaining:
The nature of the covalent bond
Why oxygen, O2, is a magnetic diradical
The hydrogen bridge bond found in diborane, B2H6
Transition metal chemistry
Read more about Lewis structures and the relationship between Lewis theory and other structural theories in an excellent page by physicist John Denker.
1913-25: Spectroscopy & Quantum Numbers Atomic spectra had been recorded since the 1850s by scientists like Bunsen and Kirchhoff. In Denmark, Niels Bohr re-studied atomic spectra and - along with Sommerfeld, Stoner and Pauli - devised the quantum numbers from empirical (spectroscopic) evidence:
n principal
l subsidiary (azimuthal, angular momentum or orbital shape)
ml magnetic
ms spin
After the discovery/invention of the Schrödinger wave equation, below, the Bohr model became known as the old quantum theory, here and Wikipedia.
Question: Is light made of waves or particles?
Answer: Both
Experiments that explore the wave-nature of light show light to be wavelike... and experiments that demonstrate that light is made of discrete photons show that light is indeed constructed from particles.
Yeah, we know. Just get used to it. Light is said to exhibit 'wave-particle duality'.
In 1924 de Broglie proposed that every moving particle has a wavelength inversely proportional to its momentum, and a frequency directly proportional to its kinetic energy (Wikipedia):
λ = h/p
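A quick numerical sketch of the relation (the constants are standard values; the 100 V electron is an example chosen here):

import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C

p = math.sqrt(2 * m_e * e * 100)  # momentum after acceleration through 100 V
print(h / p)                      # ~1.23e-10 m: atomic scale, hence electron diffraction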
This concept leads to one of the most important ideas in 20th century science:
The small, light, fast moving electron also exhibits wave-particle duality. It can be conceived of as a particle or as a wave.
This development leads to atomic and molecular orbitals.
1926: The Schrödinger Wave Equation
Erwin Schrödinger knew of de Broglie's proposal that electrons exhibit wave-particle duality. With this idea in mind, he devised/constructed a differential equation for a wavelike electron resonating in three dimensions about a point positive charge.
Solutions to the Schrödinger wave equation - resonance modes described by mathematical wavefunctions - assumed discrete, quantised energies which corresponded to the spectral lines of one-electron atoms and ions: H•, He+, Li2+ etc., and they corresponded exactly with Bohr's quantum numbers. This development led to quantum mechanics.
Waves and the mathematical functions that describe them, wavefunctions, are well understood mathematically. For example, they can be added together or subtracted from each other. Consider two sine waves and their superposition, here:
The atomic orbitals derived from the Schrödinger wave equation, being waves, can be added together. The arithmetic can be carried out in various ways.
The term "wavefunction" can be interchanged with term "orbital". By convention, mathematical expressions are termed wavefunctions and chemical structure and structure and reactivity are discussed with reference to orbitals.
Atomic Orbitals are constructed from the four quantum numbers. The AOs fill with electrons, lowest energy AO first, the aufbau principle. An orbital can contain a maximum of two electrons, and these must be of opposite spin, the Pauli exclusion principle. One final rule, Madelung's rule, points out that the orbitals fill with electrons in order of n + l, principal quantum number plus subsidiary quantum number, rather than n.
1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s < 5f < 6d < 7p
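This filling order can be generated mechanically from Madelung's rule; a minimal sketch (the cut-off at 7p is imposed by hand to match the sequence above):

letters = "spdf"

orbitals = [(n, l) for n in range(1, 8) for l in range(min(n, 4)) if n + l <= 8]
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # sort by n + l, ties broken by lower n

print(" < ".join(f"{n}{letters[l]}" for n, l in orbitals))
# 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s < 5f < 6d < 7p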
Orbital shape and phase. s-Orbitals are radial and they have n-1 (n minus one) nodes, where n is the principal quantum number. Thus, the 1s orbital is devoid of nodes, the 2s orbital has one node, etc. The s-orbital nodes are spherical and they are best viewed in cross section (below). There is a change of phase at the node. Max Born suggested that the squared wavefunction equates to electron density, but squaring results in the loss of all phase information.
p-Orbitals have both radial and angular components and have a "figure of 8" shape.
1928: Pauling's Five Rules: Crystal Structure
The crystal structure of an ionic compound can be predicted using a set of empirical rules:
Crystal structures are usually named after a definitive crystal structure, such as: zinc sulfide (structure), sodium chloride, cesium chloride, calcium fluoride (fluorite), rutile, diamond, etc.
Read more in the Wikipedia and ScienceWorld.
1932: Pauling's Electronegativity Linus Pauling used empirical heat of reaction data to introduce the elemental property of electronegativity which he defined as: "The desire of an atom to attract electrons to itself".
Electronegative elements, such as fluorine, "want" electrons so they can form negative ions while electropositive elements, such as cesium, like to lose electrons and form positive ions.
The great benefit of electronegativity is that the numbers can be used to quantify this effect and predict bond dipole moment (polarity) and degree of ionic character. For example: fluorine (3.98) is electronegative and cesium (0.79) electropositive. CsF is strongly polarised Cs+F- and is 89% ionic.
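One common quantitative form is Pauling's relation, percent ionic character = 100(1 − exp(−Δχ²/4)); note that this particular formula gives ~92% for CsF, so the 89% quoted above presumably comes from a slightly different empirical relation. A sketch:

import math

def percent_ionic(chi_a, chi_b):
    # Pauling's estimate of ionic character from the electronegativity difference
    d = abs(chi_a - chi_b)
    return 100 * (1 - math.exp(-d * d / 4))

print(percent_ionic(3.98, 0.79))  # CsF: ~92% ionic
print(percent_ionic(3.16, 2.20))  # HCl: ~21%, a polar covalent bond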
1930: Valence Bond Theory Once atomic orbitals were understood in terms of both Bohr's quantum numbers and the Schrödinger wave equation, the quest was on to understand bonding in molecules. Linus Pauling's approach was to take the atomic orbitals and mix (hybridize) them together. For example, the 2s orbital can mix with the three 2p orbitals to give four "hybrid" sp3 orbitals which are arranged tetrahedrally about the central atom.
Thus, hybridization can rather easily explain the tetrahedral geometry of methane. Valence bond theory can also explain why the carbons in ethene (ethylene) are triangular planar by invoking sp2 hybridization and why ethyne (acetylene) is linear: sp hybridization.
VB theory also introduces the concept of "resonance", an idea dependent upon electronegativity. For example, chlorine is more electronegative than hydrogen and the compound hydrogen chloride, HCl, is polarised H+Cl-. VB theory suggests the various possible (covalent and ionic) forms are in resonance.
VB theory is widely employed in education because it produces easily understandable structures. However, the mathematics of orbital manipulation is easier if non-hybridized orbitals are employed. For this reason VB theory has become a theoretical "dead end" compared with MO theory... or has it? Look here.
Read more about the valence bond approach to understanding polyatomic structure here.
Molecular Orbital Theory assumes that molecules are multi-nucleated atoms: the molecular orbitals, MOs, are assumed to encompass the two nuclei. Electrons are added to the MOs using the same rules that are used to add electrons to atomic orbitals: the aufbau principle and the Pauli exclusion principle. MOs have a similar geometry to atomic orbitals, but are more involved. The MO approach is most obviously seen and understood with diatomic molecules, H2, N2, etc.
The MO approach to diatomic hydrogen places the two nuclei (protons) close to each other. An electron is added to the lowest energy MO, the sigma bonding MO. The second electron also goes into the sigma MO. The third electron goes into the next MO, which has a node between the two nuclei (a region of zero electron density) and is called the "sigma star" antibonding MO.
However, the all encompassing MO approach is difficult to apply to molecules with many atoms. The 'trick' is to use the Linear Combination of Atomic Orbital (LCAO) simplification in which atomic orbitals are added together to form molecular orbitals. Hydrogen is constructed by adding two 1s orbitals into a 1 sigma MO.
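The bonding/antibonding splitting shows up in the smallest possible LCAO calculation, a 2x2 Hückel-style matrix for H2 (the alpha and beta values are illustrative assumptions, not computed integrals):

import numpy as np

alpha = -13.6  # on-site 1s AO energy, eV (illustrative)
beta = -5.0    # resonance (interaction) integral, eV (assumed)

H = np.array([[alpha, beta],
              [beta, alpha]])
energies, coeffs = np.linalg.eigh(H)

print(energies)      # [alpha + beta, alpha - beta] = [-18.6, -8.6]
print(coeffs[:, 0])  # in-phase combination (up to sign): sigma bonding MO
print(coeffs[:, 1])  # out-of-phase combination: sigma* antibonding MO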
The interaction of atomic and molecular orbitals can be represented in MO Energy diagrams:
There are various possible AO to MO interactions:
The LCAO approach has been highly developed in software. The AOs are described in terms of "basis functions", such as finite elements, Gaussian type orbitals (GTOs) or Slater type orbitals (STOs). (For historical reasons basis functions are called basis AOs.) Such ab initio (or "from the start") software is able to calculate molecular geometry and energy to high precision. Commercial software is available.
Read more about diatomic molecules and polyatomic molecules elsewhere in the chemogenesis web book.
1937/9: Hellmann-Feynman Theorem Wikipedia
The Hellmann-Feynman theorem states that once the spatial distribution of electrons has been determined by solving the Schrödinger equation, all the forces in the system can be calculated using classical electrostatics.
Thus classically, the equilibrium configuration of a molecule like H2, (H–H, bond length 74 pm) has the resultant force acting on each nucleus vanishing. The electrostatic (++) repulsion between the two positive nuclei is exactly balanced by their attraction to the electrons between them.
The Hellmann-Feynman theorem was discovered independently by Hans Hellmann (1937) and Richard Feynman (1939).
1943: Valence Shell Electron Pair Repulsion (VSEPR) states that electron pairs (both bonded covalent electron pairs and nonbonded "lone-pairs" of electrons) repel each other. Methane, CH4, has four bonded electron pairs and these repel each other to give the four hydrogens tetrahedral geometry about the central carbon. Likewise, ammonia has three bonded electron pairs and one lone pair which mutually repel each other so that ammonia is trigonal pyramidal.
Geometry can be predicted using the "AXE" system where A is the central atom, X the number of (electron pair bonded) ligands and E the number of lone pairs.
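The AXE bookkeeping is simple enough to express directly (a sketch; only a few of the standard VSEPR shapes are tabulated here):

SHAPES = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (4, 0): "tetrahedral",          # e.g. CH4
    (3, 1): "trigonal pyramidal",   # e.g. NH3
    (2, 2): "bent",                 # e.g. H2O
}

def vsepr_shape(x_ligands, e_lone_pairs):
    return SHAPES.get((x_ligands, e_lone_pairs), "not tabulated in this sketch")

print(vsepr_shape(4, 0))  # methane: tetrahedral
print(vsepr_shape(3, 1))  # ammonia: trigonal pyramidal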
There are two VSEPR explorers on the web, The Chemical Thesaurus here and Cool Molecules here.
Read more about VSEPR theory here, and test your knowledge of VSEPR by going to the Chemistry Tutorials & Drills web site.
The relationship between VB theory and VSEPR is interesting in that VSEPR seems to grow out of VB theory.
However, VB theory is a wave mechanical theory whereas VSEPR assumes that bonds and lone pairs are entirely classical and says nothing about how the bonding actually occurs. When moving from VB to VSEPR, it is as if the wave mechanical electron bonding becomes fixed in space... rather as the image on photographic film becomes fixed during the development process.
There is no deep theoretical justification to VSEPR theory, other than that it predicts an atomic centre will arrange its ligands so as to assume the geometry with maximum spherical symmetry. VSEPR theory is "pulled out of a hat"; however, as a method it is very successful.
1960s: Frontier Molecular Orbital Theory was developed in the 1960s by Kenichi Fukui who recognised that chemical reactivity can often be explained in terms of interacting Highest Occupied MOs (HOMOs), Lowest Unoccupied MOs (LUMOs) and Singly Occupied MOs (SOMOs).
The FMO approach was developed by Woodward & Hoffmann in the late nineteen sixties who used it to explain an apparently diverse set of reactions involving π-systems, including Diels-Alder cycloaddition, here. Hoffmann used the approach to explore transition metal complexes.
Read Fukui and Hoffmann's 1981 Nobel prize lectures.
1941: van Arkel-Ketelaar Triangle recognises three extreme types of bonding: metallic, ionic and covalent, and that many bond types are intermediate between these extremes. This behaviour can be rationalised in terms of electronegativity.
Read about the van Arkel-Ketelaar Triangle in detail elsewhere in this web book.
1993 & 2008: Tetrahedron of Structure, Bonding & Material Type Michael Laing expanded the van Arkel-Ketelaar triangle into a tetrahedron by separating covalent bonding into two types: molecular and network covalent, although Laing uses the terms "van der Waals" and "covalent":
M. Laing, A Tetrahedron of Bonding, Education in Chemistry, 160 (1993)
Molecular covalent materials consist of small molecules with strong intramolecular covalent bonds but weak van der Waals intermolecular attraction. Methane and ammonia are molecular materials.
Network covalent materials have strong covalent bonds which extend throughout the material, examples include: diamond, silica and phenolic resins.
Mark R Leach has recently, 2008, quantified the tetrahedron of structure, bonding & material type with respect to valency and electronegativity:
truncated tetrahedron
Read about the Tetrahedra of Structure, Bonding & Material Type in detail elsewhere in this web book.
1970s: Molecular Mechanics and Molecular Dynamics
Valence Shell Electron Pair Repulsion (VSEPR) theory has been extensively parameterised and developed into computer software. The method treats a molecule as a collection of particles held together by elastic or simple harmonic forces. These forces can be described in terms of potential energy functions. The sum of these terms gives the overall steric energy. This system can be modeled by the Westheimer equation:
Etotal = Es + Eb + Ew + Enb
D.B. Boyd and K.B. Lipkowitz, Molecular Mechanics: Method and Philosophy, J.Chem.Educ. 59 4 269-274 (1982), P.J. Cox, Molecular mechanics: Application , J.Chem.Educ. 59 4 275-277(1982)
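A minimal sketch of such a steric energy function (the harmonic, cosine and Lennard-Jones forms are the usual choices; every constant below is an illustrative assumption, not a real force field):

import math

def stretch(r, r0=1.54, k=300.0):         # Es: bond stretching
    return 0.5 * k * (r - r0) ** 2

def bend(theta, theta0=1.911, k=60.0):    # Eb: angle bending (radians, ~109.5 deg)
    return 0.5 * k * (theta - theta0) ** 2

def torsion(omega, v=2.9, n=3):           # Ew: 3-fold torsional term
    return 0.5 * v * (1 + math.cos(n * omega))

def nonbonded(r, eps=0.1, sigma=3.4):     # Enb: Lennard-Jones 12-6
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

# steric energy of a toy fragment: one bond, one angle, one torsion, one contact
print(stretch(1.60) + bend(1.95) + torsion(0.0) + nonbonded(3.8))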
The technique allows small molecules such as butane and cyclohexane to be energy minimized into their most stable conformation:
Larger molecules, including DNA and proteins can be modelled using MM software. Below is a representation of bacteriorhodopsin:
In recent years, molecular mechanics has been extended into molecular dynamics to model large dynamic structures, such as proteins, which move over a given time scale. Have a look here (MM) and here (MD).
Modern Geometry Optimisation Software uses a variety of techniques: molecular mechanics, semi-empirical, ab initio (from the beginning) and density functional. The quantum chemistry software completely hides the mathematics of the geometry optimisation process. A molecule is constructed (drawn) with a mouse and the energy minimized using any desired level of theory.
All computational methods use a broadly similar strategy. Atoms are placed in virtual space in an approximate geometry with respect to bond lengths and angles. A calculation is performed to determine the energy of the system. The software then alters the geometry, and the energy is recalculated. The software loops until it finds the arrangement of nuclei which gives the system the lowest energy, and this corresponds to the optimum molecular geometry.
The time taken to minimise depends upon the method used, the size of the molecule, the degree of precision required, as well as the processor speed and available memory.
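The loop itself can be sketched in a few lines on a toy one-dimensional "molecule" (a Morse potential with rough H2 parameters; real codes use analytic gradients and much smarter step control):

import math

def energy(r, d_e=4.5, a=1.9, r_e=0.74):
    # Morse potential with rough H2 values (eV, angstroms), assumed for illustration
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2

r, step = 1.2, 0.01
for _ in range(10000):
    grad = (energy(r + 1e-6) - energy(r - 1e-6)) / 2e-6  # numerical gradient
    if abs(grad) < 1e-8:
        break
    r -= step * grad  # steepest-descent move downhill

print(r)  # converges to ~0.74, the equilibrium bond length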
It is possible to mix-and-match. An entire protein may be modelled using MM/MD methodology, with the central active site plus substrate optimised using ab initio techniques. Software is available from a number of vendors: WaveFunction, HyperChem (download a fully functional but time limited demo version) and Gaussian.
The Electron Corral
The image below is not of an atom, but shows an alternative electron corral pattern, predicted by the Schrödinger wave equation and created by electrons in experiment:
The Bifurcation of Theories & Models
The crucial time for understanding [how we understand] chemical structure & bonding occurred in the active UC Berkeley labs of G. N. Lewis over the years 1912–23.
Lewis and colleagues actively debated the new ideas about atomic structure, the Rutherford & Bohr atoms, and postulated how they might give rise to models of chemical structure, bonding & reactivity. Taken directly from the Bohr atom, the Lewis model uses electrons that are "countable dots of negative charge".
Lewis's first ideas about chemical bonding were published in 1916, and later in a more advanced form in 1923. These early ideas have been extensively developed and are now taught to chemistry students the world over.
More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave.
Although largely outside the scope of this web book, the theoretical dichotomy also occurs in semiconductor physics, where electrical behaviour is either modelled in terms of band theory, a natural development of MO theory, or in terms of localized electrons & electron holes within the valence band, a development of the VSEPR model.
Summing Up: Mixing & Matching Models & Theories
Chemical theories are either based on:
Quantum mechanical models that represent electrons as waves. This approach is good at modeling atomic structure, atomic and molecular spectroscopy, the nature of the chemical bond, LCAO, FMO theory and pericyclic chemistry. Mathematical calculations employ Hartree-Fock, density functional or Hückel techniques.
Classical mechanics represents atoms as spheres that bond together and exhibit valency. The geometry of molecules and molecular ions can be very neatly predicted by VSEPR 'theory'. This type of approach can be parameterised into molecular mechanics and molecular dynamics software models.
The following is taken from Introduction to Macromolecular Simulation by Peter J. Steinbach, here:
Quantum chemistry texts can blur the distinction between quantum mechanics and classical mechanics by grouping together LCAO MO calculations with VSEPR, MM and MD techniques.
Pick n Mix When - as chemists - we consider a substance like ammonia, we employ a variety of models and ideas, for example:
Detailed quantum analysis shows that ammonia's trigonal pyramidal structure is able to invert "like an umbrella". In ammonia, this inversion occurs by quantum tunneling.
Can Orbitals be Observed?
One rather important question remains: Do atomic and molecular orbitals exist? Are they real?
The answer is: No. In principle orbitals cannot be observed.
E. Scerri, Have Orbitals Really Been Observed?, Journal of Chemical Education, 77, 1492-1494, (2000) here.
E. Scerri, The Recently Claimed Observation of Atomic Orbitals and Some Related Philosophical Issues, Philosophy of Science, 68 (Proceedings) S76-S88, N. Koertge, ed. Philosophy of Science Association, East Lansing, MI, (2001), here.
Read More
Introduction to Macromolecular Simulation by Peter J. Steinbach
Theoretical Chemistry a Self-Guided Introduction for College Students by Jack Simons
An Introduction to Theoretical Chemistry by Jack Simons
MO Theory
Take a look at: Classic Papers from the History of Chemistry (and Some Physics too)
© Mark R. Leach 1999 –
How are the formulas in QM derived?
1. Sep 4, 2007 #1
I'm not very familiar with QM. All I can see are formulas. I'm familiar with differential calculus, integrals, & differential equations.
Do most QM formulas use these methods in deriving their formulas? If not, what else?
3. Sep 4, 2007 #2
Well, linear algebra for one. Then calculus. And Fourier analysis. And usually solving the Schrödinger equation means solving a differential equation.
Last edited: Sep 4, 2007
4. Sep 5, 2007 #3
Well, arriving at the fundamental expression, you first observe nature and make postulates. Then of course you take advantage of the mathematical formalism available.
5. Sep 5, 2007 #4
I recommend the straightforward experimental-theoretical derivation of section 5-2, "Plausibility Argument Leading to Schroedinger's Equation" (p. 128-134) in Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles, 2nd edition by Robert Eisberg and Robert Resnick.
It is based upon
1. the de Broglie and Einstein postulates
2. conservation of energy
3. linearization
4. and (initially) the free particle, constant potential, sinusoidal solutions
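In compressed outline (my own sketch of that style of argument, not the book's exact text): for a free-particle plane wave

[tex]
\psi(x,t) = e^{i(px - Et)/\hbar}, \qquad
-i\hbar\,\partial_x \psi = p\,\psi, \qquad
i\hbar\,\partial_t \psi = E\,\psi,
[/tex]

so imposing the classical energy relation [itex]E = p^2/2m + V[/itex] on these operator identifications gives

[tex]
i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2\psi}{\partial x^2} + V\psi.
[/tex]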
6. Sep 5, 2007 #5
Yes. Not all, but most.
7. Sep 5, 2007 #6
The best thing you could do in this area would be to read the seminal works by Heisenberg, Schrödinger and Dirac. Once folks such as Wigner, von Neumann, Born, Jordan, Teller, etc. put QM on a solid mathematical footing (i.e. via operator theory), people lost interest in how the original equations were formulated.
8. Sep 5, 2007 #7
Roughly QM was derived from classical mechanics along this path (Very roughly...)
Lagrangian CM --> Hamiltonian CM --> Poisson Brackets (Still CM) --> Commutator operators (Pretty much Quantum) --> Either Heisenberg or Schroedinger approach. There's much abstract math at each step of this 'outline', but studying any of these stages will help, maybe. (Remember that this stuff was developed over two or three decades by some pretty sharp people.)
9. Sep 6, 2007 #8
Once QM was put firmly on an axiomatic basis in any of its multiple formulations, all formulas are derived from the ones occurring in the axioms.
10. Sep 6, 2007 #9
Yes, this is how QM was developed historically, and this is how it is presented in many textbooks. However, this path strikes me as being somewhat illogical: we shouldn't be deriving a more general and exact theory (quantum mechanics) from its crude approximation (classical mechanics).
Fortunately, there is a more logical path, which doesn't take classical mechanics as its point of departure. The basic formalism of QM (Hilbert spaces, Hermitian operators, etc.) can be derived from simple and transparent axioms of "quantum logic"
G. Birkhoff, J. von Neumann, "The logic of quantum mechanics", Ann. Math. 37 (1936), 823
G. W. Mackey, "The mathematical foundations of quantum mechanics",
(W. A. Benjamin, New York, 1963), see esp. Section 2-2
C. Piron, "Foundations of Quantum Physics", (W. A. Benjamin, Reading, 1976)
Time dynamics and other forms of inertial transformations are introduced via representation theory for the Poincare group
I listed just the most significant references. There are many more works along these lines. However, for some reason, these ideas have not percolated to the textbook level (at least not to the extent they deserve).
11. Sep 6, 2007 #10
I believe that is starting to change; J.J. Sakurai's text does a nice job, as does Atkins (perhaps not super rigorous, but starts axiomatically via operator theory).
12. Sep 6, 2007 #11
The best textbook that follows the Poincare-Wigner-Dirac approach to dynamics is S. Weinberg "The quantum theory of fields", vol.1. Highly recommended!
Unbounded operators in non-relativistic QM of one spin-0 particle
1. Apr 3, 2009 #1
What exactly are the axioms of non-relativistic QM of one spin-0 particle? The mathematical model we're working with is the Hilbert space [itex]L^2(\mathbb R^3)[/itex] (at least in one formulation of the theory). But then what? Do we postulate that observables are represented by self-adjoint operators? Do we say that a measurement of an operator [itex]A[/itex] on a system prepared in state [itex]|\psi\rangle[/itex] yields result [itex]a_n[/itex] and leaves the system in the eigenstate [itex]|n\rangle[/itex] with probability [itex]|\langle n|\psi\rangle|^2[/itex]? Then how do we handle e.g. the position and momentum operators, which don't have eigenvectors?
Can the problem of unbounded operators be solved without the concept of a "rigged Hilbert space"? Is it easy to solve when we do use a rigged Hilbert space? What is a rigged Hilbert space anyway?
I think I brought this up a few years ago, but apparently I wasn't able to understand it even after discussing it. I think I will this time, because of what I've learned since then. Don't hold back on technical details. I want a complete answer, or the pieces that will help me figure it out for myself.
3. Apr 3, 2009 #2
I too would like a clarification on the subject of "rigged Hilbert space". Sometimes it seems like it is just a word people throw in to justify introducing non-normalizable eigenstates and treating them in a similar way to other eigenstates, with the substitution [itex]\langle n|m\rangle =\delta_{nm}\rightarrow\langle x|x'\rangle=\delta(x-x')[/itex]. Is this just some trick, or is there more to it?
Sometimes people introduce boxes with periodic boundary conditions and let the size of those boxes go to infinity at the end.... is this more rigorous? Probably not ..?
4. Apr 4, 2009 #3
More axiomatically, one can start with a complete normed algebra
of operators (a Banach algebra), satisfying some extra axioms
that make it into a C* algebra. Then construct a Hilbert space
on which the elements of the algebra act as operators. This is
called the "GNS" construction. The algebraic approach avoids
some of the operator ambiguities that can arise with the
"Hilbert space first" approach.
Using Rigged Hilbert Space (RHS), aka "Gelfand Triple".
(Personally I dislike both names, and prefer the more
explicit "Gelfand Triple Space", though I think I'm alone in
that usage.)
Without an RHS, you've got to pay careful attention to the domains of
operators. The general spectral theorem for s.a. operators on inf-dim
Hilbert space is littered with domain stuff. But the whole point of
the RHS idea is to avoid that stuff and provide a rigorous mathematical
underpinning of Dirac's original bra-ket stuff that uses such improper states.
Do you have a copy of Ballentine's QM textbook? It's one of
the few that explain and emphasize how all the Dirac-style QM
we know and love is really all being done in an RHS.
(Ballentine also shows how some of the operators in non-rel
QM arise by considering unitary representations of the
Galilei group, which was another part of your question.)
I just looked at the RHS Wiki page but it's very brief and doesn't
tell you much. Although Ballentine describes RHS, it's only at an
introductory level. There's an old book by Bohm & Gadella,
"Dirac Kets, Gamow Vectors, and Gel'fand Triplets" which explains
a bit more, but they too don't get into the mathematical guts.
There's no way I can fit a complete technical answer in a Physics
Forums post, but maybe I can get you started...
The basic idea is to start with a Hilbert space "H" and then construct
a family of subspaces. To do this, take the formula for your
Hilbert space norm, and then modify it to make it harder for all
states to have a finite norm. E.g., change the usual norm from
[tex]
\int dx\, \psi^*(x) \psi(x)
[/tex]
to something like
[tex]
\int dx\, |x|^n \psi^*(x) \psi(x)
[/tex]
Clearly, for n>0, only a subset of the original [itex]\psi[/itex]
functions still have finite norm. It is therefore a "seminorm" (meaning
that it's defined only on a subset of H). This family of seminorms,
indexed by n, defines a family of progressively smaller and smaller
subsets of the original Hilbert space H. It turns out that each such
subspace is a linear space, and is dense in the next larger one.
More generally, this construction comes under the heading of
"Nuclear Space", with a corresponding family of "seminorms".
The Wiki page for Nuclear Space has some more info.
Now, to proceed further, you need to know a couple of things about
inf-dim vector spaces and their duals. First a Hilbert space H is
isomorphic to its dual (i.e., isomorphic to the set of linear
mappings from H to C). Then, if you restrict to a linear subspace
of H, (let's call it [itex]\Omega_1[/itex], corresponding the
case n=1 above), the dual of [itex]\Omega_1[/itex], which I'll
denote as [itex]\Omega^*_1[/itex], is generally larger than H.
I.e., we have [itex]\Omega_1 \subset H \subset \Omega^*_1[/itex].
Note that the usual norm and inner product are ill-defined between
vectors belonging to the dual space [itex]\Omega^*_n[/itex], but
we still have well defined dual-pairing between a vector from
[itex]\Omega^*_n[/itex] and a vector from [itex]\Omega_n[/itex].
This is enough for Dirac-style quantum theory.
Actually, I'm getting a bit ahead of myself. First, we should take an
inductive limit [itex]n\to\infty[/itex] of the [itex]\Omega_n[/itex]
spaces, which I'll denote simply as plain [itex]\Omega[/itex] without
the subscript. This is the subspace of functions from H which vanish
faster than any power of x.
The "Rigged Hilbert Space", or "Gel'fand Triple", is the name given
to the triplet of densely nested spaces:
[tex]
\Omega \subset H \subset \Omega^*
[/tex]
The word "rigged" should be understood to mean "equipped and ready for
action". (Even with this explanation I personally still think it's a
poor name.)
[Continued in next post because of "Database error"...]
Last edited: Apr 4, 2009
5. Apr 4, 2009 #4
[Continuation of previous post...]
The master stroke now comes in that the so-called improper states of
position, momentum, etc, in Dirac's bra-ket formalism correspond to
vectors in [itex]\Omega^*[/itex]. It is possible to take a s.a.
operator on [itex]\Omega[/itex], and extend to an operator on
[itex]\Omega^*[/itex], where the extension is defined in terms of its
action on elements of [itex]\Omega[/itex] via the dual-pairing.
Taking this further, there is a generalization of the usual spectral
theorem, called the Gelfand-Maurin Nuclear Spectral Theorem which
shows that eigenvectors of A in the dual space [itex]\Omega^*[/itex]
are "complete" in a generalized sense, even though they're not
So although people "throw around" the phrase "rigged Hilbert space"
it's actually very important to the mathematical underpinnings of QM,
though perhaps less so if you just want to do Dirac-style basic
calculations. The RHS *is* the arena for modern QM, rather than the
simpler Hilbert space as widely believed. The RHS, with the G-M Nuclear
Spectral Theorem, is a far more general mathematical foundation than
the trick of "finite boxes", etc, that jensa asked about.
There's also an old textbook by Maurin "General Eigenfunction Expansions..."
which gives the rigorous proof (though not very clearly, imho). But I
think I'll stop here and see what followup questions arise.
6. Apr 4, 2009 #5
This connection between unboundedness of operators and the non-normalizable eigenvectors is non-existent, and indicates some confusion.
Example 1:
If you consider the system defined by a Hamiltonian (this is the infinitely deep potential well)
[tex]
H = -\frac{\hbar^2}{2m}\partial_x^2 + \infty\;\chi_{]-\infty,0[\cup ]L,\infty[}(x)
[/tex]
and solve its eigenstates, you get a sequence of normalizable eigenvectors [itex]|\psi_1\rangle,|\psi_2\rangle,\ldots[/itex], and in this basis the Hamiltonian is
[tex]
H = \frac{\hbar^2\pi^2}{2mL^2}\left(\begin{array}{ccccc}
1 & 0 & 0 & 0 & \cdots \\
0 & 4 & 0 & 0 & \cdots \\
0 & 0 & 9 & 0 & \cdots \\
0 & 0 & 0 & 16 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\right)
[/tex]
This is an unbounded operator, but still it can be diagonalized in the Hilbert space in the standard sense.
Example 2:
If you regularize the differential operator [itex]\partial^2_x[/itex] by making some cut-off in the Fourier space, you obtain some pseudo-differential operator which will be approximately the same as [itex]\partial_x^2[/itex] for wave packets containing only long wavelengths. So fix some large [itex]R[/itex] and set
[tex]
(H_R \hat{\psi})(p) = \frac{p^2}{2m} \chi_{[-R,R]}(p)\hat{\psi}(p),
[/tex]
which is the same thing as
[tex]
(H_R \psi)(x) = \frac{1}{2m} \frac{1}{2\pi\hbar} \int\limits_{-R}^R\Big(
\int\limits_{-\infty}^{\infty} p^2 \psi(x') e^{i(x-x')p/\hbar} dx'\Big) dp.
[/tex]
This operator is bounded and [itex]\|H_R\| = \frac{R^2}{2m} < \infty[/itex]. However, its eigenvectors are outside the Hilbert space [itex]L^2(\mathbb{R})[/itex].
So it is possible to have an unbounded operator so that its eigenvectors are inside the Hilbert space, and it is possible to have a bounded operator so that its eigenvectors are outside the Hilbert space.
Last edited: Apr 4, 2009
7. Apr 4, 2009 #6
If we ask a question that what is the probability for a momentum to be in an interval [itex][p_0-\Delta, p_0+\Delta][/itex], we get the answer from the expression
[tex]
\frac{1}{2\pi\hbar} \int\limits_{p_0-\Delta}^{p_0+\Delta} |\hat{\psi}(p)|^2 dp.
[/tex]
I'm not convinced that it is useful to insist on being able to deal with probabilities of precise eigenstates. Experimentalists cannot measure such probabilities either.
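A quick numerical illustration of that formula (a sketch: the Gaussian packet and window are chosen arbitrarily, with the [itex]2\pi\hbar[/itex] factor absorbed into the normalization of [itex]\hat{\psi}[/itex]):

[code]
# Probability of finding the momentum in [p0 - d, p0 + d] for a normalized
# Gaussian packet |psi_hat(p)|^2; all parameters here are arbitrary choices.
from math import erf, exp, pi, sqrt
from scipy.integrate import quad

p0, s, d = 2.0, 0.5, 0.3

def density(p):
    # |psi_hat(p)|^2, normalized so the integral over all p is 1
    return exp(-((p - p0) / s) ** 2) / (s * sqrt(pi))

prob, _ = quad(density, p0 - d, p0 + d)
print(prob, erf(d / s))  # both ~0.6039: the window holds ~60% of the probability
[/code]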
8. Apr 4, 2009 #7
Good answers, both of you. I appreciate that you are taking the time to explain these things to me. I need some time to think about the technically advanced parts of Strangerep's posts, so for now I'll just reply to Jostpuur. I'll reply to Strangerep later today, or tomorrow.
Jostpuur's post #6 brings up one of the things I was thinking about when I wrote the OP and when I was reading Strangerep's reply. The RHS stuff is interesting, and I definitely want to learn it, but I feel that as long as we're just talking about the non-relativistic quantum theory of one spinless particle, it should be possible to avoid the complications by stating the axioms of the theory of physics carefully, instead of changing the mathematical model (by replacing the Hilbert space by a Gelfand triple). What I mean by the "axioms of the theory of physics" are the statements that tell us how to interpret the mathematics as predictions of probabilities of possible results of experiments. Am I right about this, or do we absolutely need something like a RHS just to state the simplest possible quantum theory in a logically consistent way?
Jostpuur, I agree that your example 1 proves that it's possible for an unbounded operator on a Hilbert space to have eigenvectors. I didn't expect that. The Hilbert space in your example is [itex]L^2([0,L])[/itex], not [itex]L^2(\mathbb R^3)[/itex], but those two spaces are isomorphic (unless I have misunderstood that too), so this should mean that there's an unbounded operator on [itex]L^2(\mathbb R^3)[/itex] that has an eigenvector.
I don't understand 100% of example 2, but I accept it as a convincing argument that it's possible for a bounded operator to fail to have eigenvectors. The part that's confusing me is that I don't see what the "eigenvectors" of HR are. I assume that they are some sort of distributions.
9. Apr 4, 2009 #8
Confusion in terminology perhaps, but there is certainly a connection.
Expressed more precisely, let me quote the Hellinger-Toeplitz
theorem (from Lax, p377):
"An operator M that is defined everywhere on a Hilbert space H and
is its own adjoint, (Mx,y) = (x,My), is necessarily bounded."
The proof is only a few lines.
There follows a corollary (further down on p377):
"It follows from this [...] that unbounded operators that are their
own adjoints can be defined only on a subspace of the Hilbert space".
The last bit about "in the Hilbert space" is incorrect. Let me
simplify your example...
Your eigenvectors can be written as infinite-length vectors:
[tex]
e_1 := (1,0,0,0,\ldots), \qquad
e_2 := (0,1,0,0,\ldots), \qquad
e_k := (0,0,\ldots,0,1,0,0,\ldots)
[/tex]
Forgetting the constants, your Hamiltonian can be written as
[tex]
H_{nm} ~=~ n^2 ~ \delta_{nm}
[/tex]
Now, I can construct a particular linear combination of the [itex]e_k[/itex]:
[tex]
\psi := \sum_{k=1}^\infty \frac{1}{k} ~ e_k
[/tex]
whose squared norm is
[tex]
(\psi,\psi) ~=~ \sum_{k=1}^\infty \frac{1}{k^2} ~<~ \infty
[/tex]
So [itex]\psi[/itex] is in the Hilbert space. Now consider
[tex]
\phi := H \psi ~=~ \sum_{k=1}^\infty k^2 ~ \frac{1}{k} ~ e_k
~=~ \sum_{k=1}^\infty k ~ e_k
[/tex]
whose squared norm is
[tex]
(\phi,\phi) ~=~ \sum_{k=1}^\infty k^2 ~\to~ \infty
[/tex]
and therefore [itex]\phi[/itex] is not in the Hilbert space.
Hence, it's incorrect to say that this Hamiltonian is an
operator on the entire Hilbert space. It's only a well-defined
operator on a subspace of the Hilbert space.
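As a quick numerical restatement (a sketch; the cut-off at [itex]10^6[/itex] terms is arbitrary):

[code]
# Partial sums: ||psi_N||^2 converges (psi is in the Hilbert space),
# while ||H psi_N||^2 grows without bound (H psi is not).
norm2_psi = 0.0
norm2_Hpsi = 0.0
for k in range(1, 10**6 + 1):
    norm2_psi += 1.0 / k**2    # components 1/k, squared
    norm2_Hpsi += float(k)**2  # components k^2 * (1/k) = k, squared

print(norm2_psi)   # -> pi^2/6 = 1.6449...
print(norm2_Hpsi)  # ~3.3e17 at this cutoff, and still growing
[/code]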
The rigged Hilbert space formalism was developed to make sense of
this. The Hamiltonian *can* be diagonalized in a sense, but must
be done in terms of generalized eigenvectors in a larger space
of tempered distributions (the [itex]\Omega^*[/itex] from my
earlier post).
Your Hamiltonian in example 2 (Edit: in the limit [itex]R\to\infty[/itex] )
is not well-defined on all of [itex]L^2(\mathbb{R})[/itex]. I.e., it's not bounded.
You seem to be defining
the eigenvectors on a subset of [itex]L^2(\mathbb{R})[/itex] (with
finite "R") and then assuming they remain well-defined when you
take [itex]R\to\infty[/itex]. But the limit
[tex]
\lim_{R\to\infty} \|H_R\| ~=~ \lim_{R\to\infty} \frac{R^2}{2m}
[/tex]
does not exist.
Sure, plenty of people get along fine without understanding
these subtleties. Dirac was one of them. He just knew intuitively
that it was ok, somehow. Later, some mathematicians came along
and made it more rigorous and respectable, using rigged Hilbert
space and related concepts. And Fredrick's question was clearly
asking about the mathematically precise stuff.
Last edited: Apr 5, 2009
10. Apr 4, 2009 #9
I was already aware of the fact that unbounded operators are often[itex]{}^*[/itex] defined on some subsets of the original Hilbert space, although I thought it would not be necessary to get into that matter in my post. For example the domain of [itex]H=-\frac{\hbar^2}{2m}\partial_x^2[/itex] is
[tex]
D(\partial_x^2) = \big\{\psi\in L^2(\mathbb{R})\;|\; \int\limits_{-\infty}^{\infty} p^4 |\hat{\psi}(p)|^2 dp < \infty \big\},
[/tex]
but this doesn't usually get mentioned in every post that is concerned with this Hamiltonian.
I can admit that I didn't know this theorem.
([itex]{}^*[/itex]: Or let's say that I was under the belief that this is "often" the case, while not being aware of the fact that it is always the case with self-adjoint operators.)
I had not thought about this example carefully, and was not aware of the fact that the domain is not the full space, but now when I look my post, I don't think that I would have very explicitly claimed the domain to be the full space either.
I don't agree with this. Let [itex](X,\mu)[/itex] be some measure space, and [itex]f\in L^{\infty}(X)[/itex] some measurable function. Then the formula
[tex]
\psi\mapsto M_f\psi,\quad (M_f\psi)(x) = f(x)\psi(x)
[/tex]
defines a bounded operator [itex]M_f:L^2(X)\to L^2(X)[/itex], and [itex]\|M_f\|\leq \|f\|_{\infty}[/itex]. My example belongs to this class of operators.
Last edited: Apr 5, 2009
11. Apr 4, 2009 #10
The harmonic oscillator is an example of operator which is defined on some subspace of [itex]L^2(\mathbb{R}^n)[/itex], and which has a sequence of eigenvectors whose span is dense in [itex]L^2(\mathbb{R}^n)[/itex].
I'll continue with the example from my previous post, where [itex]M_f[/itex] was defined. Suppose for simplicity that the [itex]f[/itex] was also injective, so that it doesn't take the same value at different locations. Now we ask whether [itex]\lambda[/itex] is an eigenvalue. If it is, then the equation
[tex]
f(x)\psi(x) = \lambda \psi(x)
[/tex]
must be true for a.e. [itex]x\in X[/itex]. This cannot happen unless [itex]f(\overline{x})=\lambda[/itex] with some [itex]\overline{x}\in X[/itex], and unless [itex]\psi(x)=0[/itex] for a.e. [itex]x\neq \overline{x}[/itex]. So the eigenvectors would have to be [itex]\psi(x) = \chi_{\{\overline{x}\}}(x)[/itex]. If the measure [itex]\mu[/itex] is such a measure that [itex]\mu(\{\overline{x}\})=0[/itex], then the eigenvector doesn't exist because it is zero. What happens in the non-rigorous formalism is that this kind of eigenvector is multiplied with an infinite constant so that it becomes non-zero.
If the Hamiltonian is defined in the Fourier-space by a multiplication
[tex](H\hat{\psi})(p) = \frac{p^2}{2m}\hat{\psi}(p)[/tex]
then in the non-rigorous formalism the eigenvectors are the delta functions [itex]\delta(p-p')[/itex], and in the spatial representation they are the plane waves [itex]e^{ixp/\hbar}[/itex]. If the Hamiltonian is forcibly made bounded by multiplying the operated function by [itex]\chi_{[-R,R]}(p)[/itex], then the same eigenvectors still work, but the eigenvalues are different: they are unchanged for [itex]-R\leq p\leq R[/itex], and go to zero for other [itex]p[/itex].
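Spelled out in the non-rigorous notation, this says
[tex]H_R\,\delta(p-p') ~=~ \frac{p'^2}{2m}\chi_{[-R,R]}(p')\,\delta(p-p'),[/tex]
so the "eigenvalue" of the plane wave with momentum [itex]p'[/itex] is [itex]p'^2/2m[/itex] for [itex]|p'|\leq R[/itex] and zero otherwise.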
12. Apr 5, 2009 #11
Staff Emeritus
Science Advisor
Gold Member
I don't see how this observation implies anything more than that H is unbounded, and we already knew that. :confused:
Edit: I do now, thanks to Jostpuur. See my next post.
It looks well-defined to me. It's just not injective on all of [itex]L^2(\mathbb R)[/itex] and I guess that means it's not self-adjoint on [itex]L^2(\mathbb R)[/itex].
[tex]H_R\psi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dp\ e^{-ipx}\frac{p^2}{2m}\tilde\psi(p)\chi_{[-R,R]}(p)=\frac{1}{2\pi}\int_{-R}^{R}dp\ e^{-ipx}\frac{p^2}{2m}\int_{-\infty}^{\infty}dx'\ e^{ipx'}\psi(x')[/tex]
We can take the expression on the right-hand side as the definition of [itex]H_R[/itex], and if we do, I think it's clear that this operator is well-defined. The middle expression implies that it's not injective: the integral doesn't depend on what values the Fourier transform [itex]\tilde\psi[/itex] takes outside of the interval [-R,R]. (LOL, the tilde is invisible in itex mode).
If I insert [itex]\psi(x')=e^{ikx'}[/itex] into the right-hand side of my equation above, I don't get a constant times [itex]e^{ikx}[/itex]. I get zero. But I might be doing something wrong.
Last edited: Apr 5, 2009
13. Apr 5, 2009 #12
In my opinion strangerep wrote a relevant comment concerning my example 1, but made a mistake with the example 2.
Recall that if an operator [itex]T:V\to V[/itex] between normed spaces is unbounded, this does not mean that [itex]\|T\psi\|=\infty[/itex] for some [itex]\psi\in V[/itex]; that would contradict the implicit assumption [itex]T(V)\subset V[/itex]. Instead it means that there is a sequence of vectors [itex]\psi_1,\psi_2,\psi_3,\ldots\in V[/itex] such that [itex]\|\psi_n\|\leq 1[/itex] for all n, and
[tex]\sup_{n\in\mathbb{N}} \|T\psi_n\| = \infty.[/tex]
When [itex]V[/itex] is a subset of some larger vector space in which vectors can have infinite norm, and the image point [itex]T\psi[/itex] is defined by some formula in this larger vector space, it can happen that [itex]\|T\psi\|=\infty[/itex] for some [itex]\psi\in V[/itex]. In this case we don't obtain an operator [itex]T:V\to V[/itex], but instead an operator [itex]T:D(T)\to V[/itex], where
[tex]D(T) = \{\psi\in V\;|\; \|T\psi\|<\infty\}[/tex]
is the domain of the operator. This is what happens in the example of the infinite potential well. The Hamiltonian is not defined on the entire Hilbert space [itex]L^2([0,L])[/itex], but only on some subspace. However, this subspace is dense in the Hilbert space.
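To connect this with the definition of unboundedness above (a standard computation): the normalized eigenfunctions [itex]\psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)[/itex] satisfy [itex]\|\psi_n\|=1[/itex], but
[tex]\|H\psi_n\| ~=~ \frac{\hbar^2 n^2\pi^2}{2mL^2} ~\to~ \infty \quad (n\to\infty),[/tex]
which exhibits exactly the kind of sequence that makes H unbounded, even though [itex]\|H\psi\|<\infty[/itex] for every [itex]\psi[/itex] in the domain.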
I don't think that I said anything wrong in my example 1, though. Despite the fact that the Hamiltonian is not defined on the entire Hilbert space, it has a sequence of orthogonal eigenvectors whose span is dense in the Hilbert space, so the Hamiltonian is diagonalizable there in a perfectly reasonable sense.
Last edited: Apr 5, 2009
14. Apr 5, 2009 #13
Staff Emeritus
Science Advisor
Gold Member
D'oh. This is what I missed. Thanks. [itex]\|H\psi\|[/itex] must be finite when the codomain of H is a Hilbert space, so strangerep's calculation does prove that H can't be defined on all of [itex]L^2([0,L])[/itex]. It's a proof by contradiction:
Assume that [itex]H=-(\hbar^2/2m)d^2/dx^2[/itex] is a linear operator from [itex]L^2([0,L])[/itex] into [itex]L^2([0,L])[/itex]. Then it's defined on the specific [itex]\psi[/itex] that strangerep defined, but that [itex]\psi[/itex] satisfies [itex]\|H\psi\|=\infty[/itex], and that contradicts the assumption that the range of H is a subset of [itex]L^2([0,L])[/itex].
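For reference, the theorem in question (Hellinger-Toeplitz) can be stated like this: if a linear operator A is defined on all of a Hilbert space H and satisfies [itex]\langle A\psi,\phi\rangle = \langle\psi,A\phi\rangle[/itex] for all [itex]\psi,\phi\in H[/itex], then A is bounded. Equivalently, an everywhere-defined symmetric operator is automatically bounded, so an unbounded symmetric operator cannot be defined on the whole space.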
15. Apr 5, 2009 #14
Science Advisor
I intended (but neglected to say) "in the limit [itex]R\to\infty[/itex]".
I've now edited my earlier post to fix this. Sorry for my lack of care.
16. Apr 5, 2009 #15
Staff Emeritus
Science Advisor
Gold Member
I have heard about it (e.g. in a reply you wrote to me months ago), but I haven't studied it yet. It's on page 250 of the functional analysis book I bought a while ago (Conway), and I have read very little of that book so far. It's going to take a while before I get there, so maybe we can skip the details (especially proofs), and just talk about what the point is. What is the point? I thought it had something to do with relativity and causality.
Edit: I think I see the point you were going for. I was asking about how to introduce observables into the theory, and you just meant that this is one way to define them. Is it a better way than to define the observables as self-adjoint operators on a separable Hilbert space?
Does the C*-algebra/GNS approach have anything to do with the RHS concept, or are the two unrelated? (They seem unrelated to me).
Spectral theorem...page 262...and that's probably not the most general one. It's about normal operators (A*A=AA*). I think it would be easier for me to learn the RHS stuff than to get through a whole book of functional analysis. (I think I know what I need to know about measures, integration and distributions).
Unfortunately no. It's one of several books that I'm thinking about buying, but I haven't done it yet. I checked it out after reading Demystifier's thread about it (I noticed you did too), and I think it looks great. Fortunately, the relevant pages are available at Google books.
I'm trying to find a specific set of statements that can be said to define the theory. I'm sure there are many different sets of statements that do the job (in the sense that each set is logically consistent and makes the same predictions about the results of experiments as the others). I'd like to see the simplest set of statements that can define the theory, and also the set of statements that's the easiest to generalize to the relativistic case.
I chose to ask specifically about non-relativistic QM of one spin-0 particle because it's the simplest of all relevant quantum theories, and I felt that it should be possible to define it in a pretty simple way. The traditional way (which is kind of sloppy) is to postulate among other things that states are represented by the rays of a (separable) Hilbert space (or specifically [itex]L^2(\mathbb R^3)[/itex]) and that the time evolution of a state is given by the Schrödinger equation. I think I would prefer to drop the explicit stuff about the Schrödinger equation, and instead postulate something about inertial observers and unitary representations of the Galilei group. This would give us both the Schrödinger equation and a definition of the Hamiltonian, the momentum operators and the spin operators (and probably the position operator too, but I haven't fully understood that part...something about central charges of the Lie algebra).
I'm also interested in how the axioms must be changed when we go from non-relativistic to special relativistic quantum mechanics, and finally to general relativistic quantum mechanics. (But we can ignore that last one in this thread :smile:).
There are some parts of Ballentine's explanation where I feel that he dumbs it down a bit too much, but I think I understand what he should have said instead, so it's not a problem. :smile: His explanation, combined with yours, is actually very helpful.
Hm, this part sounds familiar. I read the part about tempered distributions in Streater and Wightman recently, but I didn't try to understand every word. They defined a space of test functions that vanish faster than any power of x, and defined a tempered distribution to be a member of its dual space. The part I didn't understand was the exact definition of "vanish faster than any power of x". I'm going to read that part again, and see if I can understand it.
Is the bottom line here that the members of H* are distributions with H (square integrable functions) as the test function space, and that the members of [itex]\Omega^*[/itex] are tempered distributions? Hm, what you said to Jostpuur in #8 looks like a "yes" to that question.
I just realized that there's one small difference. The members of [itex]L^2(\mathbb R^3)[/itex] are not all infinitely differentiable, and test functions are usually assumed to be.
It seems a bit strange and complicated to define a sequence [itex]\Omega_n[/itex] instead of defining [itex]\Omega[/itex] right away, but then I didn't understand S & W on a first read, and they seem to go straight for [itex]\Omega[/itex] (if I remember correctly). Maybe that's why I didn't understand them.
That sounds interesting, but it's not even in my book. :smile:
I'm definitely going to have to learn the details then. I really appreciate your effort in this thread. I still don't get it completely, but I'm getting closer.
Last edited: Apr 5, 2009
17. Apr 5, 2009 #16
Science Advisor
Hmmm. That's a rather large question. It's not specifically about
relativity and causality, but more about constructing a quantum theory
starting from an algebra of observables, instead of starting from a
Hilbert space.
You can read the axioms for C*-algebras for yourself, but briefly,
they're a subclass of Banach *-algebras, which in turn are *-algebras
with a norm defined on every element satisfying certain extra axioms
(e.g., the norm is submultiplicative -- See the top of Wiki Banach
algebra page for what that means).
One then considers linear functionals in the dual space of this normed
algebra to arrive at a way of mapping observables to numbers, thus
getting a quantum theory.
When starting from a Heisenberg algebra, one usually employs the
regularized (Weyl) form of the canonical commutation relations to
banish the pathological behaviour that caused the need for RHS in the
other formalism. The GNS construction basically means being given a
vacuum vector (and its linear multiples, i.e., a 1D linear space), and
multiplying it by all the elements of the algebra to generate a full
Hilbert space (of course, I'm skipping lots of technicalities here).
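In barest outline (again skipping the technicalities): given a state
[itex]\omega[/itex] on a C*-algebra A, i.e. a positive linear functional
with [itex]\omega(1)=1[/itex], one defines a sesquilinear form
[tex]\langle a,b\rangle ~:=~ \omega(a^* b),[/tex]
quotients out the null vectors [itex]\{a : \omega(a^* a)=0\}[/itex], and
completes to obtain a Hilbert space. The algebra then acts on this
space by left multiplication, [itex]\pi(a)[b] = [ab][/itex], and the
equivalence class of the identity plays the role of the vacuum vector
mentioned above.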
Of which book? (I don't have Conway.)
Lax does the same, and stops short of talking about RHS and the
Gelfand-Maurin generalization. (Looking at the index, I couldn't
even find "nuclear spaces" mentioned.) Reed & Simon Vol-1 talk
about nuclear stuff, but in the context of tempered distributions.
That's why I asked a question here a while ago about proofs of the
G-M theorem. Eventually I got hold of Maurin's book, difficult
though it is.
Specific examples of the [itex]\Omega, \Omega^*[/itex] are indeed
(respectively) the Schwartz space of test functions, and its dual space
of tempered distributions. The important thing about functional
analysis is that it abstracts away from specific spaces to general
properties of infinite-dimensional linear spaces, hiding a lot of messy
integration stuff by expressing things as linear operators instead.
Although I too found functional analysis quite challenging and
bewildering initially, I've now come to prefer it immensely and I only
drop back to explicit integral stuff when considering specific examples.
Functional analysis is an essential tool for the mathematical
physicist, imho.
Yes, so far. It boils down to: (1) pick an algebra of observables (actually
their universal enveloping algebra), and (2) find all unitary irreducible
representations of this algebra. (3) Construct tensor-product spaces thereof.
The details fill many books of course. Weinberg takes this approach
(more-or-less) in his volumes.
Ah, the position operator (and localization) can get tricky. It's not
too bad for the Galilei case (Ballentine covers it), but constructing a
relativistic position operator is still controversial and problematic.
No, H is self-dual. The test function space (Schwartz space) is an
example of my [itex]\Omega[/itex] space (i.e., a dense subspace of H).
Yes to that part.
This (and the sequence of progressively stricter norms I mentioned
originally) are just a rigorous way to define and generalize the notion
of "...functions vanishing faster than any power of x...".
The Wiki page on "nuclear space" has a bit more, though it doesn't
mention the G-M theorem. Try to find Maurin's book if you can.
(or maybe vol-4 in the series by Gelfand & Vilenkin -- I couldn't
obtain the latter, but many authors reference it).
Edit: I just remembered... there's some old ICTP lecture
notes by Maurin on this stuff, available as:
It covers a lot of the theorems, but skips the lengthy proofs.
Last edited: Apr 5, 2009
18. Apr 5, 2009 #17
Science Advisor
I don't see why it's necessary to go beyond Hilbert space. Rather than defining a position operator, we could define projection operators with eigenvalue 1 if the particle is in some particular volume V, and 0 otherwise; heuristically, these would be
[tex]P_{x\in V} \equiv \int_V d^3\!x\,|x\rangle\langle x|[/tex]
Similarly for a volume in momentum space. Then, instead of defining a hamiltonian whose action could take a state out of the Hilbert space, we could define a unitary time evolution operator.
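Note that this P is just a bounded multiplication operator of the kind discussed earlier in the thread: in position space it acts as
[tex](P_{x\in V}\,\psi)(x) ~=~ \chi_V(x)\,\psi(x),[/tex]
so it satisfies [itex]\|P_{x\in V}\|\leq 1[/itex], [itex]P_{x\in V}^2 = P_{x\in V}[/itex] and [itex]P_{x\in V}^\dagger = P_{x\in V}[/itex], as a projection should.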
19. Apr 6, 2009 #18
George Jones
Staff Emeritus
Science Advisor
Gold Member
This is a very interesting thread in which I would like to participate actively, and from which I would like to learn, but I'm too busy with work for the next week or two to do the necessary reading and thinking.
Even for a non-relativistic particle in a box, there is a lot of technical "grit". The position operator doesn't have any eigenstates that live in the Hilbert space of states; it only has distributional eigenstates. Also, the momentum operator is unbounded, and thus, by the Hellinger-Toeplitz theorem (as already posted by strangerep), the momentum operator cannot act on all the states in the Hilbert space of states.
This is an example of something much more general. If two self-adjoint operators satisfy a canonical commutation relation, then it is easy to show that at least one of the operators must be unbounded.
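The standard argument (due to Wielandt and Wintner) is short enough to sketch here: suppose X and P are both bounded and [itex][X,P]=i\hbar[/itex]. By induction,
[tex][X,P^n] ~=~ i\hbar\, n\, P^{n-1},[/tex]
and no power of P can vanish (if [itex]P^k=0[/itex] with k minimal, then [itex]i\hbar k P^{k-1}=[X,P^k]=0[/itex] forces [itex]P^{k-1}=0[/itex], contradicting minimality). Taking norms gives [itex]n\hbar\,\|P^{n-1}\| \leq 2\|X\|\,\|P\|\,\|P^{n-1}\|[/itex], hence [itex]n\hbar \leq 2\|X\|\,\|P\|[/itex] for every n, which is impossible.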
I think you're referring to the fact that (unlike the case for the Poincare group) non-relativistic quantum mechanics deals with representations of a central extension of the Galilean group, not with representations of the Galilean group. This is related to mass in non-relativistic quantum mechanics. Ballentine never uses the term "central extension," but, unlike most (all?) standard quantum mechanics texts, he does give a non-rigorous version. See: page 73, Multiples of identity (c); page 76; pages 80-81.
I think that you would like chapter 9, Generalized Functions, from the book Fourier Analysis and Its Applications by Gerald B. Folland.
I think that it's just a matter of taste whether one uses Hilbert spaces or rigged Hilbert spaces as a rigorous basis for quantum mechanics. For example, Reed and Simon write (in v1 of their infamous work):
"We must emphasize that we regard the spectral theorem as sufficient for any argument where a nonrigorous approach might rely on Dirac notation; thus, we only recommend the abstract rigged space approach to readers with a strong emotional attachment to the Dirac formalism."
I also think that the reason for the popularity of the Hilbert space approach is historical.
In the early 1930s, before the work of Schwartz and Gelfand on distributions and Gelfand triples, von Neumann came up with a rigorous Hilbert space formalism for quantum theory.
I think if a rigorous rigged Hilbert space version of quantum theory had come along before the rigorous Hilbert space version, then the Hilbert space version might today be even less well-known than the rigged Hilbert space version actually is. Students would now be hearing vague mutterings about "making things rigorous with Gelfand triples," instead of vague mutterings about "making things rigorous with Hilbert spaces."
Last edited: Apr 6, 2009
20. Apr 6, 2009 #19
In my understanding, the reason why the standard Hilbert space formalism is not suitable for QM is rather simple. Let's say I want to define an eigenfunction of the momentum operator. In position space such an eigenfunction has the form (I work in 1D for simplicity)
[tex]\psi(x) = N \exp(ipx)[/tex]
where [tex]N [/tex] is a normalization factor. This wavefunction must be normalized to unity, which gives
[tex] 1 =\int \limits_V |\psi(x)|^2 dx = N^2V[/tex]
where [tex]V[/tex] is the "volume of space", which is, of course, infinite. This means that the normalization factor is virtually zero
[tex] N = 1/\sqrt{V}[/tex]
So, the value of the wavefunction at each space point is virtually zero too, and so it can't belong to the Hilbert space. But the wavefunction is not EXACTLY zero, because its normalization integral is equal to 1. So, here we are dealing with resolving indeterminate expressions like "zero times infinity".
As far as I know, there is a branch of mathematics called "non-standard analysis", which tries to assign a definite meaning to such "virtually zero" or "virtually infinite" quantities and to define mathematical operations with them. I guess that using methods of non-standard analysis in quantum mechanics could be an alternative solution for the "improper states" in QM (instead of the rigged Hilbert space formalism). Has anyone heard about applying non-standard analysis to QM?
21. Apr 6, 2009 #20
Staff Emeritus
Science Advisor
Gold Member
I'm still a bit confused by distributions and tempered distributions. Let's see if we can sort this out.
We define D to be the set of all [itex]C^\infty[/itex] functions from [itex]\mathbb R^n[/itex] to [itex]\mathbb C[/itex] with compact support. (This D isn't used in the construction of a rigged Hilbert space. I'm defining it just for completeness). We say that [itex]\phi_n\rightarrow\phi[/itex] if there's a compact set K that contains the supports of all the [itex]\phi_n[/itex], and every [itex]D^\alpha\phi_n[/itex] converges uniformly on [itex]\mathbb R^n[/itex] to [itex]D^\alpha\phi[/itex]. Here we're using the notation
[tex]D^\alpha f(x)=\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}f(x)[/tex]
The members of D are called test functions. Now we define a distribution as a linear function [itex]T:D\rightarrow\mathbb C[/itex] which is continuous in the following sense: [itex]T\phi_n\rightarrow T\phi[/itex] whenever [itex]\phi_n\rightarrow\phi[/itex] in the sense defined above.
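(The standard example, just to anchor the definition: the delta functional [itex]T\phi=\phi(0)[/itex] is linear, and it is continuous in this sense, since uniform convergence of the [itex]\phi_n[/itex] already implies [itex]\phi_n(0)\rightarrow\phi(0)[/itex].)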
I think I would prefer to do it a bit differently (if the following is in fact equivalent to the above, but it might not be). We define an inner product and the associated norm by
[tex]\langle f,g\rangle=\int_{\mathbb R^n} \bar f g\ d\mu[/tex]
where [itex]\mu[/itex] is the Lebesgue measure on [itex]\mathbb R^n[/itex]. Now we define the space of distributions to be the dual space of D. Is this definition equivalent to the first?
e7af23e34e11a3b2 | Time, Magic and the Self (I/III)
January 24, 2013 § Leave a comment
There is.
Isn’t it? Would you agree? Well, I would not. In other words, to say ‘There is.’ is infinitesimally close to a misunderstanding. Or to a neglect, if you prefer. It is not a missing referent, though, at least not in the first instance. The problem would be almost the same if we had said ‘There is x’. It is the temporal aspect that is missing. Without considering the various aspects of the temporality of the things that build up our world, we could understand neither the things nor the world.
Nowadays, the probability of finding some agreement with such a claim is somewhat higher than it once was, in the high tide of modernism. For most urbanists and architects, time was nothing but a somewhat cumbrous parameter, of no deeper structural significance. The modern city was a city without time: it broke with the traditions without even creating new ones. Such was the claim, which is aptly demonstrated by Simon Sadler [1] citing Ron Herron, a member of Archigram.
“Living City”1 curator Ron Herron described his appreciation of “Parallel of Life and Art”: It was most extraordinary because it was primarily photographic and with apparently no sequence; it jumped around like anything.
Unfortunately, and beyond the mere “functioning,” the well-organized disorganization itself became a tradition. Koolhaas called it Junkspace [2]. Astonishingly, and not quite compatible with the admiration of dust-like scatterings that negate relationality, Archigram claims to be interested in, if not focused on, life and behavior. Sadler summarizes (p.55)
“Living City” and its catalogue were not about traditional architectural form, but its opposite: the formlessness of space, behavior, life.
Obviously, Sadler himself is not quite aware of the fact that behavior is predominantly a choreography, that is, it is about form and time as well as form in time. The concepts of form and behavior as implied by Archigram’s utopias are indeed very strange.
Basically, the neglect of time beyond historicity is typical of modern and modernist architects, urbanists and theorists up to our days, including Venturi [3], Tschumi [4] or Oswald [5]. Even Koolhaas does not refer to it expressis verbis, albeit he is constantly in a close orbit around it. This is astonishing, since key concepts in the immediate neighborhood of time, such as semiotics, narration or complexity, are indeed mentioned by these authors. Yet, without a proper image of time one remains on the level of mere phenomena. We will discuss this topic of time on the one side and architects and architecture on the other later in more detail.
Authors like Sigfried Giedion [6] or Aldo Rossi [7] didn’t change much concerning the awareness of time in the practice of architecture and urbanism. Maybe partly because their positions have been more self-contradictory than consistent: on the one hand they demanded a serious consideration of time, on the other hand they still stuck to a rather strong rationalism. Rationalist time, however, is much less than half of the story. Another salient reason is certainly given by the fact that time is a subject that is notoriously difficult to deal with. As Mike Sandbothe cites Paul Ricoeur [8]:
Ultimately, for Ricoeur time marks the „mystery“ of our thinking, which resists representation by encompassing our Dasein in a way that is ineluctable for our thinking.2
This Essay
One of the large hypotheses that I have been following across the last essays is that we will not be able to understand the Urban3 and architecture without a proper image of differentiation. Both parts of this notion, the “image” and the “differentiation”, need some explication.
Although “differentiation” seems to be similar to change, the two are quite different from each other, the main reason being that differentiation comprises an activity, which, according to Aristotle, has serious consequences. Mary Louise Gill [9] summarizes his distinction as follows:
Whereas a change is brought about by something other than the object or by the object itself considered as other (as when a doctor cures himself), an activity is brought about by the object itself considered as itself. This single modification yields an important difference: whereas a change leads to a state other than the one an object was previously in, an activity maintains or develops what an object already is.4
In other terms, change is proposed to be relatively unconstrained, hence implying less memory and historicity, while activity, or active differentiation, implies a greater weight of historicity, less contingency, increased persistence, and thus an increased intensity of being in time.
Besides this fundamental distinction we may discern several modes of differentiation. The question then is how to construct a proper “whole” out of them. Obviously we can think of different such compound “wholes,” which is the reason for our claim that we need a proper image of differentiation.
Now to the other part of the notion of the “image of differentiation”: the image. An “image” is much more than a “concept.” It is more like a diagram about the possibility of applying the concept, the structure of its use. The aspect of usage is, of course, a crucial one. Actually, with respect to the relation between concepts and actions we identified the so-called “binding problem”. The binding problem claims that there is no direct, unmediated way from concepts to actions, or the reverse. Models are needed, both formalizable structural models, which are closer to concepts, and anticipatory models, which are closer to the implementation of concepts. The operationalization of concepts may be difficult. Yet, action that does not aim to make contact with concepts is simply meaningless. (This is the reason for the emptiness of ‘single case’ studies.) Our overall conclusion regarding the binding problem was that it is the main source of frictions and even failure in the control and management of society, if it is not properly handled, if concepts and actions are not mediated by a layer of “Generic Differentiation.” Only the layer of “Generic Differentiation”, with its possibility for different kinds of models, can provide the basic conditions to speak about and to conceive any of the mechanisms potentially relevant for the context at hand. As such, the binding problem is probably one of the most frequent causes of the many difficulties concerning understanding, designing and dealing with the Urban, or its instances: the concrete city, the concrete settlement or building, the concrete neighborhood.
This transition between concept and action (or vice versa) can’t be fully comprised by language alone. For certain reasons we need a diagram. “Generic Differentiation”, comprising various species of probabilistic, generalized networks, is conceived as part of a larger compound—we may call it “critical pragmatics”—as it mediates between concepts and actions. Finally we ended up with the following diagram.
Figure 1: “Critical Pragmatics for active Subjects.” The position of Generic Differentiation is conceived as a necessary layer between the domains of concepts and actions, respectively. See text below for details and the situs where we developed it.
Note that this diagram shows just the basic module of a more complete diagram, which in the end would form a moebioid fractal due to self-affine mapping: this module appears in any of the three layers in a nested fashion. Hence, a more complete image would show this module as part of a fractal image, which however could not be conceived as a flat fractal, such as a leaf of fern.5 The image of pragmatics as it is shown above is, first, a fractal due to the self-affine mapping. Second, however, the instances of the module within the compound are not independent, as they are in the case of the fern. Important traces of the same concepts appear at various levels of the fractal mapping, leading to dimensional braids, in other words to a moebioid.
So, as we are now equipped to approach it, let us return to the necessity of considering the various aspects of temporality. What are they in general, and what are they in the case of architecture, the city, the Urban, or Urban Reason? Giedion, for instance, related to time with regard to historicity and with regard to an adaptation of the concept of space-time from physics, which at that time was abundantly discussed in science and society. This adaptation, according to Giedion, can be found in simultaneity and movement. A pretty clear statement, one might think. Yet, as we will see, he conceived of these two temporal forms of simultaneity and movement in a quite unusual way that is not really aligned with the meaning they bear in physics.
Rossi, focusing more on urban aspects, invokes quite divergent concepts of time, though he did not clearly distinguish or label them. He too refers to history, but he also says that a city has “many times” (p.61 in [7]), a formulation reminiscent of Bergson’s durée. Given the cultural “sediments” of a city within itself, its multiply folded traces of historical times, such a proposal is easy to understand; everybody could agree upon it.
Besides the multiplicity of referential historical time—we will make the meaning of this clearer below—Rossi also implicitly proposes a locality of time through the acceleration of urbanization by primary elements such as “monuments”, or buildings that own a “monumental” flavor. Unfortunately, he neither refers to an existing operationalization of his concept of time nor provides his own. In other words, he still refers to time only implicitly, by describing the respective changes and differentiations on an observational level.
These authors’ proposals provide important hints, no doubt. Yet, we certainly have to clarify them from the perspective of time itself. This amounts, firstly, to an inversion of the perspective: the architectural or urbanistic vantage point taken by Giedion and Rossi started, in both cases, from built matter. Before turning to architecture, we have to be clear about time. As a second consequence, we have to be cautious when talking about time. We have to uncover and disclose the well-hidden snares before we push the investigation of the relation between temporality and architecture further.
For instance, both Giedion and Rossi delivered an analysis. This analyticity results in a pair of consequences: either it is, firstly, useful only for sorting out the past, but not for deriving schemes for synthesis and production, or, secondly, it requires an instantiation that would allow one to utilize the abstract content of their analysis for taking action. Such an instantiation could produce hints for a design process that is directed to the future. Yet, neither Giedion [6] nor Rossi [7] provided such schemes, most likely precisely because they did not refer to a proper image of time!
This essay is the first of two in a row about the “Time of Architecture”. As Yeonkyung Lee and Sungwoo Kim [10] put it, there is much need for its investigation. In order to do so, however, one has to be clear about time and its conception(s). Insofar as we attempt to trace time as a property of architecture rather than as an accessory, we also have to try to liberate time from its distinctive link to human consciousness without sacrificing the applicability of the respective conception to the realm of the human.
Hence, the layout of this essay is straightforward.
(a) First we will introduce a synopsis of various conceptions of time, as brief as possible, taking into account a few of the most salient sources. This will equip us with possible distinctions about modes or aspects of time, as well as the differences between, and interdependencies of, time and space.
In architecture and urbanism, almost no reference can be found to the philosophical discourses about time. Things are handled intuitively, leading to interesting but not particularly valuable or usable approaches. We will see that the topic of “time” raises some quite fundamental issues, reaching at least into the fields of hermeneutics, semiotics, narratology, and of course philosophy as well. The result will be a more or less ranked list of images of time, as far as such a list is possible from a philosophical vantage point.
(b) Against the background of this explication and an awareness of all the possible misunderstandings around the issue of time, we will introduce a radically different perspective. We will ask how nature “creates time”. More precisely, we will ask about the abstract elements and mechanisms that are suitable for “creating time.” As weird as this may seem at first, I think it is even a necessary question. And for sure nobody has ever posed this question before (outside of esoterics, perhaps, but we do not engage in esoterics here!).
The particularity of this approach is that the proposed structure would work both as a basis for deriving an operationalization for the interpretation of material systems and as an abstract structure for a foundation of philosophical arguments about time. Of course, we have to be very careful here in order to avoid falling back into naturalist or phenomenological naiveties. Yet, carefulness will allow us to blend the several perspectives on time into a single one, without—and that’s pretty significant—reducing time to either space or to formal exercises like geometry. Thus the reward will be a completely new image of time, one that is much more general than any other and which overcomes the traditional separations, for instance the one that pulls apart physical time and the time of experience. Another effect will be that the question about the origin of time will vanish, a question which is continuously being discussed in cosmology (and perhaps in theology as well).
(c) From the new perspective we will then revisit architecture and the Urban (in the next essay). We will not only return to Giedion, Rossi, or Koolhaas, but also revisit the “Behavioral Turn” that we introduced some essays ago.
Displayed in condensed form, our program comprises the following three sections:
• (a) Time itself as a subject of philosophy.
• (b) The creation of time.
• (c) Time of Architecture.
Before we start, a few small remarks are in order. First, it may well appear somewhat presumptuous to try to handle time in sufficient depth within just one or two sections of a single essay. I am fully aware of this. Yet, the pressure to condense the subject matter also helps to focus, to achieve a structural picture on the large scale. Second, it should nevertheless be clear that we can’t provide a comprehensive overview or summary of the various conceptions of time in philosophy and science, as interesting as this would have been. It would exceed even the possibilities of a sumptuous book. Instead, I will lay out my arguments by means of a purposeful selection, enriched with some annotations.
On the other hand, this will provide one of the very rare comprehensive inquiries about time, and the first one that synthesizes a perspective that is backward compatible with those authors with whom it should be.
Somewhat surprisingly, this could even include (theoretical) physics. Yet, the issue is quite complex and very different from the mainstream, versions of which you may find in [27, 28]. Even though there are highly interesting and quite direct links to philosophy, I decided to put this into a separate essay, which hopefully will appear soon. Just to give you a tiny glimpse of it: once John Wheeler called his doctoral student Richard Feynman in the middle of the night to tell him that he knew why all electrons have the same charge and the same mass. According to the transmission, his answer was: there is exactly one of them. Sounds odd, doesn’t it? Nevertheless it may be that there are indeed only a few of them, according to Robbert Dijkgraaf, who also proposes that space-time is an emergent “property,” while information could be conceived as more fundamental than either. This, however, has a rather direct counterpart in the metaphysics of Spinoza, who claimed that there is only one single attribute. Or (and that is not meant as immodesty) take the conception of information that we described earlier. Anyway, you may have got the point.
The sections in the remainder of this essay are the following. Note that in this piece we will provide only chapters 1 and 2. The other chapters, from “Synthesis” onwards, will follow as a separate piece.
1. Time in Philosophy—A Selection
Since antiquity people have distinguished two aspects of time. It was only in the course of the success of modern physics and engineering that this distinction was forgotten in the Western world’s common sense. The belief set of modernism, with its main pillar of metaphysical independence, may have contributed to this as well. Anyway, the ancient Greeks assigned to these two aspects the two gods Chronos and Kairos. While the former referred to measurable clock-time, the second denoted the opportune time. The opportune time is a certain period of time that is preferable for accomplishing an action, argument, or proof, and which includes all parts and parties of the setting. The kairos clearly exceeds experience and points to the entirety of consummation. The advantage of taking into account means and ends is accompanied by the disadvantage of a significant inseparability.
Aristotle, of course, developed an image of time that is much richer, more detailed and much less mystical. For him, change and motion are a priori to time [11]. Aristotle is careful to conceive change and motion without reference to time, which then gets determined as “a number of change with respect to the before and after” (Physics 219 b 1-2). Hence, it is possible for him to conceive of time as essentially countable, whereas change is not. Here, it is also important to understand Aristotle’s general approach of hylemorphism, which states that—in a quite abstract sense—substance always consists of a matter-aspect and a form-aspect [11]. So also for time. For him, the matter-aspect is given by its kinetic, which includes change, while the form-aspect shows up in a kind of order6. Time is a kind of order and not, as is commonly supposed, a kind of measure, as Ursula Coope argues [13]. Aristotle’s use of “number” (arithmos) is more a potential for extending operations, as opposed to “measure” (metron), which is imposed on the measured. Hence, “order” does not mean that this order is necessarily monotone. It is a universal order within which all changes are related to each other. Of course, we could reconstruct a monotone order from that, but, as said, it is not a necessity. Another remarkable consequence of Aristotle’s conception is that without a counting instance—call it observer or interpretant—there is no time.
This role of the interpreter is further explicated by Aristotle with respect to the form of the “now”. Roark summarizes that we have to understand that
[…] phantasia (“imagination”) plays a crucial role in perception, as Aristotle understands it, and therefore also in his account of time. Briefly, phantasia serves as the basis for both memory and anticipation, thereby making possible the possession of mental states about the past and the future. (p.7)
Actually, the most remarkable property of Aristotle’s conception is that he is able to overcome the duality between experience and physical time by means of the interpretant.
It is not by chance alone that Augustine denied the Aristotelian conception by raising his infamous paradox about time. He does so from within Christian cosmogony. First he argues that the present vanishes if we try to take a close look at it. Then he claims that both past and future are available only in the present. The result is that time is illusory. Many centuries later, Einstein would make the same claim. Augustine transposed the problem of time into one of the relation between the soul and God. For him, no other “solution” would have been reasonable. Augustine instrumentalises a misunderstanding of references, established by mixing incompatible concepts (or language games). Unfortunately, Augustine inaugurated a whole tradition of nonsense, finally made persistent by McTaggart’s purported proof of the illusion of time [14], where he extended Augustine’s already malformed argument into deep nonsense, creating on the way the distinction between the A-series (past, present and future) and the B-series (earlier, later) of time. It is perpetuated until our days by authors like Oaklander [15][16] or Power [17]. Actually, the position is so nonsensical and misplaced—Bergson called it a wrong problem, Wittgenstein a grammatical mistake—that we will not deal with it further7.
Heidegger explicitly refers to phenomenology as it was shaped by Edmund Husserl. Yet, Heidegger recognized that phenomenology—as well as the implied ontology of Being—suffers from serious defects. Thus, we have to take a brief look at it.
With the rise of phenomenology towards the end of the 19th century, the dualistic mapping of the notion of time was reintroduced and reworked. Usually, a distinction has been made between clock-time on the one hand and experiential time on the other. This may indeed be regarded as quite similar to the ancient position. Yet, philosophically it is not interesting merely to state this. Instead, we have to ask about the relation between the two. The same applies to the distinction between time and space.
There are two main positions dealing with this dualism: on the one side we find Bergson, on the other Brentano and Husserl as founders of phenomenology. Both refer to consciousness as an essential element of time. Of course, we should not forget that this is one of the limitations we have to overcome if we want to achieve a generalized image of time.
Phenomenology suffers from a serious defect, which is given by the assumption of subjects and objects as apriori entities. The object is implied as a consequence of the consciousness of the subject, yet this did not result in a constructivism à la Maturana. Phenomenology, as an offspring of 19th century modernism and a close relative of logicism, continued and radicalized the tendency of German Idealism to think that the world could be accessed “directly”. In the words of Thomas Sheehan [19]:
And finally phenomenology argued that the being of entities is known not by some after-the-fact reflection or transcendental construction but directly and immediately by way of a categorical intuition.
There are two important consequences of that. Firstly, it violates the primacy of interpretation8 and has to assume a world-as-such, which in other words translates into a fundamentally static world. Secondly, there is no relation between two appearances of an object across time.
Heidegger, in “Being and Time” [21] (original “Sein und Zeit” [22]), tried to correct this defect of phenomenology and ontology by a hermeneutic transformation of phenomenology. This would remove the central role of consciousness, which is replaced by the concept of the “Being-there” (Dasein) and so by the “Analysis of Subduity.” He clearly states (end of §3 in “Being and Time”) that any ontology has to be fundamental ontology. The Being-there (Dasein), however, needs—in order to be able to see its Being—temporality.
The fundamental ontological task of the interpretation of being as such, therefore, includes working out the Temporality of being. The concrete answer to the question of the sense of being is given for the first time in the exposition of the problematic of Temporality. ([22], p.19)
How is temporality described? In §65 Heidegger writes:
Coming back to itself futurally, resoluteness brings itself into the Situation by making present. The character of “having been” arises from the future, and in such a way that the future which “has been” (or better, which “is in the process of having been”) releases from itself the Present. This phenomenon has the unity of a future which makes present in the process of having been; we designate it as “temporality”.9
Time clearly “delimits” Being as a conditioning horizon:
[…] we require an originary explication of time as the horizon of the understanding of being in terms of temporality as the being of Dasein who understands being. ([22], p.17)
Heidegger examines thoroughly the embedding of Being-there into time and the conditioning role of “time.” For instance, we can understand a tool only with respect to its future use. Temporality itself is seen as the structure of “care”, a major constitutive of the being of Dasein, which similarly to anticipation carries a strong reference to the future:
The originary unity of the structure of care lies in temporality” ([22], p.327).
Temporality is the meaning and the foundation of Being.10 Temporality is an Existential. Existential analysis claims that Being-there does not fill space, it is not within spatiality (towards the end of §70):
Only on the basis of its ecstatico-horizontal temporality is it possible for Dasein to break into space. The world is not present-at-hand in space; yet, only within a world does space let itself be discovered. The ecstatical temporality of the spatiality that is characteristic of Dasein, makes it intelligible that space is independent of time; but on the other hand, this same temporality also makes intelligible Dasein’s ‘dependence’ on space—a ‘dependence’ which manifests itself in the well-known phenomenon that both Dasein’s interpretation of itself and the whole stock of significations which belong to language in general are dominated through and through by ‘spatial representations’. This priority of the spatial in the Articulation of concepts and significations has its basis not in some specific power which space possesses, but in Dasein’s kind of Being. Temporality is essentially deteriorating11, and it loses itself in making present; […]
This concept of temporality could have been used to overcome the difference between “vulgar time” (chronos) and experiential time, to which he clearly subordinated the former. Well, “could have been”, if Heidegger’s program had been completable. But Heidegger finally failed; “Being and Time” remained fragmentary. There are several closely related reasons for this failure. Ultimately, perhaps, as Cristina Lafont [24] argues, it is impossible to engage in a radical program of detranscendentalization and at the same time to try to achieve a fundamental foundation. This pairs with the inherited phenomenological habit of disregarding the primacy of interpretation. The problem for Heidegger now is that the sign in language is already in the world which has to be subdued. As Lafont brilliantly revealed, Heidegger still adheres to the concept of language as an “ontic” instrument, as something that is found in the outer world. Yet, this must count simply as a highly inappropriate reduction. Language constantly, and in a refracted way, points towards the inwardly settled translation between body and thought and the outward-directed translation between thought and community (of speakers), while translation is also a kind of rooting. Thus we can conclude that ultimately Heidegger still follows the phenomenological subject-object scheme. His attempt at a fundamental foundation while avoiding any reference to transcendent horizons must fail, even if this orientation towards the fundamental pretends to serve just as an indirect “foundation” (see below).
There is a striking similarity between Augustine and Heidegger. We could call it metaphysical linearity as a cosmological element. In the case of Augustine it is induced by the belief in Salvation, in the case of Heidegger by the belief in an absolute beginning, paired with an (implicit) belief in the possibility of stepping out of language. In a lecture held in 1962, that is, 35 years after Being and Time, titled “Time and Being” [25], Heidegger revisits the issue of time. Yet, he simply capitulated before the problem of foundations, referring to “intuitional insight” as a foundation. In this speech he said
To think the Being in its own right requires to dismiss Being as the originating reason of being-Being (des Seienden), in favor of the Giving that is coveredly playing in its Decovering (Entbergen), i.e. of the “There is as giving fateness.”12 (p.10)13
Here, Heidegger refutes foundational ontology in favour of the communal and external world through the concept of the Giving14. Yet, the step towards the communal still remains a very small step, since now the Other, too, gets depersonalized as far as possible. The really serious issue here is that Heidegger now replaces the ontological conception of “ontic” language by the “ontic” communal. He still does not understand the double articulation of the communal through language. We may say that Heidegger is struck by blindness (in his right eye).
Inga Römer [47] detects a certain kind of archaism throughout the philosophy of Heidegger, which comes along as a still undefeated thinking about origins.
Finally, in „Being and Time“ Heidegger detects the origin of time in the event, which he dedicatedly determines as the provider [m: the Giving] of Being and time. This Giving is seen as being divested from itself. The event, determined by Heidegger elsewhere as a singular tantum, is eliminated from itself—and nevertheless the event is conceived as the origin of time.15 (p.289)
Many years after the publication of “Being and Time”, in the context of the seminar “Time and Being”, Heidegger claimed that he had not conceived fundamental ontology as a kind of foundation. He described the role of the Dasein-analytics as proposed in “Being and Time” in the following way [23]:
Being and Time is in fact on the way to find, taking the route through the timeness of Dasein in the interpretation of Being as temporality, a conception of time, that Owned of “time”, whence “Being” reveals itself as Presenting. Such however it is said that the fundamental mentioned in the fundamental ontology can’t take reference and synthesis. Instead, the whole analytics of Dasein ought to be repeated, subsequent of possibly having thrown light upon the sense of Being, in a more pristinely and completely different manner.16
Indeed, “Being and Time” remained fragmentary. Heidegger recognized the inherent incompatibility of the still transcendental alignment with the conception of Dasein and was hence forced to shift the target of the Dasein-analytics [26](p.99). Being is no longer addressed from the vantage point of being-Being (Seiendes). This resulted in a replacement of the sense of Being by the question about the historical truth of Being as fateness. In the course of that shift, however, temporality lost its role, too, and was replaced by a thinking of a historized event. This event is conceived as a kind of non-spatial endurance [25]:
Time-Space (m: endurance) now denotes the open that in the mutually-serving-one-another of arrival, having been (Gewesenheit) and present clears itself. Only this open spacingly allows (räumt ein) the ordinarily known space its propagation. (p.19)17
As much as this move could be taken as a cure for the methodological problems in “Being and Time,” it turned out, however, to be quite detrimental to Heidegger’s whole philosophy. He was forced to determine man by his ecstatic exposition and his being-thrown (tossed?) into nothingness. Care as a kind of cautious anticipation was replaced first by angst, then by incurable disgust through Sartre. While the early Heidegger precisely tried to cure the missing primal relationality of phenomenology, the later Heidegger got trapped by an even more aggressive form of singularization and a denial of relationality altogether. The whole enterprise of existential philosophy suffers from this same deep disrespect, if not abhorrence, of the communal, of the practice of joyfully sharing a common language, which turns into the Archimedean point of being human. Well, how could he have thought differently, given his particular political aberrancy?
Anyway, Heidegger’s shift to endurance brings us directly to the next candidate.
Politically, in real life, Heidegger and Bergson could not have been more different: the former more than sympathizing (up to open admiration) with totalitarianism in the form of Hitlerism and fascism, thereby matching his performative rejection of relationality; the latter engaging internationally in forming the precursor of the UN.
But how does Bergson’s approach to time look? For Bergson, logicism and the subject-object dichotomy are alien thoughts. Both actually have to assume a sequential order whose genesis has yet to be demonstrated.18 The starting point for Bergson is the diagnosis that measurable time, or likewise the measuring of time, as it is done in physics as well as by any clock, introduces homogeneity, which in turn translates into quantifiability [31]. As such, time is converted into a spatial concept, as these properties are also properties of space as physics conceives it. The consequence is that we create pseudo-paradoxes like the one explicated by Augustine. To this factum of quantifiability Bergson then opposes qualitability. For him, quality and quantity remain incommensurable throughout his works.
At any rate, we cannot finally admit two forms of the homogeneous, Time and Space, without first seeking whether one of them cannot be reduced to the other […] Time, conceived under the form of an unbounded and homogeneous medium, is nothing but the ghost of space, haunting the reflective consciousness. ([32] p. 232)
So we can fix that time is essentially a qualitative entity, or in other words an intensity, which is, according to Bergson, opposed to the extensity of spatial entities. Spatial entities are always external to each other, while for intensive entities—such as time—such an externalization is not possible. They can be thought only as a mutually interpenetrating beside-one-another, which however should be thought as an aterritorial “beside”. As Friedrich Kuemmel puts it, intensity, for Bergson, can be detached from extensity.19 Intensity is then equipped by Bergson with a manifoldness or multiplicity that consequently establishes a reality apart from physical spatiality and its measurable time. This reality is the reality of consciousness and the soul. Bergson calls it “durée”, which of course must not be translated as “duration” (or as the German “Dauer”). Durée is more like the potential for communicable time, or in Deleuze’s words, a “potential number” ([33] p.45), to which we can refer in language literally as “referential time.”
Bergson’s notion of durée is quite easily determined (p.37)
It [durée] is a case of “transition,” of a “change,” a becoming, but it is a becoming that endures, a change that is substance itself. […] Bergson has no difficulty in reconciling the two fundamental characteristics of duration; continuity and heterogeneity. However, defined in this way, duration is not merely lived experience; […] it is already a condition of experience.
As a qualitative multiplicity, durée is opposed to quantitative multiplicity. For Bergson, this duality is a strict and unresolvable one, yet it does not set up an opposition; it is not subject to dialectic. It does, however, follow the leitmotif of Bergson, according to Deleuze ([33] p.23): people see quantitative differences where there actually are differences in kind. (RRR)
Deleuze emphasizes that the two multiplicities have to be strictly distinguished ([33] p.38).
[…] the decomposition of the composite reveals to us two types of multiplicity. One is represented by space […]: it is a multiplicity of exteriority, of simultaneity, of juxtaposition, of order, of quantitative differentiation, of difference in degree; it is a numerical multiplicity, discontinuous and actual. The other type of multiplicity appears in pure duration: It is an internal multiplicity of succession, of fusion, of organization, of heterogeneity, of qualitative discrimination, or of difference in kind; it is a virtual and continuous multiplicity that cannot be reduced to numbers.
Here we may recall Aristotle’s notion of time as a kind of order. This poses the question whether duration itself is a multiplicity. As Deleuze carves it out ([33] p.85):
At the heart of the question “Is duration one or multiple?” we find a completely different problem: Duration is a multiplicity, but of what type? Only the hypothesis of a single Time can, according to Bergson, account for the nature of virtual multiplicities. By confusing the two types – actual spatial multiplicity and virtual temporal multiplicity – Einstein has merely invented a new way of spatializing time.
Pushing Bergson’s architecture of time further, Deleuze develops his first accounts of virtuality. It becomes clear that durée is a virtual entity. As such, it is outside the realm of numbers, even outside of quantifiability or quantitability. Speaking in Aristotelian terms we could say that time is a smooth manifold of kinds of orders. Again Deleuze (p.85):
Being, or Time, is a multiplicity. But it is precisely not “multiple”; it is One, in conformity with its type of multiplicity.
For Bergson, tenses are already actualizations of durée. The past is conceived as being different in kind from the present, and could not be compared to it. There is, however, also the possibility of a transition from a “past” to a “present.” It is the work of memory (as an abstract entity) that creates the link. Memory extends completely into the present, though. Its main effect is to recollect the past. In this sense, memory is stepping forward. Durée and memory are co-extensive.
As we have seen, Bergson’s conception of time is strongly linked to consciousness and its particular memory. We have also seen that he considers physical time as a kind of secondary phenomenon. He thinks that things surely have no endurance in the sense of a capability to actualize durée into an extended present.
This poses a problem: What is time outside of us? In Time and Free Will he writes [32]:
Although things do not endure as we do ourselves, nevertheless, there must be some incomprehensible reason why phenomena are seen to succeed one another instead of being set out all at once. (p.227)
Well, what does the claim that “things do not endure as we do ourselves” refer to? Is there endurance of things at all? And what about animals, thinking animals, or epistemic machines? As Deleuze explains, Bergson is able to solve this puzzle only by extending his durée into a cosmic principle ([33], pp.51). Yet, I think that in this case he mixes immaterial and material aspects in a quite inappropriate manner.
Bergson’s conception of time certainly has some appealing properties. But just like its much less potent rival, phenomenology, it is strongly anthropocentric. It can’t be generalized enough for our purposes, which follow the question of time in architecture. Of course, we could conceive of architecture as a thing that is completely passive if nobody looks at it or thinks about it. But what about cities, then? The perspective of passive things has been largely refuted, first by Heidegger through his hermeneutic perspective, and in a much more developed manner by Bruno Latour and his Actor-Network Theory.
In still other terms, we could say that Bergson’s philosophy suffers from a certain binding problem. I think it was precisely the binding problem that caused the hefty dispute between Einstein and Bergson. Just to be clear, in my opinion both of them failed.
Thus we need a perspective that allows us to overcome the binding problem without sacrificing experiential time, durée, or the measurability of referential time. This perspective is provided by the semiotics of Charles Sanders Peirce.
Peirce was an engineer; his formal accounts are thus always pragmatic. This sets him apart from Bergson and his early devotion to mathematics. Where the former sees processes in which various parts engage, the latter sees abstract structures.
Being an engineer, Peirce looked at thought and time in a completely different manner. He starts with referential time, with clock-time. He does not criticize it out of hand, as Bergson would later do.
The first step in our reconstruction of Peircean time is his move to show that neither thought nor, of course, consciousness can take place in an instant. Consciousness must be a process. Moreover, for Peirce, thought is a sign. One has to know that a Peircean sign is not to be mistaken for a symbol. For him it is an enduring situation. We will return to this point later.
In MS23720 (chapter IV in Writings 3) his primary concern is to explain how thinking could take place:
A succession in time among ideas is thus presupposed in time-conception of a logical mind; but need this time progress by a continuous flow rather than by discrete steps?
Of course, he concludes that a “continuous time” is needed. Yet, at this point, Peirce starts to depart from a single, univocal time. He continues:
Not only does it take time for an idea to grow but after that process is completed the idea cannot exist in an instant. During the time of its existence it will not be always the same but will undergo changes. […] It thus appears that as all ideas occupy time so all ideas are more or less general and indeterminate, the wider conceptions occupying longer intervals.
This way he arrives at a time conception that could be characterized as a multiplicity of continua. Even if it were possible to determine a starting time and a time of completion for any of those intervals, it still remains that all those overlapping thoughts form a single consciousness (a small sketch of this connectedness follows below).
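The structural point here, that overlapping intervals chain into a single connected whole, can be made tangible with a small sketch. This is merely an illustration under assumed numeric bounds, not Peirce’s own formalism; all values are hypothetical.

```python
def is_single_whole(intervals):
    """Check whether a set of (begin, end) intervals chains into one
    connected whole, i.e. whether the overlaps leave no gap."""
    intervals = sorted(intervals)
    reach = intervals[0][1]
    for begin, end in intervals[1:]:
        if begin > reach:      # a gap: the "thoughts" fall apart
            return False
        reach = max(reach, end)
    return True

# Overlapping "thought intervals" (hypothetical values).
print(is_single_whole([(0, 3), (2, 5), (4, 8)]))  # True: a single whole
print(is_single_whole([(0, 3), (5, 8)]))          # False: a gap
```

The point of the sketch is only the connectedness condition; the moment we assign bounds at all, we have of course already left the indeterminacy that Peirce insists on.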
Chapter 5 in “Writings 3” (MS239), titled “That the significance of thought lies in reference to the future” [35], starts in the following way:
In every logical mind there must be 1st, ideas; 2nd, general rules according to which one idea determines another, or habits of mind which connect ideas; and, 3rd, processes whereby such habitual connections are established.
The second aspect is strongly reminiscent of our orthoregulation and the underlying “paradox of rule-following”, first clearly stated by Ludwig Wittgenstein in the 1930s [36]. The section ends with the following reasoning:
It appears then that the intellectual significance of all thought ultimately lies in its effect upon our actions. Now in what does the intellectual character of conduct consist? Clearly in its harmony to the eye of reason; that is in the fact that the mind in contemplating it shall find a harmony of purposes in it. In other words it must be capable of rational interpretation to a future thought. Thus thought is rational only so far as it recommends itself to a possible future thought. Or in other words the rationality of thought lies in its reference to a possible future.
In this brief paragraph we may find several resemblances to what we have said earlier, and elsewhere. First, Peirce’s conception of time within his semiotics provides us with a means for addressing the binding problem. More precisely, thought as sign process is itself the mechanism that relates ideas and actions, where actions are always preceded, but never succeeded, by their respective ideas.
Second, Peirce rejects the idea that a single purpose could be considered as reasonable. Instead, in order to justify reasonability, a whole population of remindable purposes, present and past, is required; all of them overlapping, at least potentially, all of them once pointing to the future. This multiplicity of overlapping and unmeasurable intervals creates a multiplicity of continuations. Even more important, this continuation is known before it happens. Hence, the present extends into the past as well as into the future. Given the fact that, firstly, the immediate effect of an action is rarely the same as the ultimate effect, and that, secondly, the ultimate effect is often quite different from the expectation related to the purpose, we often do not even know “what” happened in the past. So, by applying ordinary referential time, our ignorance stretches to both sides of the present, though not in the same way. It even exceeds the period of time of what could be called an event.
Yet, by applying Peirce’s continuity, we find a possibility to simplify the description. For we are then faced with a single kind of ignorance, which results in the attitude that Heidegger called “care” (Sorge).
The mentioned extension of the experienced ignorance, as an ignorance within the present, into the past and the future does not mean, of course, to propose a symmetry between the past and the future with respect to the present, as we will see in a moment. Wittgenstein [40] is completely right in his diagnosis that
[…] in the grammar of the future tense the concept of “memory” does not occur, not even with an inverted sign.21 (p. 159)
The third issue, finally, concerns the way he relates rationality to the notion of a “possible future.” This rationality does not claim absolute objectivity, since it creates its own conditions as well as itself. Peirce’s rationality is a local one, at least at first sight. It is just this creating of the possible future that provides the conditions for the possibility of the experienceability of future affairs.
The most important (methodological) feature of Peircean semiotics is, however, the possibility to jump out of consciousness, so to speak. Sign situations occur not only within the mind; they are also ubiquitous in interpersonal exchange, and even in the absorption of energy by different kinds of matter. Semiotics provides a cross-medial continuity. This argument has later been extended by John Dewey [37][38], Peirce’s pragmatist disciple.
Thus we could say that, if (1) thought comprises signs, and (2) signs are sign situations, then it does not make sense to speak about “instantaneous” time regarding thought and consciousness in particular, but also regarding any interpretation in general, as interpretation is always part of a sign (-situation). Then we can also say that presence lasts as long as a particular interpretation is “running”. Yet, signs refer to signs only. Interpretations are fundamentally open in their beginning as well as in their end. They are nested and occur in parallel, and they are broken off contingently more often than they are finished. Once the time string, or the interpretive chain, respectively, has been broken, past and future appear literally in their own right, i.e. de iure, and only by a formal act.22
The consequence of all that is that the probabilistic network of interpretations gives rise to a cloud of time strings, each of them with indeterminable ends. It is clear that signs, and thus thinking, would be absolutely impossible if there were just one referential clock-time. But even more important, without the inner multiplicity of “sign time” there would be only the cold world of a single strictly causal process. There would be no life and no information. Only a single, frozen black hole.
Given the primacy of the cloud of time strings, it is easy to construct referential time as a clock-time. One just needs to enumerate the overlapping time strings in such a way that enumeration and counting coincide. Once this is done it is possible to refer to a clock. Yet, the clock would be without any meaning without such an enumerative counting. The clock is then suitably actualized in a simpler way by a perfectly repetitive process, that is, a process that actually is outside of time, much as Aristotle thought to be the case for celestial bodies. And once we have established clock-time we can engage in interpersonal synchronization of our individual time string populations. A small illustrative sketch of this construction follows below.
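To make this construction a bit more tangible, here is a minimal sketch in Python. It is not Peirce’s or Bergson’s method, merely an illustration under strong assumptions: the time strings are given crude numeric bounds (which is already an act of symbolization), and the repetitive reference process is reduced to a fixed period. All names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TimeString:
    """A single local present; giving it numeric bounds is already
    the act of symbolization discussed in the text."""
    begin: float
    end: float

def ticks_elapsed(span: float, period: float) -> int:
    """Count the full cycles of a perfectly repetitive reference
    process that fit into a span; counting turns extents into clock-time."""
    return int(span // period)

# A population of overlapping time strings (hypothetical values).
strings = [TimeString(0.0, 2.5), TimeString(1.0, 4.0), TimeString(3.5, 6.0)]

PERIOD = 0.5  # the repetitive reference process: one "tick"

# Enumerate the strings so that enumeration and counting coincide:
# each string becomes expressible as "n ticks", i.e. as referential time.
for i, s in enumerate(strings):
    print(f"string {i}: {ticks_elapsed(s.end - s.begin, PERIOD)} ticks")
```

Note that the sketch also makes the loss visible: once counted, the overlap between the strings plays no role any more; only the enumerable tick counts remain.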
Peircean sign time thus not only allows us to reconcile the two modi of time, experiential time and referential time. It also makes it possible to extend the same process into historical time, rooting historicity in an alternative and much more appealing manner than that proposed by Heidegger.
All the positions we have met so far can be split into two sets. In the first subset we find fundamental ontology and existential philosophy (Heidegger), analytic ontology (Oaklander), “folk approaches” (Augustine), idealistic conceptions (McTaggart) and physics with its reductionist perspective. In the second subset we find Aristotle, Bergson and Peirce.
The difference between the two parties lies in the way they root the concept of time. The former party roots it in reality; hence they ask about the inner structure of time, much like one would ask about the inner structure of wood. For the proponents of the second class, time is primarily experiential time and as such always rooted in the interpretant, i.e. some kind of active observer, whether this refers to observers with or without consciousness. For all of them, though in different ways, the present is primary. For Aristotle it is a kind of substance, for Bergson durée, for Peirce the sign as process.
Wittgenstein does not say much about time, since he seems to be convinced that there is not so much to say. He simply accepts the distinction between the referential time of physics and experiential time and considers them to be incommensurable. [39]
Both ways of expressing it are in order and have equal rights, yet they cannot be mixed with one another.23 ([40], p.81-82)
Already in the Tractatus, Wittgenstein wrote:
We cannot compare any process with the “passage of time”—there is no such thing—but only with another process (say, with the movement of the chronometer).24 (TLP 6.3611)
Here it becomes clear that clock-time is nothing “built into matter”, but rather a communally negotiated reference, or in short, referential time. We all refer to the same particular process, whether this is the length of a day or the number of state changes in Cs-133.25 Experiential time, on the other hand, can’t be considered as a geometrical entity; hence there is no such thing as a “point” in the present. In experience, there is nothing to measure. The main reason for this is that experience is tightly linked to (abstract) modeling, and thus to the choreosteme. In short, experience is a self-generating process without an Archimedean Point.
“Now” does not denote a point in time. It is not a “name of a moment in time.”26 ([43], 157)
[…] yet it is nonsense to say “This is this”, or “This is now”.27 ([43], 159)
“Now” is an indexical term, just as “I”, “this” or “here”. Indexical terms do not refer to an index. Quite the contrary: sometimes, in simpler cases, they set an index; in more complicated cases indexical terms merely denote the possibility of imposing an index onto a largely indeterminate context. Hence, it is for grammatical reasons that we can’t say “this is now.” Time is not an object. Time is nothing of which we could say that it exists. Thus we also cannot ask “What is time?”, as this implies the existentialist perspective. The question about the reality of time is ungrammatical; it is like trying to play Chinese checkers28 on a chess board, or chess on a soccer field.
More precisely, there is no possibility to speak about “time as an object” in meaningful terms. For language is (i) a process itself, (ii) a process that intrinsically relates to the communal (there is no private language), and (iii) a strongly singular term. Thus we can conclude that there is no such thing as the objectification of time, or objective time.
Examples of such an objectification are easy to find. For instance, it is included in the question posed by Augustine, “What is time?” (Wittgenstein’s starting point for the Philosophical Investigations). It is also included in the misunderstanding of an objective referential time. Or in the claim that time itself is flowing (like a river). Or in the attempt to prove that time itself is continuous.29
Instead, “now” is used as an indication of—or a pointer to—the present and the presence of the speaker. Its duration in terms of clock-time is irrelevant. It would be nonsense to attempt to measure this duration, because it would amount to measuring the speaker and his act itself.
Accordingly, the temporal modi in language, the tenses, such as past, present and future, reflect the temporal modi of actions—including speech acts—which take place in the “now” and are anchored in the future through their purpose ([42] p.142).
Confusing and mixing the two conceptions of time—referential time and experiential time—is, according to Wittgenstein, the main reason for enigmas and paradoxes regarding time (such as McTaggart’s distinction of the A-series and B-series, and its career in ontology).
Since there is no such thing as the objectification of time, time is intrinsically a relational “entity”. As Deleuze brilliantly demonstrates in his reflections about Bergson [33], time can be thought only as durée, or in my words, as a manifold of anobjected time strings, which directly points to the virtual, which in turn is not isolated, but rather an intensity within the choreosteme.
The idealistic, phenomenological and existential approaches to temporality are deeply flawed, because it is not possible to take time apart, or to take time out of the game. Wittgenstein considers such attempts as a misuse of language. Expressions like “time itself” or questions like “What is time?” are outside of any possible language.
In the ‘Philosophical Remarks’ he says:
What belongs to the essence of the world cannot be expressed by language. […] Only what we could also imagine as being different can language tell.30 ([40] p.84).
Everything which we are able to describe at all could also be different.31 ([45], p.173).
In order to play the game of “questioning the reality of X” in a meaningful manner it has to be possible that X is not real, at least partially. An alternative is needed, which however is missing in existential questions or in attempts to find the essence. Thus it is meaningless (free of sense) to doubt (even implicitly) the reality of time, whether as present, as past or as future. It is similar to the Moorean paradox of doubting that one has an arm. In the end, at least after Wittgenstein, one always has to begin with language. It is nonsense to begin with existence, or likewise with essence.
Wittgenstein rejects the traditional philosophical reflection that always tried to find the eternal, necessary and general truth, essence or “true nature” as opposed to empirical—and pragmatic—impressions. The attempt to determine the reality of X as a being-X-as-such is a misuse of language; it is outside of the logic of language.
For Wittgenstein, the more interesting part of time points to memory, as clock-time is a mere convention. For him, memory is the wellspring (“Quelle”) of time, since the past is experienceable only as a recall of the past ([40] p.81f). Bergson called it recollection.
I think that there is one major consequence of Wittgenstein’s considerations. Time can be comprehended only as a transcendent structural condition of establishing a relation, hence also of acting, speaking and thinking. Without such conditioning it is simply not possible to establish a relation. This extends, of course, also to the realm of the social [46]. Here we could even point to physics, particularly to the maximum speed of light, that is, the maximum speed of exchanging information, which translates into the “establishment of time” as soon as a relation has been built. This implies that the building of a relation is irreversible. Within reversibility it does not make sense to speak about time. Even shorter, we could be tempted to say that within information there is no time, if it were meaningful at all to think something like “within information”. Information itself is strictly bound to interpretation, which brings us back to Peircean semiotics.
Thus we could say that we as humans “create” time mainly by means of language, although it is not the only possibility to “create” time. Yet, for us humans (as collective individual beings32) there is hardly another possibility, for we can’t step out of language. Different languages and different uses of language “create” different times. This is what Helga Nowotny calls “Eigenzeit” [46] (“self-owned time”).
It is rather important to understand that by means of these arguments we no longer refer to something like “historical time” or “natural time”. Our argument is much more general.
Secondly, then, we may conclude that we have to ask about the different ways we use the language game “time”.
Like other authors, Paul Ricoeur proposes a strict discontinuity between historical time (“historicality”) and physical time. The former he also calls “time with present”, the latter “time without present.” Yet, unlike other authors, he also proposes that this discontinuity can’t be reconciled or bridged. This hypothesis he proceeds to formulate by means of three aporias [47].
• Aporia 1, duality: Subjective time and objective time can’t be thought together in a single conception; even more, they mutually obscure each other.
• Aporia 2, false unity: Although we take it for granted that there is one single time, we can’t justify it. We even contradict the seemingly trivial insight that there is subjective and objective time.
• Aporia 3, inscrutability: Thought cannot comprehend time, since its origin can’t be grasped. Conceptually, time is ineluctable. Whenever philosophical thought starts to think about time, this thinking is already too late.
Ricoeur is the second author in our selection who takes a phenomenological stance. Heidegger’s “Being and Time” serves as his point of reference. Yet, Ricoeur is interested in the analysis neither of Being nor of the having-Been. The topic to which he refers in Heidegger, and at the same time his vantage point, is historicality, which he approaches in a very different manner. For Ricoeur, history and historicality can not only be understood through narrativity; there is even a mutual structural determination. The experience of time, as both the source and the soil of historicality, gets refigured through narration. In the essay “On Narrative” [49], which he published while his major work “Time and Narrative” [48] was in the making, we can find his main hypothesis:
My […] working hypothesis is that narrativity and temporality are closely related—as closely as, in Wittgenstein’s terms, a language game and a form of life. Indeed, I take temporality to be that structure of existence that reaches language in narrativity and narrativity to be the language structure that has temporality as its ultimate referent. Their relationship is therefore reciprocal. (p.169)
Concerning narrativity, Ricoeur draws a lot, of course, on the structure of language and the structure of stories. On both levels various degrees of temporality and nonchronological proportions appear. On the level of language, we find short-range and long-range indicators of temporality, beyond mere grammar. Long-range indicators such as “while” or adverbs of time (“earlier”) do not have a clear boundary, neither structurally nor semantically. The same can be found on the level of the story, the plot as Ricoeur calls it. Here he distinguishes an episodic from a configurational dimension, the former presupposing ordinary, i.e. referential time. Taking into account that
To tell and to follow a story is already to reflect upon events in order to encompass them in successive wholes. (p.178)
it follows that any story comprises a
[…] twofold characteristic of confronting and combining both sequence and pattern in various ways.
In other words, a story creates a multiplicity of possible sequences and times, thereby opening a multiplicity of “planes of manifestation,” that is, a web of metaphors33.
[…] the narrative function provides a transition from within-time-ness to historicality.
Yet, according to Ricoeur, the configurational dimension has a particular effect on the ordinary temporality of a story as it is transported by the episodic dimension. Through the triggered reflective act, the whole story may condense into a single “thought”.
Finally, the recollection of the story governed as a whole by its way of ending constitutes an alternative to the representation of time as moving from the past forward into the future, according to the well-known metaphor of the arrow of time. It is as though recollection inverted the so-called natural order of time. […] A story is made out of events to the extent that plot makes events into a story. The plot, therefore, places us at the crossing point of temporality and narrativity.
This single thought, the plot of a story as a whole, is now confronted particularly with the third aporia, that of inscrutability. Basically, for Ricoeur, “not really thinking time” when thinking about time is aporetic (fTR III 467/dZE III, 417). The aporia
[…] emerges right at the moment where time, which eludes any attempt at constitution, turns out to be associated with a constitutive order, which in turn is always already presupposed by the work of that constitution.
Any conception that we could propose about time is confronted with the impossibility of integrating this reflexively ineluctable ground. We can never completely subject time to our reflections as an object. Inga Römer emphasizes (p.284):
Yet, and this is a crucial point for Ricoeur, “what is brought to failure here is not thinking, in all the meanings of the word, but rather the drive, better the hubris, that seduces our thinking to make itself the master of sense”. Because this failure is only a relative one, inscrutability is confronted not with a lapse into silence, but rather with a polymorphy of arrangements and valuations.34
The items of this polymorphy are incommensurable for Ricoeur. Now, for Ricoeur this polymorphy of time experience is situated in a constitutive and reciprocal relationship with narrativity (see his main hypothesis in “On Narrative” cited above). Thereby, our experience of time refigures and reconfigures itself continuously. In other words, narration represents a practical and poetic mediation of heterogeneous experiences of time. This interplay, according to Ricoeur, can overcome the limitations of philosophical inquiries into time.
Interestingly, Ricoeur rejects any systematicity of his arguments, as Römer points out (p.454):
This belonging-together of the withdrawal of the ground on the one hand and the challenge to think more and to think differently on the other is the strongest reason for Ricoeur’s explicit refusal of a system regarding the three aporias of time as well as their narrative answers.35 (p.454)
The result of this is pretty clear. The Ricoeurean aporetics starts to molt into a narration, constantly staggering and oscillating between its claim, its negation, its negative positivity and its positive negativity, beginning to dazzle and becoming incomprehensible.
Temporality tends to get completely merged into narrativity, which in turn becomes synonymous with the experience of time. Thus there are only two possibilities for Ricoeur, neither of which he actually followed. The first is the denial of any temporality that could be thought independently of narration. The second would be to equate life with narration.
I think Ricoeur would favour the second alternative. As Römer summarizes:
Historical practice allows us to mediate experienced time with linear time in a creation of its own, historical time.36 (p.326)
In this way, however, Ricoeur would introduce a secondary re-mystification, which is actually even an autologous one, since Ricoeur started with it as an inscrutability. At this point, all his arguments vanish and turn into a simple pointing to experience.
In the end, the notion of historical practice remains rather questionable. Ricoeur uses the concepts of witness or testimony as well as “trace,” which of course reminds us of Derrida’s infamous trace: an uninterpretable remnant of history. Although Ricoeur emphasizes the importance of the reader as the situs of the completion of the text, he never seems to accept the primacy of interpretation. Here, he closely follows the inherited phenomenological misconception of the object that exists independently of and outside the subject. Further difficulties are the denial of transcendence and abstraction, which together with its logicism creates the spurious problem of freedom. Phenomenology never understood, whether in Husserl, Heidegger, Derrida, Ricoeur or analytic philosophy, that comparing things can’t take place on the same level as the compared things. Even the simplest comparison implies the Differential, requiring a considerable amount of constructive activity.
Outside phenomenology, Ricoeur’s attempt is hardly convincing, although he offers many interesting observations about narration and texts. His aporetics of time appears half-baked, through and through, so to speak. Poisoned by phenomenology, and strangely enough forgetting about language in the formulation of his aporias, he commits almost all of the possible mistakes already in his premises. He objectifies time and he treats it as an existential that could be explained. After all, his main objection that we “can’t really think time” does not hit a unique case. Any thinking of any concept is unable to “really think it.”
Our conclusion is not a rejection of Ricoeur’s basic idea of a mutual relationship between “thinking time” and narration. Yet, obviously, thinking about narration from within phenomenology is itself an impossibility.
One of the interesting observations about narration is the distinction between the episodic and the configurational dimension of a plot. This introduces multiplicity, reversibility, and an extended present, as well as an additional organizational layer. Yet, Ricoeur failed to step out of his affection for narration in order to become aware of the opportunities attached to it.
Introducing transcendence into our game, we have to refer to Kant, of course, and his conception of time in the “Transzendentale Ästhetik” of the “Kritik der reinen Vernunft”. Kant’s merit is the emancipation of transcendental thinking from the imagined divinity, although he did not push this move far enough.
By no means did Kant demonstrate the irreality of time, as Einstein as well as McTaggart boldly claimed. Kant just demonstrated that time can’t “have” a reality independent of a subject. Accordingly, the idea of an illusionary or irreal time is itself based on a fiction: the fiction of naïve realism. It claims that there is the possibility of an access to “nature” in a way that is independent of the subject. Conversely, this does not mean that time as a reality is constructed by human thinking, of course.
The reason for misunderstanding Kant lies in the fact that Kant still argues completely within the realm of the human, while physicists like Einstein talk about the fiction of primarily unrelated entities. It is a major methodological element of the theoretical constitution of physics to assume such entities in order, so the fiction goes, to be able to describe their relations objectively. Well, actually this does not make much sense, yet physicists usually believe in it.
Far from showing that time is illusionary, Kant tried to secure the objectivity of time under the conditions of empirical constitution, that is, after the explicit and final departure from the still scholastic pre-established harmonies guaranteed by God. In order to accomplish that, he had to invent a kind of intrinsic transcendentality of empirical arrangements. This common basis he found in the transcendental sensual intuition.
For Kant, time is a form of intuition (Anschauung), or more precisely, a transcendental and insofar pure form of sensual intuition. It is, however, of utmost importance, as Mike Sandbothe writes, that Kant himself relativized the universality that is introduced by the transcendentality of time, or in still other words, by the intuition of the transcendental subject:
[…] the form of intuition gives merely a manifold, whereas the formal intuition gives unity of representation. (German orig.: „die Form der Anschauung bloss Mannigfaltiges, die formale Anschauung aber Einheit der Vorstellung gibt.“) ([47] p.154, B 160f)
The formal account within the intuition now refers to the use of symbols. Thus, it can’t be covered completely as a subject by pure reason. Here we find a possible transition to Wittgenstein, since symbols are symbols by convention. Note that this does not refer to a particular symbol, of course, but to the symbolicity that accompanies any instance of talking about time. On the one hand this points towards the element of historicity, which has been developed by Heidegger in a rather limited manner (because he restricted history to the realm of the Dasein, i.e. consciousness).
On the other hand, however, we could extend Kant’s insight of a two-fold constitution of time into more abstract, and this means a-human, regions. In a condensed way Kant shows that we need sensual intuitions and symbolicity in order to access the temporal aspects of the world. Sensual intuitions then require, in the widest sense, a kind of match between the sensed and the sensing. In human thinking these are the schemata; in particle physics it is the filter built deeply into matter. We could call this transverse excitability. In physics, it is called quantum.
Yet, the important thing is the symbolicity. We can immediately translate this into quantificability and quantitability. And again we are back at Aristotle’s conception.
2. Synopsis
So, after having visited some of the most important contributions to the issue of time, we may try to approach a synopsis of them. Again, we have to emphasize that we disregarded many highly interesting ideas: among others those of Plato in his Timaeus, with his three “transcendental” categories of Being, Space and Becoming, or those of Schelling (cf. [31]); or those of Deleuze in his cinema books, where he distinguished the “movement image” (presupposing clock-time) from the “time image” that is able to provide a grip onto “time itself,” which, for Deleuze, is the virtual to which Bergson’s durée points; likewise, any of the works by the authors we referred to should have been discussed in much more detail in order to do justice to them. Anyway.
Our intermediate goal was to liberate time from its human influences without sacrificing the applicability of the respective conception to the realm of the human. We need to do so in order to investigate the relation between time and architecture. This liberation, however, still has to obey Wittgenstein’s insight that we must not expect to find an “essence” of time. Taking all the aspects together, we indeed may ask, as carefully as possible:
How should we conceive of time?
The answer is pretty clear, yet it comes as a compound consisting of three parts. And above all it is also pretty simple.
(1) Time is best conceived as a transcendent condition for the possibility of establishing a relation.
This “transcendent condition” is not possible without a respective plane of immanence, which in turn comprises the unfolding of virtuality. Much could be said about that, of course, with respect to the philosophical implications, its choreostemic references, or its architectonic vicinity. For instance, this determination of time suggests a close relationship to the issue of information and its correlate, causality. Or we could approach other conceptions of time by means of something like a “reverse synthesis.”
It is perhaps at least indicated to emphasize—particularly for all those who are addicted to some kind of science—that this transcendent condition by no means excludes any consideration of “natural” systems, not even in its material(ist) contraction. On the other hand, this in turn does not mean, of course, that we are doing “Naturphilosophie” here, whether of the ancient or the scholastic type.
It is clear that we need to instantiate the subjects of this conception in order to achieve a practical relevance for it. It is in this instantiation that different forms of temporality appear, i.e. durée on the one hand and clock-time on the other. Nothing could be less surprising, now, than an incompatibility of the two forms of temporality. Actually, the expectation of a compatibility is already based on the misunderstanding that claims the possibility of a “direct” comparison (which is an illusion). Quite the contrary, we have to understand that the phenomenal incommensurability just points to a differential of time, which we formulated as a transcendent condition above.
Now, one of the instantiations, clock-time, or referential time, is pretty trivial. We don’t need to deal with it any further. The other branch, where we find Peirce and Bergson, is more interesting.
As we have seen in our discussion of their works, multiplicity is an essential ingredient of relational time. Peirce and Bergson arrived at it in different ways, though. For Peirce it is a consequence of the multiplicity of thoughts about something, naturally derived from his semiotics. For Bergson, it is a multiplicity within experience, or better, within the experiencing consciousness. So to speak, they take inverse positions regarding mediality. We already said that we prefer the Peircean perspective due to its more prominent potential for generalization. Yet, I think the two perspectives could be reconciled quite easily. Both conceptions conceive of primal time as “experiential” time (in the widest sense).
Our instantiation of time as a transcendent condition is thus:
(2) Transcendent time gets instantiated as a probabilistic, distributed and manifold multiplicity of—topologically speaking—open time strings.
Each time string represents a single and local present, where “local” does not refer to a “spatial place”, but rather to a particular sign process.
This multiplicity is not an external multiplicity, although it is triggered or filled from the external. It is also not possible to “count” the items in it without losing the present. If we count, we destroy the coherence between the overlapping strings of present, thus creating countable referential time. This highlights a further step of instantiation, the construction of expressibility.
(3) The pre-specific multiplicity of time strings decoheres by symbolization into a specific space of expressibility.
Symbolization may be actualized by means of numbers, as already mentioned. This would allow us to comprehend and speak of movement. We have also seen that we could construct a web of proceeding metaphors and their virtual movement. This would put us in the midst of narration and into metaphoricology, as I call it, which refers to the perspective that conceives of being human and of human beings as parts of lively metaphors. In other words, culture itself becomes the story and the narrative.
As still another possibility we could address the construction of a space of expressibility of temporality quite directly. Such a space needs to be an aspectional space, of course. Just keep in mind that the aspectional space is not a space of quantities, as is the case for a Cartesian space. The aspectional space is a space that is characterized by a “smooth” blending of intensity and quantity. We may call these intensive quantities, or quantitable intensities. It is a far-reaching generalization of the “ordinary” space conceptions that we know from mathematics. As the aspects—the replacement of dimensions—of that space we could choose the modes of temporality—such as past, present, future—, the durée, the referential time, or implicit time as it occurs and shows up in behavior or in choreostemic space. We could also think of an aspection that is built by a Riemannian manifold, allowing us to comprise linearity and circularity in just a single aspect. A naive sketch of such a space follows below.
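To give the notion of a situs in such an aspectional space at least a rough tangibility, here is a deliberately naive sketch. It is not a formalization of the aspectional space, since a genuine blending of intensity and quantity resists simple coordinates; the aspect names and values are hypothetical.

```python
from typing import Dict

# A "situs": a crude assignment of quantitable intensities to aspects,
# where the aspects replace the dimensions of a Cartesian space.
Situs = Dict[str, float]

def dominant_aspect(situs: Situs) -> str:
    """Return the aspect that dominates a particular image of time."""
    return max(situs, key=lambda aspect: situs[aspect])

# Two hypothetical images of time, expressed as choices of a situs.
clock_bound: Situs = {"duree": 0.1, "referential": 0.9, "implicit": 0.2}
lived: Situs = {"duree": 0.8, "referential": 0.2, "implicit": 0.6}

for name, situs in [("clock_bound", clock_bound), ("lived", lived)]:
    print(f"{name}: dominated by '{dominant_aspect(situs)}'")
```

Reducing the aspects to plain floats already re-imports the Cartesian misunderstanding that the text warns against, so the sketch should be read only as a pointer, not as a model.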
The tremendous advantage of such a space is manifold, of course, because an infinite number of particular time practices can be constructed, even as a continuum. This contiguous and continuous plurality is of a completely different kind than the unmediatable items in the plurality of time conceptions that has been proposed by Mike Sandbothe [8].
The aspectional space of transcendent time offers, as I mentioned, the possibility of expressing time, or more precisely, a particular image of time. There are several of those spaces, and each of them is capable of holding an infinite number of different images of time.
It is now easy to understand that collapsing the conditions for building relations with the instantiation into a concrete time form, or even with the action (or the “phenomenon”), results in nothing other than a terrible mess. Actually, it is precisely the mess that physicists and phenomenology create in their different ways. “Phenomenal” observables of this mess are pseudo-paradoxes or dualities. We could also say that such a mess is created by a wrong application of the grammar of time.
There is one important aspect of time and temporality, or perspective onto them, that we have mentioned only marginally so far: the event. We met it in Heidegger’s “Time and Being” as the provider [m: the Giving] and insofar also the origin of Being and time. We also saw that Ricoeur uses events as building bricks for stories, which combine them into successive wholes. For Dewey (“Time and Individuality,” “Context of Thought”) the concept of an event involves both the individual pattern of growth and the environmental conditions. Dewey, like Ricoeur, emphasizes that there is no geometrical sequence, no strict seriality in which events could be arranged. Dewey calls this concurrence, which could not be separated from the occurrence of an event.
Yet, for both of them time remains something external to the conception of the event, while Heidegger conceives of it as the source of time. Considering our conception of time as a proceeding actualization of Differential Time, we could say that the concept of event relates to the actualization of the relation within the transcendence of its conditions. Thus it could be said to accompany the creation of time, integrating transcendent and practical conditions as well as all the more or less contingent choices associated with it. In some way we can see that we have proceduralized (differentiated) Heidegger’s “point of origin”.37 Marc Rölli [52] sharpens this point by referring to Deleuze’s conception as “radically empiricist”, dismissing Heidegger through the concepts of actuality and virtuality. Thus we can see that the immediate condition that embeds the possibility of experience is the “event,” which in turn can be traced back to a proper image of time. Time, as a condition, is mediated towards experience by the event, as a condition. Certainly, however, the “event” could not be thought without an explicitly formulated conception of time. Without it, a multitude of misunderstandings must be expected. If we accept the perspective that time insofar precedes substance, which of course resolves into a multiplicity in a Deleuzean perspective, we could also say that the trinity of time, event and experience contributes to the foil of immanence, or even builds it up, where experience in turn refers to the choreostemic constitution of being in the world.
In order to summarize our conception as an overview, here is how we propose to conceive of time:
• (1) Time is a transcendent condition for the possibility of establishing a relation, or likewise a quality.
• (2) It gets instantiated as a probabilistic multiplicity of open time strings that, by the completion of all instantiations, present presence.
• (3) The pre-specific multiplicity of time strings decoheres by symbolization into a specific, aspectional space of expressibility.
• (4) Any particular “choice” of a situs in this space of intensive quantities represents the respective image of time, which then may emerge in worldly actualizations.
Particularly regarding this last element we have to avoid the misunderstanding of a seriality of the kind “I choose, then I get”. This choice is an implicit one, just like the other instantiations, and can be “observed” only in hindsight; more precisely, the instantiations show themselves only within performance. Only in this way can we say that this brings time into a particular Lebenswelt and its contexts as a matter (or subject) of design.
Nevertheless, we could now formulate a kind of recipe for creating a particular “time”, form of temporality, or “time quality.” This would also work in the reverse direction, of course. It is possible to construct a comparison of time qualities across authors, architects or urban neighborhoods. Hopefully, this will help to improve urban practices. In order to make this creational aspect clearer, we now have to investigate the possibilities of creating time “itself”.
to be continued …
(The next part will deal with the question whether it could be possible to identify the mechanisms needed to create time…)
1. “Living City” was Archigram’s first presentation to the public; it was curated by Ron Herron in 1963.
2. German orig.: „Zuletzt markiert die Zeit für Ricoeur das “Mysterium” unseres Denkens, das sich der Repräsentation verweigert, indem es unser Dasein auf eine für das Denken uneinholbare Weise umgreift.“
3. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept in the vicinity of Urban Reason, in order to distinguish it from the ordinary adjective that refers to common sense understanding.
4. remark about state and development.
5. We discussed them in the essay about growth patterns. The fractal is a consequence of self-affine mapping, roughly speaking, a local replacement by a minor version of the original diagram.
6. It is tempting to relate this position to Heisenberg’s uncertainty principle. Yet, we won’t deal with contemporary physics here, even though it would be interesting to investigate the deficiencies of physical conceptions of time.
7. McTaggart’s paper about time has been cited over and over again and unfortunately became very influential. Yet, it is nothing but a myth. For a refutation see Tegtmeier [18]. For reasons of its own stupidity and its boldly presented misinterpretation of the work of Kant, McTaggart’s writing deserves the title of a “most developed philanosy” (Grk: anoysia ανοησία, nonsense, or anosia, immunity). It is not even worthwhile, as we will see later through our discussion of Wittgenstein’s work regarding time, to consider it seriously, as for instance Sean Power does.
8. There is a distant resemblance to George Berkeley’s “esse est percipi.” [20] Yet, in contrast to Berkeley, we conceive of interpretation as an activity that additionally is deeply rooted in the communal.
9. German original: SZ: 326: „Zukünftig auf sich zurückkommend, bringt sich die Entschlossenheit gegenwärtigend in die Situation. Die Gewesenheit entspringt der Zukunft, so zwar, dass die gewesene (besser gewesende) Zukunft die Gegenwart aus sich entlässt. Dies dergestalt als gewesend-gegenwärtigende Zukunft einheitliche Phänomen nennen wir die Zeitlichkeit.
10. One has to consider that Heidegger conceives of Being only in relation to the Being-there (“Dasein”), while the “Being-there” is confined to conscious beings.
11. The translators used ”falling”, which however does not match the German “verfallend”. (Actually, I consider it as a mistake.) Hence, I replaced it by a more appropriate verb.
12. Note that Heidegger always used to write in a highly ambiguous fashion, which makes it nearly impossible to translate him literally from German to English. In everyday language “Es gibt” is surely well translated by “There is.” Yet, in this text he repeatedly refers to “giving”. Turning the perspective to “giving” opens the preceding “Es” away from its being an impersonal corpuscle towards an impersonal “fateness”. This interpretation matches the presentation of the affair in [24].
13. German original: “Das Sein eigens denken, verlangt, das Sein als den Grund des Seienden fahren zu lassen zugunsten des im Entbergen verborgen spielenden Gebens, d.h. des „Es gibt“.“
14. see also: Marcel Mauss, Die Gabe. Form und Funktion des Austauschs in archaischen Gesellschaften. Suhrkamp, Frankfurt 2009 [1925].
15. German orig.: „In “Zeit und Sein” schliesslich sieht Heidegger den Ursprung der Zeit im Ereignis, welches er ausdrücklich als den [sich ] selbst entzogenen Geber von Sein und Zeit bestimmt. Das Ereignis, von Heidegger andernorts bestimmt als singulare tantum, ist selbst grundsätzlich entzogen – und dennoch ist das Ereignis der Ursprung der Zeit.“
16. German original (my own translation): “Sein und Zeit ist vielmehr dahin unterwegs, auf dem Wege über die Zeitlichkeit des Daseins in der Interpretation des Seins als Temporalität einen Zeitbegriff, jenes Eigene der “Zeit” zu finden, von woher sich “Sein” als Anwesen er-gibt. Damit ist aber gesagt, daß das in der Fundamentalontologie gemeinte Fundamentale kein Aufbauen darauf verträgt. Stattdessen sollte, nachdem der Sinn von Sein erhellt worden wäre, die ganze Analytik des Daseins ursprünglicher und in ganz anderer Weise wiederholt werden.“ [21]
17. German original (my translation): “Zeit-Raum nennt jetzt das Offene, das im Einander-sich-reichen von Ankunft, Gewesenheit und Gegenwart sich lichtet. Erst dieses Offene und nur es räumt dem uns gewöhnlich bekannten Raum seine mögliche Ausbreitung ein.“
18. This also holds for any of the attempts that can be found in physics. The following may be considered the most prominent sources, though they are not undisputed: Carroll [27], Price [28][29], Penrose [30]. Physics always and inevitably conceives of time as a measurable “thing”, i.e. as something which has already been negotiated in its relevance for the communal aspects of thinking. See Aristotle’s conception of time.
19. Hint to Schelling, for whom intensity is not accessible at all, but could be conceived only as a force that expands into extensity.
20. You will find Peirce’s writings online at http://www.cspeirce.com/; the parts referenced here are found, for instance, at http://www.cspeirce.com/menu/library/bycsp/logic/ms237.htm.
21. German original (my transl.): „Denn in der Grammatik der Zukunft tritt der Begriff des ,Gedächtnis’ nicht auf, auch nicht mit umgekehrten Vorzeichen.“
22. In meditational practices one can extend the interpretive chain in various ways. The result is simply the stopping of referential time.
23. German orig.: „Beide Ausdrucksweisen sind in Ordnung und gleichberechtigt, aber nicht miteinander vermischbar“.
24. German orig.: „Wir können keinen Vorgang mit dem ,Ablauf der Zeit’ vergleichen – diesen gibt es nicht – sondern nur mit einem anderen Vorgang (etwa mit dem Gang des Chronometers).“ translation taken from here.
25. One second is currently defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom [41]. Interestingly, this fits nicely with Aristotle’s conception of time. The reason for taking the properties of Cs-133 as a reference is generality: the better the resolution of the referential scale, the more generally it can be applied.
26. German orig.: „„Jetzt“ bezeichnet keinen Zeitpunkt. Es ist kein „Name eines Zeitmomentes“.“
27. German orig.: „[…] es ist aber Unsinn zu sagen ‘Dies ist dies’, oder ‘Dies ist jetzt’.“
28. In German “Halma”.
29. Much could be said about physics here, regarding the struggle of physicists to “explain” the so-called arrow of time, or regarding the theory of relativity or quantum physics with its Planck time, but it is not close enough to our interests here. Physics always tries to objectify time, which happens through claiming a universally applicable scale; hence it runs into paradoxes. In other terms, the necessity of conceptions like Planck time, or time dilation, points precisely to the fact that without an observer there is nothing. The mere possibility of observation (and of the observer) vanishes at the speed of light, or at the singularity “within” black holes. In some way, physics continually (tries to) prove(s) its own foundations to be nonsensical.
30. German orig.: „Was zum Wesen der Welt gehört, kann die Sprache nicht ausdrücken. (…) Nur was wir uns auch anders vorstellen können, kann die Sprache sagen.”
31. German orig.: ,,Alles was wir überhaupt beschreiben können, könnte auch anders sein”.
32. Note that in case of a city we meet somewhat the inverse of it. We could conceive of a city as “an individual being made from a collective.”
33. See also Paul Ricoeur (1978). “The Metaphorical Process as Cognition, Imagination, and Feeling.” Critical Inquiry 5(1): 143-159.
34. German orig.: „Aber, und das ist für Ricoeur entscheidend, “was hier zum Scheitern gebracht wird, ist nicht das Denken, in allen Bedeutungen des Wortes, sondern der Trieb, besser die hybris, die unser Denken dazu verleitet, sich zu Herrn des Sinns zu machen“. Aufgrund dieses nur relativen Scheiterns stehe der Unerforschlichkeit kein Verstummen, sondern vielmehr eine Polymorphie der Gestaltungen und Bewertungen der Zeit gegenüber.“
35. German orig.: „Diese Zusammengehörigkeit von Entzug des Grundes und Herausforderung um Mehr- und Andersdenken ist der stärkste Grund für Ricoeurs explizite Ablehnung eines Systems sowohl der drei Aporien der Zeit selbst wie auch ihrer narrativen Antworten.“
36. German orig.: „Historische Praxis erlaubt uns, die erlebte Zeit mit der linearen Zeit in einer ihr eigenen Schöpfung, der historischen Zeit, zu vermitteln.“
37. Much more could be said about the event, of course (cf. [51]). Yet, I think that our characterization not only encompasses most conceptions, and fits most of the contributions to the “philosophy of the event,” it also clarifies and sheds light (a kind of x-ray?) on them.
• [1] Simon Sadler, Archigram – Architecture without Architecture. MIT Press, Boston 2005.
• [2] Rem Koolhaas (2002). Junkspace. October 100: 175-190.
• [3] Robert Venturi, Complexity and Contradiction in Architecture. The Museum of Modern Art, New York 1977 [1966].
• [4] Bernard Tschumi, Architecture and Disjunction. MIT Press, Boston 1996.
• [5] Franz Oswald and Peter Baccini, Netzstadt: Einführung zum Stadtentwerfen. Birkhäuser, Basel 2003.
• [6] Sigfried Giedion, Space, Time and Architecture: The Growth of a New Tradition. Harvard University Press, Cambridge (MA) 1941.
• [7] Aldo Rossi, The Architecture of the City. Oppositions Books/MIT Press, Cambridge (MA) 1984 [1966].
• [8] Mike Sandbothe, „Die Verzeitlichung der Zeit in der modernen Philosophie.“ in: Antje Gimmler, Mike Sandbothe und Walther Ch. Zimmerli (eds.), Die Wiederentdeckung der Zeit. Primus & Wissenschaftliche Buchgesellschaft, Darmstadt 1997. available online.
• [9] Mary Louise Gill, Aristotle’s Distinction between Change and Activity. in: Johanna Seibt (ed.), Process Theories: Crossdisciplinary Studies in Dynamic Categories. p.3-22.
• [10] Yeonkyung Lee and Sungwoo Kim (2008). Reinterpretation of S. Giedion’s Conception of Time in Modern Architecture – Based on his book, Space, Time and Architecture. Journal of Asian Architecture and Building Engineering 7(1):15–22.
• [11] Tony Roark, Aristotle on Time: A Study of the Physics. Cambridge University Press, Cambridge 2011.
• [12] Werner Heisenberg, Physics and Philosophy. The Revolution in Modern Science. Harper, New York 1962.
• [13] Ursula Coope, Time for Aristotle, Oxford University Press, 2005.
• [14] John Ellis McTaggart (1908). The Unreality of Time. Mind: A Quarterly Review of Psychology and Philosophy 17: 456-473.
• [15] L. Nathan Oaklander, Quentin Smith (eds.), The New Theory of Time. Yale University Press, New Haven (CT) 1994.
• [16] L. Nathan Oaklander, The Ontology of Time (Studies in Analytic Philosophy). Prometheus Books, Amherst (NY) 2004.
• [17] Sean Power, The Metaphysics of Temporal Experience. forthcoming.
• [18] Erwin Tegtmeier (2005). Three Flawed Distinctions in the Philosophy of Time. IWS 2005.
• [19] Thomas Sheehan, “Heidegger, Martin (1889-1976)” in: Edward Craig (ed.), Routledge Encyclopedia of Philosophy, Routledge, New York 1998, IV, p.307-323.
• [20] George Berkeley, Eine Abhandlung über die Prinzipien der menschlichen Erkenntnis (1710). Cf. especially sections III-VII and XXV. Transl. F. Überweg, Berlin 1869.
• [21] Martin Heidegger, Being and Time. transl. John Macquarrie & Edward Robinson (based on 7th edition of “Sein und Zeit”), Basil Blackwell, Oxford 1962. available online.
• [22] Martin Heidegger, Sein und Zeit. Tübingen 1979 [1927].
• [23] Martin Heidegger, Protokoll zu einem Seminar über den Vortrag “Zeit und Sein”. in: Zur Sache des Denkens. Gesamtausgabe Band 14, p.34. Klostermann, Frankfurt 2007 [1967].
• [24] Cristina Lafont (1993). Die Rolle der Sprache in Sein und Zeit. Zeitschrift für philosophische Forschung, Band 47, 1.
• [25] Martin Heidegger, Zur Sache des Denkens. Gesamtausgabe Band 14. Klostermann, Frankfurt 2007.
• [26] Christian Bermes, Ulrich Dierse (eds.), Schlüsselbegriffe der Philosophie des 20. Jahrhunderts. Meiner, Hamburg 2010.
• [27] Sean Carroll, From Eternity to Here: The Quest for the Ultimate Theory of Time. Oneworld, Oxford 2011.
• [28] Huw Price, Time’s Arrow and Archimedes’ Point: New Directions. Oxford University Press, Oxford 1996.
• [29] Huw Price (1994). Reinterpreting the Wheeler-Feynman Absorber Theory: Reply to Leeds. The British Journal for the Philosophy of Science 45 (4), pp. 1023-1028.
• [30] Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage, London 2004.
• [31] Friedrich Kuemmel, Über den Begriff der Zeit. Niemeyer, Tübingen 1962.
• [32] Henri Bergson, Time and Free Will: An Essay on the Immediate Data of Consciousness. Transl. F.L. Pogson. Kessinger Publishing, Montana 1910 [1889].
• [33] Gilles Deleuze, Bergsonism. Transl. Hugh Tomlinson and Barbara Habberjam. Zone Books, New York 1991.
• [34] Lawlor, Leonard and Moulard, Valentine, “Henri Bergson”, in: Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), available online.
• [35] Charles Sanders Peirce, Writings 3, 107-108, MS239 (Robin 392, 371), 1873. available online.
• [36] Ludwig Wittgenstein, Philosophical Investigations. §201
• [37] John Dewey, “Time and Individuality,” in: Jo Ann Boydston (ed.), Later Works of John Dewey, Vol.14. Southern Illinois University Press, Carbondale 1988.
• [38] John Dewey, “Experience and Nature,” in: Jo Ann Boydston (ed.), Later Works of John Dewey, Vol.1. Southern Illinois University Press, Carbondale 1981, p. 92.
• [39] Rudolf F. Kaspar und Alfred Schmidt (1992). Wittgenstein über Zeit. Zeitschrift für philosophische Forschung, Band 46(4): 569-583.
• [40] Ludwig Wittgenstein, Philosophische Bemerkungen. in: Werkausgabe Bd. 2. Frankfurt 1984.
• [41] “International System of Units (SI)”. Bureau International des Poids et Mesures. 2006.
• [42] Peter Janich (1996). Die Konstitution der Zeit durch Handeln und Reden. Kodikas/Code Ars Semeiotica 19, 133-147.
• [43] Ludwig Wittgenstein, Eine Philosophische Betrachtung (Das Braune Buch). in: Suhrkamp Werkausgabe Bd. 5. Frankfurt 1984.
• [44] Andrea A. Reichenberger, „Was ist Zeit?“ Wittgensteins Kritik an Augustinus kritisch betrachtet. in: Friedrich Stadler, Michael Stöltzner (eds.), Papers of the 28th International Wittgenstein Symposium 7-13 August 2005. Zeit und Geschichte – Time and History. ALWS, Kirchberg am Wechsel 2005.
• [45] Tagebücher 1924-1916. in: Ludwig Wittgenstein, Werkausgabe Bd.1, Frankfurt 1984.
• [46] Helga Nowotny, Eigenzeit: Entstehung und Strukturierung eines Zeitgefühls. Suhrkamp 1993.
• [47] Inga Römer, Das Zeitdenken bei Husserl, Heidegger und Ricoeur. Springer, Dordrecht & Heidelberg 2010.
• [48] Paul Ricoeur, Zeit und Erzählung, Bd. 3: Die erzählte Zeit, München, Fink , München 1991. (zuerst frz.: Paris 1985).
• [49] Paul Ricoeur (1980). On Narrative. Critical Inquiry, Vol. 7, No. 1, pp. 169-190.
• [50] Immanuel Kant, Kritik der reinen Vernunft, in: Wolfgang Weischedel (ed.), Immanuel Kant., Werke in sechs Bänden, Bd. 2, Wissenschaftliche Buchgesellschaft, Darmstadt 1983.
• [51] Marc Rölli, Ereignis auf Französisch. Von Bergson bis Deleuze. Fink, München 2004.
• [52] Marc Rölli, “Begriffe für das Ereignis: Aktualität und Virtualität. Oder wie der radikale Empirist Gilles Deleuze Heidegger verabschiedet”, in: Marc Rölli (ed.), Ereignis auf Französisch. Von Bergson bis Deleuze. Fink, München 2004
Growth Patterns
November 29, 2012
Growing beings and growing things, whether material or immaterial, accumulate mass or increase their spread. Plants grow, black holes grow, a software program grows, economies grow, cities grow, patterns grow, a pile of sand grows, a text grows, the mind grows, and even things like self-confidence and love are said to grow. On the other hand, we do not expect things like cars or buildings to "grow."
Although the initial "definition" mentioned above might sound fairly trivial, the examples demonstrate that growth itself, or more precisely the respective language game, is by no means a trivial thing. Nevertheless, when people start to talk about growth, or when they invoke the concept of growth implicitly, they mostly imagine a smooth and almost geometrical process, a dilation, a more or less smooth stretching. Urbanists and architects are no exception to this undifferentiated and prosy perspective. Additionally, growth is usually not considered seriously beyond its mere wording, probably due to the hasty prejudgment about the value of biological principles. Yet, if one can't talk appropriately about growth—which includes differentiation—one also can't talk about change. As a result of a widely (and wildly) applied simplistic image of growth, there is a huge conceptual gap in many, if not almost all, works about urban conditions, in urban planning, and about architecture.1 But why talk about change at all, if architecture and urbanism are anyway all about planning…
The imprinting by geometry often entails another prejudice: that of globality. Principles, rules and structures are thought to necessarily apply to the whole, whatever this "wholeness" is about. This is particularly problematic if these rules refer more or less directly to mere empirical issues. Thus it frequently goes unnoticed that maintaining a particular form, or keeping position in a desired region of the parameter space of a forming process, requires quite intense interconnected local processes, both for building and for destroying structures.
It was one of the failures in the idea of Japanese Metabolism not to recognize the necessity of a deep integration of this locality. Although they intended to (re-)introduce the concept of the "life cycle" into architecture and urbanism, they remained aligned to cybernetics. Thus, Metabolism failed mainly for two reasons. Firstly, it attempted to combine incommensurable mindsets: it is impossible to amalgamate modernism with the idea of bottom-up processes like self-organization or associativity, and the Metabolists always followed the modernist route. Secondly, the movement lacked a proper structural setup: the binding problem remained unresolved. They didn't develop a structural theory of differentiation that would have been suitable to derive appropriate mechanisms.
This Essay
In this piece we would like to show some possibilities for enlarging the conceptual space and the vocabulary that we could use to describe (the) "growing" (of) things. We will make special reference to architecture and urbanism, although the basics apply to other fields as well, e.g. to the growth and differentiation of organizations (as "management") or social forms, but also of more or even "completely" immaterial entities. In some way, this power is even mandatory if we are going to address the Urban6, for the Urban definitely exceeds the realm of the empirical.
We won’t do much of philosophical reflection and embedding, albeit it should be clear that these descriptions don’t make sense without proper structural, i.e. theoretical references as we have argued in the previous piece. “As such” they would be just kind of a pictorial commentary, mistaking metaphor as allegory. There are two different kinds of important structural references. One is pointing to the mechanisms2, the abstract machinery with its instantiation on the micro-level or with respect to the generative processes. The other points to the theoretico-structural embedment, which we have been discussing in the previous essay. Here, it is mainly the concept of generic differentiation that provides us the required embedding and the power to overcome the binding problem in theoretical work.
The remainder of this essay comprises the following sections:
1. Space
2. Modes of Talking
3. Modes of Growth
4. Effects of Growth
5. Growth, an(d) Urban Matter
1. Space
Growth concerns space, both physical and abstract space. It even concerns the quality of space. The fact of growth is incompatible with the conception of space as a container. This becomes obvious in the case of fractals, which got their name from their "broken" dimensionality. A fractal could be 2.846-dimensional, or 1.2034101-dimensional. The space established by the "inside" of a fractal is very different from 3-dimensional space. Astonishingly, the dimensionality need not even be constant while traveling through a fractal.
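To make the notion of "broken" dimensionality concrete, here is a minimal box-counting sketch in Python; it is only an illustration, assuming a binary 2D numpy array as input, with an illustrative function name and box sizes. The estimated dimension is the slope of log N(s) against log(1/s).

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary 2D array.

    For each box size s, count the boxes containing at least one filled
    pixel; the dimension is the slope of log N(s) versus log(1/s).
    Box sizes must be smaller than the image dimensions.
    """
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then reduce each s-by-s block.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square yields roughly 2.0; a Sierpinski-like set roughly 1.585.
```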
Abstract spaces, on the other hand, can be established by any set of criteria, simply by interpreting criteria as dimensions. Thus one gets a space for representing and describing items, their relations and their transformations. In mathematics, a space is essentially defined by the possibility to perform a mapping from one set to another, or in other terms, by the abstract (group-theoretic) symmetry properties of the underlying operations on the relations between any entities.
Strangely enough, in mathematics spaces are almost exclusively conceived as consisting of independent dimensions. Remember that "independence" is at the core of the modernist metaphysical belief set! Yet, spaces need be neither Euclidean nor Cartesian (the generalization of the former). The independence of descriptive dimensions can be dropped, as we have argued in an earlier essay. The resulting space is not a dimensional space, but rather an aspectional space, which can be conceived as a generalization of dimensional space.
In order to understand growth we should keep in contact with a concept of space that is as general as possible. It would be really stupid, for instance, to situate growth restrictively in a flat 2-dimensional Euclidean space. At least since Descartes' seminal work "Regulae" (AT X 421-424) it should be clear that any aspect may be taken as a contribution to the cognitive space [8].
The Regulae in its method had even allowed wide latitude to the cognitive use of fictions for imagining artificial dimensions along which things could be grasped in the process of problem solving. Natures in the Meditations, however, are no longer aspects or axes along which things can be compared, evaluated, and arrayed, but natures in the sense that Rule 5 had dismissed: natures as the essences of existing things.
At the same time Descartes also makes clear that these aspects should not be taken as essences of existing things. In other words, Descartes was ahead of 20th-century realism and existentialism! Aspects do not represent things in their modes of existence; they represent our mode of talking about the relations we establish to those things. Yet, these relations are more like the threads of String Theory: without fixed endings on either side. All we can say about the outer world is that there is something. Of course, that is far too little to posit it as a primacy for human affairs.
The consequence of such a dimensional limitation would be a blind spot (if not a population of them), a gap in the potential to perceive, to recognize, to conceive of and to understand. Unfortunately, the gaps themselves, the blind spots are not visible for those who suffer from them. Nevertheless, any further conceptualization would remain in the state of educated nonsense.
Growth is established as a transformation of (abstract) space. Vice versa, we can conceive of it also as the expression of the transformation of space. The core of this transformation is the modulation of the signal intensity length through the generation of compartments, rendering abstract space into a historical, individual space. Vice versa, each transformation of space under whatsoever perspective can be interpreted as some kind of growth.
The question is no longer to be or not to be, as ontologists have tried to prove since the first claim of substance and the primacy of logics and identity. What is more, already Shakespeare demonstrated the penultimate consequences of that question. Hamlet, in his mixture of being a realist existentialist (by that very question) and his liking for myths and (the use of) hidden wizards, guided by the famous misplaced question, went straight into his personal disaster, not without causing a global one. Shakespeare's masterfully wrapped lesson is that the question about Being leads straight to disaster. (One might add that this holds also for ontology and existentialism: it is a consequence of ethical corruption.)
Substance has to be thought of as always and already a posteriori to change, to growth. Setting change as a primacy means to base thought philosophically on difference. While this is an almost completely unexplored area, despite Deleuze's proposal of the plane of immanence, it is also clear that starting with identity instead causes lots of serious trouble. For instance, we would be forced to acknowledge the claim that a particular interpretation could indeed be universalized. The outcome? A chimaera of Hamlet (the figure in the tragedy!) and Stalin.
Instead, the question is one of growth and the modulation of space: Who could reach whom? It is only through this question that we can integrate the transcendence of difference, its primacy, and secure the manifold of the human in an uncircumventable manner. Life in all of its forms, with all its immanence, always precedes logic.3 Not only for biological assemblages, but also for human beings and all their products, including "cities" and other forms of settlements.
Just to be clear: the question of reaching someone else does not depend on anything given. The given is a myth, as philosophers from Wittgenstein to Quine and on to Putnam and McDowell have been proving. Instead, the question about the possibility to reach someone else, to establish a relation between any two (at least) items, is one of activity, design, and invention, targeting the transformation of space. This holds even in particle physics.
2. Modes of Talking
Traditionally speaking, the result of growth is formed matter. More exactly, however, it is transformed space. We may distinguish a particular form, morphos, or with regard to psychology also a "Gestalt," from form as an abstractum. The result of growth is form. Thus, form does not only concern matter; it always concerns the potential relationality.
For instance, growing entities never interact "directly". They (that is, also: we) always interact through their spaces and the mediality that is possible within them.4 Otherwise it would be completely impossible for a human individual to interact with a city. Before any semiotic interpretive relation, it is the individual space that enables incommensurable entities to relate.
If we consider the growth of a plant, for instance, we find a particular morphology. There are different kinds of tissues and also a rather typical habitus, i.e. a general appearance. The underlying processes are of biological nature, spanning from physics and bio-chemistry to information and the “biological integration” of those.
Talking about the growth of a building or the growth of a city, we have to spot the appropriate level of abstraction. There is no 1:1 transferability. In a cell we find neither craftsmen nor top-down implementations of plans. In contrast, raising a building apparently knows nothing of probabilistic mechanisms. Just by intentionally calling something "metabolism" (Kurokawa) or "fractal" (Jencks), thereby invoking associations of organisms and their power to maintain themselves in physically highly unlikely conditions, we certainly do not approach or even acquire any understanding.
The key to any growth model is the identification of mechanisms (cf. [4]). Biology is the science that draws most on the concept of mechanism (so far), while physics does so the least. The level of mechanism is already an abstraction, of course. It needs to be completed, however, by the concept of population, i.e. a dedicated probabilistic perspective, in order to prevent falling back into the realm of trivial machines. In a cross-disciplinary setting we have to generalize the mechanisms into principles, such that these provide a shared differential entity.5
Well, we already said that a building is rarely raised by a probabilistic process. Yet, this is only true if we restrict our considerations to the likewise abstract description of the activities of the craftsmen. Besides, the building process starts long before any physical matter is touched.
Secondly, from the perspective of abstraction we should never forget—and many people indeed forget about this—that the space of expressibility and the space of transformation also contain the nil-operator. From the realm of numbers we know it as the zero. Note that without the zero many things could not be expressed at all. Similarly, the negative is required for completing the catalog of operations. Both the nil-operator and the inverse element are basic constituents of any mathematical group structure, which is the most general way to think about the conditions for operations in a space.
The same is true for our endeavor here. It would be impossible to construct the possibility of graded expressions, i.e. the possibility of a more or less smooth scale, without the nil and the negative. Ultimately, it is the zero and the nil-operation, together with the inverse, that allow us to talk reflexively at all, to create abstraction, in short to think through.
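Since the group structure is invoked here, a minimal sketch of the axioms on a small finite carrier may help; the helper name is illustrative. The identity element plays the role of the nil-operator, the inverse that of the negative.

```python
def is_group(elements, op, identity, inverse):
    """Check the group axioms on a small finite carrier set:
    closure, associativity, an identity (the 'nil-operator'),
    and an inverse (the 'negative') for every element."""
    closed = all(op(a, b) in elements for a in elements for b in elements)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elements for b in elements for c in elements)
    ident = all(op(identity, a) == a == op(a, identity) for a in elements)
    inv = all(op(a, inverse(a)) == identity for a in elements)
    return closed and assoc and ident and inv

# Integers modulo 5 under addition: 0 is the nil-operator, -a the inverse.
Z5 = set(range(5))
print(is_group(Z5, lambda a, b: (a + b) % 5, 0, lambda a: (-a) % 5))  # True
```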
3. Modes of Growth
Let us start with some instances of growth from “nature”. We may distinguish crystals, plants, animals and swarms. In order to compare even those trivial and quite obviously very different “natural” instances with respect to growth, we need a common denominator. Without that we could not accomplish any kind of reasonable comparison.
Well, initially we said that growth could be considered as accumulation of mass or as an increase of spread. After taking one step back we could say that something gets attached. Since crystals, plants and animals are equipped with different capabilities, and hence mechanisms, to attach further matter, we choose the way of organizing the attachment as the required common denominator.
Given that, we can now change the perspective onto our instances. The performance of comparing implies an abstraction, hence we will not talk about crystals etc. as phenomena, as this would inherit the blindness of phenomenology against its conditions. Instead, we conceive of them as models of growth, inspired by observations that can be classified along the mode of attachment.
Morphogenesis, the creation of new instances of formed matter, or even the creation of new forms, is tightly linked to complexity. Turing titled his famous article "The Chemical Basis of Morphogenesis". This, however, is not exactly what he invented, for we have to distinguish between patterns and forms, or likewise, between order and organization. Turing described the formal conditions for the emergence of order from a noisy flow of entropy. Organization, in contrast, also needs the creation of remnants, partial decay, and it is organization that brings in historicity. Nevertheless, the mechanisms of complexity, of which the Turing patterns and mechanisms are part, are indispensable ingredients for the "higher" forms of growth, at least for anything besides crystals (but probably even for them in some limited sense). Note that morphogenesis, in neither of its aspects, should be conceived as something "cybernetical"!
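As a pointer to what Turing described formally, here is a minimal sketch of a two-component reaction-diffusion system on a 1D ring. The Gray-Scott variant used here is a standard stand-in, not Turing's original equations, and the parameter values are the commonly used ones; nothing here is specific to any biological tissue.

```python
import numpy as np

# Gray-Scott reaction-diffusion on a 1D ring: a minimal illustration of
# order emerging from a near-uniform, slightly noisy state.
n, Du, Dv, F, k = 256, 0.16, 0.08, 0.035, 0.060
u, v = np.ones(n), np.zeros(n)
v[n // 2 - 5: n // 2 + 5] = 0.5          # a local perturbation
u += 0.01 * np.random.rand(n)            # a little noise

def lap(a):
    # periodic 1D Laplacian (dx = 1)
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(10000):                    # explicit Euler steps (dt = 1)
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# u now shows a stationary spotted/striped profile: a Turing-type pattern.
```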
3.1. Crystals
Figure 1a: Crystals are geometric entities out of time.
Crystals are geometrical entities. In the 19th century, the study of crystals and the attempt to classify them inspired mathematicians in their development of the concepts of symmetry and group theory. Crystals are also entities that are "structurally flat": there are no levels of integration, and their macroscopic appearance is a true image of their constitution on the microscopic level. A crystal looks exactly the same from the level of atoms up to the scale of centimeters. Finally, crystals are outside of time, for their growth depends only on the one or two layers of atoms ("elementary cells") that had been attached before at the respective site.
There are two important conditions for growing a 3-dimensional crystal. The site of precipitation and attachment needs to be (1) immersed in a non-depletable solution in which (2) particles can move through diffusion in three dimensions. If these conditions are not met, mineral depositions look very different. As far as the global embedding conditions are concerned, the rules have then changed. More abstractly, the symmetry of the solution is broken, and so the result of the process is a fractal.
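The constrained regime shown in fig. 1b below is commonly modeled by diffusion-limited aggregation (DLA). Here is a minimal, unoptimized lattice sketch, assuming random walkers released from the boundary and a single central seed; it is slow but transparent, and all parameters are illustrative.

```python
import random

# Lattice diffusion-limited aggregation: particles wander in from the
# outside and stick on first contact with the aggregate, i.e. attachment
# of matter from the outside, as in the growth of crystals and minerals.
N = 201
grid = [[False] * N for _ in range(N)]
grid[N // 2][N // 2] = True                      # the seed

def has_stuck_neighbor(x, y):
    return any(grid[y + dy][x + dx]
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(2000):                            # number of particles
    # release a walker on a random edge of the lattice
    x, y = random.choice([(random.randrange(N), 0), (random.randrange(N), N - 1),
                          (0, random.randrange(N)), (N - 1, random.randrange(N))])
    while True:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # random step, clamped so neighbor checks stay inside the lattice
        x, y = min(max(x + dx, 1), N - 2), min(max(y + dy, 1), N - 2)
        if has_stuck_neighbor(x, y):
            grid[y][x] = True                    # attachment from the outside
            break
```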
Figure 1b. Growth in the realm of minerals under spatial constraints, particularly the reduction of dimensionality. The image does NOT show petrified plants! It is precipitated mineral from a solution seeped into a nearly 2-dimensional gap between two layers of (lime) rock. The similarity of shapes points to a similarity of mechanisms.
Both examples are about mineral growth. We can now understand that the variety of resulting shapes is highly dependent on the dimensional conditions embedding the growth process.
Figure 1c. Crystalline buildings. Note that it is precisely and only this type of building that actualizes a "perfect harmony" between the metaphysics of the architect and the design of social conditions. The belief in independence and the primacy of identity has been quite effectively delivered into the habit of everyday housing conditions.
Figure 1d. Crystalline urban layout, instantiated as “parametrism”. The “curvy” shape should not be misinterpreted as “organic”. In this case it is just a little dose of artificial “erosion” imposed as a parametric add-on to the crystalline base. We again meet the theme of the geological. Nothing could be more telling than the claim of a “new global style”: Schumacher is an arch-modernist, a living fossil, mistaking design as religion, who benefits from advanced software technology. Who is Schumacher that he could decree a style globally?
The growth of crystals is a very particular transformation of space. It is the annihilation of any active part of it. The relationality of crystals is completely exhausted by resistance and the spread of said annihilation.
Regarding the Urban6, parametrism must be considered deeply malignant. As the label says, it takes place within a predefined space. Yet, who the hell do Schumacher (and Hadid, the mathematician) think they are, that they should be allowed, or even be considered able, to define the space of the Urban? For the Urban is a growing "thing": it creates its own space. Consequently, all the rest of the world admits not to "understand" the Urban, yet Hadid and her barking Schumacher even claim to be able to define that space, and thus also claim that this space shall be defined. Not surprisingly, Schumacher is addicted to the mayor of all bureaucrats of theory, Niklas Luhmann (see our discussion here), as he proudly announces in his book "The Autopoiesis of Architecture", which is full of pseudo- and anti-theory.
The example of the crystal clearly shows that we have to consider the solution and the deposit together as a conditioned system. The forces that rule their formation are a compound setup. The (electro-chemical) properties of the elementary cell on the microscopic level, precisely where it is in contact with the solution, together with the global, macroscopic conditions of the immersing solution, determine the instantiation of the basic mechanism. Regardless of the global conditions, the basic mechanism for the growth of crystals is the attachment of matter from the outside.
In crystals, we do not find a separate structural process layer that would be used to regulate growth. The deep properties of matter determine their growth. Moreover, only the outer surface is involved.
3.2. Plants
With plants, we find a class of organisms that grow—just as crystals—almost exclusively at their "surface". With only a few exceptions, matter is attached almost exclusively at the "outside" of their shape. Yet, matter is attached from their inside, at precisely defined locations: the meristems. Additionally, there is a dedicated mechanism to regulate growth, based on the diffusion of certain chemical compounds, the phyto-hormones, e.g. auxin. This regulation emancipates the plant in its growth from the properties of the matter it is built from.
Figure 2a. Growth in plants. The growth cone is called the apical meristem. There are just a handful of largely undifferentiated cells that keep dividing almost indefinitely. The shape of the plant is largely determined by a reaction-diffusion system in the meristem, based on phyto-hormones that determine the fate of the cells. Higher plants can build secondary meristems at particular locations, leading to a characteristic branching pattern.
Figure 2b. A pinnately compound leaf of a fern, showing its historical genesis as attachment at the outside (the tip of the meristem) from the inside. If you apply this principle to roots, you get a rhizome.
Figure 2c. The basic principle of plant growth can be mapped into L-grammars, in order to create simulations of plant-like shapes. This makes clear that fractals do not belong to geometry! Note that any form creation that is based on formal grammars is subject to the representational fallacy.
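For concreteness, a minimal sketch of such an L-grammar; the rule set is the well-known "fractal plant" example from the L-system literature, not one derived from the figure.

```python
# A minimal L-grammar (L-system): rewrite every symbol in parallel, per
# generation. Drawn with turtle graphics, 'F' means "step forward",
# '+'/'-' turn, and '[' / ']' push/pop the drawing state, which is what
# produces the branches.
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

def expand(axiom, depth):
    for _ in range(depth):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

s = expand("X", 4)   # the resulting string encodes a plant-like branching shape
```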
Instead of using L-grammars as a formal reference we could also mention self-affine mapping. Actually, self-affine mapping is the formal operation that leads to perfect self-similarity and scale invariance. A self-affine mapping projects a smaller version of the original, often primitive, graph onto itself. But let us inspect two examples.
Figure 2d.1. Scheme showing the self-affine mapping that would create a graph that looks like a leaf of a fern (image from wiki).
Figure 2d.2. Self-affine fractal (a hexagasket) and its neighboring graph, which encodes its creation [9].
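To make the self-affine mapping of fig. 2d.1 concrete, here is a minimal sketch of the classic Barnsley fern as an iterated function system, using Barnsley's published coefficients; the helper name is illustrative.

```python
import random

# Barnsley's fern as an iterated function system: four affine maps, each
# chosen with a fixed probability, project shrunken copies of the whole
# onto itself, i.e. the self-affine mapping described above.
maps = [  # (a, b, c, d, e, f, p) for (x, y) -> (ax + by + e, cx + dy + f)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def fern(n=50000):
    x, y, pts = 0.0, 0.0, []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for a, b, c, d, e, f, p in maps:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        pts.append((x, y))
    return pts   # scatter-plotting these points yields the fern shape
```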
Back to real plants! Nowadays, most plants are able to build branches. Formally, they perform a self-affine mapping. Bio-chemically, the cells in their meristem(s) are able to respond differentially to the concentration of one (or two) plant hormones, in this case auxin. Note that for establishing a two-component system you won't necessarily need two hormones! The counteracting "force" might just as well be realized by some process inside the cells of the meristem.
From this relation between the observable fractal form (e.g. the leaf of the fern, or the shape of the surroundings of a city layout) and the formal representation we can draw a rather important conclusion. The empirical analysis of a shape should never stop with the statement that the respective shape shows scale-invariance, self-similarity or the like. Literally nothing is gained by that! It is just a promising starting point. What one has to do subsequently is to identify the mechanisms leading to the homomorphy between the formal representation and the particular observation: if you like, the chemical traces of pedestrians, the tendency to imitate, or whatever else. Even more importantly, in each particular case these actual mechanisms could be different, though leading to the same visual shape!
In earlier paleobiotic ages, most plants were not able to build branches. Think of tree ferns, or the following living fossil.
Figure 2d. A primitive plant that can't build secondary meristems (Welwitschia). Unlike in higher plants, where the meristem is transported by the growth process to the outer regions of the plant (its virtual borders), here it remains fixed; hence, the leaf grows only in the center.
Figure 2e. The floor plan of the Guggenheim Bilbao is strongly reminiscent of the morphology of Welwitschia. Note that this "reminding" represents a naive transfer on the representational level. Quite in contrast, we have to say that the similarity in shape points to a similarity regarding the generating mechanisms. Jencks, for instance, describes the emanations as petals, but without further explanation, just as metaphor. Gehry himself explained the building by referring to the mythology of the "world-snake", hence the importance of the singularity of the "origin". Yet, the mythology does not allow us to say anything about the growth pattern.
Figure 2f. Another primitive plant that can't build secondary apical meristems: common horsetail (Equisetum arvense). Yet, in this case the apical meristem is transported.
Figure 2g. Patrick Schumacher, Hadid Office, for the master plan of the Istanbul project. Primitive concepts lead to primitive forms and primitive habits.
Many, if not all, of the characteristics of growth patterns in plants are due to the fact that they are sessile life forms. Most buildings are also "sessile". In some way, however, we consider them more as geological formations than as plants. It seems to be "natural" that buildings start to look like those in fig. 2g above.
Yet, in such reasoning there are even two fallacies. First, regarding design there is neither any kind of "naturalness" nor any kind of necessity. Second, buildings are not necessarily sessile. Everything depends on the level of the argument. If we talk just about matter, then, yes, we can agree that most buildings do not move, just like crystals or plants. Buildings cannot be appropriately described, however, just on the physical level of their matter. It is therefore very important to understand that we have to argue on the level of structural principles. Later we will provide an impressive example of an "animal" or "animate" building.7
As we said, plants are sessile through and through, not only regarding their habitus. In plants, there are no moving cells in the inside. Thus, plants have difficulties regenerating without dropping large parts. They can't replace matter "somewhere in between", as animals can. The cells in the leaves, for instance, mature as cells do in animals, albeit for different reasons; in plants, it is mainly the accumulation of calcium. Thus, even in tropical climates trees drop their leaves at least once a year, some species all of them at once.
The conclusion for architecture as well as for urbanism is clear. It is just not sufficient to claim "metabolism" (see below) as a model. Nor is it appropriate to take "metabolism" as a model, not even if we were to avoid the representational fallacy to which the "Metabolists" fell prey. Instead, the design of the structure of growth should orient itself by the way animals are organized, at the level of macroscopic structures like organs, if we disregard swarms for the moment, as most of them are not able to maintain a persistent form.
This, however, immediately brings the problematics of territorialization to the fore. What we would need for our cities is thus a generalization towards the body without organs (Deleuze), which orients towards capabilities, particularly the capability to choose the mode of growth. Yet, the condition for this choosing is knowledge about the possibilities. So, let us proceed to the next class of growth modes.
3.3. Swarms
In plants, the growth mechanisms are implemented in a rather deterministic manner; the randomness in their shape is restricted to the induction of branches. In swarms, we find a more relaxed regulation, as there is only little persistent organization. There is just transient order. In some way, many swarms are probabilistic crystals, that is, rather primitive entities. Figures 3a through 3d provide some examples of swarms.
From the investigation of swarms of birds and fishes it is known that each "individual" just attends to the movement vectors of its neighbors. There is no deep structure, precisely because there is no persistent organization.
Figure 3a. A flock of birds. Birds take the movement of several neighbors into account, sometimes without much consideration of their distance.
Figure 3b. A swarm of fish, a "school". It has been demonstrated that some fish not only consider the position or the direction of their neighbors, but also the form of the average vector. A strong straight vector seems to be more "convincing" to the neighbors as a basis for their "decision" than one of unstable direction and magnitude.
Figure 3c. The Kaaba in Mecca. Each year several persons die due to panic waves. Swarm physics has helped to improve the situation.
Figure 3d. Self-ordering in a pedestrian population at Shibuya, Tokyo. In order not to crash into each other, humans employ two strategies: either just following the person ahead, or considering the second derivative of the vector if the first strategy is not applicable. Yet, it requires a certain "culture", an unspoken agreement, to do so (see this for what happens otherwise).
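The neighbor rule described above can be sketched in a few lines. This is a reduced, alignment-only variant of the Vicsek/boids family of models; radius, speed and blending weight are illustrative, and angle wrap-around is ignored for brevity.

```python
import math
import random

# Alignment-only swarm: each agent steers toward the mean heading of its
# nearby neighbors, nothing else. Positions live on a unit torus.
N, RADIUS, SPEED, BLEND = 50, 0.2, 0.01, 0.05
pos = [(random.random(), random.random()) for _ in range(N)]
ang = [random.uniform(0, 2 * math.pi) for _ in range(N)]

for _ in range(100):                      # simulation steps
    new_ang = []
    for i in range(N):
        nb = [ang[j] for j in range(N)
              if math.dist(pos[i], pos[j]) < RADIUS]   # includes self
        # circular mean of the neighbors' headings
        mean = math.atan2(sum(math.sin(a) for a in nb),
                          sum(math.cos(a) for a in nb))
        new_ang.append(ang[i] + BLEND * (mean - ang[i]))
    ang = new_ang
    pos = [((x + SPEED * math.cos(a)) % 1.0, (y + SPEED * math.sin(a)) % 1.0)
           for (x, y), a in zip(pos, ang)]
# Over time the headings align locally: transient order arises without
# any persistent organization.
```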
A particularly interesting example of highly developed swarms that are able to establish persistent organization is provided by Dictyostelium (Fig 4a), commonly called a slime-mold. In biological taxonomy, they form a group called Mycetozoa, which indicates their strangeness: partly, they behave like fungi, partly like primitive animals. Yet, they are neither prototypical fungi nor prototypical animals. In both cases the macroscopic appearance is a consequence of (largely) chemically organized collaborative behavior of a swarm of amoeboids. Under good environmental conditions slime-molds split up into single cells, each feeding on its own (mostly on bacteria). Under stressful conditions, they build astonishing macroscopic structures, which are only partially reversible, as parts of the population might be "sacrificed" to meet the purpose of non-local distribution.
Figure 4a. Dictyostelium, “fluid” mode; the microscopic individuals are moving freely, creating a pattern that optimizes logistics. Individuals can smoothly switch roles from moving to feeding. It should be clear that the “arrangement” you see is not a leaf, nor a single organism! It is a population of coordinating individuals. Yet, the millions of organisms in this population can switch “phase”… (continue with 4b…)
Figure 4b. Dictyostelium, in “organized” mode, i.e. the “same” population of individuals now behaving “as if” it would be an organism, even with different organs. Here, individuals organize a macroscopic form, as if they were a single organism. There is irreversible division of labor. Such, the example of Dictyostelium shows that the border between swarms and plants or animals can be blurry.
The concept of swarms has also been applied to crowds of humans, e.g. in urban environments [11]. Here, we can observe an amazing re-orientation. Finally, after 10 years or so of research on swarms and crowds, naïve modernist prejudices are being corrected. Independence and reductionist physicism have been dropped; instead, researchers are becoming increasingly aware of relations and behavior [14].
Trouble is, the simulations treat people as independent particles—ignoring our love of sticking in groups and blabbing with friends. Small groups of pedestrians change everything, says Mehdi Moussaid, the study’s leader and a behavioral scientist at the University of Toulouse in France. “We have to rebuild our knowledge about crowds.”
Swarms solve a particular class of challenges: logistics. Whether in plants or slime-molds, it is the transport of something as an adaptive response that provides their framing “purpose”. This something could be the members of the swarm itself, as in fish, or something that is transported by the swarm, as it is the case in ants. Yet, the difference is not that large.
Figure 5: Simulation of foraging raid patterns in army ants (Eciton) (from [12]). The hive (they don't have a nest) is at the bottom, while the food source is towards the top. The only difference between A and B is the number of food sources.
When compared to crystals, even simple swarms show important differences. Firstly, in contrast to crystals, swarms are immaterial: what we can observe at the global scale, macroscopically, is an image of rules that are independent of matter. Yet, in simple, "prototypical" swarms the implementation of those rules is still global, just as in crystals: everywhere in the primitive swarm the same basic rules are active. We have seen that in Dictyostelium, much as in social insects, rules begin to be active in a more localized manner.
The separation of immaterial components from matter is very important: it is the birth of information. We may conceive of information itself as a morphological element, as a condition for probabilistic instantiation. It is not by chance that we assign the label "fluid" to large flocks of birds, say starlings in autumn. On the molecular level, water itself is organized as a swarm.
As a further possibility, the realm of immaterial rules also allows for a differentiation of rules. Since in crystals the rule is almost synonymous with the properties of the matter, there is no such differentiation for them. They are what they are, eternally. In contrast, in swarms we always find a setup that comprises attractive and repellent forces, which is the reason for their capability to build patterns. This capability is often called self-organization, albeit calling it self-ordering would be more exact.
There is a last interesting point about swarms. In order to boot a swarm as a swarm, that is, to effectuate the rules, a certain minimal density is required. From this perspective, we can also recognize a link between swarms and mediality. The appropriate concept to describe swarms is thus the wave of density (or of probability).
Not only in urban research is the concept of swarms often used in agent-based models. Unfortunately, however, only the most naive approaches are taken, conceiving of agents as entities almost without any internal structure, i.e. also without memory. Paradoxically, researchers often invoke the myth of "intelligent swarms", overlooking that intelligence is not something associated with swarms. In order to find appropriate solutions to a given challenge, we simply need an informational n-body system, where we find emergent patterns and evolutionary principles as well. This system can be realized even in a completely immaterial manner, as a pattern of electrical discharges. Such a process we came to call a "brain"… Actually, swarms without an evolutionary embedding can be extremely malignant and detrimental, since in swarms the purpose is not predefined. Fiction authors (M. Crichton, F. Schätzing) recognized this long ago. Engineers seem to still have difficulties with that.
Thus, we can also see that swarms actualize the most seriously penetrating form of growth.
3.4. Animals
So far, we have met three models of growth. In plants and swarms we find different variations of the basic crystalline mode of growth. In animals, the regulation of growth acquired even more degrees of freedom.
The major determinant of the differences between the forms of plants and animals is movement. This applies not only to the organism as a whole; we find it also on the cellular level. Plants do not have blood or an immune system, in which cells of a particular type move around. Once plant cells have settled, they are fixed.
The result of this mobility is a greatly diversified space of possibilities for instantiating compartmentalization. Across the compartments, which we find also in the temporal domain, we may even see different modes of growth. The liver of the vertebrates, for instance, grows more like a plant. It is somehow not surprising that the liver is the organ with the best ability for regeneration. We also find interacting populations of swarms in animals, even in the most primitive ones like sponges.
The important aspects of form in animals are in their interior. While for crystals there is no interiority at all, plants differ in their external organization, their habitus, with swarms somewhere in between. Animals, however, are different due to their internal organization on the level of macroscopic compartments, which includes their behavioral potential. (Later: a remark about metabolism as taking the wrong metaphorical anchor.) Note that the cells of animals look quite similar; they are highly standardized, even between flies and humans.
Along with the importance of the dynamics and form of interior compartments, the development of animals in their embryological phase8 is strictly choreographed. Time is no longer an outer parameter. Much more than plants, swarms or even crystals, of course, animals are beings in and of time. They have history, as individuals and as populations, which is independent of matter. In animals, history is a matter of form and rules, of interior, self-generated conditions.
During the development of animal embryos we find some characteristic operations of form creation, based on the principle of mobility, in addition to the principles that we can describe for swarms, plants and crystals. These are:
• folding, involution and blastulation;
• melting; and finally
• inflation and gastrulation.
The mathematics for describing these operations is no longer geometry. We need topology and category theory in order to grasp it, that is, the formalization of transformation.
Folding brings together compartments that have been produced separately. It breaks the limitations of signal horizons by initiating a further level of integration. Hence, the role of folding can be understood as a means to overcome or to instantiate dimensional constraints and/or modularity. While inflation is the mere accumulation of mass and the amorphous enlargement of a given compartment by attachment from the interior, melting may be conceived as a negative attachment. Abstractly taken, it introduces the concept of negativity, which in turn allows for smooth gradation. Finally, involution, gastrulation and blastulation introduce floating compartments, hence swarm-like capabilities in the interior organization. This blurs the boundaries between structure and movement, introducing probabilism and reversibility into the development and the life form of the being.
Figure 6a. Development in embryos. On the left, a very early phase is shown, emphasizing melting and inflating, which leads to "segments", called metamers. (Red arrows show sites of apoptosis, blue arrows indicate inflation, i.e. ordinary increase of volume.)
Figure 6b. Early development phase of a hand. The space between fingers is melted away in order to shape the fingers.
Figure 6c. Rem Koolhaas [16]. Inverting the treatment of the box, thereby finding (“inventing”?) the embryonic principle of melting tissue in order to generate form. Note that Koolhaas himself never referred to “embryonic principles” (so far). This example demonstrates clearly where we have to look for the principles of morphogenesis in architecture!
In image 6a above we can not only see the processes of melting and attaching, we can also observe another recipe of nature: repetition. In the case of the Bauplan of animal organisms the result is metamery.9 While in lower animals such as worms (Annelida) metamers are easily observed, in higher animals, such as insects or vertebrates, metamers are often (clearly) visible only in the embryonal phase. Yet, in animals metamers are always created through a combination of movement or melting and compartmentalization in the interior of the body. They are not "added" in the sense of attaching them to the actual border, as is the case in plants or crystals. In mathematical terms, the operation in animals' embryonic phase is multiplication, not addition.
Figure 6d. A vertebrate embryo, showing the metameric organization of the spine (left), which then gets replicated by the somites (right). In animals, metamers are a consequence of melting processes, while in plants it is due to attachment. (image found here)
The principles of melting (apoptosis), folding, inflating and repetition can of course be used to create artificial forms. The approach is called subdivision. Note that the forms shown below have nothing to do with geometry anymore. The frameworks needed to talk about them are, at least, topology and category theory. Additionally, they require an advanced non-Cartesian conception of space, as we have been outlining above.
Figure 7. Forms created by subdivision (courtesy Michael Hansmeyer). They are based on a family of procedures, called subdivision, that are directed towards the differentiation of the interior of a body, and they can't be described by geometry any more. Thus, this is a non-geometrical, procedural form, which expresses time, not matter and its properties. The series of subdivisions "break" the straightness of edges and can also be seen as a series of nested, yet uncompleted folds (see Deleuze's work on the Fold and Leibniz). Here, in Hansmeyer's work, each column is a compound of three "tagmata", that is, sections that have been grown "physically" independently of each other, related just by a similar dynamics in the set of parameters.
Creating such figurated forms is not fully automatic, though. There is some contingency, represented by the designer’s choices while establishing a particular history of subdivisions.
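As a minimal illustration of the subdivision family, Chaikin's corner cutting shows how repeated refinement "breaks" the straightness of edges; this is only the one-dimensional skeleton of the idea, not Hansmeyer's actual 3D procedure.

```python
# Chaikin's corner cutting: each pass replaces every edge of a polyline
# by two points at 1/4 and 3/4 of its length; repeated passes converge
# to a smooth curve, a nested series of uncompleted "folds".
def chaikin(points, passes=3):
    for _ in range(passes):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(len(chaikin(square)))  # 5 points -> 8 -> 14 -> 26 after three passes
```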
Animals employ a wide variety of modes in their growth. They can do so due to their highly developed capability of compartmentalization. They gain almost complete independence from matter10, regarding their development, their form, and particularly regarding their immaterial setup, which we can observe as learning and the use of rules. Learning, on the other hand, is intimately related to perception, in other words configurable measurement, and data. Perception, as a principle, is in turn mandatory for the evolution of brains and the capability to handle information. Thus, staffing a building with sensors is not a small step. It could take the form of a jump into another universe, particularly if the sensors are conceived as being separate from the being of the house, for instance in order to facilitate or modify mental or social affairs of its inhabitants.
3.5. Urban Morphing
On the level of urban arrangements, we can also observe different forms of differentiation in morphology.
Figure 8. Urban sprawl, London (from [1]). The layout looks like a slime-mold. We may conclude that cities grow like slime-molds, by attachment from the inside, directed both towards the inside and the outside. Early phases of urban sprawl, particularly in developing countries, grow by attachment from the outside; hence they look more like a dimensionally constrained crystal (see fig. 1b).
The concept of the fractal, and the related one of self-similarity, of course also entered the domain of urbanism, particularly an area of interest called Urban Morphology. This was born as a sub-discipline of geography. It is characterized by a salient reductionism of the Urban to the physical appearance of a city and its physical layout, which of course is not quite appropriate.
Given the mechanisms of attachment, whether due to interior processes or attachment from the outside (through people migrating to the city), it is not really surprising to find fractal shapes similar to those of (dimensionally) constrained crystalline growth, or of slime-molds with their branching amoeba highways. In order to understand the city, the question is not whether there is a fractal or not, or whether its dimensionality is 1.718 or 1.86.
The question is about the mechanisms that show up as a particular material habitus, and about the actual instantiation of these mechanisms. Or even shorter: the material habitus must be translated into a growth model. In turn, this would provide the means to shape the conditions of the city's own unfolding and evolution. We already know that dedicated planning and dedicated enforcement of plans will not work in most cities. It is of utmost importance here not to fall back into representationalist patterns, as for instance Michael Batty sometimes falls prey to [1]. Avoiding representationalist fallacies is possible only if we embed the model of abstract growth into a properly bound compound which comprises theory (methodology and philosophy) and politics as well, much as we proposed in the previous essay.
Figure 9a. In former times, or as a matter of geographical fact, attachment from the outside was excluded. Any growth is then directed towards the inside and shows up as differentiation. Here, in this figure, we see a planned city, which thus looks much like a crystal.
Figure 9b. A normally grown medieval city. While the outer “shell” looks pretty standardized, though not “crystalline”, the interior shows rich differentiation. In order to describe the interior of such cities we have to use the concept of type.
Figure 10a. Manhattan is the paradigmatic example for congestion due to a severe (in this case: geographical) limitation of the possibility to grow horizontally. In parallel, the overwhelming interior differentiation created a strong connectivity and abundant heterotopias. This could be interpreted as the prototype of the internet, built in steel and glass (see Koolhaas’ “Delirious New York” [15]).
Figure 10b. In the case of former Kowloon (now torn down), the constraints weren't geographical but political. It was a political enclave/exclave, where effectively no legislative regulations could be put into force. In some way it is the chaotic brother of Manhattan. This shows Kowloon in 1973…
Figure 10c. And here the same area in 1994.
Figure 10d. Somewhere in the inside. Kowloon developed more and more into an autonomous city that provided every service to its approx. 40'000 inhabitants. On the roofs of the buildings they installed the playgrounds for the children.
The medieval city, Manhattan and Kowloon share a particular growth pattern. While the outer shape remains largely constant, their interior develops all kinds of compartments, every imaginable kind of flow, and a rich vertical structure, both physical and logical. This growth pattern is the same as we can observe in animals. Furthermore, those cities, much like animals, start to build an informational autonomy; they start to behave, to build an informational persistence, to initiate an intense mediality.
3.6. Summary of Growth Modes
The following table provides a brief overview of the main structural differences between growth models, as they can be derived from their natural instantiations.
Table 1: Structural differences of the four basic classes of modes of growth. Note that the class labels are indeed just that: labels of models. Any actual instantiation, particularly in case of real animals, may comprise a variety of compounds made from differently weighted classes.
Aspect \ Class | crystal | plant | swarm | animal
Mode of Attachment | passive, positive | active, positive | active, positive and negative | active, positive and negative
Direction | from outside | from inside | from inside, towards outside or inside | from & towards the inside
Morphogenetic Force | as a fact, by matter | explicitly produced inhibiting fields | implicit and explicit multi-component fields11 | explicitly produced multi-component fields
Status of Form | implicitly templated by existing form | beginning independence from matter | independence from matter | independence from matter
Formal Tools | geometric scaling, representative reproduction, constrained randomness | Fibonacci patterns, fractal habitus, logistics | fractal habitus, logistics | metamerism, organs, transformation, strictly a-physical
Causa Finalis (main component) | actualization of identity | space filling, logistics | mobile logistics | short-term adaptivity
4. Effects of Growth
Growth increases mass, spread or both. Saying that doesn't add anything; it is an almost syntactical replacement of words. In Aristotelian terms, we would get stuck with the causa materialis and the causa formalis. The causa finalis of growth, in other words its purpose and general effect, besides the mere increase of mass, is differentiation12, and we have to focus on the conditions for that differentiation in terms of information. For the change of something is accessible only upon interpretation by an observing entity. (Note that this again requires relationality as a primacy.)
The very possibility of difference, and consequently of differentiation, is bound to the separation of signals.13 Hence we can say that growth is all about the creation of a whole bouquet of signal intensity lengths, instantiated on a scale that stretches from morpho-physical compartments through morpho-functional compartments to morpho-symbolic specializations.14
Inversely we may say that abstract growth is a necessary component for differentiation. Formally, we can cover differentiation as an abstract complexity of positive and negative growth. Without abstract growth—or differentiation—there is no creation or even shaping of space into an individual space with its own dynamical dimensionality, which in turn would preclude the possibility for interaction. Growth regulates the dimensionality of the space of expressibility.
5. Growth, an(d) Urban Matter
5.1. Koolhaas, History, Heritage and Preservation
From his early days as urbanist and architect, Koolhaas has been fascinated by walls and boxes [16], even with boxes inside boxes. While he first conceived the concept of separation in a more representational manner, he later developed it into a mode of operation. We can now decode it as a play with informational separation, as an interest in compartments, hence in processes of growth and differentiation. This renders his personal fascinosum clearly visible: the theory and the implementation of differentiation, particularly with respect to human forms of life. It is probably his one and only subject.
All of Koolhaas' projects fit into this interest: New York, Manhattan, Boxes, Lagos, CCTV, story-telling, Singapore, ramps, Lille, empirism, Casa da Musica, bigness, Metabolism. His exploration(s) of bigness can be interpreted as an exploration of the potential of signal intensity length. How much do we have to inflate a structure in order to provoke differentiation by shifting the signal horizon into the inside of the structure? Remember that the effective limit of signal intensity length manifests as a breaking of symmetry, which in turn gives rise to compartmentalization and opposing forces, paving the way for complexity and emergence, which is nothing else than a dynamic generation of patterns. BIG BAG. BIG BANG. Galaxies, stardust, planets, everything in the mind of those crawling across and inside bigness architecture. Of course, it appears more elegant to modulate the signal intensity length through other means than just bigness, but we should not forget about it. Another way of provoking differentiation is through introducing elements of complexity, such as contradictory elements and volatility. Already in 1994, Koolhaas wrote [17]15
But in fact, only Bigness instigates the regime of complexity that mobilizes the full intelligence of architecture and its related fields. […] The absence of a theory of Bigness–what is the maximum architecture can do?–is architecture’s most debilitating weakness. […] By randomizing circulation, short-circuiting distance, […] stretching dimensions, the elevator, electricity, air-conditioning,[…] and finally, the new infrastructures […] induced another species of architecture. […] Bigness perplexes; Bigness transforms the city from a summation of certainties into an accumulation of mysteries. […] Bigness is no longer part of any urban tissue. It exists; at most, it coexists. Its subtext is fuck context.
The whole first part of this quote is about nothing else than the modulation of signal intensity length. Consequently, the conclusion in the second part refers directly to the complexity that creates novelty. An artifice that is doubly creative, that is, creative and in each of its instances creative in a personalized way: how should it be perceived other than as a mystery? No wonder modernists feel overwhelmed…
The only way to get out of (built) context is through dynamically creating novelty, by creating an exhaustively new context outside of built matter, but strongly building on it. Novelty is established just and only by the tandem of complexity and selection (aka interpretation). But be aware: complexity here is fully defined, and not to be mistaken for the crap delivered by cybernetics, systems theory or deconstructivism.
The absence of a theory of Bigness—what is the maximum architecture can do? —is architecture’s most debilitating weakness. Without a theory of Bigness, architects are in the position of Frankenstein’s creators […] Bigness destroys, but it is also a new beginning. It can reassemble what it breaks. […] Because there is no theory of Bigness, we don’t know what to do with it, we don’t know where to put it, we don’t know when to use it, we don’t know how to plan it. Big mistakes are our only connection to Bigness. […] Bigness destroys, but it is also a new beginning. It can reassemble what it breaks. […] programmatic elements react with each other to create new events- Bigness returns to a model of programmatic alchemy.
All this reads like a direct rendering of our conceptualization of complexity. It is, of course, nonsense to think that
[…] ‘old’ architectural principles (composition, scale, proportion, detail) no longer apply when a building acquires Bigness. [18]
Koolhaas sub-contracted Jean Nouvel to take care of large parts of Euro-Lille. Why should he do so, if proportions weren't important? Bigness and proportions are simply on different levels! Bigness instantiates the conditions for the dynamic generation of patterns, and those patterns, albeit volatile and completely on the side of the interpreter/observer/user/inhabitant/passer-by, deserve careful thinking about proportions.
Bigness is impersonal: the architect is no longer condemned to stardom.
Here, again, the pass-porting key is the built-in creativity, based on elementarized, positively defined complexity. We would thus like to propose considering our theory of complexity—at least—as a theory of Bigness. Yet, the role of complexity can be understood only as part of generic differentiation. Koolhaas' suggestion of Bigness does not apply only to architecture. We already mentioned Euro-Lille. Bigness, and so complexity—positively elementarized—is the key to dealing with Urban affairs. What could be BIGGER than the Urban? Koolhaas concludes:
Bigness no longer needs the city, it is the city.’ […]
Bigness = urbanism vs. architecture.
Of course, by "architecture" Koolhaas refers to the secretions of the swarm architects' addiction to points, lines, forms and a priori functions, all these blinkers of modernism. Yet, I think, urbanism and a renewed architecture (one that embraces complexity) may well be possible. Yet probably only if we, architects and their "clients", contemporary urbanists and their "victims," start to understand both as parts of a vertical, differential (Deleuzean) Urban Game. Any comprehensive apprehension of {architecture, urbanism} will overcome the antipodic character of the relations between them. The hope is that it will also be a cure for junkspace.
There are many examples from modernism where architects spent the utmost effort to prevent the “natural” effect of bigness, though not always successfully. Examples include Corbusier as well as Mies van der Rohe.
Koolhaas/OMA not only use assemblage, bricolage and collage as “analytic” tools (Delirious New York), they also implement them in actual projects. Think of Euro-Lille, for instance. Implementing the conditions of or for complexity creates a never-ending flux of emergent patterns. Such an architecture not only keeps being interesting, it is also socially sustainable.
Thus, it is not really a surprise that Koolhaas started to work on the issue and the role of preservation during the past decade, culminating in the contribution of OMA/AMO to the Biennale 2010 in Venice.
In an interview given there to Hans Ulrich Obrist [20] (and in a lecture at the American University of Beirut), Koolhaas mentioned some interesting figures about the quantitative consequences of preservation. In 2010, 3-4% of the earth’s land surface had been declared a heritage site, amounting to a territory larger than India. The prospect is that soon up to 12% will be protected against change. His objection was that this development can lead to a kind of stasis. According to Koolhaas, we need a new vocabulary, a theory that allows us to talk about how to get rid of old buildings and to negotiate which buildings we could get rid of. He says that we can’t talk about preservation without also talking about how to get rid of old stuff.
There is another interesting issue about preservation. The temporal distance between the age of the buildings to be preserved and the attempt to preserve them has constantly decreased across history. In 1800, preservation focused on buildings erected 2000 years before; in 1900 the distance had shrunk to 300 years; and in 2000 it was as little as 30 years. Koolhaas concludes that we are obviously entering a phase of prospective preservation.
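As a side note, the three figures already suggest how fast this collapse proceeds. A minimal sketch (the constant per-century shrinkage factor is purely an illustrative assumption of mine, not Koolhaas’ claim):

    # Koolhaas' figures: the temporal distance between the age of the
    # buildings being preserved and the moment of preservation.
    data = {1800: 2000, 1900: 300, 2000: 30}

    # Illustrative assumption: a constant shrinkage factor per century,
    # taken as the geometric mean of the two observed ratios
    # (2000/300 ~ 6.7 and 300/30 = 10), i.e. about 8.2.
    factor = ((2000 / 300) * (300 / 30)) ** 0.5

    year, distance = 2000, 30.0
    while distance >= 1.0:
        year += 100
        distance /= factor
        print(f"{year}: ~{distance:.1f} years")
    # -> 2100: ~3.7 years, 2200: ~0.4 years: the distance vanishes,
    #    which is just another way of stating 'prospective preservation'.

Extrapolated this way, the distance drops below a single year within roughly two centuries; preservation then necessarily concerns buildings that do not even exist yet.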
There are two interpretations of this tendency. The first, pessimistic one is that it will lead to a perfect lock-up. As an architect, you couldn’t do anything anymore without being engaged in severely intensified legislation issues and a huge increase in bureaucracy. The alternative to this pessimistic perspective is, well, let’s call it symbolic (abstract) organicism, based on the concept of (abstract) growth and differentiation as we devised it here. The idea of change as a basis of continuity could be built so deeply into any architectural activity that the result would not only comprise preservation, it would transcend it. Obviously, the traditional conception of preservation would vanish as well.
This points to an important topic: Developing a theory about a cultural field, such as it is given by the relation between architecture and preservation, can’t be limited to just the “subject”. It inevitably has to include a reflection about the conceptual layer as well. In the case of preservation and heritage, we simply find that the language game is still of an existential character, additionally poisoned by values. Preservation should probably not target the material aspects. Thus, the question whether to get rid of old buildings is inappropriate. Transformation should not be regarded as a question of performing a tabula rasa.
Any well-developed theory of change in architectural or Urban affairs brings a quite important issue to the foreground. The city has to decide what it wants to be. The alternatives are preformed by the modes of growth. It could conceive of itself as an abstract crystal, as a plant, as a slime-mold made from amoeboids, or as an abstract animal. Each choice offers particular opportunities and risks. Each of these alternatives will determine the characteristics and the quality of the potential forms of life, which of course have to be supported by the city. Selecting an alternative also selects the appropriate manner of planning and development. It is not possible to perform the life form of an animal and to plan according to the characteristics of a crystal. The choice will also determine whether the city can enter a regenerative trajectory, whether it will decay to dust, whether it will be able to maintain its shape, or whether it will behave predatorily. All these consequences are, of course, tremendously political. Nevertheless, we should not forget that the political has to be secured against the binding problem just as much as conceptual work does.
In the cited interview, Koolhaas also gives a hint about this when he refers to the Panopticon project, a commission to renovate a 19th-century prison. He mentions that they discovered a rather unexpected property of the building: “a lot of symbolic extra-dimensions”. This symbolic capital allows for “much more and beautiful flexibility” in handling the renovation. Actually, one “can do it in 50 different ways” without exhausting the potential, something that, according to Koolhaas, is “not possible for modern architecture”.
Well, again, not really a surprise. Neither function, nor functionalized form, nor functionalized fiction (Hollein) can bear symbolic value except precisely that of the function. Symbolic value can no more be implanted than meaning can be defined a priori, something that has not been understood, for instance, by Heinrich Klotz¹⁴. Due to this deprivation of the symbolic domain it is hard to re-interpret modernist buildings. Yet, what would be the consequence for preservation? Tearing down all the modernist stuff? Probably not the worst idea, unless future architects are able to think in terms of growth and differentiation.
Beyond the political aspects, the practical question remains: how to decide which building, district, or structure to preserve? Koolhaas already recognized that politicians have started to influence or even rule the respective decision-making processes, taking responsibility away from the “professional” city-curators. Since there can’t be a rational answer, his answer is random selection.
Figure 11: Random Selection for Preservation Areas, Beijing. Koolhaas suggested selecting preservation areas randomly, since it can’t be decided “which” Beijing should be preserved (there are quite a few very different ones).
Yet, I tend to rate this as a fallback into his former modernist attitudes. I guess the actual and local design of the decision-making process is a political issue, which in turn depends on the type of differentiation that is in charge, either as a matter of fact or as a subject of political design. For instance, the citizens of the whole city, or just of the respective areas, could be asked about their values, as is a possibility (or a duty) in Switzerland. Actually, there is even a nice and recent example of it. The subject matter is a bus-stop shelter designed by Santiago Calatrava in 1996, making it one of his first public works.
Figure 12: Santiago Calatrava 1996, bus stop shelter in St. Gallen (CH), at a central place of the city; there are almost no cars, but a bus every 1-2 minutes, thus a lot of people pass by, even several times per day. Front view…
…and rear view
In 2011, the city parliament decided to restructure the place and to remove the Calatrava shelter. The ‘politicians’ considered it too “alien” for the small city, which a few steps away also hosts a medieval district that is a UNESCO World Heritage site. Yet, many citizens rated the shelter as something that provides a positive differential, a landmark that could not be found in other cities nearby, not even in the whole of Northern Switzerland. Thus, a referendum was forced by the citizens, and the final result, from May 2012, was a clear rejection of the government’s plans. The effect of this recent history is pretty clear: the shelter accumulates even more symbolic capital than before.
Back to the issue of preservation. If it is not the pure matter, what else should be addressed? Again, Koolhaas himself already points in the right direction. The following fig. 13 shows a scene from somewhere in Beijing. The materials of the dwelling are bricks, plastic, cardboard. Neither the site nor the matter nor the architecture seems to convey anything worth preserving.
Figure 13: When it comes to preservation, the primacy is about the domain of the social, not that of matter.
Yet, what must mandatorily be preserved is the social condition, the rooting of the people in their environment. Koolhaas, however, says that he is not able to provide any answer to this challenge. Nevertheless it is pretty clear that “sustainability” starts right here, not with the question of energy consumption (despite the fact that this is an important aspect too).
5.2. Shrinking. Thinning. Growing.
Cities have been performances of congestion. As we have argued repeatedly, densification, or congestion if you like, is mandatory for the emergence of typical Urban mediality. Many kinds of infrastructure are only affordable, let alone attractive, if there are enough clients for them. Well, the example of China—or Singapore—and its particular practices of implementing plans demonstrates that the question of density can also take place in a plan, in the future, that is, in the domain of time. Furthermore, congestion and densification may actualize more and more in the realm of information, based on the new medially active technologies. Perhaps our contemporary society does not need the same corporeal density as was the case in earlier times. There is a certain tendency for the corporeal city and the web to amalgamate into something new that could be called the “wurban“. Nevertheless, at the end of the day, some kind of density is needed to ignite the conditions for the Urban.
Thus, it seems that the Urban is threatened by the phenomenon of thinning. Thinning is different from shrinking, which appears foremost in some regions of the U.S. (e.g. Detroit) or Europe (Leipzig, Ukraine) as a consequence of a monotonic, or monotopic, economic structure. Yet, shrinking can lead to thinning. Thinning describes the fact that there is built matter which, however, is inhabited only for a fraction of time. Visually dense, but socially “voided”.
Thinning, according to Koolhaas, concerns the form of new cities like Dubai. Yet, as he points out, there is also a tendency in some regions, such as Switzerland or the Netherlands, to approach the “thinned city” from the other direction. The whole country seems to transform itself into something like an urban garden, of neither rural nor urban quality. People like Herzog & de Meuron lament about this form, conceiving it as urban sprawl, the loss of distinct structure, i.e. the loss of clearly recognizable rural areas on the one hand, and the surge of “sub-functional” city-fragments on the other. Yet, we should probably turn perspective, away from reactive, negative dialectics, towards a positive attitude of design, as it may appear a bit infantile to think that a handful of sociologists and urbanists could act against a gross cultural tendency.
In his lecture at the American University of Beirut in 2010 [19], Koolhaas asked “What does it [thinning] mean for the ‘Urban Condition’?”
Well, probably nothing interesting, except that it prevents the appearance of the Urban¹⁶, or lets it vanish, had it been present. Probably cities like Dubai are just not yet “urban”, not to speak of the Urban. From a distance, Dubai still looks like a photomontage, a Potemkin village, an absurdity. The layout of the arrangement of the high-rises recalls the small street villages, just 2 rows of cottages on both sides of a street, arbitrarily placed somewhere in the nowhere of a grassland plain. The settlement is ruled by just a very basic tendency for social cohesion and a common interest in exploiting the hinterland as a resource. But there is almost no network effect, no commonly organized storage, no deep structure.
Figure 14a: A collage shown by Koolhaas in his Beirut lecture, emphasizing the “absurdity” (his words) of the “international” style. Elsewhere, he called it an element of Junkspace.
The following fig. 14b demonstrates the artificiality of Dubai, which classifies more as a linear village made from huge buildings than as an actual “city”.
Figure 14b. Photograph “along” Dubai’s main street, taken in late autumn 2012 by Shiva Menon (source). After years of traffic jams, the nomadic Dubai culture finally accepted that something like infrastructure is necessary in a more sessile arrangement. They started to build a metro, whose first line has been operating since Sep 2010.
Figure 14c below shows the new “Simplicity™”. This work of Koolhaas and OMA oscillates between sarcasm, humor pretending to be naive, irony and caricature. Although a physical reason is given for the ability of the building to turn its orientation so as to minimize insolation, the effect is a quite different one. It is much more a metaphor for the vanity of village people, or maybe the pseudo-religious power of clerks.
Figure 14c-1. A proposal by Koolhaas/OMA for Dubai (not built, and as such, pure fiction). The building, called “Simplicity”, was conceived to be 200m wide, 300m tall and only 21m deep. It is placed onto a plate that rotates in order to minimize insolation.
Figure 14c-2. The same thing a bit later the same day.
Yet, besides the row of high-rises we find the dwellings of the migrant workers at a considerable density, forming a multi-national population. However, the layout here recalls Los Angeles more than any kind of “city”. Maybe it simply forms a kind of “rural” hinterland to the high-rise village.
Figure 15. Dubai, “off-town”. Here the migrant workers are housed. In the background, the skyscrapers lining the infamous main street.
For they, for instance, also started to invest in a metro, despite the (still) linear, disseminated layout of the city, which means that connectivity, hence network effects, are now recognized as a crucial structural element for the success of the city. And this then is not so different anymore from the classical Western conception. Anyway, even the first cities of mankind, which arose not in the West, provided certain unique possibilities which, as a bouquet, could be considered urban.
There is still another dimension of thinning, related to the informatization of presence via medially active technologies. Thinning could be considered as an actualization of the very idea of the potentiality of co-presence, much as it is exploited in the so-called “social media”. Of course, the material urban neighborhood, its corporeality, is dependent on physical presence. Certainly, we can expect either strong synchronization effects or negative tipping points, demarcating a threshold towards sub-urbanization. On the other hand, this could give rise to new forms of apartment sharing, supported by urban designers and town officials…
Then again, we already mentioned natural structures that show a certain dispersal, such as blood cells, the immune system of vertebrates, or the slime-molds. These structures are highly developed swarms. Yet, all these swarms are highly dependent on the outer conditions. As such, swarms are hardly persistent. Dubai, the swarm city. Technology, however, particularly in the form of the www and so-called social media, could stabilize the swarm-shape.¹⁷
From a more formal perspective we may conceive of shrinking and thinning simply as negative growth. By this, growth definitely turns into an abstract concept, leaving the representational and even the metaphorical far behind. Yet, the explication of a formal theory exceeds the indicated size of this text by far. We certainly will do it later, though.
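Still, the direction of such a formalization can be indicated with a minimal sketch (the state variables and rates below are hypothetical placeholders of my own, not the announced theory): shrinking appears as a negative rate on built matter itself, thinning as a negative rate on occupancy while built matter stays constant.

    from dataclasses import dataclass

    @dataclass
    class UrbanFabric:
        built: float      # amount of built matter (arbitrary units)
        occupancy: float  # fraction of time the built matter is actually inhabited

    def step(f: UrbanFabric, d_built: float = 0.0, d_occ: float = 0.0) -> UrbanFabric:
        """One abstract growth step; negative rates express 'negative growth'."""
        return UrbanFabric(
            built=max(0.0, f.built + d_built),
            occupancy=min(1.0, max(0.0, f.occupancy + d_occ)),
        )

    city = UrbanFabric(built=100.0, occupancy=0.8)
    shrunk = step(city, d_built=-10.0)  # shrinking: built matter itself decays
    thinned = step(city, d_occ=-0.5)    # thinning: visually dense, socially voided
    print(shrunk)
    print(thinned)

The point of the sketch is only that both phenomena live in the same signed space of change, which is what allows us to treat them as modes of one abstract concept of growth.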
5.3. In Search of Symbols
What turns a building into an entity that may grow into an active source of symbolization processes? At least we know initially that symbols can’t be implanted in a direct manner. Of course, one can always draw on exoticism, importing the cliché that is already attached to the entity from abroad. Yet, this is not what we are interested in here. The question is not so dissimilar to the issue of symbolization at large, as it is known from the realm of language. How could a word, a sign, a symbol gain reference, and how could a building get it? We could even take a further step by asking: how could a building acquire generic mediality such that it could be inhabited not only physically, but also in the medial realm? [23] We can’t answer the issues around these questions here, as there is a vast landscape of sources and implications, enough to fill at least a book. Yet, conceiving buildings as agents in story-telling could be a straightforward and not too complicated entry into this landscape.
Probably, story-telling with buildings works like a good joke: if it is too direct, nobody laughs. Probably, story-telling has a lot to do with behavior and the implied complexities, I mean the behavior of the building. We interpret pets, not plants. With plants, we interpret just their usage. We laugh about cats, dogs, apes, and elephants, but not about roses and orchids, and even less about crystals. Once you have seen one crystal, you have seen all of them. Being inside a crystal can be frightening, just think of Snow White. While in some way this holds even for plants, it is certainly not true for animals. Junkspace is made from (medial) crystals. Junkspace is so detrimental due to the fundamental modernist misunderstanding that claims the possibility of implementing meaning and symbols, if these are regarded as relevant at all.
Closely related to the issue of symbols is the issue of identity.
Philosophically, it is definitely highly problematic to refer to identity as a principle. It leads to deep ethical dilemmata. If we are going to drop it, we have to ask immediately about a replacement, since many people indeed feel that they need to “identify” with their neighborhood.
Well, first we could say that identification and “to identify” are probably quite different from the idea of identity. Every citizen in a city could be thought to identify with her or his city, yet at the same time there need not be such a thing as “identity”. Identity is the abstract idea, imposed by mayors and sociologists, and preferably it should be rejected just for that, while the process of feeling empathy with one’s neighborhood is a private process that respects plurality. It is not too difficult to imagine that there are indeed people who are so familiar with “their” city, the memories of experiences, the sound, the smell, the way people walk, and who feel so empathic with all of this that they source a significant part of their personality from it. What to call this inextricable relationship other than “to identify with”?
The example of the Calatrava bus-stop shelter in St. Gallen demonstrates one possible source of identification: success in collective design decisions. Or, more generally: successfully finished negotiations about collective design issues, a common history of such successful processes. Even if the collective negotiation happens as a somewhat anonymous process. Yet, the relative preference for participation versus decreed activities depends on the particular distribution of political and ethical values in the population of citizens. Certainly, participatory processes are much more stable than top-down decrees, not only in the long run, as even the Singaporean government has recently recognized. But anyway, cities have their particular personality, because they behave¹⁸ in a particular manner, and any attempt to get clear or to decide about preservation must respect this personality. Of course, it also applies that the decision-making process should be conscious enough to be able to reflect on the metaphysical belief set, the modes of growth and the long-term characteristics of the city.
5.4. The Question of Implementation
This essay tries to provide an explication of the concept of growth in the larger context of a theory of differentiation in architecture and urbanism. There, we positioned growth as one of four principles or schemata that are constitutive for generic differentiation.
In this final section we would like to address the question of implementation, since little has been said so far about how to deal with the concept of growth. We already described how and why earlier attempts like that of the Metabolists dashed against the binding problem of theoretical work.
If houses do not move physically, how then to make them behave, say, similarly to the way an animal does? How to implement a house that shares structural traits with animals? How to think of a city as a system of plants and animals without falling prey to utter naivety?
We already mentioned that there is no technocratic, formal, or functionalist solution to the question of growth. First, the city has to decide what it wants to be, which mix of growth modes should be implemented in which neighborhoods.
Let us first take some visual impressions…
Figure 16a,b,c. The Barcelona Pavilion by Mies van der Rohe (1929 [1986]).
This pavilion is a very special box. It is a non-box, or better, it establishes a volatile collection of virtual boxes. In this building, Mies reached the mastery of boxing. Unfortunately, there are not many more examples. In some way, the Dutch Embassy by Koolhaas is the closest relative to it among more recent architecture.
Just at the time the Barcelona Pavilion was built, another important architect followed similar concepts. In his Villa Savoye, built 1928-31, Le Corbusier employed and demonstrated several new elements of his so-called “new architecture,” among others the box and the ramp. Probably the most important principle, however, was to completely separate construction and tectonics from form and design. Thus, he achieved a similar “mobility” to Mies in his Pavilion.
Figure 17a: Villa Savoye, mixing interior and exterior on the roof-top “garden”. The other zone of overlapping spaces is beneath the house (see next figure 17b).
Figure 17b: A 3d model of Villa Savoye, showing the ramps that serve as “entrance” (from the outside) and “extrance” (towards the roof-top garden). The principle of the ramp creates a new location for the creation and experience of duration in the sense of Henri Bergson’s durée. Both the ramp and the overlapping of spaces create a “zona extima,” which is central to the “behavioral turn”.
Comparing Villa Savoye with the Barcelona Pavilion regarding the mobility of space, it is quite obvious that Le Corbusier handled the confluence and mutual penetration of interior and exterior in a more schematic and geometric manner.¹⁹
The quality of the Barcelona building derives from the fact that its symbolic value is not directly implemented; it just emerges upon interaction with the visitor, or the inhabitant. It actualizes the principle of “emerging symbolicity by induced negotiation” of compartments. The compartments become mobile. Thus, it is one of the roots of the ramp that appeared in many works of Koolhaas. Yet, its working requires a strong precondition: a shared catalog of values, beliefs and basic psychological determinants, in short, a shared form of life.
On the other hand, these values and beliefs are not directly symbolized, shifting them into their volatile phase, too. Walking through the building, or simply being inside of it, instantiates differentiation processes in the realm of the immaterial. All the differentiation takes place in the interior of the building, hence it brings forth animal-like growth, transcending the crystal and the swarm.
Thus the power of the pavilion. It is able to transform and to transcend the values of the inhabitant/visitor. The zen of silent story-telling.
This example demonstrates clearly that morphogenesis in architecture not only starts in the immateriality of thought, it also has to target the immaterial.
It is clear that such a volatile dynamics, such an active, if not living, building is hard to comprehend. In 2008, the Japanese office SANAA was invited to contribute the annual installation in the pavilion. They explained their work with the following words [24]:
“We decided to make transparent curtains using acrylic material, since we didn’t want the installation to interfere in any way with the existing space of the Barcelona Pavilion,” says Kazuyo Sejima of SANAA.
Figure 18. The installation of Japanese office SANAA in the Barcelona Pavilion. You have to take a careful look in order to see the non-interaction.
Well, it certainly rates as something between bravery and stupidity to try “not to interfere in any way with the existing space“. And doing so with highly transparent curtains is quite the opposite of the building’s characteristics, as it removes precisely the potentiality, the volatility, the virtual mobility. Nothing is left, besides the air, perhaps. SANAA committed the typical representational fault, as they tried to use a representational symbol. Of course, walls that are not walls at all have a long tradition in Japan. Yet, the provided justification would still be simply wrong.
Instead of trying to implement a symbol, the architect or the urbanist has to care about the conditions for the possibility of symbol processes and sign processes. These processes may be political or not, they always will refer to the (potential) commonality of shared experiences.
Above we mentioned that the growth of a building has its beginning in the immateriality of thought. Even for the primitive form of mineralic growth we found that we can understand the variety of resulting shapes only through the conditions embedding the growth process. The same holds, of course, for the growth of buildings. Just as the outer conditions belong to the crystal, the way of generating the form of a building belongs to the building.
Where to look for the outer conditions for creating the form? I suppose we have to search for them in the way the form gets concrete, starting from a vague idea, which includes its social and particularly its metaphysical conditions. Do you believe in independence, identity, relationality, difference?
It would be interesting to map the difference between large famous offices, say OMA and HdM.
According to their own words, HdM seem to treat the question of material very differently from OMA, where the question of material comes in at a later stage [25]. HdM seem to work much more “crystallinically”: form is determined by the matter, the material, and the respective culture around it. There are many examples of this, from the winery in California and the “Schaulager” in Basel (CH) and the railway control center (Basel), up to the “Bird’s Nest” in Beijing (which, by the way, is an attempt at providing symbols that went wrong). HdM seem to try to rely on the innate symbolicity of the material, of corporeality itself. In the case of the Schaulager, the excavated material has been used to raise the building: the stones from the underground have been erected into a building whose inside looks like a Kafkaesque crystal. They even treat the symbols of a culture as material, somehow counterclockwise to their own “matérialisme brut”. Think of their praise of simplicity, the declared intention to avoid any reference besides the “basic form of the house” (Rudin House). In this perspective, their acclaimed “sensitivity” to local cultures is little more than the exploitation of a coal mine, which also requires sensitivity to local conditions.
Figure 18: Rudin House by Herzog & de Meuron
HdM practice a representationalist anti-symbolism, leaning strongly towards architecture as a crystal science, a rather weird attitude to architecture. Probably it is this weirdness that, quite unintentionally, produces the interest in their architecture through a secondary dynamics in the symbolic. Is it, after all, Hegel’s tricky reason at work? At least this would explain the strange mismatch between their modernist talk and the interest in their buildings.
6. Conclusions
In this essay we have closed a gap with respect to the theoretical structure of generic differentiation. Generic Differentiation may be displayed by the following diagram (but don’t miss the complete argument).
Figure 19: Generic Differentiation is the key element for solving the binding problem of theory works. This structure is to be conceived not as a closed formula, but rather as a module of a fractal that is created through mutual self-affine mappings of all of the three parts into the respective others: the basic module of the fractal relation between concept/conceptual, generic differentiation/difference and operation/operational (comprising logistics and politics) that describes the active subject.
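The phrase “mutual self-affine mappings” can be made concrete by a small sketch: an iterated function system in which each of three contractive affine maps copies the whole figure into one of its parts. The particular maps below (a classic Sierpinski-type construction) are merely an illustrative stand-in for the three parts of the diagram, not the author’s construction:

    import random

    # Three contractive affine maps; each copies the whole figure into
    # one of its three "parts".
    MAPS = [
        lambda x, y: (0.5 * x,        0.5 * y),
        lambda x, y: (0.5 * x + 0.5,  0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
    ]

    def chaos_game(n=50_000):
        """Iterate randomly chosen maps; the visited points approximate the attractor."""
        x, y = random.random(), random.random()
        points = []
        for _ in range(n):
            x, y = random.choice(MAPS)(x, y)
            points.append((x, y))
        return points

    points = chaos_game()
    print(len(points), points[-1])  # the point cloud traces the self-affine module

Each map reproduces the whole within a part; iterating them produces a figure in which every part again contains the whole, which is exactly the fractal, non-closed character claimed for the diagram.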
In earlier essays, we proposed abstract models for probabilistic networks, for associativity and for complexity. These models represent a perspective from the outside onto the differentiating entity. All of these have been set up in a reflective manner by composing certain elements, which in turn can be conceived as framing a particular space of expressibility. Yet, we also proposed the trinity of development, evolution and learning (chp.10 here) for the perspective from the inside of the differentiation process(es), describing different qualities of differentiation.
Well, the concept of growth²⁰ now joins the group of compound elements for approaching the subject of differentiation from the outside. In some way, using a traditional and actually inappropriate wording, we could say that this perspective is more analytical than synthetical, more scientific than historiographical. This does not mean, of course, that the complementary perspective is less scientific, or that talking about growth or complexity is less aware of the temporal domain. It is just a matter of weights. As we pointed out in the previous essay, the meta-theoretical conception (as a structural description of the dynamics of theoretical work) is more like a fractal field than a series of activities.
Anyway, the question is what we can do with the newly re-formulated concept of growth.
First of all, it completes the concept of generic differentiation, as we mentioned just before. Probably the most salient influence is the enlarged and improved vocabulary for talking about change as far as it concerns the “size” of the form of a something, even if this something is immaterial. For many reasons, we definitely should resist the tendency to limit the concept of growth to issues of morphology.
Only through this vocabulary can we start to compare the entities in the space of change. Different things from different domains, or even different forms of life, can be compared to each other, yet not as those things, but rather as media of change. Comparing things that change means investigating the actualization of different modes of change as it passes through the something. This move is by no means eclectic. It is even mandatory in order to keep aligned with the primacy of interpretation, the Linguistic Turn, and the general choreostemic constitution.
By means of the new and generalized vocabulary we may overcome the infamous empiricist particularism. Bristle counting, as it is called in biology, particularly entomology. Yes, there are around 450,000 different species of beetles… but… Well, overcoming particularism means that we can spell out new questions: about regulative factors, e.g. for continuity, melting and apoptosis. Guided by the meta-theoretical structure in fig. 19 above, we may ask: What would a politics of apoptosis look like? What about the recycling of space? How could infrastructure foster associativity, learning and creativity of the city, rather than creativity in the city? What is the epi/genetics of the growth and differentiation processes in a particular city?
Such questions may appear elitist, abstract, of only little use. Yet, the contrary is true, as precisely such questions directly concern the productivity of a city, the speed of circulation of capital, whether symbolic or monetary (which anyway is almost the same). Understanding the conditions of growth may lead to cities that are indeed self-sustaining, because the power of life would be a feature deeply built into them. A little, perhaps even homeopathic, dose of dedetroitismix, a kind of drug to cure the disease that infected the city of Detroit as well as its planners, and also all the urbanists who pseudo-reason about Detroit in particular and sustainability in general. Just as Paracelsus remarked that there is not just one kind of stomach, but hundreds of kinds, we may recognize how to deal with the thousands of different kinds of cities spread across thousands of plateaus, if we understand how to speak and think about growth.
1. This might appear a bit arrogant, perhaps, at first sight. Yet, at this point I must insist on it, even taking into account the most advanced attempts, such as those of Michael Batty [1], Luca D’Acci or Karl Kropf [2]. The proclaimed “science of cities” is in a bad state. Either it is still infected by positivist or modernist myths, or the applied methodological foundations are utterly naive. Batty, for instance, embraces complexity whole-heartedly. But how could one use complexity as anything other than a mere label, while writing such a weird mess as this [3], wildly mixing concepts and subjects?
“Complexity: what does it mean? How do we define it? This is an impossible task because complex systems are systems that defy definition. Our science that attempts to understand such systems is incomplete in the sense that a complex system behaves in ways that are unpredictable. Unpredictability does not mean that these systems are disordered or chaotic but that [they] defy complete definition.”
Of course, it is not an impossible task to conceptualize complexity in a sound manner. This is even a mandatory precondition for using it as a concept. It is a bit ridiculous to claim the impossibility and then write a book about its usage. And this conceptualization, whatever it would look like, has absolutely nothing to do with the fact that complex systems may behave unpredictably. Actually, in some ways they are better predictable than completely random processes. It remains unclear which kind of unpredictability Batty is referring to; he doesn’t disclose anything about this question, which is quite an important one if one is going to apply “complexity science”. And what about the concepts of risk and modeling, then, which actually can’t be separated from it at all?
His whole book [1] is nothing else than an accumulation of half-baked formalistic particulars. When he talks about networks, he considers only logistic networks. Bringing in fractals, he fails to mention the underlying mechanisms of growth and the formal aspects (self-affine mapping). In his discussion of the possible role of evolutionary theory [4], following Geddes, Batty resorts again to physicalism and defends it. Although he emphasizes the importance of the concept of “mechanism”, correctly distinguishes development from evolution, and demands an “evolutionary thinking”, he fails to get to the point: a proper attitude to theory under conditions of evolution and complexity, a probabilistic formulation, an awareness of self-referentiality, insight into the incommensurability of emergent traits, the dualism of code and corporeality, the space of evo-devo-cogno. In [4], one can find another nonsensical statement about complexity on p.567:
“The essential criterion for a complex system is a collection of elements that act independently of one another but nevertheless manage to act in concert, often through constraints on their actions and through competition and co-evolution. The physical trace of such complexity, which is seen in aggregate patterns that appear ordered, is the hallmark of self-organisation.” (my emphasis).
The whole issue with complex systems is that there is no independence… they do not “manage” to act in concert… concepts like evolution or competition are mixed in wildly… physics can say nothing at all about the patterns, and the hallmark of self-organizing systems is surely not just the physical trace: it is the informational re-configuration.
It is not by pure chance, therefore, that he talks about “tricks” ([5], following Hamdi [7]): “The trick for urban planning is to identify key points where small change can lead spontaneously to massive change for the better.” Without a proper vocabulary of differentiation, that is, without a proper concept of differentiation, one inevitably has to invoke wizards…
But the most serious failures are the following: regarding the cultural domain, there is no awareness of the symbolic/semiotic domain and a disrespect of information; regarding methodology, Batty throughout his writings mistakes theory for models and vice versa, following the positivist trail. There is not the slightest evidence in his writing of even a small trace of reflection. This, however, is seriously indicated, because cities are about culture.
This insensitivity is shared by talented people like Luca D’Acci, who is still musing about “ideal cities”. His procedural achievements as a craftsman of empiricism are impressive, but without reflection they are just threatening, claiming the status of the demiurge.
Despite all these failures, Batty’s approach and direction is of course by far more advanced than the musings of Conzen, Caniggia or Kropf, which are intellectually simply disastrous. There are numerous examples of a highly uncritical use of structural concepts, of mixing levels of argument, crude reductionism, a complete neglect of mechanisms and processes, etc. For instance, Kropf in [6]:
“A morphological critique is necessarily a cultural critique. […] Why, for example, despite volumes of urban design guidance promoting permeability, is it so rare to find new development that fully integrates main routes between settlements or roads directly linking main routes (radials and counter-radials)?” (p.17)
“The generic structure of urban form is a hierarchy of levels related part to whole. […] More effective and, in the long run, more successful urbanism and urban design will only come from a better understanding of urban form as a material with a range of handling characteristics.” (p.18)
It is really weird to regard form as matter, isn’t it? The materialist’s final revenge… So, through the work of Batty there is indeed some reasonable hope for improvement. Batty & Marshall are certainly heading in the right direction when they demand (p.572 [4]):
“The crucial step – still to be made convincingly – is to apply the scientifically inspired understanding of urban morphology and evolution to actual workable design tools and planning approaches on the ground.”
But it is equally certain that an adoption of evolutionary theory that seriously considers an “élan vital” will not be able to serve as a proper foundation. What is needed instead is a methodologically sound abstraction of evolutionary theory as we have proposed it some time ago, based on a probabilistic formalization and vocabulary. (…end of the longest footnote I have ever produced…)
2. The concept of mechanism should not be mistaken for a kind of “machine”. In stark contrast to machines, mechanisms are inherently probabilistic. While machines are synonymous with their plan, mechanisms imply an additional level of abstraction: the population and its dynamics.
3. Whenever one tries to prove or implement the opposite, the primacy of logic, characteristic gaps are created, more often than not of a highly pathological character.
4. See also the essay about “Behavior”, where we described the concept of “Behavioral Coating”.
5. Deleuzean understanding of the differential [10]; for details see “Miracle of Comparison”.
6. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept, in order to distinguish it from the ordinary adjective that refers to common sense understanding.
7. Only in embryos or in automated industrial production do we find “development”.
8. The definition (from Wiki) is: “In animals, metamery is defined as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products.”
9. See our essay about Reaction-Diffusion-Systems.
10. Emancipation from constant and pervasive external “environmental” pressures is the main theme of evolution. This is the deep reason that generalists are favored at the cost of specialists (at least on evolutionary time scales).
11. Aristotle’s idea of the four causes is itself a scheme to talk about change.
12. This principle is not only important for Urban affairs, but also for a rather different class of arrangements, machines that are able to move in epistemic space.
13. Here we meet the potential of symbols to behave according to a quasi-materiality.
14. Heinrich Klotz‘ credo in [21] is “not only function, but also fiction”, without, however, taking the mandatory step away from the attitude of predefining symbolic value. Thus, Klotz himself remains a fully-fledged modernist. See also Wolfgang Welsch in [22], p.22.
15. There is of course also Robert Venturi with his “Complexity and Contradiction in Architecture”, or Bernard Tschumi with his disjunction principle summarized in “Architecture and Disjunction” (1996). Yet, neither went as far as necessary, for “complexity” can be elementarized and generalized even further, as we have been proposing (here), which is, I think, a necessary move to combine architecture and urbanism regarding space and time.
16. See footnote 5.
17. ??? .
18. Remember that the behavior of cities is also determined by the legal setup, the traditions, etc.
19. The ramp is an important element in contemporary architecture, yet it is often used merely as a logistic solution, mostly just for the disabled or as a moving staircase. In Koolhaas’ works, it takes a completely different role as an element of story-telling. This aspect of temporality we will investigate in more detail in another essay. Significantly, Le Corbusier used the ramp as a solution to a purely spatial problem.
20. Of course, NOT as a phenomenon!
• [1] Michael Batty, Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. MIT Press, Boston 2007.
• [2] Karl Kropf (2009). Aspects of urban form. Urban Morphology 13 (2), p.105-120.
• [3] Michael Batty’s website.
• [4] Michael Batty and Stephen Marshall (2009). The evolution of cities: Geddes, Abercrombie and the new physicalism. TPR, 80 (6) 2009 doi:10.3828/tpr.2009.12
• [5] Michael Batty (2012). Urban Regeneration as Self-Organization. Architectural Design, 215, p.54-59.
• [6] Karl Kropf (2005). The Handling Characteristics of Urban Form. Urban Design 93, p.17-18.
• [7] Nabeel Hamdi, Small Change: About the Art of Practice and the Limits of Planning, Earthscan, London 2004.
• [8] Dennis L. Sepper, Descartes’s Imagination: Proportion, Images, and the Activity of Thinking. University of California Press, Berkeley 1996. available online.
• [9] C. Bandt and M. Mesing (2009). Self-affine fractals of finite type. Banach Center Publications 84, 131-148. available online.
• [10] Gilles Deleuze, Difference and Repetition [1968].
• [11] Moussaïd M, Perozo N, Garnier S, Helbing D, Theraulaz G (2010). The Walking Behaviour of Pedestrian Social Groups and Its Impact on Crowd Dynamics. PLoS ONE 5(4): e10047. doi:10.1371/journal.pone.0010047.
• [12] Claire Detrain, Jean-Louis Deneubourg (2006). Self-organized structures in a superorganism: do ants “behave” like molecules? Physics of Life Reviews, 3(3), p.162–187.
• [13] Dave Mosher, Secret of Annoying Crowds Revealed, Science now, 7 April 2010. available online.
• [14] Charles Jencks, The Architecture of the Jumping Universe. Wiley 2001.
• [15] Rem Koolhaas, Delirious New York.
• [16] Markus Heidingsfelder, Rem Koolhaas – A Kind of Architect. DVD 2007.
• [17] Rem Koolhaas, Bigness – or the problem of Large. in: Rem Koolhaas, Bruce Mau & OMA, S,M,L,XL. p.495-516. available here (mirrored)
• [18] Wiki entry (English edition) about Rem Koolhaas, http://en.wikipedia.org/wiki/Rem_Koolhaas, last accessed Dec 4th, 2012.
• [19] Rem Koolhaas (2010?). “On OMA’s Work”. Lecture as part of “The Areen Architecture Series” at the Department of Architecture and Design, American University of Beirut. available online. (the date of the lecture is not clearly identifiable on the Areen AUB website)
• [20] Hans Ulrich Obrist, Interview with Rem Koolhaas at the Biennale 2010, Venice. Produced by the Institute of the 21st Century with support from ForYourArt, The Kayne Foundation. available online on youtube, last accessed Nov 27th, 2012.
• [21] Heinrich Klotz, The History of Postmodern Architecture, 1986.
• [22] Wolfgang Welsch, Unsere postmoderne Moderne. 6. Auflage, Oldenbourg Akademie Verlag, Berlin 2002 [1986].
• [23] Vera Bühlmann, inhabiting media. Thesis, University of Basel 2009. (in German, available online)
• [24] Report in dezeen (2008). available online.
• [25] Jacques Herzog, Rem Koolhaas, Urs Steiner (2000). Unsere Herzen sind von Nadeln durchbohrt. Ein Gespräch zwischen den Architekten Rem Koolhaas und Jacques Herzog über ihre Zusammenarbeit. Aufgezeichnet von Urs Steiner. in: Marco Meier (Ed.), Tate Modern von Herzog & de Meuron, Du. Die Zeitschrift der Kultur, No. 706, Zurich, TA-Media AG, 05.2000. pp. 62-63. available online.
Modernism, revisited (and chunked)
July 19, 2012
There can be no doubt that nowadays “modernism”, due to a series of intensive waves of adoption and criticism, returning as echoes from unexpected grounds, is used as a label, as a symbol. It allows one to induce, to claim or to disapprove conformity in unprecedented ways; it helps to create subjects, targets and borders. Nevertheless, it is still an unusual symbol, as it points to a complex history, in other words to a putative “bag” of culture(s). As a symbol, or label, “modernity” does not point to a distinct object, process or action. It invokes a concept that emerged through history and is still doing so. Even as a concept, it is a chimaera. Still unfolding from practice, it has not yet moved completely into the realm of the transcendental, to join other concepts in the fields most distant from any objecthood.
This Essay
Here we continue the investigation of the issues raised by Koolhaas’ “Junkspace”. Our suggestion upon the first encounter was that Koolhaas himself struggles with his attitude to modernism, even though he openly blames it for creating Junkspace. (Software as it is currently practiced is definitely part of it.) His writing bearing the same title thus gives just a proper list of effects and historical coincidences—nothing less, but also nothing more. Particularly, he provides no suggestions about how to find or construct a different entry point into the problematic field of “building urban environments”.
In this essay we will try to outline how a possible—and constructive—archaeology of modernism could look like, with a particular application to urbanism and/or architecture. The decisions about where to dig and what to build have been, of course, subjective. Of course, our equipment is, as almost always in archaeology, rather small, suitable for details, not for surface mining or the like. That is, our attempts are not directed towards any kind of completeness.
We will start by applying a structural perspective, which will yield the basic set of presuppositions that characterizes modernism. This will be followed by a discussion of four significant aspects, for which we will hopefully be able to demonstrate the way of modernist thinking. These four areas concern patterns and coherence, meaning, empiricism and machines. The third major section will deal with some aspects of contemporary “urbanism” and how Koolhaas relates to that, particularly with respect to his “Junkspace”. Note, however, that we will not perform a literary study of Koolhaas’ piece, as most of his subjects there can be easily deciphered on the basis of the arguments as we will show them in the first two sections.
The final section then comprises a (very) brief note about a possible future of urbanism, which perhaps has actually already been lifting off. We will provide just some very brief suggestions in order not to appear (too) presumptuous.
1. A Structural Perspective
According to its heterogeneity, the usage of the symbol “modernity” is fuzzy as well. While the journal Modernism/modernity, published by Johns Hopkins University Press, concentrates “on the period extending roughly from 1860 to the mid-twentieth century,” galleries for “Modern Art” around the world consider the historical period from the post-Renaissance (conceived as the period between 1400 and roughly 1900) up to today, usually not distinguishing modernism from post-modernism.
In order to understand modernism we have to take the risk of proposing a structure behind the merely symbolical. Additionally, and accordingly, we should resist the abundant attempts to define a particular origin for it. Foucault called those historians who were addicted to the calendar and the idea of the origin, the originator, or, more abstractly, the “cause”, “historians in short trousers” (meaning a particular intellectual infantilism, probably a certain inability to think abstractly enough) [1]. History does not realize a final goal either, and similarly it is pure nonsense to claim that history came to an end. As in any other evolutionary process, historical novelty builds on the leftovers of preceding times.
After all, the usage of symbols and labels is a language game. It is precisely a modernist misunderstanding to dissect history into phases. Historical phases are not out there, and never have been. It is by far more appropriate to conceive of history as waves, yet not of objects or ideas, but of probabilities. So, the question is what happened in the 19th century such that it became possible to objectify a particular wave. Is it possible to give any reasonable answer here?
Following Foucault, we may try to reconstruct the sediments that fell out from these waves like the ripples of sand in the shallow water on the beach. Foucault’s main invention, put forward in his “Archaeology” [1], is the concept of the “field of proposals”. This field is not 2-dimensional; it is high-dimensional, yet not of a stable dimensionality. In many respects, we could conceive of it as a historian’s extension of the Form of Life, as Wittgenstein used to call it. Later, Foucault would include the structure of power, its exertion and objectifications, the governmentality, in this concept.
Starting with the question of power, we can see an assemblage that is typical for the 19th century and the last phase of the 18th. The invention of popular rights, even the invention of the population as a conscious and practiced idea, itself an outcome of the French Revolution, is certainly key to any development since then. We may even say that its shockwaves, and the only slightly less shocking echoes of those waves, haunted us until the end of the 20th century. Underneath the French Revolution we find the claim of independence that traces back to the Renaissance, formed into philosophical arguments by Leibniz and Descartes. First, however, it brought the Bourgeois, a strange configuration of tradition and the claim of independence, bringing forth the idea of societal control as a transfer from the then-emerging intensification of the idea of the machine. Still exhibiting class-consciousness, it was at the roots of the modernists’ rejection of tradition. Yet, even the Bourgeois builds on the French Revolution (of course) and the assignment of a strictly positive value to the concept of densification.
Without the political idea of the population, the positive value of densification, and the counter-intuitive and prevailing co-existence of the ideas of independence and control, neither the direction nor the success of the sciences and their utilization in the field of engineering could have emerged as they actually did. Consequently, right at the end of the hot phase of the French Revolution, Fourcroy argued in 1794 that it would be necessary to found an “École Polytechnique”¹. Densification, liberalism and engineering brought another novelty of this amazing century: the first spread of mass media, newspapers in that case, which were theorized only about 100 years later.
The rejection of tradition as part of the answer to the question “What’s next?” is perhaps one of the strongest sentiments of the modernist in the 19th century. It even led to considerable divergence of attitudes across domains within modernism. For instance, while the arts rejected realism as a style building on “true representation,” technoscience embraced it. Yet, despite the rejection of immediate visual representation in the arts, the strong emphasis on objecthood and a priori objectivity remained fully in force. Think of Kandinsky’s “Punkt und Linie zu Fläche“ (1926), or the strong emphasis on pure color (Malevich), even on the idea of purity itself, then somewhat paradoxically called abstractness, or the ideas of the Bauhaus movement about the possibility and necessity of objectifying rules of design based on dot, line, area, form, color, contrast etc. The proponents of Bauhaus, and even their contemporary successors in Weimar (and elsewhere), never understood that the claim of objectivity, particularly in design, is impossible to satisfy; it is a categorical fault. Just to avoid a misunderstanding that itself would be a fault of the same category: I personally find Kandinsky’s work mostly quite appealing, as well as some of the work by the Bauhaus guys, yet for completely different reasons than he (they) might have been dreaming of.
Large parts of the arts rejected linearity, while technoscience took it as its core. Yet, such divergences are clearly the minority. In all domains, the rejection of tradition was based on an esteem for the idea of independence and resulted predominantly in an emphasis on finding new technical methods to produce unseen results. While the emphasis on method definitely enhances the practice of engineering, it is not innocent either. Deleuze sharply rejects the saliency of methods [10]:
Method is the means of that knowledge which regulates the collaboration of all the faculties. It is therefore the manifestation of a common sense or the realisation of a Cogitatio natura, […] (p.165)
Here, Deleuze does not condemn methods as such. Undeniably, it is helpful to explicate them, to erect a methodology, to symbolize them. Yet, culture should not be subordinated to methods, not even sub-cultures.
The leading technoscience of those days was physics, closely followed by chemistry, if it is at all reasonable to separate the two. It brought the combustion engine (from Carnot to Daimler), electricity (from Faraday to Edison, Westinghouse and Tesla), the control of temperature (Kelvin, Boltzmann), the elevator, and consequently the first high-rise buildings, along with a food industry. In the second half of the 19th century it was fashionable for newspapers to maintain a section showing the greatest advances and successes of technoscience from the past week.
In my opinion it is eminently important to understand the linkage between the abstract ideas, growing from a social practice as their soil-like precursory condition, and the success of a particular kind of science. Independence, control, population on the one side; the molecule and its systematics, the steam and the combustion engine, electricity and the fridge on the other. It was not merely energy (in the form of wood and coal) that could now be distributed; electricity meant an open potential, a potentiality for anything [2]. Together they established a new Form of Life which nowadays could be called “modern,” despite the fact that its borders blur, if we could assume their existence at all. Together, combined into a cultural “brown bag,” these ingredients led to an acceleration, not least due to the mere physical densification, an increase in the mere size of the population, produced (literally so) by advances in the physical and biomedical sciences.
At this point we should remind ourselves that factual success legitimizes neither the expectation of sustainable success nor any reasoning about a universal legitimacy of the whole setup. The first figure would represent simple naivety, the second the naturalistic fallacy, which seduces us to conclude from the actual (“what is”) to the deontic and the normative (“what should be”).
As a practice, the modern condition is itself dependent on a set of beliefs. These can neither be questioned nor discussed at all from within the “modern attitude,” of course. Precisely this circumstance makes it so difficult to talk with modernists about their beliefs. Not only are they structurally invisible; something like a belief is almost categorically excluded qua their set of conditioning beliefs. Once accepted, these conditions can’t be accessed anymore; they are transcendental to any further argument put forward within the area claimed by these conditions. For philosophers, this figure of thought, the transcendental condition, takes the role of a basic technique. Other people like urbanists and architects might well be much less familiar with it, which could explain their struggle with theory.
What are these beliefs to which a proper modernist adheres? My list would look like the one given below. The list itself is, of course, neither a valuation nor an evaluation.
• independence, ultimately taken as a metaphysical principle;
• belief in the primacy of identity over difference, leading to the primacy of objects over relations;
• linearity, additivity and reduction as the method of choice;
• analyticity and “lawfulness” for descriptions of the external world;
• belief in positively definable universals, hence the rejection of belief as a sustaining mental figure;
• belief in the possibility of a finally undeniable justification;
• belief that the structure of the world follows a bi-valent logic2, represented by the principle of objective causality, hence also a “logification” and “physicalization” of the concepts of information and meaning; consequently, meaning is conceived as being attached to objects;
• the claim of a primacy of ontology and existential claims—as highlighted by the question “What is …?”—over instances of pragmatics that respect Forms of Life—characterized by the question “How to use …?”;
• logical “flatness” and the denial of any creativity of material arrangements; representationalism;
• belief in the universal arbitrariness of evolution;
• belief in a divine creator or some replacement, like the independent existence of ideas (here the circle closes).
It now becomes even more clear that it is not quite reasonable to assign a birth date to modernism. Some of these ideas and beliefs have been around for centuries before their assembly into the 19th-century habit. Thus, modernism is nothing more, yet also nothing less, than a name for the evolutionary history of a particular arrangement of attitudes, beliefs and arguments.
From this perspective it also becomes clear why it is somewhat difficult to separate so-called post-modernism from modernism. Post-modernism takes a still undecided position on the issue of abstract metaphysical independence. Independence and the awareness of relations have not yet amalgamated; both are still, well, independent in post-modernism. It makes a huge, if not to say cosmogonic, difference to set the relation as the primary metaphysical element. Of course, Foucault was completely right in rejecting the label of post-modernist. Foucault dropped the central element of modernism—independence—completely, and very early in his career as an author, thinking about the human world as horizontal (actual) and vertical (differential) embeddings. The same is obviously true for Deleuze, or Serres. Less so for Lyotard and Latour, and definitely not for Derrida, who practices a schizo-modernism, undulating between independence and relation. Deleuze and Foucault, to paraphrase Latour, never have been modern, and it would be a serious misunderstanding to attach the label of post-modernism to their oeuvre.
As a historical fact we may summarize modernism by two main achievements: first, the professionalization of engineering and its rhizomatically pervasive implementation, and second, the mediatization of society, first through the utilization of mass media, then by means of the world wide web. Another issue is that many people confess to follow it as if they were following a program, turning it into a movement. And it is here where the difficulties start.
2. Problems with Modernism
We are now going to deal with some of the problems that are necessarily associated with the belief set that is so typical for modernism. In some way or another, any basic belief is burdened by its own specific difficulties. There is no universal or absolute way out of that. Yet, modernism is not just an attitude; by now it has also turned into a large-scale societal experiment. Hence, there are not only some empirical facts, we also meet impacts on the lives of human beings (before any consideration of moral aspects). Actually, Koolhaas provided precisely a description of them in his “Junkspace” [3]. Perhaps modernism is also more prone to the strong polarity of positive and negative outcomes, as its underlying set of beliefs is also particularly strong. But this is, of course, only a quite weak suggestion.
In this section we will investigate four significant aspects. Together they hopefully provide a kind of fingerprint of “typical” modernist thinking—and its failure. These four areas concern patterns and coherence, empiricism, meaning, and machines.
Before we start with that I would like to briefly visit the issue raised by the role of objects in modernism. The metaphysics of objects in modernism is closely related to the metaphysical belief in independence as a general principle. If you start to think “independence” you necessarily end up with separated objects. “Things” as negotiated entities barely exist in modernism, and if so, then only as a kind of error-prone, preliminary social approximation to the physical setup. Otherwise it is not possible to balance objects and relations as concepts. One of them must take the primary role.
Setting objects as primary against the relation has a range of problematic consequences. In my opinion, these consequences are inevitable. It is important to see that neither the underlying beliefs nor their consequences can be separated from each other. For a modernist, it is impossible to drop one of these and to keep the others without stepping into the tomb of internal inconsistency!
The idea of independence, whether in its implicit or its explicit version, can be traced back at least to scholastics, probably even to classical antiquity, where it appeared as Platonic idealism (albeit this would be an oversimplification). To its full extent it unfolded through the first golden age of the dogma of the machine in the early 17th century, e.g. in the work of Harvey or the philosophy of Descartes. Leibniz recognized its difficulties. For him perception is an activity. If objects were conceived as purely passive, they would not be able to perceive, and hence not to build any relation at all. Thus the world can’t be made of objects, since there is a world external to the human mind. He remained, however, caught in theism, which brought him to the concept of monads as well as to the concept of infinitesimal numbers. The concept of the monads should not be underestimated, though. Ultimately, they serve the purpose of immaterial elements that bear the ability to perceive and to transfer it to actual bodies, whether stuffed with a mind or not.
The following centuries brought just a tremendous technical refinement of Cartesian philosophy, although there have been phases in which people resisted its ideas, as for instance many did in the Baroque.
Setting objects as primary against the relation is at the core of phenomenology as well, and also, though in a more abstract version, of idealism. Husserl came up with the idea of the “phenomenon” that impresses us, notably directly, or intuitively, without any interpretation. Similarly, the Kantian “Erhabenheit” (sublimity), later sharpened by Romanticism, is out there as an independent instance, before any reason or perception may start to work.
So, what is the significance of setting objects as primary constituents of the world? Which effects do we have to expect, and where?
2.1. Dust, Coherence, Patterns
When interpreted as a natural principle, or as a principle of nature, the idea of independence provokes and supports the physical sciences. Independence matches perfectly with physics, yet it is also an almost perfect mismatch for the biological sciences, as far as they are not reducible to physics. The same is true for the social sciences. Far from being able to recognize their own conditionability, most sociologists just practice methods taken more or less directly from physics. Just recall their strange addiction to statistics, which is nothing else than a methodology of independence. Instead of asking for the abstract and factual genealogy of the difference between independence and coherence, between the molecule and harmony, they dropped any primacy of the relation, even its mere possibility.
The effects in architecture are well-known. On the one hand, modernism led to an industrialization, which is reaching its final heights in the parametricism of Schumacher and Hadid, among others. Yet by no means is there any necessity that industrialization leads to parametricism! On the other hand, if in the realm of concepts there is no such thing as a primacy of relation, only dust, then there is also no form, only function, or at least a maximized reduction of any form, as it was first presented by Mies van der Rohe. The modularity in this ideology of the absence of form is not that of living organisms; it is that of crystals. It is not only the Seagram Building that looks exactly like the structural model of sodium chloride. Of course, it represents a certain radicality. Note that it doesn’t matter whether the elementary cells of the crystal follow straight lines, or whether there is some curvature in their arrangement. Strangely enough, for a modernist there is never a particular intention in producing such stuff. Intentions are not needed at all if the objects bear the meaning. The modernist’s expectation is that everything the human mind can accomplish under such conditions is just uncovering the truth. Crystals just happen to be there, whether in modernist architecture or in the physico-chemistry of minerals.
Strictly speaking, it is deeply non-modern, perhaps ex-modern, to investigate the question why even modernists find structures or processes like the following mysteriously (not: mystically!) beautiful, or at least interesting. Well, I do not know, of course, whether they indeed felt like that, or whether they just pretended to do so. At least they said so… Here are the artefacts3:
Figure 1: a (left): Michael Hansmeyer column [4]; b (right): Turing-McCabe pattern (for details see this).
These structures are neither natural nor geometrical. Their common structural trait is the local instantiation of a mechanism, that is, a strong dependence on the temporally and spatially local context: subdivision in case (a), and a probabilistically instantiated set of “chemical” reactions in case (b). For the modernist mindset they are simply annoying. They are there, but there is no analytical tool available to describe them as “objects” or to describe their genesis. Neither example shows “objects” with perceivable properties that would be well-defined for the whole entity. Rather, they represent a particular temporal cut through the history of a process. Without considering their history—which includes the contingent unfolding of their deep structure—they remain completely incomprehensible, despite the fact that on the microscopic level they are well-defined, even deterministic.
From the perspective of primary objects they are separated from comprehensibility by the chasm of idealism, or should we say hyper-idealistic conditioning? Yet, for both there exists a set of precise mathematical rules. The difference from machines is just that these rules describe mechanisms, but nothing like the shape, nothing on the level of the entirety. The effect of these mechanisms on the level of the collective, however, can’t be described by the rules for the mechanism. They can’t be described at all by any kind of analytical approach, as is possible for instance in many areas of physics and, consequently, in engineering, which so far is by definition always engaged in building and maintaining fully determinate machines. This notion of the mechanism, including the fact that only the concept of mechanism allows for a thinking that is capable of comprehending emergence and complexity—and, philosophically, potential—is maybe one of the strongest differences between modernist thinking and “organicist” thinking (which has absolutely nothing to do with bubble architecture), as we may call it preliminarily.
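To make the notion of a “local instantiation of a mechanism” concrete, here is a minimal sketch of how a pattern of type (b) can arise, assuming a Gray-Scott reaction-diffusion system as a stand-in for the Turing-McCabe process; the parameter values are illustrative and not taken from McCabe’s work.

```
# Minimal Gray-Scott reaction-diffusion sketch (parameters illustrative).
# Each cell interacts only with its four neighbours; the spots/stripes
# that emerge exist only on the level of the collective.
import numpy as np

n, steps = 128, 5000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065   # diffusion rates, feed, kill

U = np.ones((n, n))
V = np.zeros((n, n))
# perturb a small square to break the homogeneous state
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    # strictly local rule on a torus: only the four nearest neighbours
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + f * (1 - U)
    V += Dv * laplacian(V) + uvv - (f + k) * V

print(V.mean(), V.max())   # inspect or plot V to see the pattern
```

Note that the update rule mentions only a cell and its four neighbours. The resulting spots and stripes are properties of the collective and appear nowhere in the rules—which is precisely why an analytics of primary objects cannot get hold of them.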
Here it is probably appropriate to cite the largely undervalued work of Charles Jencks, who was one of the first in the domain of architecture/urbanism to propose the turn towards complexity. Yet, since he did not have a well-explicated formulation (based on an appropriate elementarization) at his disposal, we have neither been able to bring his theory “down to earth” nor to connect it to more abstract concepts. People like Jencks, Venturi, “parts of” Koolhaas (and me:)—or Deleuze or Foucault in philosophy—never have been modernist. Except for the historical fact that they live(d) in a period that followed the blossoming of modernism, there is no other justification to call them or their thinking “post-modern”. It is not the use of clear arguments that they reject, it is the underlying set of beliefs.
In modernism, that is, in the practice of the belief set as shown above, collective effects are excluded apriori, metaphysically as well as methodologically, as we will see. Statistics is by definition not able to detect “patterns”. It is an analytic technique, of which people believe that its application excludes any construction. This is of course a misbelief; the constructive steps are just shifted into the side-conditions of the formulas, resulting in a deep methodological subjectivity concerning the choice of a particular technique, or formula respectively.
This affects the perspective on society as well as on individual perception and thought. To put it slightly metaphorically, everything is believed to be (conceptual) dust, and to remain dust. The belief in independence, fired perhaps by a latent skepticism since Descartes, has invaded the methods and the practices. At most, so the belief goes, one could find different kinds of dust, or different sizes of hives of dust, governed by a time-inert, universal law. In turn, wherever laws are imposed on “nature”, the subject matter turns into conceptual dust.
Something like a Language Game, even in combination with transcendental conditionability, must be almost incomprehensible for a modernist. I think they do not even see its possibility. While analytic philosophy is largely the philosophy that developed within modernism (one might say that it is thus not philosophy at all), the philosophical stances of Wittgenstein, Heidegger or Deleuze are outside of it. The instances of misunderstanding Wittgenstein as a positivist are countless! Closely related to the neglect of collective effects is the dismissal of the inherent value of the comparative approach. Again, that’s not an accusation. It’s just the description of an effect that emerges as soon as the above belief set turns into a practice.
The problem with modernism is indeed tricky. On the one hand it made engineering blossom. Engineering, as it has been conceived since then, is a strictly modernist endeavor. With regard to the physical aspects of the world it works quite well, of course. In any other area it is doomed to fail, for the very same reasons, unfortunately. Engineering of informational aspects is thus as impossible as the engineering of architecture, or the engineering of machine-based episteme, not to mention the attempt to enable machines to deal with language. Or to deal with the challenges emerging in urban culture. Just to avoid misunderstandings: engineering is helpful to find technical realizations for putative solutions, but it never can deliver any kind of solution itself, except for the effect that people assimilate and re-shape the products of urban engineering through their usage, turning them into something different than intended.
2.2. Meaning
The most problematic effects of the idea of “primary objects” are probably the following:
• the rejection of the creational power of unconscious or even purely material entities;
• the idea that meaning can be attached to objects;
• the idea that objects can be represented and must be represented by ideas.
These strong consequences do not concern just epistemological issues. In modernism, “objectivity” has nothing to do with the realm of the social; it is taken to be justifiable universally and on purely formal grounds. We already mentioned that this may work in large parts of physics—it is challenged in quantum physics—but certainly not in most biological or social domains.
In his investigation of thought, Deleuze identifies representationalism ([10], p.167) as one of the eight major presuppositions of large parts of philosophy, especially idealism in the line from Plato, Hegel, and Frege up to Carnap.
(1) the postulate of the principle, or the Cogitatio natura universalis (good will of the thinker and good nature of thought); (2) the postulate of the ideal, or common sense (common sense as the concordia facultatum and good sense as the distribution which guarantees this concord); (3) the postulate of the model, or of recognition (recognition inviting all the faculties to exercise themselves upon an object supposedly the same, and the consequent possibility of error in the distribution when one faculty confuses one of its objects with a different object of another faculty); (4) the postulate of the element, or of representation (when difference is subordinated to the complementary dimensions of the Same and the Similar, the Analogous and the Opposed); (5) the postulate of the negative, or of error (in which error expresses everything which can go wrong in thought, but only as the product of external mechanisms); (6) the postulate of logical function, or the proposition (designation is taken to be the locus of truth, sense being no more than the neutralised double or the infinite doubling of the proposition); (7) the postulate of modality, or solutions (problems being materially traced from propositions or, indeed, formally defined by the possibility of their being solved); (8) the postulate of the end, or result, the postulate of knowledge (the subordination of learning to knowledge, and of culture to method). Together they form the dogmatic image of thought.
Deleuze by no means attacks the utility of these elements in principle. His point is just that these elements work together and should not be taken as primary principles. The effect of these presuppositions is disastrous.
They crush thought under an image which is that of the Same and the Similar in representation, but profoundly betrays what it means to think and alienates the two powers of difference and repetition, of philosophical commencement and recommencement. The thought which is born in thought, the act of thinking which is neither given by innateness nor presupposed by reminiscence but engendered in its genitality, is a thought without image.
As an engineer, you probably have noticed issue (5). Elsewhere in this essay we already dealt with the fundamental misconception of starting from an expected norm instead of from an open scale without imposed values. Only the latter attitude allows for inherent adaptivity. Adaptive systems never will fail, because failure is conceptually impossible for them. Instead, they will cease to exist.
The rejection of the negative, which includes the rejection of the opposite as well as of dialectics, the norm, or the exception, is particularly important if we think about foundations of whatever kind (think of Hegel, Marx, attac, etc.) or about political implications. We already discussed the case of Agamben.
Deleuze finally arrives at this “new imageless image of thought” by understanding difference as a transcendental category. The great advantage of this move is that it does not imply a necessity of symbols and operators as primary, as would be the case if we took identity as primary. The primary identical is either empty (a=a), that is, without any significance for the relation between entities, or it needs symbolification and at least one operator. In practice, however, a whole battery of models, classifications and the assumptions underlying them is required to support the claim of identity. As these assumptions are not justifiable within the claim of identity itself, they must be set, which results in the attempt to define the world. Obviously, attempting this is quite problematic. It is even self-contradictory if contrasted with the modernists’ claim of objectivity. Setting difference as primary, Deleuze not only avoids the trap of identity and pre-established harmony in the hive of objects, but also subordinates the object to the relation. Here he meets with Wittgenstein and Heidegger.
Together, the presuppositions of identity and objecthood are necessarily, and in a bidirectional manner, accompanied by another quite abundant misunderstanding, according to which logic should be directly applicable to the world. World here is of course “everything” except logic, that is, (claimed) objects, their relations, measurement, ideas, concepts and so on. Analytic philosophy, positivism, external realism and the larger movement of modernism all apply the concept of bi-valent logic to empirical entities. It is not really a surprise that this leads to serious problems and paradoxes, which however are pseudo-paradoxes. For instance, universal justification requires knowledge. Without logical truth in knowledge, universal justification can’t be achieved. The attempt to define knowledge as consisting of positive content failed, though. Next, the formula of “knowledge as justified true belief” was proposed. In order not to fall prey to the Gettier problem, belief itself would have to be objectified. Precisely this happened in analytic philosophy, when Alchourron et al. (1985) published their dramatically (and overly) reduced operationalization of “belief”. Logic is a condition; it is transcendental to its usage. Hence, it is inevitable to instantiate it. By means of instantiation, however, semantics invades equally inevitably.
Ultimately, due to the presupposed primacy of identity, modernists are faced with a particular difficulty in dealing with relations. Objects and their role should not be dependent on their interpretation. As a necessary consequence, meaning—and information—must be attached to objects as quasi-physical properties. There is but one single consequence: tyranny. Again, it is not surprising that at the heights of modernism bureaucratic tyrannies were established several times.
Some modernists would probably allow for interpretation. Yet only as a means, not as a condition, not as a primacy. Concerning their implications, the difference between these stances is a huge one. If you take interpretation simply as a means, keeping the belief in the primacy of objects, you still would adhere to the idea of “absolute truth” within the physical world. Ultimately, interpretation would be degraded into an error-prone “method”, which ideally should have no influence on the recognition of truth, of course. The world, at least the world that goes beyond its merely physical aspects, appears as a completely different one if relations, and thus interpretation, are set as primary. Obviously, this implies also a categorical difference regarding the way one approaches that world, e.g. in science, or the way one conceives of the possible role of design. It is nothing else than a myth that a designer, architect, or urbanist designs objects. The practitioners in these professions design potentials, namely the potential for the construction of meaning by the future users and inhabitants (cf. [5]). There is nothing a designer can do to prevent a particular interpretation or usage. Koolhaas concludes that regarding Junkspace this may lead to a trap, or a kind of betrayal [3]:
Narrative reflexes that have enabled us from the beginning of time to connect dots, fill in blanks, are now turned against us: we cannot stop noticing—no sequence is too absurd, trivial, meaningless, insulting… Through our ancient evolutionary equipment, our irrepressible attention span, we helplessly register, provide insight, squeeze meaning, read intention; we cannot stop making sense out of the utterly senseless… (p.188)
I think that on the one hand Koolhaas here accepts the role of interpretation; yet, somewhat contradictorily, he is not able to recognize that it is precisely the primacy of interpretation that enables a transformation through assimilation, hence the way out of Junkspace. Here he remains modernist to the full extent.
The deep reason is that for the object-based attitude there is no possibility at all to recognize non-representational coherence. (Thus, a certain type of illiteracy regarding complex texts is prevailing among “true” modernists…)
2.3. Shades of Empiricism
Science, as we understand it today—and at least partially also as we practice it—is based on the so-called hypothetico-deductive approach of empiricism (cf. [6]). Science is still taken as a synonym for physics by many, even in the philosophy of science, with only very few exceptions. There, the practice and the theory of the life sciences are not only severely underrepresented; quite frequently biology is still reduced to physics. Physicists, and their philosophical co-workers, often claim that the whole world can be reduced to a description in terms of quantum mechanics (among many others cf. [7]). A closely related reduction, only slightly less problematic, is given by the materialist’s claim that mental phenomena should be explained completely in biological terms, that is, using only biological concepts.
The belief in empiricism is implemented in the methodological framework that is called “statistics”. The vast majority of statistical tests rest on the assumption that observations and variables are independent of each other. Some tests are devised to test for independence, or dependence, but this alone does not help much. Usually, if dependency is detected, the subsequent tests are rearranged so as to fit the independence assumption again. In other words, any possibly actual coherence is first assumed to be nonexistent. By means of the method itself, the coherence is indeed destroyed. Yet, once it is destroyed, you never will get it back. It is quite simple: the criteria for any such construction are just missing.
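A minimal sketch may illustrate this irreversibility, assuming nothing beyond the Python standard library; the autocorrelated series and the lag-1 measure are of course only stand-ins for “coherence” in general.

```
# Coherent data: each value depends on the previous one (an AR(1) series).
# Shuffling enacts the independence assumption; mean and variance survive,
# the coherence does not -- and cannot be reconstructed afterwards.
import random

random.seed(1)
x, series = 0.0, []
for _ in range(500):
    x = 0.95 * x + random.gauss(0, 1)
    series.append(x)

def lag1_autocorr(s):
    m = sum(s) / len(s)
    num = sum((a - m) * (b - m) for a, b in zip(s, s[1:]))
    den = sum((a - m) ** 2 for a in s)
    return num / den

shuffled = series[:]
random.shuffle(shuffled)              # the independence assumption, enacted

print(round(lag1_autocorr(series), 2))    # close to 0.95: strong coherence
print(round(lag1_autocorr(shuffled), 2))  # close to 0.0: coherence destroyed
```

The shuffled series still has exactly the same mean and variance—the “object” properties survive—but nothing computed from it will ever bring the coherence back.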
From this perspective, statistics is not scientific according to science’s own measures; due to its declared non-critical and non-experimental stance it actually looks more like ideology. For a scientific method would perform an experiment to test whether something could be assumed or not. As Nobel laureate Konrad Lorenz said: I never needed statistics to do my work. What would be needed instead is a method that is structurally independent of any independence assumption regarding the observed data. Such a method would propose patterns if there are sufficiently dense hints, and none otherwise, without proposing one or the other apriori. From that perspective, it is more the representationalism in modernism that brings the problem.
This framework of statistics is far from being homogeneous, though. Several “interpretations” are fiercely discussed: frequentism, bayesianism, uncertainty, or propensity. Yet any of them faces serious internal inconsistencies, as Alan Hájek convincingly demonstrated [8]. To make a long story short (the long version you can find over here), it is not possible to build a model without symbols, without concepts that require interpretation and further models, and outside of a social practice, or without an embedding into such. Modernists usually reject such basics and eagerly claim even universal objectivity for their data (hives of dust). More than 50 years ago, Quine proved that believing otherwise should be taken as nothing else than a dogma [9]. This dogma can be conceived as a consequence of the belief that objects are the primary constituents of the world.
Of course, the social embedding is especially important in the case of social affairs such as urbanism. The claim that any measurement of data, then treated by statistical modeling (which is wrongly called “analysis”), could convey any insight per se is nothing but pretentious.
Dealing with data always results in some kind of construction, based on some method. Methods, however, respond differentially to data; they filter. In other words, even applying “analytical” methods involves interpretation, often even a strong one. Unfortunately for the modernist, he has excluded the very possibility of a primacy of interpretation, because there are only objects out there. This hurdle is quickly solved, of course, by the belief that the meaning resides outside of interpretation. As a result, modernists believe that there is a necessary progress towards truth. For modernists: here you may jump back to subsection 2.2…
2.4. Machines
For Le Corbusier a house is much like a “machine for living in”. According to him, a building has clear functions that could be ascribed apriori, governed by universal relations, or even laws. Recently, people engaged in the building economy recognized that it may turn problematic to assign a function apriori, as it simply limits the sales arguments. As a result, any function tends to be stripped away from the building as well as from the architecture itself. The “solution” is a more general one. Yet, in contrast to an algebraic equation, which is instantiated before being used, the building actually exists once it has been built. It is there. And up to today, not in a reconfigurable form.
Actually, the problem is created not by the tendency towards more general, or even pre-specific, solutions. It turns critical as soon as generality amalgamates with the modernist attitude. The category of machines, which is synonymous with ascribing or assigning a function (understood as usage) apriori, doesn’t accept any reference to luxury. A machine that contained properties or elements that don’t bear any function, at least temporarily, other than pleasure (which does not exist in a world that consists only of objects) would be badly built. Minimalism is not just a duty; it even belongs to the grammar of modernism. Minimalism is the actualization and representation of mathematical rigidity, which is also a necessity, as it is the only way to use signs without interpretation. At least, that is the belief of modernists.
The problem with minimalism is that it effectively excludes evolution. Either the product fits perfectly or not at all. A perfect match can be expected only if the user behaves exactly as expected, which represents nothing else than dogmatism, if not worse. Minimalism in form excludes alternative interpretations and usages, deliberately so; it even has to exclude the possibility of the alternative. How else to get rid of alternatives? Koolhaas rightly got it: by nothingness (minimalism), or by chaos.
3. Urbanism, and Koolhaas
First, we have of course to make clear that we will be able to provide only a glimpse of the field invoked by this header. Nor should our attempts here be understood as a proposal to separate architecture from urbanism. Regarding both theory and implementation, they overlap more and more. When Koolhaas explains the special situation of the Casa da Música in Porto, he refers to processes like the continuation of certain properties and impressions from the surroundings into the inside of the building. Inversely, any building, even any persistent object in a city, shifts the qualities of its urban surround.
Rem Koolhaas—once a journalist, then an architect, now for more than a decade additionally someone doing comparative studies on cities—has performatively demonstrated, by means of his writings such as “S,M,L,XL”, “Generic City” or “Junkspace”, that a serious engagement with the city can’t be practiced as a disciplinary endeavor. Human culture moved irrevocably into a phase where culture largely means urban culture. Urbanists may be seen as a vanishing species that became impossible due to the generality of the field. “Culturalist” is neither a proper domain nor a suitable label. Or perhaps they moult into organizers of research in urban contexts, much as architects are largely organizers of the creation of buildings. Yet, there is an important difference: architects may still believe that they externalize something. Such a belief is impossible for urbanists, because they are part of the culture. It is thus questionable whether a project like the “Future Cities Laboratory” should indeed be called such. It is perhaps only possible to do so in Singapore, but that’s the subject of one of the next essays.
Rem Koolhaas wrote “Delirious New York” before turning to architecture and urbanism as a practitioner. There, he praised the city’s diversity and manifoldness which, in or by means of his dreams, added up to the deliriousness of Manhattan, and probably his own as well.
Without any doubt, the particular quality of Manhattan is its empowering density, which does not actualize as the identical, but rather as heterotopia, as divergence. In some way, Manhattan may be conceived as the urban precursor of the internet [11], built first in steel, glass and concrete. Vera Bühlmann writes:
Manhattan space is, if not yet everywhere, then at least potentially everywhere in the internet, and moreover no longer limited to three, perhaps even spatial, dimensions.4
Urbanism is in urgent need of an advanced theory that refers to the power of networks. It was perhaps this “network process” that brought Koolhaas to explore the anti-thesis of the wall and the plane, the absolute horizontal and vertical separation. I say anti-thesis because Delirious New York itself behaves quite ambiguously, half-way between Hegelian, (post-)structuralist dialectics and utopia on the one side, and on the other an affirmation of heterotopias as a more advanced level of conceptualizing alienating processes, which are always also processes of selection and individuation in both directions, towards the medium and towards the “individual”. Earlier scholars like Aldo Rossi came too early to go in that direction, as networks weren’t yet recognizable as part of the Form of Life. Even Shane refers only implicitly to their associative power (nor does he refer to complexity). And Koolhaas was not, and probably still is not, aware of this problematics.
Recently, I proposed one of the possible approaches to building such a theory, along with the corresponding concepts, terms and practices (for more details see [12]). It is rather important to distinguish two very basic forms of networks: logistic and associative networks. Logistic networks are used everywhere in modernist reasoning about cities and culture. Yet they exclusively refer to the network as a machine, suitable for optimizing the transport of anything. Associative networks are completely different. They do not transfer anything; they swallow, assimilate, rearrange, associate and, above all, they learn. Any associative network can learn anything. The challenge, particularly for modernist attitudes, is that it can’t be controlled what exactly an associative network is going to learn. The interesting thing is that the concept of associative networks provides a bridge to the area of advanced “machine” learning and to the Actor-Network Theory (ANT) of Bruno Latour. The main contribution of ANT is its emphasis on agency, even of those mostly mineral material arrangements that are usually believed to have no mental capacity.
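To give the distinction some traction, here is a toy sketch of an associative network, assuming a miniature Self-Organizing Map of the kind underlying [12]; grid size, learning rate and radius are illustrative choices, not values from that paper.

```
# A toy Self-Organizing Map: it does not transport the data anywhere,
# it re-arranges itself around whatever it is shown.
import random

random.seed(0)
GRID, DIM = 8, 3    # an 8x8 map of 3-dimensional nodes
nodes = [[[random.random() for _ in range(DIM)]
          for _ in range(GRID)] for _ in range(GRID)]

def train(data, epochs=20, rate=0.5, radius=3.0):
    for e in range(epochs):
        r = radius * (1 - e / epochs) + 0.5       # shrinking neighbourhood
        a = rate * (1 - e / epochs) + 0.01        # decaying learning rate
        for x in data:
            # find the best-matching unit for this input
            bi, bj = min(((i, j) for i in range(GRID) for j in range(GRID)),
                         key=lambda ij: sum((nodes[ij[0]][ij[1]][d] - x[d]) ** 2
                                            for d in range(DIM)))
            # pull the whole neighbourhood towards the input: nothing is
            # stored as an item, the map as a whole assimilates the data
            for i in range(GRID):
                for j in range(GRID):
                    if (i - bi) ** 2 + (j - bj) ** 2 <= r ** 2:
                        for d in range(DIM):
                            nodes[i][j][d] += a * (x[d] - nodes[i][j][d])

data = [[random.random() for _ in range(DIM)] for _ in range(200)]
train(data)
```

What the map ends up representing is decided by the data and the course of the training, not by the designer—which is exactly the loss of control noted above.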
It is clear that an associative network cannot be perceived at all under the strictly practiced presupposition of independence, as it is typical for modernism. Upon its implementation, the belief set of modernism tends to destroy associativity, hence also the almost inevitable associations between the more or less mentally equipped actors in urban environments.
When applied to cities, it deliberately breaks up relations. Any interaction of high-rise buildings, so typical for Manhattan, is precluded intentionally. Any transfer is optimized along one single parameter: time, and secondarily space as a resource. Note that optimization always requires the apriori definition of a single function. As soon as one would allow for multiple goals, one would be faced with the necessity of weighting and of assigning subjective expectations, which are subjective precisely due to the necessity of interpretation. In order to exclude even the possibility of this, modernists hastily agree to optimize time (as a resource under the assignment of scarcity and physicality), time once having been understood as a transcendental condition.
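The point about the single function is easy to make concrete. In the following sketch—candidates, scores and weights are all invented for illustration—the “optimal” design changes with the weights, and the weights themselves can only come from an interpretation.

```
# Three hypothetical designs scored on two objectives. A weighted sum
# turns the two goals into one optimizable function, but only by
# importing the weights from outside, i.e. from an interpretation.
candidates = {
    "A": {"time": 0.9, "amenity": 0.2},   # fast but barren
    "B": {"time": 0.5, "amenity": 0.7},
    "C": {"time": 0.2, "amenity": 0.9},   # slow but rich
}

def best(w_time, w_amenity):
    return max(candidates,
               key=lambda c: w_time * candidates[c]["time"]
                           + w_amenity * candidates[c]["amenity"])

print(best(1.0, 0.0))   # "A": pure time optimization
print(best(0.5, 0.5))   # "B": a compromise, given exactly these weights
print(best(0.0, 1.0))   # "C": amenity wins
```

The single objective does not remove the subjectivity; the weighted sum merely hides it inside the weights.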
As Aldo Rossi remarked already in the 1960s [13], the modernist tries to evacuate any presence of time from the city. It is not just that history is cut off and buried, largely under false premises and wrong conclusions, reducing history to institutional traditions (remember, there is no interpretation for a modernist!). In some way, it would even have been easy to predict Koolhaas’ Junkspace at the end of the 19th century. Well, the Futurists did it, semi-paradoxically though. Quite consistently, Futurism was only a short phase within modernism. This neglect of time in modernism is by no means a “value” or an intention. It is a direct logical consequence of the presupposed belief set, particularly of independence, logification and the implied neglect of context.
Dis-assembling the associative networks of a city inevitably results in the modernist urban conceptual dust, ruled by the paradigm of scarce time and the blindness against interpretation, patterns and non-representational coherence. This is, in a nutshell, what I would like to propose as the deep grammar of Junkspace, as it has been described by Koolhaas. Modernism did nothing else than build and actualize its conceptual dust. We may call it tertiary chaos, which—in its primary form—was equal to the initial state of indiscernibility of the cosmos as a whole. Yet, this time it has been dictated by modernists. Tertiary chaos thus can be set equal to the attempt to make any condition for the possibility of discernibility vanish.
Modernists may not be aware that there is not only already a theory of discernibility, which equals the Peircean theory of the sign; there is also an adaptation and application of it to urbanism and architecture. Urbanists probably know the name “Venturi”, but I seriously doubt that semiotics is on their radar. If modernists talk about semiotics at all, they usually refer to the structuralist caricature of it, as it has been put forward by de Saussure, establishing a closed version of the sign as a “triangle”. Peircean signs—and these have been used by Venturi—establish an interpretive situation. They do not refer to objects, but just to other signs. Their reference to the world is provided through instances of abstract models and a process of symbolification, which includes learning as an ability that precedes knowledge. (more detail here in this earlier essay) Unfortunately, Venturi’s concepts have scarcely been updated, except perhaps in the context of media facades [14]. Yet, media facades are mostly, and often vastly, misunderstood as a possibility to display adverts. There are good arguments supporting the view that there is more to them [15].
Modernists, including Koolhaas, employ a strange image of evolution. For him (them), evolution is pure arbitrariness, both regarding the observable entities and processes and regarding the future development. He supposes to detect “zero loyalty—and zero tolerance—toward configuration” ([3] p.182). In the same passage he simultaneously, and contradictorily, misses the “‘original’ condition” and blames history for its corruptive influence: “History corrupts, absolute history corrupts absolutely.” All of that is put into the context of a supposedly “‘permanent evolution.’” (his quot. marks). Most remarkably, even biologists like S.J. Gould, pretending to be evolutionary biologists, claim that evolution is absolutely arbitrary. Well, the only way out of the contrasting fact that there is life in the form we know is to assume some active divine involvement. Precisely this was the stance of Gould. People like Gould (and perhaps Koolhaas) commit the representationalist fault, which excludes them from recognizing (i) the structural tendency of any evolution towards more general solutions, and (ii) that there is an evolution of evolutionarity. The modernist attitude towards evolution can again be traced back to the belief in the metaphysical independence of objects, but our interest here is different.
Understanding evolution as a concept has only little to do with biology and the biological model that is called “natural evolution”. Natural evolution is just an instance of evolution, into physico-chemical and then biological matter. Bergson was the first to address evolution as a concept [16], notably in the context of abstract memory. In a previous essay we formalized that approach and related it to biology and machine learning. At its basis, it requires a strictly non-representational approach. Species and organisms are expressed in terms of probability. Our conclusion was that in a physical world evolution inevitably takes place if there are at least two different kinds or scales of memory. Only on that abstract level can we adopt the concept of evolution into urbanism, that is, into any cultural context.
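The two-memory condition can at least be gestured at in code. The following deliberately crude sketch is not the formalization referred to above; it merely couples a slow memory (inherited values with mutation) to a fast one (a decaying trace of recent population states), so that the population keeps drifting as long as the two scales interact.

```
# Two interacting scales of memory, both invented for illustration:
# a slow one (inheritance with mutation) and a fast one (a decaying
# trace of recent population states that biases selection).
import random

random.seed(2)
POP, GENS = 50, 200
population = [random.uniform(-1, 1) for _ in range(POP)]   # slow memory
trace = 0.0                                                # fast memory

for g in range(GENS):
    # fast memory: decaying record of the recent population mean
    trace = 0.8 * trace + 0.2 * (sum(population) / POP)
    # selection is coupled to the fast memory ...
    parents = sorted(population, key=lambda v: abs(v - trace))[:POP // 2]
    # ... while inheritance with mutation carries the slow memory
    population = [random.choice(parents) + random.gauss(0, 0.05)
                  for _ in range(POP)]

print(round(trace, 3), round(sum(population) / POP, 3))
```

Remove either memory—freeze the trace, or copy the parents without mutation—and the drift collapses into stasis; the evolutionary movement lives in the interplay of the two scales.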
Memory can’t be equated with tradition, institutions or even the concrete left-overs of history, of course. These are just instances of memory. It is of utmost importance here not to contaminate the concept of memory again with representationalism. This memory is constructive. Memory that is not constructive is not memory but a stock, a warehouse (although these are also kinds of storage and contribute as such to memory). Memory is inherently active and associative. Such memory is the basic, non-representative element of a generally applicable evolutionary theory.
Memory cannot be “deposited” into quasi-geological layers of sediments, quite in contrast to the suggestions of Eisenman, whom Rajchman follows closely in his “Constructions”.
The claim of “storable memory” is even more disastrous than the claim that information could be stored. These are not objects and items that are independent of an interpretation; they are processes of constructive, guided interpretation. Both “storages” would become equal to the respective immaterial processes only under the condition of a strictly deterministic set of commands. Even the concept of the “rule” is already too open to serve the modernist claim of storable memory.
It is immediately clear that the dynamic concept of memory is highly relevant for any theory about urban conditions. It provides a general language for deriving particular models and instances of association, stocks and flows that are not reducible to storage or transfers. We may even expect that whenever we meet some kind of material storage in an urban context, we should also expect association. The only condition for that being that there are no modernists around… Yet, storage without memory, that is, without activity, remains dead, much like, but even less than, a crystal. Cripples in the sand. The real relevance of stocks and flows is visible only in the realm of the non-representational, the non-material, if we conceive of it as waves in abstract density, that is, as media, conveying the potential for activity as a differential. Physicalists and modernists like Christianse or Hillier will never understand that. Just think of the naïve empirics they perform around the world, calling it cartography.
This includes deconstructivism as well. Derrida’s deconstructivism can be read as a defensive war against the symbolification of the new, the emerging, the complex, the paradox of sense. His main weapon is the “trace”, of which he explicitly states that it could not be interpreted at all. Thus Derrida, as a master of logical flatness and modernist dust, is the real enemy of progress. Peter Sloterdijk, the prominent contemporary German “philosopher”5, once called Derrida the “Old Egyptian”. Nothing could fit Derrida better, he who lives in the realm of shadows and for whom life is just a short transitory phase, hopefully “survived” without too many injuries. The only metaphor possible on that basis is titanic geology. Think of some of Eisenman’s or Libeskind’s works.
Figure 2: Geologic-titanic shifts induced by the logical flatness of deconstructivism
a: Peter Eisenman, Aronoff Center for Design and Art in Cincinnati (Ohio) (taken from [11]); the parts of the building are treated as blocks, whose dislocation is reminiscent of geological sediments (or the work of titans).
b: Daniel Libeskind, Victoria and Albert Museum Boilerhouse Extension. Secondary chaos, inducing Junkspace through its isolationist “originality”, conveying “defunct myths” (Koolhaas in [3], p.189).
Here we finish our exploration of generic aspects of the structure of modernist thinking. Hopefully, the sections so far are sufficiently suited to provide some insights about modernism in general, and about the struggles Koolhaas fights in “Junkspace”.
4. Redesigning Urbanism
Redesigning urbanism, that is, unlocking it from modernist phantasms, is probably much simpler than it may look at first sight. Well, not exactly simple, at least for modernists. Everything is about the presuppositions. Dropping the metaphysical belief in independence without getting trapped by esotericism or mysticism might well be the cure. Of course, metaphysical independence needs to be removed from any level and any aspect of urbanism, starting from the necessary empirical work, which of course is already an important part of the construction work. We already mentioned that the notion of “empirical analysis” pretends neutrality, objectivity (as independence from the author) and validity. Yet, this is pure illusion. Independence should also be abandoned in its form of the search for originality or uniqueness, of trying to set an unconditional mark in the cityscape. By that we don’t refer to morphing software, of course.
The antidote against isolationism, analyticity and logic is already well-known. To provide coherence you have to defy splintering and abjure the belief in (conceptual) dust. The candidate tool for this is story-telling, albeit in a non-representational manner, respecting difference and heterotopias from the beginning. In turn this also means abandoning utopias and a-topias, and embracing complexity and a deep concept of prevailing differentiation (in a subsequent essay we will deal with that). As citizens, we are not interested in non-places and deserts of spasmodic uniqueness (anymore), or in the mere “solution of problems” either (see Deleuze about the dogmatic image of thought as cited above). Changing the perspective from the primacy of analysis to the primacy of story-telling immediately reveals the full complexity of the respective Form of Life, to which we refer here as a respectful philosophical concept.
It is probably pretentious to speak of urbanism as a totality in this way. There are of course, and always have been, people who engaged with the urban condition based on a completely different set of beliefs, righteously non-modern. Those people start with patterns and never tear them apart. Those people are able to distinguish structure, genesis and appearance. In biology, this distinction has been instantiated into the perspectives of the genotype, the phenotype, and, in bio-slang, evo-devo, the compound made from development, growth and evolution. These are tied together (necessarily) by complexity. In philosophy, the respective concepts are immanence, the differential, and the virtual.
For urbanism, take for instance the work of David Shane (“Recombinant Urbanism“). Shane’s work, which draws much on Kelly’s, is a (very) good starting point not only for any further theoretical work, but also for practical work.
As a practitioner, one has to defy the seduction of the totality of a master plan, as the renowned parametricists actualized in Istanbul, or as Christianse and his office did recently in Zürich at the main station. Both are producing pure awfulness, castles of functional uniformity, because they express the totality of the approach even visually. Even in Singapore’s URA (Urban Redevelopment Authority), the master plan has been relativised in favor of a (slightly) more open conceptualization. Designers have to learn that not less is more, but rather that partial nothingness is more. Deliberate non-planning, as Koolhaas has repeatedly emphasized. This should not be taken representationally, of course. It does not make any sense to grow “raw nature”, jungles within the city, neither for the city nor for the “jungle”. Before a crystal can provide soil for real life, it must decay, precisely because it is a closed system (see figure 3 below). Adaptive systems replace parts and melt holes to build structures, without decaying at all. We will return to this aspect of differentiation in a later article.
Figure 3: Pruitt-Igoe (St. Louis), being blasted in 1972. Charles Jencks called this event “one of the deaths of modernism”. This was not the only tear-down there. Laclede, a neighborhood near Pruitt-Igoe, made of small single-flat houses, failed as well, the main reasons being an unfortunate structure of the financial model and political issues, namely the separation of “classes” and apartheid. (see this article)
The main question for finding a practicable process therefore is: how should we ask, and which questions should we address, in order to build an analytics under the umbrella of story-telling that avoids the shortfalls of modernism?
We might again take a look at biology (as a science). Like urbanism, biology is confronted with a totality. We call it life. How to address reasonable, that is, fruitful questions to that totality? Biology already found a set of answers, which nevertheless is not respected by the modernist version of this science, mainly expressed as genetics. The first insight was that “nothing in biology makes sense except in the light of evolution.” [17] What would be the respective question for urbanism? I can’t give an answer here, but it is certainly not independence. This we can know through the lesson told by “Junkspace”. Another, almost ridiculous anti-candidate is sustainability, as far as it is conceived in terms of the scarcity of mainly physical resources instead of social complexity. Perhaps we should remember the history of the city beyond its “functionality”. Yet, that would mean first developing an understanding of (abstract) evolution, instantiating it, and then deriving a practicable model for urban societies. What does it mean to be social, what does it mean to think, both taken as practices in a context of freedom? Biology then developed a small set of basic contexts along which any research should be aligned, without losing the awareness (hopefully) that there are indeed four such contexts. These have been clearly stated by Nobel laureate Tinbergen [18]. According to him, research in biology is suitably structured by four major perspectives: phylogeny, ontogeny, physiology and behavior. Are there similarly salient dimensions for structuring thought in urbanism, particularly in a putative non-modernist (neither modernist nor post-modernist) version? Particularly interesting, imho, are the intersections of such sub-domains.
Perhaps differentiation (as a concept) is indeed a (the) proper candidate for the grand perspective. We will discuss some aspects of this in the next essay: it includes growth and its modes, removal, replacement, deterioration, the problem of the generic, the difference between development and evolution, and a usable concept of complexity, to name but a few. In the philosophy of Gilles Deleuze, particularly in the Thousand Plateaus, Difference and Repetition and the Fold, we can already find a good deal of theoretical work about the conceptual issues around differentiation. Differentiation includes learning, individually and collectively (I do NOT refer to swarm ideology here, nor to collectivist mysticism either!!!), which in turn would bring the (abstract) mental into any consideration of urbanism. Yet, wasn’t mankind differentiating and learning all the time? The challenge will be to find a non-materialist interpretation of these in materialist times.
1. Cited after [11]
2. Its core principles are the principle of the excluded middle (PEM) and the principle of non-contradiction (PNC). Both principles are equivalent to the concept of macroscopic objects, albeit only in a realist perspective, i.e. under the presupposition that objects are primary against relations. This is, of course, quite problematic, as it excludes an appropriate conceptualisation of information.
Both the PEM and the PNC allow for the construction of paradoxes like the Taylor Paradox. Such paradoxes may be conceived as “Language Game Colliders”, that is, as conceptual devices which commit a mistake concerning the application of the grammar of language games. Usually, they bring countability and the sign for non-countability into conflict. First, it is a fault to compare a claim with a sign; second, it is stupid to claim contradictory proposals. Note that here we are allowed to speak of “contradiction”, because we are following the PNC as suggested by its own claim. The Taylor Paradox is of course, like any other paradox, a pseudo-problem. It appears only due to an inappropriate choice or handling of the conceptual embedding, or due to the dismissal of the concept of the “Language Game”, which mostly results in the implicit claim of the existence of a “Private Language”.
3. Vera Bühlmann, “Articulating quantities, if things depend on whatever can be the case“, lecture held at The Art of Concept, 3rd Conference: CONJUNCTURE — A Series of Symposia on 21st Century Philosophy, Politics, and Aesthetics, organized by Nathan Brown and Petar Milat, Multimedia Institute MAMA, Zagreb, Croatia, June 15-17, 2012.
4. German orig.: “Manhattan Space ist, wenn schon nicht überall, so doch im Internet potentiell überall, und zudem nicht mehr auf drei vielleicht gar noch räumliche Dimensionen beschränkt.”
5. Peter Sloterdijk does not like to be called a philosopher.
• [1] Michel Foucault, Archaeology of Knowledge. Routledge 2002 [1969].
• [2] Vera Bühlmann, Printed Physics, de Gruyter, forthcoming.
• [3] Rem Koolhaas, Junkspace. October 100 (2002): 175-190.
• [4] Michael Hansmeyer, his website about these columns.
• [5] “Pseudopodia. Prolegomena to a Discourse of Design”. In: Vera Bühlmann and Martin Wiedmer (eds.), pre-specifics. Some Comparatistic Investigations on Research in Art and Design. JRP|Ringier Press, Zurich 2008. pp. 21-80 (English edition). available online.
• [6] Wesley C. Salmon, Causality and Explanation. Oxford University Press, Oxford 1998.
• [7] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38(2): 339-366.
• [8] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156 (3):563-585.
• [9] W.v.O. Quine (1951), Two Dogmas of Empiricism. The Philosophical Review 60: 20-43.
• [10] Gilles Deleuze, Difference and Repetition. Columbia University Press, New York 1994 [1968].
• [11] Vera Bühlmann, inhabiting media. Thesis, University of Basel (CH), 2009.
• [12] Klaus Wassermann (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. (pdf)
• [13] Aldo Rossi, The Architecture of the City. MIT Press, Cambridge (Mass.) 1982 [1966].
• [14] Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345.
• [15] Klaus Wassermann, Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. In: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp. 334-345. Available here.
• [16] Henri Bergson, Matter and Memory. (Matière et Mémoire 1896) transl. N.M. Paul & W.S. Palmer. Zone Books 1990.
• [17] Theodosius Dobzhansky, Genetics and the Origin of Species, Columbia University Press, New York 1951 (3rd ed.) [1937].
• [18] Niko Tinbergen (1963). On Aims and Methods in Ethology, Z. Tierpsych., (20): 410–433.
A Deleuzean Move
June 24, 2012
It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, thoughts that are incommensurable, thoughts that lead into contradictions or paradoxes, or thoughts that point to something which is outside the possibility of empirical, so to speak “direct”, experience. All these experiences form a particular class of experience. For one reason or another, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.
There have been only very few philosophers1 who have embraced paradoxicality without getting caught by antinomies and paradoxes in one way or another.2 Just to be clear: getting caught by paradoxes is quite easy. For instance, by violating the validity of the language game you have chosen. Or by neglecting virtuality. The first of these avenues into persistent states of worry can be observed in the sciences and in mathematics3, while the second one is more abundant in philosophy. Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so.
Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins—understood as points of {conceptual, historical, factual} departure—are set for theological, religious or mystical reasons, which by definition are always considered as bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.
Yet, paradoxicality, the differential of actual paradoxes, can form stable paradoxes only if possibility is mixed up with potentiality, as is the case, for instance, for perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one always can find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We thus can observe the pouring out of paradoxes also if the differential is rejected or neglected, as in Derrida’s approach, or the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that always can be untangled in higher dimensions. Yet, this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.
Embracing the paradoxical thus means to deny the linear, to reject the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you may already have identified the contextual roots of these hints: it is Gilles Deleuze to whom we refer here and who may well be regarded as the first philosopher of open evolution, the first one who rejected idealism without sacrificing the Idea.5
In the hands of Deleuze—or should we say minds?—paradoxicality actualizes neither into paradoxes nor into idealistic dichotomic dialectics. A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea as well as the space and time immanent to the Idea, paradoxicality turns productive.7
Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)
As our title already indicates, we not only presuppose and start with some main positions and concepts of Deleuzean philosophy, particularly those he developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some “genuine” aspects to it. In some way, our attempt could be conceived as a development offering an alternative to part V of D&R, entitled “Asymmetrical Synthesis of the Sensible”.
This Essay
Throughout the collection of essays about the “Putnam Program” on this site we have expressed our conviction that future information technology demands an assimilation of philosophy by the domain of the computer sciences (e.g. see the superb book by David Blair, “Wittgenstein, Language and Information” [47]). There are a number of areas—of technical as well as societal or philosophical relevance—which give rise to questions that have already started to become graspable, and not just in the computer sciences. How to organize the revision of beliefs?11 What is the structure of the “symbol grounding problem”? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can’t tackle such questions without literacy about concepts like belief or symbol, which of course can’t be reduced to a merely technical notion. Beliefs, for instance, can’t be reduced to uncertainty or its treatment, despite the fact that there is already some tradition in analytical philosophy, the computer sciences and statistics to do so. Moreover, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges concern both sides of the coin. They relate to the engineers who are creating such instances as well as to the lawyers who—on the other side of the spectrum—have to deal with the effects and the properties of such entities, and even to the “users” who have to build some “theory of mind” about them, some kind of folk psychology.
And last but not least, the mere externalization of informational habits into machinal contexts often triggers pseudo-problems and “deep” confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. as a kind of defensive war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about “artificiality” if our brain/mind remains dramatically non-understood and thus is implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: when will computer scientists need self-imposed guidelines similar to those the geneticists ratified for their community in 1975 at the Asilomar Conference? Or are such guidelines illusionary or misplaced, because we are weaving ourselves so intensively into our new informational carpets—made from multi- or even meta-purpose devices—that they are veritable flying carpets?
There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational “machines” and philosophy closer together. The domain of machines with advanced mental capabilities—I deliberately avoid the traditional term “artificial intelligence”—let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (life-world) that is strikingly different from ours and which we can’t understand analytically any more (if at all)14. The challenge then is how to talk about this domain. We should not repeat the same fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are today) to “nature”. If we are going to compare two different entities we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can’t be addressed by any kind of technical perspective. Hence, questions of the mode of speaking can’t be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.
Taken together, we may say that our motivation follows two lines. Firstly, the concern is about the problematic field, the problem space itself, about the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary to talk about incommensurable entities that are equipped with mental capacities.15
Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal. The goal of this essay is the introduction and a brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to support—or to avoid worries about—the diagnosis of the challenges opened by the new technologies. Of course, it will turn out that the result is not just applicable to the domain of the philosophy of technology.
In the following we will introduce a unique structure that has been inspired not only by heterogeneous philosophical sources. Those stretch from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset you could expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology contributes as the home of the organon, of complexity, of evolution, and, more formally, of self-referentiality. The structure we will propose is a starting point that appears merely technical, thus arbitrary, while at the same time it draws upon the primary amalgamate of the virtual and the immanent. Its paradoxicality consists in its potential to describe the “pure” any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that has been considered by many throughout the history of philosophy to be one of the most serious ones. The challenge in question is that of sufficient reason, justification and conditionability. To be more precise, that challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.
Here, a small remark to the reader is necessary. After some weeks of writing this down, it turned out that any (more or less) intelligible way of describing these issues exceeds the classical size of a blog entry. By now it comprises approx. 150’000 characters (incl. white space), which would amount to 42+ pages on paper. So, it is more like a monograph. Still, I feel that there are many important aspects left out. Nevertheless I hope that you enjoy reading it.
2. Brief Methodological Remark
As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Yet, its self-referentiality allows for and actually also generates a “self-containment” that results in a fractal mirroring of itself, in a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it could look like a complete and determinate entirety, but it is not, similar to area-covering (space-filling) curves in mathematics. Those fill a 2-dimensional area, yet with regard to their production system they remain truly 1-dimensional. They are fractals, entities to which we can’t apply ordinal dimensionality. Thus, our concept also develops into instances of fractal entirety.
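To make the image of the fern leaf and the self-affine mapping a bit more tangible, here is a minimal sketch—our illustration, not the proposed structure itself—of the well-known Barnsley fern, an iterated function system in which the whole leaf is composed of affinely shrunken copies of itself (the coefficients are the standard published ones):

```python
# Illustration only (not the proposed structure): the Barnsley fern, an
# iterated function system whose attractor is self-affine -- each part of
# the leaf is an affinely shrunken copy of the whole.
import random

# ((a11, a12), (a21, a22)), (e, f), probability -- four affine maps
MAPS = [
    (((0.00,  0.00), ( 0.00, 0.16)), (0.0, 0.00), 0.01),  # stem
    (((0.85,  0.04), (-0.04, 0.85)), (0.0, 1.60), 0.85),  # main frond
    (((0.20, -0.26), ( 0.23, 0.22)), (0.0, 1.60), 0.07),  # left leaflet
    (((-0.15, 0.28), ( 0.26, 0.24)), (0.0, 0.44), 0.07),  # right leaflet
]

def barnsley_fern(n=100_000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for (row1, row2), (e, f), p in MAPS:
            acc += p
            if r <= acc:
                x, y = (row1[0] * x + row1[1] * y + e,
                        row2[0] * x + row2[1] * y + f)
                break
        points.append((x, y))
    return points

points = barnsley_fern()  # scatter-plot these to see the fern emerge
```

The analogy is restricted to one feature: a simple generative rule, applied to itself, produces an entirety that looks determinate but is never exhausted by any finite rendering.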
For these reasons, it would also be wrong to think that the structure we will describe in a moment is “analytical”, despite the fact that it is possible to describe its “frozen” form by means of references to mathematical concepts. Our structure must be understood as an entity that is not only not neutral or invariant against time: it forms its own sheaves of time (as I. Prigogine described it). Analytics is always blind against its generative milieu. Analytics can’t tell anything about the world, contrary to a widely exercised opinion. It is not really a surprise that Putnam recommended reducing the concept of the “analytic” to “an inexplicable noise”. Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to a kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Furthermore, the pragmatics of analysis claims to be free of constructive elements. All these characteristics do not apply to our proposal, which is as little “analytical” as the philosophy of Deleuze, where it starts to grow on the notion of the mathematical differential.
3. The Formal Structure
For the initial description of the structure we first need a space of expressibility. This space will then be equipped with some properties. And right at the beginning I would like to emphasize that the proposed structure does not by itself “explain” anything, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as a generative ground.
The space of the structure is not a Cartesian space, where some concepts are mapped onto the orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space, the dimensions are independent from each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and the tools we would apply. Hence the Cartesian space is not useful for our purposes. Unfortunately, all of current mathematics is based on the Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs as far as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in its direction.
Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space, concepts are represented by “aspections” instead of “dimensions”. In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently of each other. More precisely, we can keep at most one aspection constant, while the values along all the others change simultaneously. (So-called ternary diagrams provide a distantly related example of this in a 2-dimensional space.) In other words, within the N-manifolds of the aspectional space all values are always dependent on each other.
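As a crude numerical analogue—our sketch, not the author’s formalism—one may think of compositional coordinates as they underlie ternary diagrams: a shared constraint couples all values, so changing one necessarily changes the others:

```python
# Sketch of an "aspectional" constraint (a numerical analogue only):
# values are coupled like the coordinates of a ternary diagram, so no
# single value can change while all the others stay fixed.
import numpy as np

def shift_aspection(values, index, delta):
    """Raise one value, then renormalize: every other value moves too."""
    v = np.array(values, dtype=float)
    v[index] = max(v[index] + delta, 0.0)
    return v / v.sum()  # the shared constraint couples all coordinates

state = np.array([0.25, 0.25, 0.25, 0.25])   # four aspections, summing to 1
print(shift_aspection(state, 0, 0.20))
# -> [0.375, 0.2083, 0.2083, 0.2083]: raising the first aspection
#    necessarily lowers the other three simultaneously.
```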
This aspectional space is equipped with a hyperbolic topological structure; the space of our structure is not flat. You may take M.C. Escher’s plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space that is built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference (“origin”) for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field dealing with comparable structures is differential topology.
So far, the space is still quite easy and intuitive to understand. At least, a visualization is still possible for it. This probably changes with the next property. Points in this aspectional space are not “points”, or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first and reals on the second one. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance of a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional.
Our “points”, i.e. the content of our space, are quite different from that. The space is not “made up” of inert and passive points but of the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space thus is made from infinitesimal procedural sites, or “situs” as Leibniz probably would have said. If we were to represent physical space by a Cartesian dimensional system, then the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a passive space. The second-order differential makes it an active space and a space that demands activity. Without activity it is “not there”.
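To spell out the kinematic metaphor (standard textbook notation, added here only as an aide-mémoire):

```latex
x(t) = \text{position}, \qquad
\frac{dx}{dt} = \text{velocity}, \qquad
\frac{d^{2}x}{dt^{2}} = \text{acceleration}
```

The claim above transposes this picture: the “substance” of the space sits not at the level of positions (points), but at the level at which change itself changes.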
We could also describe it as the mapping of the intensity of the dynamics of transformation. If you were to try to point to a particular location, or situs, in that space—which is of course excluded by its formal definition—you would instantaneously be “transported”, or transformed, such that you would find yourself elsewhere. Yet, this “elsewhere” can not be determined in Cartesian ways. First, because that other point does not exist; second, because it depends on the interaction of the subject’s contribution to the instantiation of the situs with the local properties of the space. Finally, we can say that our aspectional space is not representational, as the Cartesian space is.
So, let us sum up the elemental17 properties of our space of expressibility:
• 1. The space is aspectional.
• 2. The topology of the space is locally hyperbolic.
• 3. The substance of the space is a second-order differential.
4. Mapping the Semantics
We are now going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just as virtual entities are. As such, we take the chosen concepts as inexplicable, yet also as instantiable.
These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that contains as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows us to express the relation between semiotics and any logic, or between concepts and models.
So, here are the four transcendental concepts that form the aspections of our space as described above:
• – virtuality
• – mediality
• – model
• – concept
Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space, there would be “corners”, or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, however, our space is not flat. There is no static visualization possible for it, since our space can’t be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience.
So, let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points at the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated with a point within the disc. Also, the distance from any point inside the disc to the border is infinite. This provides a good impression of how transcendental concepts, which by definition can’t be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, while at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied into this space.
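The standard Poincaré disc model makes this behaviour precise (a textbook fact, cited here only as an illustration of the claim above): the hyperbolic distance from the centre to a point at Euclidean radius r < 1 is

```latex
d(0, r) \;=\; \ln\!\frac{1+r}{1-r} \;=\; 2\,\operatorname{artanh}(r)
\;\longrightarrow\; \infty \quad \text{as } r \to 1
```

so the border, although drawn at finite Euclidean radius, is infinitely far away from every interior point—just as a transcendental aspect can be approached but never reached.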
Let us now briefly visit these four concepts.
4.1. Virtuality
Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, even though the property of “existing” can’t be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual” indeed does exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to the simulated worlds, the informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.
As just said, virtuality, and thus also potentiality, must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We even can say that possible things and the possibilities of things are completely determined at any given moment. The same cannot be said about potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality thus is necessarily equivalent to the a priori claim of determinateness, which is methodologically and ethically highly problematic.
The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here; alas, the space is missing…
4.2. Mediality
Mediality, that is, the medial aspects of things, facts and processes, belongs to the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: neither is there any mediality without sociality, nor is there any sociality without mediality. Mediality is the concept that has been “discovered” last among our small group. There is a growing body of publications, but many are—astonishingly—deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to focus on the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics. Any thing, whether material or immaterial, that occurs in a sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative with sociality. Media are never neutral with respect to the transported, albeit one can often find counteracting forces here.
Signs and symbols could not exist as such without mediality. (Yet, this proposal is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of that rejection are, however, tremendous, as we are going to argue here.) The same is true for words and language as a whole. In real contexts, we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.
4.3. Model
Models and modeling need not be explicated much any more, as they are one of the main issues throughout our essays. We would just like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily both are not part of the model itself. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social embedding. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.
We frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements that contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts.
In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for such) and the institutionalized site for creating (f)actual differences.
4.4. Concepts
Concept is probably one of the most abused, or at least misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. This way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of the definition makes sense only in a tree of analytical proofs that started with axioms. Definitions need not be interpreted. They are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):
It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.
Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. By explicating the difference using some rules, all the other differences except the arithmetic one vanish. This inexplicability is thus not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only as far as we are aware of these other aspects.
Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.” Here, we can observe at least two things. Firstly, this observation may well be interpreted as the earliest rejection of “knowledge as justified belief”, a perspective which became popular in modernism. Meanwhile it has been proven inadequate by the so-called Gettier problem. The consequences for the theory of databases, or machine-based processing of data, can’t be overestimated. It clearly shows that knowledge can’t be reduced to confirmed hypotheses qua validated models, and belief can’t be reduced to a kind of pre-knowledge. Belief must be something quite different.
The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent in interpretation—which always and inevitably happens if the primacy of interpretation is denied.
Of course, the observed indeterminateness is equally not limited to time. Whenever you try to explicate a concept, whether you describe it or define it, you encounter the insurmountable difficulty of picking one of many interpretations. Again: there is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality.
In the opposite direction we can now understand that these four concepts are not only not reducible to each other. They are dependent on each other and—somewhat paradoxically—they even counteract each other competitively. From this we can expect an abstract dynamics that is somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility of a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least the experience of thinking.
Before we proceed we would like to introduce a notation that should be helpful in avoiding misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark the term additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.
5. Anti-Ontology: The T-Bar-Theory
The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect later in detail.
Above we described the impossibility of explicating a concept without departing from its “conceptness”. Well, such a description is actually not appropriate according to our aspectional space. The four basic aspections are built by transcendental concepts. There is a subjective, imaginary yet pre-specific scale along those aspections. Hence, in our space “conceptness” is not a quality, but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections in such a way that the relative location of the mental activity can’t be changed along just a single aspect alone.
We also can recognize another significant step that is provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are only capable of indicating presence or absence, which thus is also confined to a naive ontology of Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the modeling or measurement-theory point of view, concepts are on a binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we were to probabilize all potential dual pairs.
Similarly to the case of logic, we also have to distinguish the transcendental aspect _a, that is, the _Model, _Mediality, _Concept, and _Virtuality, from the respective entity that we find in applications. Those practiced instances of a are just that: instances—instances produced by orthoregulated habits. Yet, the instances of a that could be gained through the former’s actualization do not form singularities, or even qualia. Any a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That is the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently of their transcendental ancestry24.
Deleuze provides us with a nice example of this dynamics at the beginning of part V of D&R. For him, “divergence” is an instance of the transcendental entity “Difference”.
What he calls “phenomenon” we dub “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and the related difficulties. Calling it “phenomenon” pretends—typically for any kind of phenomenology or ontology—to a deeply unjustified independence of mentality from its underlying physicality.
This step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can’t be overestimated. Take for instance the infamous question that has attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears especially troubling because signs refer only to signs.25 In existential terms—and all the terms in that question are existential ones—this question can’t be answered, indeed it can’t even be addressed. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him again later). The various misunderstandings are well known, ranging from nominalism to externalist realism to scientific constructivism.
They all vanish in a space that overcomes the existentiality embedded in the terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities—as quasi-species, as Manfred Eigen called it in a different context—in order to avoid naive mysticism regarding our relations to the world.
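A toy sketch of what “probabilizing” a word could look like (our illustration, with invented counts; not Eigen’s quasi-species equations): instead of a binary signifier, a word becomes a distribution over its contexts of use, so that two words can be compared by degrees rather than by presence or absence:

```python
# Toy illustration: a word as a probabilized entity rather than a binary
# signifier. The context counts below are invented for the example.
from collections import Counter
from math import sqrt

def probabilize(context_counts):
    """Turn raw context counts into a probability distribution."""
    total = sum(context_counts.values())
    return {ctx: n / total for ctx, n in context_counts.items()}

def overlap(p, q):
    """Bhattacharyya coefficient: a graded similarity in [0, 1]."""
    return sum(sqrt(p.get(c, 0.0) * q.get(c, 0.0)) for c in set(p) | set(q))

bank_river = probabilize(Counter(water=7, shore=5, fishing=3))
bank_money = probabilize(Counter(money=8, loan=6, shore=1))

print(overlap(bank_river, bank_money))  # a degree, not a yes/no answer
```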
It seems that our space provides the possibility of measuring and comparing different ways of instantiation of _A, a kind of stable scale. We may use it to access concepts differentially; that is, we now are able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It provides the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path—which is open on both sides—we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting above all that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.
Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge would need to undergo a double instantiation, and this brings subjectivity back in. It is just a phantasm to believe that propositions could be secured up to “truth”. This is even true for the least possible common denominator, existence.
I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])
Note that Blair is very careful in his wording here. He is not after any universality regarding the justification, or legitimization. His proposal is simply that any reference to “Being” or “Existence” is useless a priori. Claiming seriousness of ontology as an aspect of, or even as, an external reality immediately instantiates the claim of an external reality as such, which would be such-and-such irrespective of its interpretation. This, in turn, would consequently amount to a stance that sets the proof of the irrelevance of interpretation, and of interpretive relativism, as a goal. Any familiar associations about that? It is no accident that physicists, and only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense, propaganda and ideology at least.
As a matter of fact, even in a quite strict naturalist perspective, we need concepts and models. Those are obviously not part of “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence; or more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, hence as a naturalist determinism.
Before addressing the issue of the topological structure of our space, let us trace some other figures in our space.
6. Figures and Forms
Whenever we explicate a concept we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality can’t be created deliberately, since in this case we would refer again to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model.
It is not too difficult, though, to find some candidate mechanics that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also enriched potential as the (still abstract) instance of _Virtuality.
In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path that is heading towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger is the force that will flip us back towards the _Concept.
_Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as a device, as a quasi-material arrangement. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.
Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated into various organizational levels, even if we were to consider only the language part. Mapping these flows into our space raises the question of whether we could distinguish different attractors, different forms of recurrence.
Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental styles”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedral body as a (crude) approximating metaphor for it.
Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said—metaphorically—that explication moves us towards the corner of the model. Let us stick to this primitive representation for a moment, in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept. It is pretty clear that mental activity which leaves the model behind, quite literally so, will in this way take some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn yields a residual that again points towards the model corner.
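Sticking with the crude tetrahedral metaphor for one more moment, the geometry is easy to compute (a throwaway sketch with arbitrary vertex labels, not part of the formal structure):

```python
# Crude tetrahedral metaphor only: the direction "away from the model
# corner" points from that vertex to the centroid of the opposite face.
import numpy as np

# Regular tetrahedron vertices, arbitrarily labeled with the four aspects
vertices = {
    "model":      np.array([ 1.0,  1.0,  1.0]),
    "concept":    np.array([ 1.0, -1.0, -1.0]),
    "mediality":  np.array([-1.0,  1.0, -1.0]),
    "virtuality": np.array([-1.0, -1.0,  1.0]),
}

opposite_face = [v for k, v in vertices.items() if k != "model"]
centroid = np.mean(opposite_face, axis=0)

direction = centroid - vertices["model"]   # "leaving the model behind"
direction /= np.linalg.norm(direction)     # unit vector
print(direction)  # heads toward the balance of the other three aspects
```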
Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing just that; think, for instance, of (abundant as well as overdone) scientism. What we can see here are particular forms of mental activity. What about other forms? For instance, the fixed-point attractor?
As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections and the structure of the space as a second-order differential prevent them. Yet, somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusionary, the idea of the absoluteness of ideas—idealism—represents just such an attempt. Yet, the claim of absoluteness brings mental activity to rest. It is not by accident, therefore, that it was the logician Frege who championed a rather strange kind of hyperplatonism.
At this point we can recognize the possibility of describing different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even due to their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions of knowing. Epistemology thus loses its claim to universality.
It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means to “draw a dance”, or “drawing by dancing”, derived from Greek choreia (χορεία) for “dancing, (round) dance”. Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.”
The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement of part V of D&R, Deleuze writes:
However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)
It is right here where the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows us to trace and to draw, to explicate and negotiate, the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.
The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility of knowing. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet, in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that had been expected to contribute finally to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme, and the related research programme “epistemology”, follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of—or the tools provided by—the work of Wittgenstein and Deleuze. Without the recognition of the role of language and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.
Before we go on to discuss the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence with a closed approach that again would need a mystical external instance for its beginning. This we have to correct now.
7. Reason and Sufficiency
Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatsoever kind, expressed in whatsoever symbolic system, and the _Mediality contains the potential for any kind of media, whether more material or more immaterial in character. The transcendental status of these aspects also means that we never can “access” them in their “pure” form. Yet, due to these properties our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.
The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet, this results neither in an infinite regress nor in circularity. That would be the case only if the space were Cartesian and the topological structure flat (Euclidean) and global.
First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only if it is used. This, however, invokes time and activity. Thus the choreostemic space could be conceived also as a means to intensify the virtual aspects of thought. Furthermore, and third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility of a Cartesio-Euclidean regression. All these aspects are covered by the topological structure of the choreostemic space: it is meant to be a second-order differential.
A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one tried to determine a point, one would be accelerated away. The whole space causes a divergence of mental activities. Here we find the philosophical reason for the impossibility of catching a thought as a single entity.
We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up as a set of coordinates, or, if we considered real scaled dimensions, as potential sets of coordinates. Quite to the contrary, there is nothing determinable in it. Yet, in hindsight we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the completely structureless and formless clouds of probability which are used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that: we clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure”, that is, an unsharp region in parameter space. Transitions are far from arbitrary.
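The metaphor is easy to make concrete (a standard textbook system, offered only as an illustration of “confined yet unpredictable”): the Lorenz system settles onto its famous two-lobed attractor, although the pointwise trajectory stays unpredictable in the long run:

```python
# Standard Lorenz system, used here only to illustrate the metaphor:
# the trajectory is confined to a cloudy, stable "figure" (the attractor),
# while its pointwise course remains unpredictable.
import numpy as np

def lorenz_trajectory(n_steps=50_000, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.array([1.0, 1.0, 1.0])
    trail = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = xyz
        dxyz = np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])
        xyz = xyz + dt * dxyz  # simple Euler step, good enough for a picture
        trail[i] = xyz
    return trail

trail = lorenz_trajectory()
# Plotting trail reveals the butterfly-shaped attractor; two nearby
# starting points diverge quickly, yet both stay on the same "figure".
```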
Hence, we propose to conceive of the choreostemic space as being made up of probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality, without excluding roots in external media.
Above we equipped the space with a hyperbolic topology in order to align with the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity, the choreostemic space would be centred again, and in consequence it would turn again to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces no longer coincide. There is neither a priori objectivity, nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.
The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: the corollary of that is that the choreostemic space is the space of the _Comparison as a transcendental category.
It comprises the conditions for the whole universe of Ideas; it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is somewhat important to understand that these instantiations are orthoregulated.
It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality can’t be tied to truth, justice or utility in an objective manner, even if we soften objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally, two entities with some mental capacity, could completely agree on the facts, that is, on the percepts, the way of their construction, and the relations between them, and nevertheless assign them completely different virtues and values, simply because the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative with regard to that figure. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite in contrast, it includes and displays the will to establish, if not to enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical moral values.
Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice, it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons can’t even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we also can see that a more or less stable attractor in the choreostemic space first has to form, or to be formed, before there is even the possibility of sufficient reasons. This goes strictly parallel to Wittgenstein’s conception of logic as a transcendental apriori that possibly becomes instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space makes it possible to relate persons inhabiting different attractors, following different mental styles. Later, we will return to this aspect.
In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, comprising, according to him, eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze didn’t develop this Image into a multiplicity, as could have been expected from a more practical perspective, i.e. the perspective of language games. These games are different from his notion, although he emphasizes at several instances that language is a rich play.
For me it seems that Deleuze didn’t (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn he failed to detect the opportunity for self-referentiality, or even to apply it in a self-referential manner. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is not possible, or at least not sufficient, to conceive of it as a transcendental guideline.
8. Elective Kinships
It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains.
As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrations, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards the self-containing theoretical foundation of plurality and manifoldness.26 Comparing that with Hegel’s slogans of “the synthesis of the people’s spirit” (“Synthese des Volksgeistes”) or “The Whole is the True” (“Das Ganze ist das Wahre”) shows the difference in level and scope.
Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas, the philosophy of the episteme and the relationship between anthropology and philosophy.
8.1. Philosophy of the Episteme
The choreostemic space is not about a further variety of some epistemological argument. It is conceived as a reframing of the concerns that have been addressed traditionally by epistemology. (Here, we already would like to warn of the misunderstanding that the choreostemic space exhausts itself in epistemology.) Hence, it should be able to serve as the theoretical frame for the sociology of science or the philosophy of science as well. Think about the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] for the field of philosophy of science. Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor—since we have to avoid anthropological references—to cognition as it is currently understood in psychology.
Giere, for instance, brings the “cognitive approach” and hence the issue of practical context close to the understanding of science, criticizing the idealising projection of unspecified rationality:
Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)
The “cognitive approach” that Giere proposes as a means to understand science is, however, threatened seriously by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently seriously renewed by the machine learning community. Aaron Ben Ze’ev [14] writes critically:
In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.
How problematic even such critiques are can be seen as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):
There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.
In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position, this is hardly surprising. First, concepts can’t be defined at all. All we can find are local instances of the transcendental entity. Second, knowledge, and even its choreostemic structure, is dependent on the embedding culture while at the same time it is forming that culture. The figures in the choreostemic space are attractors: they do not prescribe the next transformation, but they constrain the possibility for it. How, then, should one ever “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge can’t be reduced to confirmed hypotheses qua validated models. It is impossible in principle to say “knowledge is…”, since this inevitably implies the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential of building figures in the choreostemic space, must not be confused with the episteme! We will return to this issue later.)
Yet, just to point to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we can’t use Wittgenstein’s work as a structure; it is itself to be conceived as a result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way, we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental.
Precisely this is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble.
Let us now see what is possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process. We distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective our choreostemic space just serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render important aspects of playing the language game visible.
The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, such a conceptualisation would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one.
Before we go into details here let us see how others conceive of it. PMS Hacker [27] gave the following summary:
Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.
Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective on belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourron, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:
Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.
Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost to be omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”. We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed that in great detail in his “Making it Explicit”.
Yet, can we really say that believing is just a mental activity? For one thing, we did not say above that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly can not set the mental as such into a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts. So, is it possible that a non-conscious being or entity can believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it could not get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow the entity to deal with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean to have a certain model with a particular predictive power available. As models are explications, expressing a belief, or experiencing the compound mental category “believing”, is just the demonstration that a complete explication is impossible for the person.
Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows us to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it. As a language game we thus may specify it as the indication that the speaker assigns—or the listener is expected to assign—a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that could not be covered by any kind of explication. This is true for engineering and of course for any kind of social interaction, as soon as mutual expectations appear on the stage. By means of the choreostemic space we also can understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has argued in his “Making it Explicit”.
Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in the case of the transition path that ultimately is heading towards the _Concept. Even more surprising—at first sight—and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g. through orthoregulations) from the other _A, and hence the larger the propensity for an inflecting flip.28
As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which were then assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. On the way, structural constants and heuristic side conditions are brought in. Finally, then, the system of the physical model turns into an architectonics, a branched compound of theory-models, that sounds as trivial as it is conceptual. In the case of physics, it is the so-called grand unified theory. There are several important things here. First, due to the large amount of heuristic settings and orthoregulations, such concepts can’t be proved or disproved anymore, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept never has been described in a convincing manner before.30
Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a database. Quite unfortunately, the whole theory is, well, simply crap, if we were to apply it according to its intention. I think that we can draw some significance of the choreostemic space from this mismatch for a more appropriate treatment of beliefs in information technology.
The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourron, Gärdenfors and Makinson (1985) [29], often abbreviated as “AGM-theory”. Hansson [30] writes:
A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.
Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes that:
The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.
Obviously, such “belief sets” have nothing to do with beliefs as we know them from the language game, besides the fact that they are a botched caricature. As with Pearl [23], the interesting stuff is left out: how to arrive at those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: by an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with, or presupposes, logic, it simply got stuck in the symbolistic fallacy, or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory can’t work within a fully developed “standard” epistemology. The two are simply incompatible with each other.
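To make palpable just how thin the AGM picture is, here is a deliberately naive sketch in Python—our own illustration, not the authors’ formalism. Propositions are reduced to plain strings, deductive closure is omitted entirely, and negation is purely syntactic; all of these simplifications are assumptions made for brevity.

```python
# A deliberately naive sketch of AGM-style belief change.
# Beliefs are plain string literals; deductive closure is omitted,
# so this shows only the bookkeeping, not the logic.

def negate(p: str) -> str:
    """Purely syntactic negation of an atomic sentence (illustrative only)."""
    return p[4:] if p.startswith("not ") else "not " + p

def expand(belief_set: set, p: str) -> set:
    """Expansion K + p: simply add the sentence."""
    return belief_set | {p}

def contract(belief_set: set, p: str) -> set:
    """Contraction K - p: remove the sentence. A real AGM contraction
    would need a selection function over maximal non-implying subsets,
    which is exactly the information Hansson says is not in the belief set."""
    return belief_set - {p}

def revise(belief_set: set, p: str) -> set:
    """Revision via the Levi identity: K * p = (K - not-p) + p."""
    return expand(contract(belief_set, negate(p)), p)

K = {"it rains", "not streets dry"}
print(revise(K, "streets dry"))  # -> {'it rains', 'streets dry'}
```

The sketch displays precisely the mismatch criticized above: the sentences fall from the sky as ready-made symbols, and everything interesting—how an entity arrives at them, and why it would retract one rather than another—sits outside the formalism.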
Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, to translate and to project choreostemic figures into lists of rules that could be followed, or in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model can’t be explicated completely. Further, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure, we can’t accomplish an explication.
Understanding, Explaining, Describing
Outside the perspective of the language game, “understanding” can’t be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population and the relations imposed on them are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existent scaffold. About these relations we can’t speak clearly or in an explicit manner any more, since these relations are constitutive parts of the understanding. Like all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs in, and expectations of, certain capabilities of one’s own.
Saying “I understand” may convey different meanings. More precisely, understanding may come along in different shades that are placed between two configurations. Either it signals that one believes oneself able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively, it is used to indicate the belief that the extension of the scaffold is shared between individuals, in such a way that one could reproduce the same effect as anyone else who understands the same thing. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.
Besides the performative and social aspects of understanding there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only in case we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his own and its conditionability does not understand anything. He would just be practicing a serious sort of dogma (see Quine on the dogmas of empiricism here!). Such a scientist’s modeling could be replaced by that of a machine.
A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined. Such is most, if not all, of the computer software dealing with language today.
We again would like to emphasize that understanding does not exhaust itself in the ability to write down a model. Understanding means to relate the model to concepts, that is, to trace a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism, understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.
Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense. It is not just questionable why this language game should be performed at all; the why of absolute existence can’t be answered at all. Actually, it seems to be quite different from that. As a matter of fact, we indeed play this game in a well comprehensible way and in many social situations. Conceiving “explanation” of nature as accounting for its existence (as Epperson does, see [31] p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy. ([31] p.361)
I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.
Yet, this presents us an almost perfect phenomenological stance, separating objects from subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason. Such ideas are at least highly problematic, even and especially if we take into account the role Whitehead gives the “value” as a cosmological apriori. It is quite clear that this attitude to understanding is sharply different from anything that is related to semiotics, the primacy of interpretation, the role of language or a relational philosophy, in short, from anything that resembles even remotely what we proposed about the understanding of understanding a few lines above.
The intention to make somebody else understand a certain subject necessarily implies a theory, where theory here is understood (as we always do) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory, and the grammar associated or embedded with it, does nothing else than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows a common understanding about something to develop. In the end we then may say “yes, I can follow you!”
Describing is often not distinguished (properly) from explaining. Yet, in our context of choreostemically embedded language games it is neither mysterious nor difficult to do so. We may conceive of describing just as explicating something into the sayable; the element of cross-individual alignment is not part of it, or at least present in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present.
Describing pretends more or less that all three aspects accompanying the model aspect could be neglected, particularly however the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of that. Yet, even there it is not possible, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a “position” in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people, naively enough, still search for absolute justification, or truth, or at least regard such as a reasonable concept.
All this should not be understood as an attempt to deny description, or describing, as a useful category. Yet, we should be aware that the difference to explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life.
I hope this sheds some light on Wittgenstein’s claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well, if we remember his characterisation of grammar.
Both understanding and explaining are quite complicated, socially mediated processes; hence they unfold upon layers of milieus of mediality. Not only do both relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them; they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as it is provided by the choreostemic space. Talking about understanding as a practice is not possible without it.
Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not “pointing to” and hence does not consist of a single move. It is “getting pointed to”. Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways.
Either the named entity is used, that is, put into a functional context, or more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether it is merely material, living, or social.
The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a “social tool”, it is less about the expected chaining of signifiers by the other person. Persons cannot be interpreted as we interpret things or build signs from signals. Referring to a person means to accept the social game that comprises (i) mutual deontic assignments that develop into “roles”, including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared persistence for repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially.
The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means “to put something into common”; hence, we can regard “communication” as the driving, extending and public part of using sign systems. As a proposed language game, “functional communication” is nonsense, much like the utterance “soft stone”.
By means of the choreostemic space we also can see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.
At first glance, we could suspect that before any instantiation qua choreostemic performance we cannot know something positively for sure in a global manner, i.e. objectively, as is often meant to be expressed by the substantive “knowledge”. Due to that performance we have to interpret before we could know positively and objectively. The result is that we never can know anything for sure in a global manner. This holds even for transcendental items, that is, what Kant dubbed “pure reason”. Nevertheless, the language game “knowledge” has a well-defined significance.
“Knowledge” is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities could be thought of as being “pure”. Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because there is the necessity of a bodily rooted interpreting mechanism (associative structure). Things like “public space” as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.
We now can see easily why knowledge could not be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more important, “knowing {of, about, that}” always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can’t be transferred, since this would mean that we could speak about it outside of itself. Yet, that’s not possible, since it is in turn impossible to just pretend to follow a rule.
Given this impossibility we should dwell for a moment on the apparent gap it opens towards teaching. How to teach somebody something if knowledge can’t be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers, or co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the “templates” provided by it, the consciously accessible time horizon, both to the past and the future31, and so on. Common sense wrongly labels the resulting “setup” a “body of values”. More appropriately, we could call it grammatical dynamics. Teaching, then, is in some way more about the reconstruction of the equipment than about the agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.
Saying ‘I know’ means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which probably are the only ways to check whether somebody really knows. However, we cannot claim that we are aligned to a particular choreostemic dynamics. We only can believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From that also follows that knowledge is not just about facts, even if we would conceive of facts as a compound of fixed relations and fixed things.
The traditional concerns of epistemology, as the discipline that asks about the conditions of knowing and knowledge, must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Moreover, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of “knowledge as justified belief” puts them both onto the same stage. It then would have to be clarified what “justified” should mean, which in turn is not possible. Explicating “justifying” would need reference to concepts and models, or rather the confinement to a particular one: logic. Yet, knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.
8.2. Anthropological Mirrors
Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently mentioned [34] in his large work about the relations between anthropology and philosophy (KAV),
For more than 200 years, philosophy has been anthropologically determined. Yet philosophy has not investigated the relevance of this fact to any significant extent. (KAV15)32
Rölli agrees with Nietzsche regarding his critique of idealism.
“Nietzsche’s critique of idealism, which comes in many nuances, always targeting the philosophical self-misunderstanding of pure reason or pure concepts, is also directed against a certain conception of nature.” (KAV439)33
…where this rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are either due to religious romanticism or due to a serious misunderstanding of the Darwinian theory of natural evolution. In biological nature, there is only a blind tendency towards the preference of an intensified capability for generalization34. Since Kant, him included, and in some way already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or the nature of the human mind.
This is (at least) problematic for three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming (again) visible only today, notably after the Linguistic Turn as far as it regards non-analytical philosophy. Secondly, however, it is clear that the said influence implies, if it remains unreflected, a normative tie to empirical observations. This clearly represents a methodological shortfall. Thirdly, even if one accepted a certain link between anthropology and philosophy, the foundations taken from a “philosophy of nature”35 are so simplistic that they could hardly be regarded as viable.
This almost primitive image of a purposeful nature finally flowed into the functionalism of our days, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.
In the same passage that invokes Nietzsche’s critique, Rölli cites Friedrich Albert Lange [39]:
“The topic that we actually refer to can be denoted explicitly. It is, so to speak, the apple in the logical Fall of German philosophy subsequent to Kant: the relation between subject and object within knowledge.” (KAV443)36
Lange deliberately attests Kant—in contrast to the philosophers of German Idealism—to be clear about that relationship. For Kant, subject and object constitute themselves only as an amalgam; the pure, whatsoever it may be, has been claimed only by Hegel, Schelling and their epigones and inheritors. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell’s remark on Frege’s “Begriffsschrift” and his “all” quantifier, found its continuation in the Hilbert programme, and finally has been inscribed into the roots of mathematics by Gödel. Philosophies of “pureness” are not items of the past, though. Think about materialism, or about Agamben’s “aesthetics of pure means”, which Benjamin Morgan [39] correctly identified as the metaphysical scaffold of Agamben’s recent work.
Marc Rölli dedicates all of the 512 pages to the endeavor of destroying the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy that is based on language and life form (Lebenswelt in the Wittgensteinian sense). He concludes his work accordingly:
After all it may have become more clear that this pragmatism is not about a simple, naive pragmatism, but rather about a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38
Rölli’s main target is German Idealism. Yet, Hegelian philosophy is undeniably abundant not only on the European continent, where it lives on in the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, which are infected by Hegel as well. Significant traces of it can also be found in Germany’s society, in contemporary legal positivism and the oligarchy of political parties.
During the last 20 years or so, Hegelian positions have spread considerably also in Anglo-American philosophy and political theory. Think about Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can’t be taken in portions. It is totalitarian all through, because its main postulates, such as “absolute reason”, are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim to absoluteness comes the explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison; all of these denials turn into transcendental aprioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.
The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already earlier when we discussed the topological structure of the space and its a-locational “substance” (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We’d like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some way, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were determined completely on the material level, which we are surely not39, the choreostemic space proves the indeterminateness and openness of our mental life.
You already may have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of “Swiss neutrality”. Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like “collective intelligence”, provides a fruitful ground for any construction of transitions between choreostemic attractors.
Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind (“Philosophie des Geistes”). It transcends it, as it transcends any particular philosophical stance. It would be wrong as well to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some way, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, here we do not refer to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth (“Steigerungslogik”), as Rölli correctly remarks (KAV459).
A-human simply means that, as a conception, it is neither dependent on nor confined to the human Lebenswelt. (We again would like to stress that it represents neither a positively sayable universalism nor even a kind of universal procedural principle, and also that this “a-” should not be understood as “anti” or “opposed”, but simply as “being free of”.) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position immediately would be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche’s critique not only of German philosophy of the 19th century, but also of the natural sciences.
8.3. Simplicissimi
Rölli criticizes the uncritical adoption by philosophy in the 19th century of items taken from the scientific world view. Today, philosophy is still not secured against simplistic conceptions uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or the dogmatics in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notion of exception, or that of normality respectively.
There are myriads of references in the philosophy of mind invoking so-called mental states. Yet not only in the philosophy of mind can one find the state as a concept, but also in political theory, namely in Giorgio Agamben’s recent work, which builds heavily on the notion of the “state of exception”. The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first one can be derived from the theory of complex systems, the second one from language philosophy, and the third one from the choreostemic space.
In complex systems, the notion of a state is empty. What we can observe, subsequent to the application of some empiric modeling, is that complex systems exhibit meta-stability. It looks as if they are stable and trivial. Yet, what we could have learned mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren’t trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further “phenomenal” level, where the various levels of integration can’t be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we only can estimate the probability for stability. This, however, is clearly too weak to support the claim of “states”.
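The point about meta-stability and hindsight-only bifurcations can be made tangible with a toy system. The following sketch—our own illustration, not an argument from the literature—iterates the logistic map, probably the simplest system mimicking this behaviour; the parameter values are chosen arbitrarily.

```python
# The logistic map x -> r*x*(1-x) as a toy "complex system": below the
# bifurcation at r = 3 the trajectory settles into what looks like a
# stable "state"; beyond it, the very same description silently fails.

def tail_of_trajectory(r: float, x0: float = 0.2,
                       burn_in: int = 2000, keep: int = 8) -> list:
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):             # observe the long-run behaviour
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

print(tail_of_trajectory(2.8))  # one repeated value: looks like a "state"
print(tail_of_trajectory(3.2))  # two alternating values: "state" talk wobbles
print(tail_of_trajectory(3.9))  # no repetition at all: only flows, no state
```

Note that even here the “state” is nothing observed; it is assigned only after we have chosen a burn-in, a rounding, and a window of observation, that is, only after we have applied a model.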
Actually, from the perspective of language-oriented philosophy, the notion of a state is even empty for any dynamical system that is subject to open evolution (but probably even for trivial dynamic systems). A real system does not build “states”. There are only flows and memories. “State” is a concept, in particular an idealistic—or at least an idealizing—concept, that is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever it is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of “state” can’t be applied analytically, or as a condition in a linearly arranged argument. Saying that, we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least).
Yet, if one used it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a “state” is only assigned, i.e. as a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well be the missing training in mathematics.42
The third argument against the reasonability of the notion of “state” in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a “state” to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That “subject” is nothing else than a figurable trace in the choreostemic space. If one did such an assignment instead, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.
Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet, Giorgio Agamben indeed started to build a political theory around the notion of exception [37], which—at first sight strange enough—already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:
The state of exception “is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other.” In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: “The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible.”
It results in nothing else than disastrous consequences if the notion of the exception is applied to areas where normativity is relevant, e.g. in political theory. Throughout history there are many, many terrible examples of that. It is even problematic in engineering. We may even call it fully legitimized “negativity engineering”, as it establishes, completely unnecessarily, the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness; hence it also denies the primacy of interpretation. Machines that degenerate, and that would produce disasters on any malfunctioning, can’t be considered as being built smartly. In a setup that embraces indeterminateness, there is not even the possibility for disastrous fault. Instead, deviances are defined only with respect to the expectable, not against an apriori set, hence obscure, normality. If the deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing could be built in as a core property, not as “exception handling”.
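For the engineering side of this claim, a toy contrast may help—our own sketch with made-up data, not a recipe from the text. Both functions average sensor readings, some of which are malformed; the first presupposes normality and turns deviance into a halting fault, the second treats deviance as part of the expectable.

```python
# Two styles of handling deviant input (illustrative toy example).

readings = ["21.5", "22.1", "n/a", "21.9", ""]

def mean_exceptional(values):
    """Normality presupposed: a deviant reading is an abnormal fault."""
    total, n = 0.0, 0
    for v in values:
        try:
            total += float(v)
            n += 1
        except ValueError:
            raise RuntimeError(f"abnormal reading: {v!r}")  # halts everything
    return total / n

def mean_tolerant(values):
    """Deviance as the usual: malformed readings are expectable, not faults."""
    parsed = []
    for v in values:
        try:
            parsed.append(float(v))
        except ValueError:
            continue  # skip the deviant reading and keep running
    return sum(parsed) / len(parsed) if parsed else None

print(mean_tolerant(readings))      # 21.833... ; degrades gracefully
# mean_exceptional(readings)        # would abort on "n/a"
```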
Exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it. All three steps can be applied only in linear domains, where the whole depends on just very few parameters. For social mega-systems such as societies, it is nothing else than a methodological categorical illusion to apply the concept of the exception.
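The three steps just named can be spelled out in a few lines—again a hypothetical sketch of our own, a toy z-score detector on invented numbers, merely to show how much modeling and arbitrariness is packed into labeling an “exception”.

```python
# The three steps behind any "exception": (1) a model of normality,
# (2) a quantified deviation, (3) an arbitrary threshold for the label.
import statistics

def exceptions(data, threshold=2.0):
    mu = statistics.mean(data)        # step 1: model "normality" ...
    sigma = statistics.stdev(data)    # ... by just two parameters
    flagged = []
    for x in data:
        z = abs(x - mu) / sigma       # step 2: quantify the deviation
        if z > threshold:             # step 3: arbitrary cut-off
            flagged.append(x)
    return flagged

print(exceptions([10, 11, 9, 10, 10, 42]))  # -> [42]
```

Each step works only because the toy domain is linear and governed by two parameters; for a cultural body there is no analogue of mu and sigma to begin with.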
9. Critique of Paradoxically Conditioned Reason
Nothing could be more different from that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism always suffered from—or at least has been vulnerable to—the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position, or normativity. Probably even more significant, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of the institution or of the tradition43. Choreostemic figures are quite stable, since they relate to mentality qua population, which means that they are formed as a population of mental acts, or as mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in choreostemic space, to change into another attractor, or even to build up a new one.
In this section we will examine the structure of the ways we can use the choreostemic space. Naively put, we could ask for instance: how can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space?
The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities to arrange any melange of positivity or negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. Such traces we will not follow here. We regard them either as not focused enough or, for most of them, as being infected by realist ontology.
In more practical terms, this issue of positivity and negativity regards the way we deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only as far as it is defined against the non-common. In this respect, all of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas’ Other is infected by it.
Admittedly, at first glance it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange, in short, the Other, but also the alienated. Or likewise, to derive or develop a stance to the world that does not start from existence. Isn’t existence the only thing we can be sure about? And isn’t the external, the experienced, the only stable positivity we can think of? Here we shout a loud No! Nevertheless, we definitely do not deny the external either.
We just mentioned that the issue of justification is invoked by our interests here. This gives rise to the question of the relation of the choreostemic space to epistemology. We will return to this in the second half of this section.
Positivity. Negativity.
Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we first run into problems of justification, then into ethical problems. Setting the external, the existence, or the factual positive as primary, we neglect the primacy of interpretation. Hence, we can’t think about the positive as an instance. We have to think of it as a Differential.
The Differential is defined as an entirety, yet not instantiated. Its factuality is potential; hence its formal being is neither exhaustive nor limiting of its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable regarding its immediacy) bundled with a performance (which is open and just demonstrable as a matter of fact).
The concept of the choreosteme follows closely Deleuze’s idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A. The choreostemic space does not constitute a positively definable stance, since the space for it, the choreostemic space, is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. In order to provide an example which requires a similar approach, we may refer to the space of patterns as they are potentially generated by Turing-systems. The mechanics of Turing-patterns, their mechanism, is well-defined as well; it is given in its entirety, but the space of the patterns can’t be defined positively. Without deep interpretation there is nothing like a Turing-pattern. Maybe that’s one of the reasons why the hard sciences still have difficulties in dealing adequately with complexity.
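As a concrete handle on that example, here is a minimal 1-D reaction–diffusion sketch of the Gray–Scott type, a standard instance of a Turing-system. The code is our own illustration, and the parameter values are commonly used textbook values, not anything from the text; the point is that the mechanism is given in its entirety in a dozen lines, yet which pattern one sees—and that one sees a pattern at all—emerges only in running and interpreting the output.

```python
# A 1-D Gray-Scott reaction-diffusion toy: the mechanism is fully
# specified, but the space of resulting patterns cannot be listed
# positively; one has to run and interpret.
import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # assumed "classic" parameters
u, v = np.ones(n), np.zeros(n)
u[90:110], v[90:110] = 0.50, 0.25         # a local perturbation as seed

def laplace(a):
    # discrete Laplacian with periodic boundary
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplace(u) - uvv + F * (1 - u)
    v += Dv * laplace(v) + uvv - (F + k) * v

# crude textual rendering: cells above the mean concentration of v
print("".join("#" if x > v.mean() else "." for x in v))
```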
Besides the formal description of structure and mechanism of our space, there is nothing left about which one could speak or think any further. We just could proceed by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty as it is known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location, there seems to be no determinable figure either, at least none of which we could say that we could grasp it “directly”, or intuitively. Yet, figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of proposals/propositions (“Aussagefeld”), as Foucault named it [40].
The choreostemic space is not a negativity, though. It does not impose apriori determinable factual limits on a real situation, whether internal or external. It doesn’t even provide the possibility for an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, depending on the “vector”—actually, it is more a moving cloud of probabilities—one currently belongs to, or that one is currently establishing by one’s own performances.
It is the necessity of choice itself, appearing in the course of the instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite, we can conclude that there has been a preceding choice within an instantiation. Think about de Saussure’s structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:
In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud’s cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.
In more traditional terms one could say it is dependent on the “perspective”. Yet, the concept of “perspective” is fallacious here, at least insofar as it assumes a determinable standpoint. By means of the choreostemic space, we may replace the notion of perspectives by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to the “perspective”, or even a sequence of such, a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while the positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.)
Thus, the choreostemic space is immune to any attempt—should we say poison pill?—to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think about Hegel’s negativity, Marx’s rejection of it and proposal for a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movement of anti-capitalism or the global movement of anti-globalization. They all got—or still get—victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantitability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:
Given that suspending law only increases its violent activity, Agamben proposes that ‘deactivating’ law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)
The first question, of course, is why the heck Agamben thinks that law, that is: any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and grounding for any kind of negotiation. As Rölli noted in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and of the differential alike, and thus completely stuck in a luxuriating system of “anti” attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:
“What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.”
Is it really reasonable to demand a world where uses, i.e. actions, are not “contaminated” by law? Morgan continues:
In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. ‘Play’ names the unknowable end of ‘divine violence’.
Obviously, Agamben never realized the paradox concerning rule-following. Instead, he runs amok against his own prejudices. “Divine violence” is the violence of ignorance. Yet, abolishing knowledge does not help either, nor is it an admirable goal in itself. Like Derrida (another master of negativity) before him, in the end he demands that interpretation be stopped, completely and in every respect. Agamben provides us nothing else than just another modernist flavour of a philosophy of negativity that results in nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives no small amount of attention.
In the last statement we are going to cite from Morgan, we can see in what eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:
But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben’s analyses of aesthetic and legal judgement. In other words, ‘normality without a norm’, which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying ‘law without force or application’.
This Kantian formulation is fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard of the linguistic turn either. The unfortunate issue with Agamben’s writing is that it is considered both influential and pace-making.
So, should we reject negativity and turn to positivity? Rejecting negativity turns problematic only if it is taken as an attitude that stretches from the principle all the way down to the activity. Notably, the same is true for positivity. We need not get rid of either, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend them into the Differential that “precedes” both. While the former could be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology. The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which—unfortunately enough for positivists—can’t be grounded within a positive metaphysics. To justify, that is, to give “good reasons”, is a contradictio in adiecto, if it is understood in its logical or idealistic form.
Both negativity and positivity (in their representational instances) could work only if there is a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about “first reasons” or “justification”. This does not only apply to political theory or practice, it even holds for logic as a positively given structure. Abstractly, we can rewrite the concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein lies also the inherent limitation of any materialism, whether in its profane or its theistic form. By means of the choreostemic space we can see various ways out of this confined space. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empiric character of numbers. Or we could deny representationalism in numbers while trying to keep countability, which creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like the imaginary numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can’t even list here.

Yet, it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is not a number any more. It is not countable either in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empiric numbers and the infinite. We may count just the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined precisely as non-countability. Only by this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways, even the real numbers do not refer to the language game of countability, and all the more the irrational numbers don’t. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.
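The first of these moves, the mediatization of countability into probabilities, can be given a minimal illustration. The following sketch is our own and not part of the original argument: once a tally is normalized into a distribution, the result still supports expectations, but the exact count that produced it has lost its referent.

```python
# A minimal sketch (our own illustration): "mediatizing" an exact tally
# into a probability distribution. The names and data are hypothetical.
from collections import Counter

observations = ["red", "blue", "red", "green", "red", "blue"]

tally = Counter(observations)   # countable: {'red': 3, 'blue': 2, 'green': 1}
total = sum(tally.values())

# The mediatized form: relative frequencies instead of counts.
distribution = {k: v / total for k, v in tally.items()}

# The distribution still supports expectations ...
expected_word_length = sum(p * len(k) for k, p in distribution.items())

# ... but it is invariant under scaling of the sample: the question
# "how many, exactly?" has lost its referent.
print(distribution)          # {'red': 0.5, 'blue': 0.333..., 'green': 0.166...}
print(expected_word_length)  # ~3.67
```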
The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity for instantiation and renders the representationalist fallacy impossible; nevertheless, it allows us to map mental attitudes and cultural habits for comparative purposes. Yet, this mapping can’t be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or apriorisms respectively.
The choreostemic space does not separate apriori the individual from the collective forms of mentality. In describing mentality it is not limited to the sayable, hence it can’t be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call “choreostemic galaxies”. The critical issue of values, those typical representatives of uncritical aprioris, is thereby completely turned into a practical concern. Obviously, we can talk about “form” regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.
Rejecting representational positivity, that is, any positivity that we could speak of in a formal manner, is equivalent to the rejection of first reason as an aprioric instance. As we already proposed for representational positivity, the claim of a first reason as a point of departure that is never revisited again likewise results in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, “fixation of first principles” are not convincing, mainly because this does not allow for any coherence between games, which results in a strong relativity of principles. We just could not talk about the relationships between those “firstness games”. In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. Epperson does not become aware of the problems regarding the use of symbols in doing so. Wittgenstein once criticized the very same point in the Principia of Russell and Whitehead. Additionally, representational first principles are always transporters for ontological claims. Once we recognize that the world is NOT made from objects, but of relations organized, selected and projected by each individual through interpretation, such first principles face severe difficulties. Only naive realism allows for a frictionless use of first principles, yet for a price that is definitely too high.
We think that the way we dissolved the problem of first reason has several advantages as compared to Deleuze’s proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze’s main works “What is Philosophy?” [35] (WIP), “Empiricism and Subjectivity” [43], and his “Pure Immanence” [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This double standard can’t be resolved; transcendence should not be described by its target. Third, Deleuze’s distinction between the absolute plane of immanence and the “personal” one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves it completely opaque how to relate the two kinds of immanence to each other. Additionally, there is a potentially infinite number of “immanences”, implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself; at least as long as one conceives of immanence as something that could not be naturalized. This way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence “transcendent” immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second one is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze considers just concepts, or _Concepts, if we’d like to include the transcendental version as well. Several of those imply the plane of immanence, which can’t be described, which has no structure, and which is just implied by the factuality of concepts. Our choreostemic space moves this indeterminacy and openness into a “form” aspect in a non-representational, non-expressive space with the topology of a double differential. More important, however, is that we not only have a topology at our disposal that allows us to speak about that space without imposing any limitation; we also use three other foundational and irreducible elements to think it. The CS thus also brings immanence and transcendence into the same single structure.
In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by the two, together with all the respective pseudo-solutions, has been dissolved. This abandonment we achieved through the “Lagrangean principle”, that is, we replaced the constants—positivity and negativity respectively—by a procedure—the instantiation of the Differential—plus a different constant. Yet, this constant is itself not a finite replacement, i.e. a “constant” in the sense of an invariance. The “constant” is only a relative one: the orthoregulation, comprising habits, traditions and institutions.
Reason—or, as we would like to propose for its less anthropological character and better scalability, mentality—has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can’t be determined in advance of the performed mental proceedings (acts), which to many could appear somewhat paradoxical. Yet, it is not. The situation is quite similar to Wittgenstein’s transcendental logic, which also gets instantiated just by doing something, while the possibility for performance precedes that of logic.
Finally, there is of course the question whether there is any condition that we impose onto the choreosteme itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one, and above all a practicable one, or one that derives from the openness of the world. Ultimately, the POI is in turn a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like. In other words, the starting point of the choreostemic space, or of the philosophical attitude of the choreosteme, is openness: the insight that the world is far too generative to be comprehended in its entirety.
The fact that it is almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. Insofar as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can’t pursue here, though.
Mentality without Knowledge
Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, instantiations of our aspects, are key terms in the philosophy of science and epistemology. Moreover, we proposed that our approach brings with it a new image of thought. We also said that mental activities inscribe figures or attractors into that space. Since we are additionally interested in the issue of justification—we are trying to get rid of justifications—the question of the relation between the choreostemic space and epistemology is triggered.
The traditional primary topic of epistemology is knowledge and how we acquire it, particularly however the questions of, first, how to separate it from beliefs (in the common sense), and, second, how to secure it in a way that would possibly allow us to speak about truth. In a general account, epistemology is also about the conditions of knowledge.
Our position is pretty clear: the choreostemic space is something that is categorically different from episteme or epistemology. What are the reasons?
We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can’t be a part of the world. We can’t know anything for sure, neither regarding the local context nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet, knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as “logic” only approximately. Second, insisting on the application of logical truth values to actual contexts instead results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can’t even measure the reliability of knowledge, since this would mean having more knowledge about the fact than the local observations provide. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is both accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived. It keeps sticking to the wrong basic idea and as such is inferior to our concept of the abstract model. After all, any account of truth violates the fact that it is itself a language game.
Dropping the idea of truth we could already conclude that the choreostemic space is not about epistemology.
Well, one might say, OK, then it is an improved epistemology. Yet, this we would reject as well. The reason for that is a grammatical one. Knowledge in the meaning of epistemology is either about sayable or about demonstrable facts. If someone says “I know”, or if someone ascribes to another person “he knows”, or if a person performs well and in hindsight her performance is qualified as “based on intricate knowledge” or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can’t be separated from activity, or “enaction”, and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Moreover, it will be impossible to give a complete or analytical description of this enaction, because it is impossible to describe (=to explicate) the Form of Life in a self-contained manner.
In any case, however, knowledge is always, at least partially, about how to do something, even if it is about highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and made up from the second-order differential.
There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always “comprise” beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by “living” them.
In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called “semiotics”. As he did, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt for discerning different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we also may point to the fact that Peirce’s philosophy is not regarded as epistemology either.
Rejecting the characterization of the choreostemic space as an epistemological subject, we can now even better understand the contours of the notion of mentality. The “mental” can’t be considered a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise practices of speaking, about the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.
In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.
“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]
One of the transcendental aspects in the CS is concept, another is model. Both together provide the aspects of use, idea and reference; that is, there is nothing internal or external any more. It simply depends on the purpose of the description, or the kind of report we want to create about the mental, whether we talk about the mental in an internalist or in an externalist way, whether we talk about acts, concepts, signs, or models. Regardless of what we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.
10. Conclusion
It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, concerning several domains, namely politics, technology, social life, behavioural science and, last but not least, brain research. We saw the end of the Cold War, which has been signalling an unrooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying “scopic”44 media [46, 47]. The “scopics” spurred the so-called globalization, which so far has worked much more in favour of the recognition of diversity than it has levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals has manifested, pushing back the merely positivist and dissecting description of behaviour. As one of the most salient examples may serve the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This presupposes not only a true understanding of causality, but even its reflected use as a game in probabilistic spaces.
Let us distil three modes or forms here: (i) animal culture, (ii) the machine-becoming and of course (iii) the human life forms in the age of intensified mediatization. All three modes must be considered “novel” ones, for one reason or another. We won’t go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain’s relation to the world, or even worse, which imposes analytical, logical or functional perspectives onto it, must be considered seriously defective. This still applies to large parts of the mainstream in the philosophy of mind (and even ethics).
In this essay we argued for a new Image of Thought that is independent of the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented through the choreostemic space. This space is dynamic and active and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity is a major ingredient. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind.
One of the main points about the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts of the mind that claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient to talk about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine” either. The choreostemic space defies (and helps to defy) any empirical, and so also anthropological, myopia through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism, as well as against arbitrariness or relativism. Nor could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects, that is, _Model, _Mediality, _Concept, and _Virtuality, or it takes the form of the Differential, which could be considered a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes:
Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«
Our choreostemic space also provides the answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get completely rid of any condition while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” until “What is Philosophy?”, since he never made this move.
Basically, the CS supports Wittgenstein’s rejection of materialism, which has experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [49]:
It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186)
This support should not surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. The CS also supports his famous remark about meaning, according to which, for a large class of cases, the meaning of a word is its use in the language (PI §43).
It is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by the Concept and by Mediality. Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc., nor the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they do not get interpreted by applying some kind of model.
The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [50]. Such a usage of the word “mind” not only irrevocably implies that it is a localizable entity; it also claims its conceptual separatedness. Such a conceptualization of the mind is illusionary. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction, and the fact that a non-formal, open language, implying randolations to the community, is mandatory to deal with concepts.
One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, concerns the status of that space. What “is” it? How could it affect actual thought? Since we have been starting even with mathematical concepts like space, mappings, topology, or differential, and since our arguments frequently invoke the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject.
Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it as a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed a kind of basic setup. Hacker writes:
But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science.
Where we definitely disagree is at the point about metaphysics. We refute the view that metaphysics is about the objective, language-independent nature of the world; understood as such, we indeed would reject metaphysics. An example of this kind of thinking is provided by the writing of Whitehead. It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent from the environment—as radical constructivism (Varela & Co) does—nor do we claim that there is no external world around us that is independent from our perceptions and constructions. Such would just be belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of an “objective reality” is thus childish.
More important, however: we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms”) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously we refer strongly to transcendental, that is, metaphysical categories. At the same time we also propose, however, that there are manifolds of instances of those transcendental categories.
The choreostemic space describes a mechanism. In that it resembles the science of biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat posed by any all too quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”?
Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all the conditions and effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, though only in a fully conscious act, and this description is a fully non-representational one. In this way it overcomes not only the Cartesian dualism about consciousness; in fact, it is another way to criticise the distinction between interiority and exteriority.
For one part we agree with Wittgenstein’s critique (see also the work of P.M.S. Hacker about that), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empiric concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently reject the idea that ontology can be about the external world, as this again would introduce such a separation. Quite to the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.
Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology.
Summarizing the issue we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified.
Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit to the mental capacity of machines? If yes, which kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticizing technology, but also instantiate a primary negativity.
Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point is that different figures, representing different Lebensformen (Forms of Life) that are probably even incommensurable with each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation.
Let us, for instance, start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as applies to any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad hoc recombination of conceptual principles as demonstrated by Douglas Hofstadter. Letting a robot range around freely also provokes the first tiny steps away from functionalism, albeit the behavioural Bauplan of the insects (Arthropoda) demonstrates that this does not by itself make the evolutionary path towards advanced mental capabilities a necessity.
The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course without reducing these concepts to positive definite or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, reference) to a particular image of thought and its use. Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would have it (if they argued along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely; something they themselves probably would not have endorsed.
Where is the limit of machines, then?
I guess any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: in the capability to feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity.
We started the choreostemic space as a framework to talk about thinking, or more generally about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.
1. This piece is thought of as a close relative to Deleuze’s Difference & Repetition (D&R)[1]. Think of it as a satellite of it, whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.
2. Deleuze of course belongs to them, but so does Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule-following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics) that resists the reference to monolithically set first principles, such as, for instance, those in John Rawls’ “Theory of Justice”. The work of these philosophers also provides examples of how to turn paradoxicality productive without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are neither able to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think of the “trace”, grm. “Spur”, in Derrida).
3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They all mix up—or collide—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the apriori self-declared “gaming pragmatics”, we could also say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.
4. DR 242 eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.
5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.
7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].
8. This includes the usage of concepts like virtuality, differential, problematic field, the rejection of the primacy of identity and, closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logics and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely aposteriori to any genesis, that is, to self-referential performance.
9. I would like to recommend taking a look at the second part of part IV in D&R, and maybe also at the concluding chapter therein (download it here).
10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally the idea proved to be so strong that now there is some dissent about the role and the usage of the concept.
11. For belief revision as described by others, see the overview @ Stanford and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.
12. By symbolism we mean the belief that symbols are the primary and apriori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables, because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.
13. Miriam Meckel, communication researcher at the university of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a blend of Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, as well as the people using the software. She follows exactly the pseudo-romantic separation between nature and the artificial.
Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns, Rowohlt 2011.
14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot even describe. It can just show.
15. This also concerns the issue of cross-culturality.
16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication between the need for a space and the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to the Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in “What is Philosophy?” with the “planes of immanence”, or in the “Fold”) where it seems that he had an inkling of a different conception of space.
17. For the role of „elements“ please see the article about „Elementarization“.
18. Vera Bühlmann [8]: „Insbesondere wird eine Neu-Bestimmung des aristotelischen Verhältnisses von Virtualität und Aktualität entwickelt, unter dem Gesichtspunkt, dass im Konzept des Virtuellen – in aller Kürze formuliert – das Problem struktureller Unendlichkeit auf das Problem der zeichentheoretischen Referenz trifft.“
19. which is also a leading topic of our collection of essays here.
20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler
21. cf. G.C. Tholen [7], V.Bühlmann [8].
22. see the chapter about machinic platonism.
23. Actually, Augustine instrumentalises the discovered difficulty to propose the impossibility of understanding God’s creation.
24. It is an „ancestry“ only with respect to the course in time, as the result of a process, not however in terms of structure, morphology etc.
25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];
26. Note that in terms of abstract evolutionary theory rugged fitness landscapes enforce specialisation, but also bring along an increased risk of extinction for the whole species. Flat fitness landscapes, on the other hand, allow for great diversity (cf. the sketch below). Of course the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense, it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxicality is not by chance, yet it has not been discovered as an issue in evolutionary theory.
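The contrast between rugged and flat landscapes drawn in this note can be made tangible with Kauffman’s NK model, a standard formalization that the note itself does not mention. The following sketch is our own illustration; it counts the local optima of a small random landscape as the epistatic coupling K is raised.

```python
# A sketch of flat vs. rugged fitness landscapes via Kauffman's NK model
# (our own illustration; not part of the note). K tunes epistatic coupling:
# K = 0 yields a smooth, additive landscape with (generically) one optimum,
# K = N-1 a maximally rugged one with many local optima.
import itertools
import random

def nk_fitness(N, K, seed=0):
    rng = random.Random(seed)
    table = {}  # lazily filled random fitness contributions
    def f(genome):
        total = 0.0
        for i in range(N):
            # locus i contributes depending on itself and K cyclic neighbours
            key = (i, tuple(genome[(i + j) % N] for j in range(K + 1)))
            if key not in table:
                table[key] = rng.random()
            total += table[key]
        return total / N
    return f

def count_local_optima(N, K):
    f = nk_fitness(N, K)
    optima = 0
    for genome in itertools.product((0, 1), repeat=N):
        fit = f(genome)
        neighbours = (genome[:i] + (1 - genome[i],) + genome[i + 1:]
                      for i in range(N))
        if all(fit >= f(nb) for nb in neighbours):
            optima += 1
    return optima

for K in (0, 2, 7):
    # the number of local optima grows with K: the landscape gets rugged
    print(K, count_local_optima(N=8, K=K))
```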
27. Of course I know that there are important differences between verbs and substantives, which we may level out in our context without losing too much.
28. In many societies, believing has been thought to be tied to religion, to the rituals around the belief in God(s). Since the Renaissance, with upcoming scientism and the profanisation of societies, religion and science have established a sort of replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clerics. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet, it is also quite mistaken, maybe itself provoked by overdone idealisation, since neither can the cleric make his day without models nor the scientist his without beliefs.
29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language game and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about theory of theory.
30. Of course, we do not claim to cover completely the relation between experiments, experience and observation on the one side and their theoretical account on the other. We just would like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in the professional or the “everyday” type of science. For instance, much could be said in this regard about the path of decoherence from information and causality. Both aspects, the decoherence and the flip from intensifying modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts.
When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], then he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models, and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously had the feeling of a strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He can’t surpass that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism.
Radder’s account is a quite recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historizing description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] and Karin Knorr Cetina [10] are representatives of the various shades of constructivism, whether individually shaped or embedded into a community as a phenomenon, which also can’t say anything about concepts as categories. A screening of the Journal of Applied Measurement did not reveal any significantly different items.
Thus, so far philosophy of science, sociology and history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as _Models or _Concepts.
31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated this on the formal level by means of a computer experiment [33] (see the sketch below). He was among the first to propose game theory as a means to explain the choice of strategies between interactants.
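As a minimal illustration of Axelrod’s point, consider an iterated prisoner’s dilemma. The sketch below is our own, kept deliberately simple, with the canonical payoff values and only two textbook strategies; none of this is taken from the essay itself. The persistence of the relation (the number of rounds) decides whether reciprocity pays.

```python
# A minimal iterated prisoner's dilemma in the spirit of Axelrod's
# tournaments [33]. Payoffs are the canonical (T=5, R=3, P=1, S=0) values.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then reciprocate the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# One-shot interaction: defection dominates.
print(play(tit_for_tat, always_defect, rounds=1))      # (0, 5)
# Persistent relations: mutual reciprocity (300 each) dwarfs
# mutual defection (100 each), even though the defector still
# narrowly "wins" its head-to-head encounter (99 vs. 104).
print(play(tit_for_tat, tit_for_tat, rounds=100))      # (300, 300)
print(play(always_defect, always_defect, rounds=100))  # (100, 100)
print(play(tit_for_tat, always_defect, rounds=100))    # (99, 104)
```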
32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“
33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439)
34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity. Scarcity, however, is inevitably induced under the condition of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In the morphology of biological specimens this is well known as “Überformung”. For more details about evolution and generalization please see this.
35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Both “kinds” of philosophy are not possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), mysticism or theism and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalist fallacy with serious gaps regarding the technique of reflection. Think about Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain.
Nowadays it must be clear that philosophy before the reflection on the role of language, or more generally, on the role of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here) before it can be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive philosophy as a technique of asking about the conditionability of the possibility to reflect. Hence, Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.
36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“
37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to the difference, though. I think it should be read as “divergence and differential”.
38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“
39. As scientific facts, quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as the finiteness of natural processes, even if there should be something like “natural laws”.
40. See the article about the structure of comparison.
41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].
42. Usually, philosophers are trained only in logics, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.
43. A complete new theory of governmentality and sovereignty would be possible here.
44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”: looking, viewing). Today, we are not just immersed in them, but we deliberately choose and search for them. The change of perspective is thought to be manifold, contracting space and time. This, however, is not quite typical of the new media.
45. Here we refer to our extended view of “information” that goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.
46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49] as the 14th series of paradoxes.
47. …which is not quite surprising, since we developed the choreostemic space together.
• [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlone Press, 1994 [1968].
• [2] Ludwig Wittgenstein, Philosophical Investigations.
• [3] Wilhelm Vossenkuhl. Die Möglichkeit des Guten. Beck, München 2006.
• [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (Hrsg.), Rationalität. Suhrkamp, Frankfurt 1984.
• [5] Alan Turing. Chemical Basis of Morphogenesis.
• [6] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. In: Vera Bühlmann, Printed Physics, Springer New York 2012, forthcoming.
• [7] Georg Christoph Tholen. Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
• [8] Vera Bühlmann. Inhabiting media : Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. available online, summary (in German language) here.
• [9] Bruno Latour,
• [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
• [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. In: Paul Hoyningen-Huene, & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
• [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
• [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
• [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences) SUNY Press, New York 1995.
• [15] Robert Brandom, Making it Explicit.
• [16] C.S. Peirce, var.
• [17] Umberto Eco,
• [18] Helmut Pape, var.
• [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154, in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
• [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge, University of Chicago Press, Chicago, 1938.
• [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
• [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
• [23] Judea Pearl, T.S. Verma (1991). A Theory of Inferred Causation.
• [24] Thomas S. Kuhn, Scientific Revolutions
• [25] Nancy Cartwright. var.
• [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
• [27] Peter M. Stephan Hacker, “Of the ontology of belief”, in: Mark Siebel, Mark Textor (eds.), Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
• [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy Of Scientific Experimentation. Univ of Pittsburgh 2003, pp.152-173
• [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
• [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
• [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
• [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
• [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
• [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
• [35] Deleuze, Guattari, What is Philosophy?
• [36] Hilary Putnam, Representation and Reality.
• [37] Giorgio Agamben, State of Exception. University of Chicago Press, Chicago 2005.
• [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
• [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. available online @ zeno.org.
• [40] Michel Foucault, Archaeology of Knowledge.
• [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN: http://ssrn.com/abstract=975374
• [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. in process, available online; mirror
• [43] Gilles Deleuze, Empiricism and Subjectivity. An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
• [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
• [45] Isabelle Peschard
• [46] Knorr Cetina, Karin (2009): The Synthetic Situation: Interactionism for a Global World. In: Symbolic Interaction, 32 (1), S. 61-87.
• [47] Knorr Cetina, Karin (2012): Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. In: Andreas Hepp & Friedrich Krotz (eds.): Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. Wiesbaden: VS Verlag, S. 167-195.
• [48] Vera Bühlmann, “Serialization, Linearization, Modelling” (First Deleuze Conference, Cardiff 2008); “Gilles Deleuze as a Materialist of Ideality” (lecture held at the Philosophy Visiting Speakers Series, Duquesne University, Pittsburgh 2010).
• [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
• [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought, Basil Blackwell, Oxford 1986.
• [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006. mirror
• [52] Caroline Lyon, Chrystopher L Nehaniv, J Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236)
• [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again”, in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
• [54] Hilary Putnam, The Meaning of “Meaning”, 1976.
May 17, 2012
In the late 1980s there was a funny, or strange, if you like, discussion in the German public about a particular influence of the English language on the German language. That discussion not only got teachers in higher education going; even „Der Spiegel“, Germany’s (still) leading weekly news magazine, damned the respective „anglicism“. What I am talking about here concerns the attitude towards „sense“. At that time, a good 20 years ago, it was considered impossible to say „dies macht Sinn“, engl. „this makes sense“. Speakers of German at that time understood the “make” as “to produce”. Instead, one was told, the correct phrase had to be „dies ergibt Sinn“, in a literal, but impossible translation something like „this yields sense“, or even „dies hat Sinn“, in a literal, but again wrong and impossible translation, „this has sense“. These former ways of referring to the notion of „sense“ feel awkward to many (most?) speakers of German today. Nowadays, the English version of the meaning of the phrase has replaced the old German one, and one can even find in the „Spiegel“ the analogue of “making” sense.
Well, the issue here is not just one of historical linguistics or of style. The differences that we can observe here are deeply buried in the structure of the respective languages. It is hard to say whether such idioms in the German language are due to the history of German Idealism, or whether this particular philosophical stance developed on the basis of the structures in the language. Perhaps a bit of both, one could say from a Wittgensteinian point of view. Anyway, we may relate such differences in “contemporary” language to philosophical positions.
It is certainly by no means an exaggeration to conclude that cultures differ significantly in what their languages allow to be expressed. Such a thing as an “exact” translation is not possible beyond trivial texts or a use of language that is very close to physical action. Philosophically, we may assign a scale, or a measure, to describe the differences mentioned above in probabilistic terms, and this measure spans between pragmatism and idealism. This contrast also deeply influences philosophy itself. Any kind of philosophy comes in those two shades (at least), often expressed or denoted by the attributes „continental“ and „anglo-american“. I think these labels just hide the relevant properties. This contrast of course applies to the reading of idealistic or pragmatic philosophers itself. It really makes a difference (1980s German… „it is a difference“) whether a native English-speaking philosopher reads Hegel, or a German native; whether a German native is reading Peirce, or an American; whether it is Quine who conducts research in logic, or Carnap. The story quickly complicates if we take into consideration French philosophy and its relation to Heidegger, or the reading of modern French philosophers in contemporary German-speaking philosophy (which is almost completely absent).1
And it becomes even more complicated, if not complex and chaotic, if we consider the various scientific sub-cultures as particular forms of life, formed by and forming their own languages. In this way it may well seem rather impossible—at least, one feels tempted to think so—to understand Descartes, Leibniz, Aristotle, or even the pre-Socratics, not to speak of the Cro-Magnon culture2, though it is probably more appropriate to reframe the concept of understanding. After all, it may itself be infected by idealism.
In the chapters to come you may expect the following sections. As we did before, we’ll try to go beyond the mere technical description, providing the historical trace and the wider conceptual frame:
A Shift of Perspective
Here, I need this reference to the relativity introduced in—or by—language to highlight a particular issue. The issue concerns a shift in preference, from the atom, the point, from matter, substance, essence and metaphysical independence, towards the relation and its dynamic form, the transformation. This shift concerns some basic relationships of the weave that we call “Lebensform” (form of life), including the attitude towards those empiric issues that we will deal with in a technical manner later in this essay, namely the transformation of “data”. There are, of course, almost countless aspects of the topos of transformation, such as evolutionary theory, the issue of development, or, in the more abstract domains, mathematical category theory. In some way or another we have already dealt with these earlier (for category theory, for evolutionary theory). These aspects of the concept of transformation will not play a role here.
In philosophical terms, the described difference between the German and the English language, and the change of the respective German idiom, marks the transition from idealism to pragmatism. This corresponds to the transition from a philosophy of primal identity to one where difference is transcendental. In the same vein, we could also set up the contrast between logical atomism and the event as philosophical topoi, or between favoring existential approaches and ontology as against epistemology. Even more remarkably, we also find an opposing orientation regarding time. While idealism, materialism, positivism or existentialism (and all similar attitudes) are heading backwards in time, and only backwards, pragmatism and, more generally, a philosophy of events and transformation is heading forward, and only forward. It marks the difference between settlement (in Heideggerian terms „Fest-Stellen“, in English something like „fixing at a location“, putting something into the „Gestell“3) and anticipation. Settlements are reflected by laws of nature in which time does not—and shall not—play a significant role. All physical laws, and almost all theories in contemporary physics, are symmetric with respect to time. The “law perspective” quite obviously blinds us to the concept of context. Yet, being blinded to context also makes it impossible to refer to information in an adequate manner.
In contrast, within a framework that is truly based on the primacy of interpretation and thus following the anticipatory paradigm, it does not make sense to talk about “laws”. Notably, issues like the “problem” of induction exist only in the framework of the static perspective of idealism and positivism.
It is important to understand that these attitudes are far from being just “academic” distinctions. There are profound effects to be found on the level of empiric activity, in how data are handled and with which kinds of methods. Furthermore, they can’t be “mixed” once one of them has been chosen. Although we may switch between them in a sequential manner, across time or across domains, we can’t practice them synchronously, as the whole setup of the form of life is affected. Of course, we do not want to rate one of them as the “best”; we just want to make clear that there are particular consequences of that basic choice.
Towards the Relational Perspective
As late as 1991, Robert Rosen’s work about „Relational Biology“ was anything but obvious [1]. As a mathematician, Rosen was interested in the problem of finding a proper way to represent living systems by formal means. As a result of this research, he strongly advocated the “relational” perspective. He identifies Nicolas Rashevsky, who first mentioned it around 1935, as its originator. It really sounds strange that relational biology had to be (re-)invented. What else than relations could be important in biology? Yet, still today atomistic thinking is quite abundant; think alone of the reductionist approaches in genetics (which fortunately got seriously attacked meanwhile4). Or think about the still prevailing helplessness in various domains to conceive appropriately of complexity (see our discussion of this here). Being aware of relations means that the world is not conceived as made from items that are described by inputs and outputs with some analytics, or say deterministics, in between. Only of such items could it be said that they “function”. The relational perspective abolishes the possibility of reducing real “systems” to “functions”.
As is already indicated by the appearance of Rashevsky, there is, of course, a historical trace for this shift, a kind of soil emerging from intellectual sediments.5 While the 19th century could be characterized by the topos of the population (of atoms)—cf. the line from Laplace and Carnot to Darwin and Boltzmann—we can observe a spawning awareness of the relation in the 20th century. Wittgenstein’s Tractatus started to oppose Frege and has always been in stark contrast to logical positivism; it was then accompanied by Zermelo (“axiom” of choice6), Rashevsky (relational biology), Turing (morphogenesis in complex systems), McLuhan (media theory), String Theory in physics, Foucault (field of propositions), and Deleuze (transcendental difference). Comparing Habermas and Luhmann on the one side—we may label their position idealistic functionalism—with Sellars and Brandom on the other—who have been digging into the pragmatics of the relation as it is present in humans and their culture—we find the same kind of difference. We could also include Gestalt psychology as a kind of precursor to the party of “relationalists,” mathematical category theory (as opposed to set theory), and some strains of the behavioral sciences. Researchers like Ekman & Scherer (FACS), Kummer (sociality expressed as dynamics in relative positions), or Colmenares (play) focused on the relation itself, going far beyond the implicit reference to the relation as a secondary quality. We may add David Shane7 for architecture and Clarke or Latour8 for sociology. Of course, there are many, many other proponents who helped to grow the topos of the relation; yet, even without a detailed study we may guess that, compared to the main streams, they still remain comparatively few.
These differences should not be underestimated in the fields of information science, computer science, data analysis, or machine-based learning and episteme. It makes a great difference whether one bases the design of an architecture, or the design of use, on the concept of interfaces, most often defined as a location of full control, notably in both directions, or on the concept of behavioral surfaces.9 In the field of empiric activities, that is, modeling in its wide sense, it yields very different setups and consequences whether we start with the assumption of independence between our observables or between our observations, or whether we start with no assumptions about the dependency between observables or observations, respectively. The latter is clearly the preferable choice in terms of intellectual soundness. Even if we stick to the first of the two alternatives, we should NOT use methods that work only if that assumption is satisfied. (It is some kind of a mystery that people believe that doing so could be called science.) The reason is pretty simple. We do not know anything about the dependency structures in the data before we have finished modeling. It would inevitably result in a petitio principii if we put “independence” into the analysis, wrapped into the properties of the methods. We would just find… guess what. After grinding facts—in the Wittgensteinian sense understood as relationalities—into empiristic dust, we will not be able to find any meaningful relation at all.
Positioning Transformation (again)
Similarly, if we treat data as a “true” mapping of an outside “reality”, as “givens” that are eventually distorted a bit by more or less noise, we will never find multiplicity in the representations that we could derive from modeling, simply because it would contradict the prejudice. We also would not recognize all the possible roles of transformation in modeling. A measurement device acts as a filter10, and as such it does not differ from any analytic transformation of the data. From the perspective of the associative part of modeling, where the data are mapped to desired outcomes or decisions, “raw” data are simply not distinguishable from “transformed” data, unless the treatment itself were encoded as data as well. Correspondingly, we may consider any data transformation by algorithmic means as an additional measurement device, responding to particular qualities in the observations on its own. It is this equivalence that allows for the change from the linear to a circular and even a self-referential arrangement of empiric activities. Long-term adaptation—I would say any adaptation at all—is based on such a circular arrangement. The only thing we had to change to gain these new possibilities was to drop the “passivist” representationalist realism11.
Usually, the transformation of data is considered as an issue that is a function of discernibility as an abstract property of data (yet, people don’t talk like that; it’s our way of speaking here). Today, the respective aphorism coined by Bateson has already become proverbial, despite its simplistic shape: information is the difference that makes the difference. Depending on the context in which data are handled, this potential discernibility is addressed in different ways. Let us distinguish three such contexts: (i) data warehousing, (ii) statistics, and (iii) learning as an epistemic activity.
In Data Warehousing one is usually faced with a large range of different data sources and data sinks, or consumers, where the difference between these sources and sinks simply relates to the different technologies and formats of databases. The warehousing tool should “transform” the data such that they can be used in the intended manner on the side of the sinks. The storage of the raw data as measured from the business processes, and the efforts to provide any view onto these data, have to satisfy two conditions (in the current paradigm): they have to be neutral—data should not be altered beyond the correction of obvious errors—and their performance, simply in terms of speed, has to be scalable, if not even independent of the data load. The activities in Data Warehousing are often circumscribed as “Extract, Transform, Load”, abbreviated ETL. There are many, often large, software solutions for this task, commercial ones and open source (e.g. Talend). The effect of DWH is to disclose the potential for an arbitrary and quickly served perspective onto the data, where “perspective” means just re-arranged columns and records from the database. Apart from cleaning and simple arithmetic operations, the individual bits of data themselves remain largely unchanged.
In statistics, transformations are applied in order to satisfy the conditions for particular methods. In other words, the data are changed in order to enhance discernibility. Most popular is the log-transformation, which shifts the mode of a distribution towards the larger values. Two different small values that consequently are located nearby are separated better after a log-transformation; hence it is feasible to apply the log-transformation to data whose distribution has its mode at small values. Other transformations aim at a particular distribution, such as the z-score, or Fisher’s z-transformation. Interestingly, there is a further class of powerful transformations that is usually not conceived as such: residuals. Residuals are defined as the deviation of the data from a particular model; in linear regression it is the vertical distance to the regression line (whose squares are minimized by the fit).
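As a minimal sketch (in Python; all function names are our own, purely illustrative), the transformations just mentioned might look like this:

    import numpy as np

    def log_transform(x, offset=1.0):
        # The offset keeps zero and small positive values defined;
        # the log separates small values that would otherwise clump together.
        return np.log(np.asarray(x, dtype=float) + offset)

    def z_score(x):
        # Rescale to zero mean and unit standard deviation.
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()

    def residuals_linear(x, y):
        # Residuals as deviations from a fitted straight line y = a*x + b.
        a, b = np.polyfit(x, y, deg=1)
        return np.asarray(y, dtype=float) - (a * np.asarray(x, dtype=float) + b)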
The concept, however, can be extended to those data which do not “follow” the investigated model. The analysis of residuals has two aspects, a formal one and an informal one. Formally, it is used as a complex test of whether the investigated model fits or whether it does not. The residuals should not show any evident “structure”. That’s it. There is no institutional way back to the level of the investigated model; there are no rules about that which could be negotiated in a community yet to be established. The statistical framework is a linear one, which can be seen as a heritage of positivism. It is explicitly forbidden to “optimize” a correlation by multiple actualization. Yet, informally the residuals may give hints on how to change the basic idea as represented by the model. Here we find a circular setup, where the strategy is to remove any rule-based regularity, i.e. discernibility, from the data.
The effect of this circular arrangement takes place completely in the practicing human, as a kind of refinement. It can’t be found anywhere in the methodological procedure itself in a rule-based form. This brings us to the third area, epistemic learning.
In epistemic learning, any of the potentially significant signals should be rendered in such a way as to allow for an optimized mapping towards a registered outcome. Such outcomes often come as binary (dual) values, or as a small group of ordinal values in the case of multi-constraint, multi-target optimization. In epistemic learning we thus find the separation of transformation and association in its most prominent form, despite the fact that data warehousing and statistics are also intended to be used for enhancing decisions. Yet, their linearity simply does not allow for any kind of institutionalized learning.
This arbitrary restriction to the linear methodological approach in formal epistemic activities results in two related and quite unfavorable effects: first, the shamanism of “data exploration”, and second, the infamous hell of methods. One can indeed find thousands, if not tens of thousands, of research or engineering articles trying to justify a particular new method as the most appropriate one for a particular purpose. These methods themselves, however, are never identified as a „transformation“. Authors are all struggling for the “best” method, the whole community neglecting the possibility—and the potential—of combining different methods after shaping them as transformations.
The laborious and never-ending training necessary to choose from the huge number of possible methods is then called methodology… The situation is almost paradoxical. First, the methods are claimed to tell something about the world, although this is not possible at all, not just because those methods are analytic. It is an idealistic hope, which had already been abolished by Hume. Above all, only analytic methods are considered to be scientific. Then, given the large population of methods, the choice of a particular one becomes aleatory, which renders the whole activity into a deeply non-scientific one. Additionally, it is governed by the features of some software, or the skills of the user of such software, not by a conceptual stance.
Now remember that any method is also a specific filter. Obviously, nothing can be known about the beneficiality of a particular method before the prediction that is based on the respective model has been validated. This simple insight renders “data exploration” meaningless. It can only play its role within linear empirical frameworks, which are inappropriate anyway. Data exploration is suggested to be done “intuitively”, often using methods of visualization. Yet, those methods are severely restricted with regard to the graspable dimensionality. No more than 6 to 8 dimensions can be “visualized” at once. Compare this to the 2^n (n: number of variables) possible models—for a modest n=30 that is already about 10^9—and you immediately see the problem. Otherwise, the only effect of visualization is just a primitive form of clustering. Additionally, visual inputs are images above all, and as images they can’t play a well-defined epistemological role.12
Complementary to the non-concept of “exploring” data13, and equally misconceived, is the notion of “preparing” data. At least, it must be rated as misconceived as far as it comprises transformations beyond error correction and arranging data into tables. The reason is the same: we can’t know whether a particular “cleansing” will enhance the predictive power of the model—in other words, whether it comprises potential information that supports the intended discernibility—before the model has been built. There is no possibility to decide which variables to include before having finished the modeling. In some contexts the information accessible through a particular variable could be relevant or even important. Yet, if we conceive transformations as preliminary hypotheses, we can’t call them “preparation” any more. “Preparation” for what? For proving the petitio principii? Certainly the peak of all preparatory nonsense is the “imputation” of missing values.
Dorian Pyle [11] calls such introduced variables “pseudo variables”; others call them “latent” or even “hidden” variables.14 Any of these labels is inappropriate, since the transformation is nothing else than a measurement device. Introduced variables are just variables, nothing else.
Indeed, these labels are reliable markers: whenever you meet a book or article dealing with data exploration, data preparation, the “problem” of selecting a method, or, likewise, selecting an architecture within a meta-method like Artificial Neural Networks, you can know for sure that the author is not really interested in learning and reliable predictions. (Or that he or she is not able to distinguish analysis from construction.)
In epistemic learning the handling of residuals is somewhat inverse to their treatment in statistics, again as a result of the conceptual difference between the linear and the circular approach. In statistics one tries to prove that the model—say, the transformation—removes all the structure from the data, such that the remaining variation is pure white noise. Unfortunately, there are two drawbacks here. First, one has to define the model before removing the noise and before checking the predictive power. Second, the test for any possibly remaining structure again takes place within the atomistic framework.
In learning we are interested in the opposite. We are looking for such transformations as remove the noise in a multi-variate manner, such that the signal-to-noise ratio is strongly enhanced, perhaps even to the proto-symbolic level. Only after the de-noising achieved by the learning process, that is, after a successful validation of the predictive model, is the structure then described for the (almost) noise-free data segment15, as an expression that is complementary to the predictive model.
In our opinion an appropriate approach would actualize as an instance of epistemic learning that is characterized by
• conceiving any method as transformation;
• conceiving measurement as an instance of transformation;
• conceiving any kind of transformation as a hypothesis about the “space of expressibility” (see next section), or, similarly, about the finally selected model;
• the separation of transformation and association;
• the circular arrangement of transformation and association.
The Abstract Perspective
We now have to take a brief look at the mechanics of transformations in the domain of epistemic activities.16 For doing this, we need a proper perspective. As such we choose the notion of space. Yet, we would like to emphasize that this space is not necessarily Euclidean, i.e. flat, or open like the Cartesian space, i.e. with quantities running to infinity. Also, dimensions need not be thought of as “independent”, i.e. orthogonal to each other. Distance measures need to be defined only locally, yet without implying ideal continuity. There might be a certain kind of “graininess”, defined by a distance D, below which the space is not defined. The space may even contain “bubbles” of lower dimensionality. So, it is indeed a very general notion of “space”.
Observations shall be represented as “points” in this space. Since these “points” are not independent of the efforts of the observer, they are not dimensionless. To put it more precisely, they are like small “clouds” that are best described as probability densities for “finding” a particular observation. Of course, this “finding” is kind of an inextricable mixture of “finding” and “constructing”; it does not make much sense to distinguish both on the level of such cloudy points. Note that the cloudiness is not a problem of accuracy in measurement! A posteriori, that is, subsequent to introducing an irreversible move17, such a cloud could also be interpreted as an open set made of the provoked observation and virtual observations. It should be clear by now that such a concept of space is very different from the Euclidean space that nowadays serves as the base concept for any statistics or data mining. If you think that conceiving such a space is not necessary or even nonsense, then think about quantum physics. In quantum physics we are also faced with the break-down of the separation between observer and observable, and it ended up quite precisely with spaces as we described them above. These spaces are then handled by various renormalization methods.18 In contrast to the abstract yet still physical space of quantum theory, our space need not even contain an “origin”. Elsewhere we called such a space an aspectional space.
Now let us take the important step of becoming interested in only a subset of these observations. Assume we want to select a very particular set of observations—they are still clouds of probabilities, made from virtual observations—by means of prediction. This selection can be conceived in two different ways. The first way is the one that is commonly applied; it consists of the reconstruction of a “path”. Since in the contemporary epistemic life form of “data analysts” Cartesian spaces are used almost exclusively, all these selection paths start from the origin of the coordinate system. The endpoint of the path is the point of interest, the “outcome” that should be predicted. As a result, one first gets a mapping function from predictor variables to the outcome variable. All possible mappings form the space of mappings, which is a category in the mathematical sense.
The alternative view does not construct such a path within a fixed coordinate system, i.e. within a space with fixed properties. Quite to the contrary, the space itself gets warped and transformed until very simple figures appear, which represent the various subsets of observations according to the focused quality.
Imagine an ordinary, small, blown-up balloon. Next, imagine a grid in the space enclosed by the balloon’s hull, made of very thin threads. These threads shall represent the space itself. Of course, in our example the space is 3d, but it is not limited to this case. Now think of two kinds of small pearls attached to the threads all over the grid inside the balloon, blue ones and red ones. It shall be the red ones in which we are interested. The question now is: what can we do to separate the blue ones from the red ones?
The way to proceed is pretty obvious, though the solution itself may be difficult to achieve. What we can try is to warp and to twist, to stretch, to wring and to fold the balloon in such a way that the blue pearls and the red pearls separate as nicely as possible. In order to purify the groups we may even consider compressing some regions of the space inside the balloon such that they turn into singularities. After all this work—and beware, it is hard work!—we introduce a new grid of threads into the distorted space and dissolve the old ones. All pearls automatically attach to the threads closest nearby, stabilizing the new space. Again, conceiving of such a space may seem weird, but again we can find a close relative in physics, the Einsteinian space-time. Gravitation effectively warps that space, though in a continuous manner. There are famous empirical proofs of this warping of physical space-time.19
Analytically, these two perspectives, path reconstruction on the one hand and space warping on the other, are (almost) equivalent. The perspective of space warping, however, offers a benefit that is not to be underestimated. We arrive at a new space for which we can define its own properties, and in which we again can define measures that are different from those possible in the original space. Path reconstruction does not offer such a “derived space”. Hence, once the path is reconstructed, the story stops. It is a linear story. Our proposal thus is to change perspective.
Warping the space of measurability and expressibility is an operation that inverts the generation of cusp catastrophes20 (see Figure 1 below). Thus it transcends the cusp catastrophes. In the perspective of path reconstruction one has to avoid the phenomena of hysteresis and cusps altogether, hence losing a lot of information about the observed source of data.
In the Cartesian space, and the path-reconstruction methodology related to it, all operations are analytic, that is, organized as symbolic rewriting. The reason for this is the necessity that the paths remain continuous and closed. In contrast, space warping can be applied locally. Warping spaces while dealing with data is not an exotic or rare activity at all; it happens all the time. We know it even from (simple) mathematics, when we define different functions, including the empty function, for different sets of input parameter values.
The main consequence of changing the perspective from path reconstruction to space warping is an enlargement of the set of possible expressions. We can do more without the need to call it “heuristics”. Our guess is that any serious theory of data and measurement must follow the route opened by space warping, if that theory of data is to avoid positivistic reductionism. Most likely, such a theory will be a kind of renormalization theory in a connected, relativistic data space.
Revitalizing Punch Cards and Stacks
In this section we will introduce the outline of a tool that allows one to follow the circular approach in epistemic activities. Basically, this tool is about organizing arbitrary transformations. While for analytic (mathematical) expressions there are expression interpreters, it is also clear that analytic expressions form only a subset of the set of all possible transformations, even if we consider the fact that many expression interpreters have grown into some kind of programming language, or script language. Indeed, Java contains an interpreting engine for JavaScript by default, and there are several quite popular ones for mathematical purposes. One could also conceive of mathematical packages like Octave (open source), MatLab or Mathematica (both commercial) as such expression interpreters, even if their most recent versions can do much, much more. Yet, MatLab & Co. are not quite suitable as platforms for general-purpose data transformation.
The structural metaphor that has proved to be as powerful as it is sustainable—for more than 10 years now—is the combination of the workbench with the punch card stack.
Image 1: A Punched Card for feeding data into a computer
Any particular method, mathematical expression or arbitrary computational procedure resulting in a transformation of the original data is conceived as a “punch card”. This provides a proper modularization, and hence standardization. Actually, the role of these “functional compartments” is extremely standardized, at least enough to define an interface for plugins. Like the ancient punch cards made from paper, each card represents a more or less fixed functionality. Of course, this functionality may be defined by a plugin that itself connects to MatLab…
Also, again like the ancient punch cards, the virtualized versions can be stacked. For instance, we first put the treatment for missing values onto the stack, simply to ensure that all NULLs are written as -1. The next card then determines minimum and maximum in order to provide the data for linear normalization, i.e. the mapping of all values into the interval [0..1]. Then we add a card for compressing the “fat tail” of the distribution of values in a particular variable. Alternatively, we may use a card to split the “fat tail” off into a new variable! Finally we apply the card (= plugin) for normalizing the data to the original and the new data column.
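A minimal sketch of this “card stack” (in Python; the class and function names are hypothetical and do not reflect any actual plugin interface):

    import numpy as np

    class Card:
        # One "punch card": a parameterized, standardized transformation.
        def __init__(self, fn, **params):
            self.fn, self.params = fn, params
        def apply(self, column):
            return self.fn(column, **self.params)

    def fill_nulls(col, value=-1.0):
        # Treatment for missing values: write all NULLs as a fixed value.
        col = np.array(col, dtype=float)
        col[np.isnan(col)] = value
        return col

    def min_max(col):
        # Linear normalization into the interval [0..1].
        col = np.asarray(col, dtype=float)
        lo, hi = col.min(), col.max()
        return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

    # The stack is just an ordered sequence of cards, applied top-down.
    stack = [Card(fill_nulls, value=-1.0), Card(min_max)]

    def run_stack(stack, column):
        for card in stack:
            column = card.apply(column)
        return column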
I think you got the idea. Such a stack is not only maintained for each of the variables, it is created on the fly according to the needs, as they are detected by simple rules. You may think of the cards also as the sets of rules that describe the capabilities of agents, which constantly check whether they could apply their rules to the data. You may also think of these stacks as a device that works like a tailored distillation column, as it is used for fractional distillation in petro-chemistry.
Image 2: Some industrial fractional distillation columns for processing mineral oil. Depending on the number of distillation steps, different products result.
These stacks of parameterized procedures and expressions represent a generally programmable computer, or more precisely an operating system, quite similar to a spreadsheet, although the purpose of the latter, and hence its functionality, actualizes in a different form. The whole thing may even be realized as a language! In that case, one would not need a graphical user interface anymore.
The effect of organizing the transformation of data in this way, by means of plugins that follow the metaphor of the “punch card stack”, is dramatic: introducing transformations and testing them can be automated. At this point we should mention the natural ally of the transformation workbench, the maximum-likelihood estimation of the most promising transformations that combine just two or three variables into a new one. All three parts—the transformation stack engine, the dependency explorer, and the evolutionarily optimized associative engine (which is able to create a preference weighting for the variables)—can be put together in such a way that finding the “optimal” model can be run in a fully automated manner. (Meanwhile the SomFluid package has grown to a stage where it can accomplish this… download it here, but you still need some technical expertise to make it run.)
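That fully automated loop could be sketched as follows (Python; propose, fit and validate are placeholders standing in for the stack engine, the associative engine and the validation procedure, not the SomFluid API):

    def automated_search(data, target, propose, fit, validate, rounds=100):
        # Circular arrangement: propose transformations (hypotheses),
        # build an associative model, validate it predictively, keep the best.
        best, best_score = None, float("-inf")
        for _ in range(rounds):
            candidate = propose(data)        # e.g. a new card stack per variable
            model = fit(candidate, target)   # associative part, e.g. a SOM
            score = validate(model)          # out-of-sample predictive power
            if score > best_score:
                best, best_score = model, score
        return best, best_score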
The approach of the “transformation stack engine” is not just applicable to tabular data, of course. Given a set of proper plugins, it can be used as a digester for large sets of images or time series as well (see below).
Transforming Data
In this section we will now take a more practical and pragmatic perspective. Actually, we will describe some of the most useful transformations, including their parameters. We do so because even prominent books about “data mining” have handled the issue of transforming data in a mistaken, or at least seriously misleading, manner.21,22
If we consider the goal of the transformation of numerical data—increasing the discernibility of assignated observations—we will recognize that we may identify a rather limited number of types of such transformations, even if we consider the space of possible analytic functions which combine two (or three) variables.
We will organize the discussion of the transformations into three sub-sections, whose subjects are of increasing complexity. Hence, we will start with the (ordinary) table of data.
Tabular Data
Tables may comprise numerical data or strings of characters. In its general form, a table may even contain whole texts, a complete book in any of the cells of a column (but see the section about unstructured data below!). If we want to access the information carried by the string data, we sooner or later have to translate them into numbers. Unlike numbers, string data, and the relations between data points made from string data, must be interpreted. As a consequence, there are always several, if not many, different possibilities for that representation. Besides referring to the actual semantics of the strings, which could be expressed by means of the indices of some preference orders, there are also two important techniques of automatic scaling available, which we will describe below.
Besides string data, dates are a further, multi-dimensional category of data. A date encodes not just a serial number relative to some (almost) arbitrarily chosen base date, which we can use to express the age of the item represented by the observation. We have, of course, day of week, day of month, number of week, number of month, and not to forget season as an approximate class. It depends a bit on the domain whether these aspects play any role at all. Yet, think about the rhythms in the city or on the stock markets across the week, or the “black Monday/Tuesday/Friday effect” in production plants or hospitals, and it is clear that we usually have to represent a single date value by several “informational extracts”.
A last class of data types that we have to distinguish are time values. We already mentioned the periodicity in other aspects of the calendar. In which pair of time values do we find the closer similarity: T1(23:41, 0:05) or T2(8:58, 15:17)? With any naive linear distance measure, the values of T2 are evaluated as much more similar than those of T1, even though the values of T1 are just 24 minutes apart across midnight. What we have to do is to set a flag for “circularity” in order to calculate the time distances correctly.
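A sketch of such a circular distance for times of day (Python; a 24-hour period is assumed):

    def circular_hour_distance(t1, t2, period=24.0):
        # On a circle, take the shorter way around midnight.
        d = abs(t1 - t2) % period
        return min(d, period - d)

    # T1 = (23:41, 0:05): linear distance 23.6 h, circular distance 0.4 h.
    # T2 = (8:58, 15:17): both distances approx. 6.3 h.
    assert circular_hour_distance(23 + 41/60, 5/60) < circular_hour_distance(8 + 58/60, 15 + 17/60)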
Numerical Data: Numbers, just Numbers?
Numerical data are data for which, in principle, any value from within a particular interval could be observed. If such data are symmetrically, normally distributed, then we have little reason to suspect that there is something interesting within this sample of values. As soon as the distribution becomes asymmetrical, it starts to become interesting. We may observe “fat tails” (large values are “over-represented”), or multi-modal distributions. In both cases we could suspect that there are at least two different processes, one dominating the other differentially across the peaks. So we should split the variable into two (called “deciling”) and, ceteris paribus, check the effect on the predictive power of the model. Typically one splits the values at the minimum between the peaks, but it is also possible to implement an overlap, where some records are present in both of the new variables.
Long tails indicate some aberrant behavior of the items represented by the respective records, or, as in medicine, even pathological contexts. Strongly skewed distributions often indicate organizational or institutional influences. Here we could compress the long tail, log-shift, and then split the variable, that is, decile it into two.21
In some domains, like finance, we find special values at which symmetry breaks. For ordinary money values, 0 is such a value. We know in advance that we have to split the variable into two, because the semantic and structural difference between +50$ and -75$ is much bigger than that between 150$ and 2500$… probably. As always, we transform the variable such that we create additional variables as a kind of hypothesis, for which we have to evaluate the (positive) contribution to the predictive power of the model.
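Both kinds of split—at a known symmetry-breaking point such as 0, or at an estimated minimum between two modes—could be sketched as follows (Python; the function names and the histogram-based estimate are our own illustration):

    import numpy as np

    def split_variable(col, split_point=0.0, overlap=0.0):
        # Decile a variable into two new ones around a split point; records
        # outside a branch are marked missing (NaN), and an optional overlap
        # keeps records near the split present in both branches.
        col = np.asarray(col, dtype=float)
        low = np.where(col < split_point + overlap, col, np.nan)
        high = np.where(col >= split_point - overlap, col, np.nan)
        return low, high

    def split_at_antimode(col, bins=20):
        # Estimate the minimum between two peaks from a histogram.
        counts, edges = np.histogram(col[~np.isnan(col)], bins=bins)
        i = np.argmin(counts[1:-1]) + 1  # ignore the outermost bins
        return split_variable(col, split_point=0.5 * (edges[i] + edges[i + 1]))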
In finance, but also in medicine, and more generally in any system that is able to develop meta-stable regions, we have to expect such points (or regions) with an increased probability of breaking symmetry, and hence a strong semantic or structural difference. René Thom first described similar phenomena in the theory that he labeled “catastrophe theory”. In 3d you can easily think of a cusp catastrophe as a hysteresis in the x-z direction that is gradually smoothed out in the y-direction.
Figure 1: Visualization of folds in parameter space, leading to catastrophes and hystereses.
In finance we are faced with a whole culture of rule-following. The majority of market analysts use the same tools, for instance “stochastics,” or a particularly parameterized MACD, for deriving “signals”, that is, indicators for points of action. The financial industries have been hiring a lot of physicists, and this population sticks to largely the same mathematics, such as GARCH combined with Monte Carlo simulations. Approaches like fractal geometry are still regarded as exotic.23
Or think about option prices, where we find several symmetry breaks built in by the contract. These points have to be represented adequately in dedicated, that is, derived variables. Again—we can’t emphasize it enough—we HAVE to do so as a kind of enacted hypothesizing. The transformation of data by creating new variables is, so to speak, the low-level operationalization of what later may grow into a scientific hypothesis. Creating new variables poses serious problems for most methods, which may count as a reason why many people don’t follow this approach. Yet, for our approach it is not a problem, definitely not.
In medicine we often find “norm values”. Potassium in blood serum may take any value within a particular range without reflecting any physiological problem… if the person is healthy. If there are other risk factors, the story may be a different one. The ratio of potassium and glucose in serum provides us with an example of a significant marker… if the person already has heart problems. By means of such risk markers we can introduce domain-specific knowledge. And that’s actually a good message, since we can identify our own “markers” and represent them as transformations. The consequence is pretty clear: a system that is supposed to “learn” needs a suitable repository for storing and handling such markers, represented as a relational system (graph).
Let us briefly return to the norm ranges. A small difference outside the norm range should be rated much more strongly than one within the norm range. This may lead to the weight functions shown in the next figure, or more or less similar ones. For a certain range of input values, the norm range, we leave the values unchanged; the output weight equals 1. Outside of this range we transform them in a way that emphasizes the difference to the respective boundary value of the norm range. This can be done in different ways.
Figure 2: Examples for output weight configurations in norm-range transformation
Actually, this rationale of the norm range can be applied to any numerical data. As an estimate of the norm range one could use the central 80% quantile, centered around the median and realized as +/-40% quantiles. On the level of model selection, this will result in a particular sensitivity for multi-dimensional outliers, notably before defining any a priori criterion of what an outlier should be.
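A sketch of such a norm-range transformation (Python; the linear emphasis factor is just one of the conceivable weightings indicated in Figure 2):

    import numpy as np

    def norm_range_transform(col, emphasis=2.0):
        # Values inside the norm range remain unchanged (weight 1); outside,
        # the distance to the nearest boundary is emphasized. The norm range
        # is estimated as the central 80% quantile around the median.
        col = np.asarray(col, dtype=float)
        lo, hi = np.nanquantile(col, [0.1, 0.9])
        out = col.copy()
        below, above = col < lo, col > hi
        out[below] = lo - emphasis * (lo - col[below])
        out[above] = hi + emphasis * (col[above] - hi)
        return out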
From Strings to Orders to Numbers
Many data come as some kind of description or label. Such data are described as nominal data. Think, for instance, of the drugs prescribed to a group of patients included in an investigation of risk factors for a disease, or think of the names or the types of restaurants in an urbanological/urbanistic investigation. Nominal data are quite frequent in behavioral, organizational or social data, that is, in contexts that are established mainly on a symbolic level.
Performing measurements only on the nominal scale should be avoided; yet, sometimes it is not possible to circumvent it. It can be avoided at least partially by including further properties that can be represented by numerical values. For instance, instead of using only the names of cities in a data set, one can use the geographical location or the number of inhabitants; when referring to places within a city, one can use descriptors that cover some properties of the respective area, such as density of traffic, distance to similar locations, price level of consumer goods, economic structure etc. If a direct measurement is not possible, estimates can do the job as well, if the certainty of the estimate is expressed. The certainty can then be used to generate surrogate data. If the fine-grained measurement creates further nominal variables, they could be combined to form a scale. Such enrichment is almost always possible, irrespective of the domain. One should keep in mind, however, that any such enrichment is nothing else than a hypothesis.
Sometimes, data on the nominal level—technically, a string of alphanumerical characters—already contain valuable information. For instance, they may contain numerical values, as in the names of cars. If we deal with things like the names of molecules, where these names often come as compounds, reflecting the fact that molecules themselves are compounds, we can calculate the distance of each name to a virtual “average name” by applying a technique called “random graph”. Of course, in the case of molecules we would have a lot of properties available that can be expressed as numerical values.
Ordinal data are closely related to nominal data. Essentially, there are two flavors of them. In the least valuable case, the numbers do not express a numerical value; the cipher is just used as a kind of letter, indicating that there is a set of sortable items. Sometimes, values on an ordinal scale represent some kind of similarity. Although this variant is more valuable, it can still be misleading, because the similarity may not scale isodistantly with the numerical values of the ciphers. Undeniably, there is still a rest of a “name” in it.
We are now going to describe some transformations to deal with data from low-level scales.
The least action we have to apply to nominal data is a basic form of encoding: we use integer values instead of the names. The next, though only slightly better, level would be to reflect the frequency of the encoded item in the ordinal value. One would, for instance, not encode the name into an arbitrary integer value, but into the log of its frequency. A much better alternative, however, is provided by the descendants of correspondence analysis. These are called Optimal Scaling and the Relative Risk Weight. The drawback of these methods is that some information about the predicted variable is necessary. In the context of modeling, by which we always understand target-oriented modeling—as opposed to associative storage24—we usually find such information, so the drawback is not too severe.
First to optimal scaling (OSC). Imagine a variable, or “assignate” as we prefer to call it25, which is scaled on the nominal or a low ordinal scale. Let us assume that there are just three different names or values. As already mentioned, we assume that a purpose has been selected and hence a target variable, as its operationalization, is available. Then we could set up the following table (the figures denote frequencies).
Table 1: Summary table derived from a hypothetical example data set; av(i) denote three nominally scaled assignates. Rows give the frequency of the focused target, tf (focused), and the marginal sums per assignate and per target class.
From these figures we can calculate the new scale values by the formula
For the assignate av1 this yields
Table 2: Various encodings contrasted — literal encoding, normalized log(freq), optimal scaling, normalized OSC.
Using these values we could replace any occurrence of the original nominal (ordinal) values by the scaled values. Alternatively—or better, additionally—we could sum up all values for each observation (record), thereby collapsing the nominally scaled assignates into a single numerically scaled one.
Now we will describe the RRW. Imagine a set of observations {o(i)}, where each observation is described by a set of assignates a(i). Let us also assume that some of these assignates are on the binary level, that is, the presence of the respective quality in the observation is encoded by “1”, its absence by “0”. This usually results in sparsely filled (regions of) the data table. Depending on the size of the “alphabet”, even more than 99.9% of all values could simply be equal to 0. Such data cannot be grouped in a reasonable manner. Additionally, if there are further assignates in the table that are not binary encoded, the information in the binary variables would be neglected almost completely without applying a rescaling like the RRW.
For the assignate av1 this yields
As you can see, the RRW uses the marginals from the rows, while optimal scaling uses the marginals from the columns. Thus, the RRW uses slightly more information. Assuming a table made from binary assignates av(i), which could be summarized into Table 1 above, the formula yields the following RRW factors for the three binary scaled assignates:
Table 3: Relative Risk Weights for the frequency data shown in Table 1 — raw RRW(i) and normalized RRW.
The ranking of the av(i) based on the RRW is equal to that returned by the OSC; even the normalized score values are quite similar. Yet, while in the case of nominal variables assignates are usually not collapsed, this will always be done in the case of binary variables.
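As a sketch of one common formulation—assuming, as stated above, that the OSC rescales by the column marginals and the RRW by the row marginals; the exact formulas may differ (Python):

    import numpy as np

    def optimal_scaling(tf, colsum):
        # OSC score per assignate: share of the focused target within the
        # assignate's column marginal (one common formulation; an assumption).
        return np.asarray(tf, dtype=float) / np.asarray(colsum, dtype=float)

    def relative_risk_weight(tf, colsum, total_focused, total):
        # RRW per assignate: "risk" of the focused outcome given the assignate,
        # relative to the risk given its absence; uses the row marginals.
        tf = np.asarray(tf, dtype=float)
        colsum = np.asarray(colsum, dtype=float)
        risk_present = tf / colsum
        risk_absent = (total_focused - tf) / (total - colsum)
        return risk_present / risk_absent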
So, let us summarize these simple methods in the following table.
Table 4: Overview of some of the most important transformations for tabular data (transformation — effect, new value — properties, conditions).

• analytic function / analytic combination — explicit analytic function (a,b)→f(a,b); enhancing the signal-to-noise ratio for the relationship between predictors and predicted; 1 new variable — targeted modeling
• empiric combinational recoding — using simple clustering methods like KNN or k-means for a small number of assignates; distance from cluster centers and/or cluster center as new variables — targeted modeling
• splitting (“deciling”) — 2 new variables — upon evaluation of properties of the distribution
• compressing or splitting the tail — 1 new variable, better distinction for data in frequent bins — based on extreme-value quantiles
• optimal scaling — numerical encoding and/or rescaling using marginal sums; enhancing the scaling of the assignate from nominal to numerical — targeted modeling
• relative risk weight — collapsing sets of sparsely filled variables — targeted modeling
Obviously, the transformation of data is not an analytical act, on either side. On the left hand it refers to structural and hence semantic assumptions, while on the right hand it introduces hypotheses about those assumptions. Numbers are never just values, much like sentences and words do not consist just of letters. After all, the difference between the two is probably smaller than one might initially presume. Later we will address this aspect from the opposite direction, when it comes to the translation of textual entities into numbers.
Time Series and Contexts
Time series data are the most valuable data. They allow the reconstruction of the flow of information in the observed system, either between variables intrinsic to the measurement setup (reflecting the “system”) or between treatment and effects. In recent years, so-called “causal FFT” has gained some popularity.
Yet, modeling time series data poses the same problems as tabular data. We do not know apriori which variables to include, or how to transform variables in order to reflect particular parts of the information in the most suitable way. Simply pressing an FFT onto the data is nothing but naive. The FFT assumes a harmonic oscillation, or a combination thereof, which certainly is not appropriate. Even if we interpret a long series of FFT terms as an approximation to an unknown function, it is by no means clear whether the stationarity26 that is thereby assumed is indeed present in the data.
Instead, it is more appropriate to represent the aspects of a time series in multiple ways. Often, there are many time series available, one for each assignate. This brings the additional problem of careful evaluation of cross-correlations and auto-correlations, and all of this under the condition that it is not known apriori whether the evolution of the system is stationary.
Fortunately, the analysis of multiple time series, even from non-stationary processes, is quite simple if we follow the approach as outlined so far. Let us assume a set of assignates {a(i)} for which time series measurements at equidistant measurement points are available. A transformation is then constructed by a method m that is applied to a moving window of size md(k). All moving windows, of whatever size, are adjusted such that their endpoints meet at the measurement point at time t(m(k)). Let us call this point the prediction base point, T(p). The transformed values consist either of the residuals between the method's values and the measurement data, or of the parameters of the method fitted to the moving window. An example for the latter case is given by the wavelet coefficients, which provide a quite suitable, multi-frequency perspective onto the development up to T(p). Of course, the time series data of different assignates could be related to each other by any arbitrary functional mapping.
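As an illustration of this construction, the following Python sketch (the choice of a fitted line as the method m, the window sizes and all names are mine) computes, for windows of several sizes ending at the prediction base point T(p), the parameters of the fitted method and a residual statistic as new context variables.

```python
import numpy as np

def window_features(series, t_p, window_sizes):
    """For each moving window ending at index t_p, fit a line (the 'method' m)
    and emit its parameters plus the residual spread as new variables."""
    features = {}
    for w in window_sizes:
        seg = series[t_p - w + 1 : t_p + 1]
        x = np.arange(w)
        slope, intercept = np.polyfit(x, seg, 1)   # parameters of the method
        resid = seg - (slope * x + intercept)      # residuals of the method
        features[f"slope_{w}"] = slope
        features[f"resid_sd_{w}"] = resid.std()
    return features

rng = np.random.default_rng(0)
ts = np.cumsum(rng.normal(size=200))   # a toy, possibly non-stationary series
print(window_features(ts, t_p=150, window_sizes=[5, 20, 60]))
```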
The target value for the model could be any set of future points relative to t(m(k)). The model may predict a singular point, an average some time in the future, the volatility of the future development of the time series, or even the parameters of a particular mapping function relating several assignates. In the latter case the model would predict several criteria at once.
Such transformations yield a table that contains many more variables than originally available. The ratio may grow up to 1:100 in complex cases like the global financial markets. Just to be clear: if you measure, say, the index values of 5 stock markets, some commodities like gold, copper, precious metals and “electronics metals”, the money market, bonds and some fundamentals alike, that is approx. 30 basic input variables, then even a superficial analysis would have to inspect 3000 variables… Yes, learning and gaining experience can take quite a bit! Learning and experience do not become cheaper merely because we use machines to achieve them. It is just that exploring is easier nowadays, no longer requiring lifetimes. The reward consists of stable models about complex issues.
Each point in time is reflected by the original observational values and a lot of variables that express the most recent history relative to the point in time represented by the respective record. Any of the synthetic records thus may be interpreted as a set of hypotheses about the future development, where each hypothesis comes as a multidimensional description of the context up to T(p). It is then the task of the evolutionarily optimized variable selection based on the SOM to select the most appropriate hypotheses. Any subgroup contained in the SOM then represents comparable sets of relations between the past relative to T(p) and the respective future as it is operationalized into the target variable.
Typical transformations in such associative time series modeling are
• moving average and exponentially decaying moving average, for de-seasoning or de-trending;
• various correlational methods: cross- and auto-correlation, including the result parameters of the Bartlett test;
• Wavelet-, FFT-, or Walsh-transforms of different order, residuals to the denoised reconstruction;
• fractal coefficients like the Lyapunov exponent or the Hausdorff dimension;
• ratios of simple regressions calculated over moving windows of different size;
• domain-specific markers (think of technical stock market analysis, or the ECG).
Once we have expressed a collection of time series as series of contexts preceding the prediction point T(p), the further modeling procedure does not differ from the modeling of ordinary tabular data, where the observations are independent from each other. From the perspective of our transformation tool, these time series transformations are nothing else than “methods”; they do not differ from other plugin methods with respect to the procedure calls in their programming interface.
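A hypothetical sketch of what such a uniform plugin interface could look like in Python; the names and signatures are illustrative, not those of the actual tool.

```python
from typing import Mapping, Protocol, Sequence

class TransformMethod(Protocol):
    """Illustrative plugin contract: time series transformations and plain
    tabular transformations are called in exactly the same way."""

    def applicable(self, columns: Sequence[str]) -> bool:
        """Report whether the method can act on the given assignates."""
        ...

    def transform(
        self, table: Mapping[str, Sequence[float]]
    ) -> Mapping[str, Sequence[float]]:
        """Return new, synthetic columns derived from the input table."""
        ...
```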
“Unstructurable” “Data”: Images and Texts
The last type of data for which we briefly would like to discuss the issue of transformation is “unstructurable” data. Images and texts are the main representatives for this class of entities. Why are these data “unstructurable”?
Let us answer this question from the perspective of textual analysis. Here the reason is obvious; actually, there are several obvious reasons. Patrizia Violi [17], for instance, emphasizes that words create their own context, upon which they are then going to be interpreted. Douglas Hofstadter extended the problematics to thinking at large, arguing that for any instance of analogical thinking (and all thinking, he claimed, is analogical) it is impossible to define criteria that would allow one to set up a table. Here on this site we have argued repeatedly that it is not possible to define apriori any criteria that would capture the “meaning” of a text.
Moreover, understanding language, as well as understanding texts, can't be mapped to the problematics of predicting a time series. In language there is no such thing as a prediction point T(p), and there is no positively definable “target” that could be predicted. The main reason for this is the special dynamics between context (background) and proposition (figure). It is a multi-level, multi-scale thing. It is ridiculous to apply n-grams to a text and then hope to catch anything “meaningful”. The same is true for any statistical measure.
Nevertheless, using language, that is, producing and understanding it, is based on processes that select and compose. In some way there must be some kind of modeling. We already proposed a structure, or more precisely an architecture, for this in a previous essay.
The basic trick consists of two moves: firstly, texts are represented probabilistically as random contexts in an associative storage like the SOM. No variable selection takes place here, no modeling, and no operationalization of a purpose is present. Secondly, this representation is then used as a basis for targeted modeling. Yet, the “content” of this representation does not consist of “language” data anymore. Strikingly different, it contains data about the relative location of language concepts and their sequence as they occur as random contexts in a text.
The basic task in understanding language is to accomplish the transition from a probabilistic representation to a symbolic, tabular representation. Note that any tabular representation of an observation is already on the symbolic level. In the case of language understanding precisely this is not possible: we can't define meaning, and above all not apriori. Meaning appears as a consequence of performance, of the execution of certain rules to a certain degree. Hence we can't provide apriori the symbols that would be necessary to set up a table for modeling, for assessing “similarity”, etc.
Now, instead of a probabilistic, non-structured representation we could also say: an arbitrary, unstable structure. From this we should derive a structured, (proto-)symbolic and hence tabular and almost stable structure. The trick to accomplish this consists in using the modeling system itself as a measurement device, and thus also as a “root” for further reference in the models that then become possible. Kohonen and colleagues demonstrated this crucial step in their WebSom project. Unfortunately (for them), they then actualized several misunderstandings regarding modeling. For instance, they misinterpreted the associative storage as a kind of model.
The nice thing with this architecture is that once the symbolic level has been achieved, any of the steps of our modeling approach can be applied without any change, including the automated transformation of “data” as described above.
Understanding the meaning of images follows the same scheme. The fact that there are no words renders the task more complicated and more simple at the same time. Note that so far there is no system that has learned to “see”, to recognize and to understand images, despite many titles claiming that the proposed “system” can do so. All computer vision approaches are analytic by nature, hence they are all deeply inadequate. The community is running straight into the same method hell as the statisticians and the data miners did before, mistaking transformations for methods, conflating transformation and modeling, etc. We discussed these issues at length above. Any of the approaches might be intelligently designed, but all are victimized by the representationalist fallacy, and probably even by naive realism. Due to the fact that the analytic approach is first, second and third mainstream, the probabilistic and contextual bottom-up approach is missing so far. In the same way as a word is not equal to the grapheme, a line is not defined on the symbolic level in the brain. Here again we meet the problem of analogical thinking, even on the most primitive graphical level. When is a line still a line, when is a triangle still a triangle?
In order to start in the right way we first have to represent the physical properties of the image along different dimensions, such as textures, edges, or salient points, and all of those across different scales. Probably one can even detect salient objects by some analytic procedure. From any of the derived representations the random contexts are derived and arranged as vectors. A single image is represented as a table that contains random contexts derived from the image as a physical entity. From here on, the further processing scheme is the same as for texts. Note that there is no such property as “line” in this basic mapping.
In the case of texts and images the basic transformation steps thus consist in creating the representation as random contexts. Fortunately, this is “only” a question of the suitable plugins for our transformation tool. In both cases, for texts as well as for images, the resulting vectors could grow considerably; several thousands of implied variables must be expected. Again, there is already a solution, known as random projection, which allows to compress even very large vectors (say 20’000+) into one of, say, at most 150 variables, without losing much of the information that is needed to retain the distinctive potential. Random projection works by multiplying a vector of size N with a matrix of uniformly distributed random values of size N×M, which results in a vector of size M. Of course, M is chosen suitably (100+). The reason why this works is that with that many dimensions all vectors are approximately orthogonal to each other! Of course, the resulting fields in such a vector do not “represent” anything that could be conceived as a reference to an “object”. Internally, however, that is, from the perspective of a (population of) SOMs, it may well be used as an (almost) fixed “attribute”. Yet, neither the missing direct reference nor the subjectivity poses a problem, as meaning is not a mental entity anyway. Q.E.D.
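A minimal numpy sketch of the random projection just described; the sizes are illustrative, and the scaling factor (the variance of U(-1,1) is 1/3) is my addition so that expected vector norms are roughly preserved.

```python
import numpy as np

rng = np.random.default_rng(42)

N, M = 20_000, 150                        # original and compressed size
R = rng.uniform(-1.0, 1.0, size=(N, M))   # random matrix of size NxM
R *= np.sqrt(3.0 / M)                      # keeps expected norms roughly intact

v = rng.random(N)       # a very large context vector, e.g. from a text
v_small = v @ R         # compressed representation of size M
print(v_small.shape)    # (150,)
```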
Here in this essay we discussed several aspects related to the transformation of data as an epistemic activity. We emphasized that an appropriate attitude towards the transformation of data requires a shift in perspective and the choice of another vantage point. One of the more significant changes in attitude concerns, perhaps, the dropping of any positivist approach, one of the main pillars of traditional modeling. Remember that statistics is such a positivist approach. In our perspective, statistical methods are just transformations, nothing less, but above all also nothing more, characterized by a specific set of rather strong assumptions and conditions for their applicability.
We also provided some important practical examples for the transformation of data, whether tabular data derived from independent observations, time series data, or “unstructurable” “data” like texts and images. Along the proposed approach we also described a prototypical architecture for a transformation tool that could be used universally. In particular, it allows a complete automation of the modeling task, as it could be used for instance in the field of so-called data mining. The possibility of automated modeling is, of course, a fundamental requirement for any machine-based episteme.
1. The only reason why we do not refer to cultures and philosophies outside Europe is that we do not know sufficient details about them. Yet, I am pretty sure that taking into account Chinese or Indian philosophy would render the situation even more severe.
2. It was Friedrich Schleiermacher who first observed that even the text becomes alien and at least partially autonomous to its author due to the necessity and inevitability of interpretation. Thereby he founded hermeneutics.
3. In German, these words all exhibit multiple meanings.
4. In the last 10 years (roughly) it became clear that the gene-centered paradigms are not only insufficient [2], they are even seriously defective. Evelyn Fox Keller draws a detailed trace of this weird paradigm [3].
5. Michel Foucault [4]
6. The “axiom of choice” is one of the founding axioms in mathematics. Its importance can't be overestimated. Basically, it assumes that “something is choosable”. The notion of “something choosable” is then used to construct countability as a derived domain. This implies three consequences. First, this avoids assuming countability, that is, the effect of a preceding symbolification, as a basis for set theory. Secondly, it puts performance in the first place. These two implications render the “axiom of choice” into a doubly-articulated rule, offering two docking sites, one for mathematics and one for philosophy. In some way, it thus cannot count as an “axiom”. Those implications are, for instance, fully compatible with Wittgenstein's philosophy. For these reasons, Zermelo's “axiom” may even serve as a shared point (of departure) for a theory of machine-based episteme. Finally, the third implication is that through the performance of the selection, the relation, notably a somewhat empty relation, is conceived as a predecessor of countability and the symbolic level. Interestingly, this also relates to Quantum Darwinism and String Theory.
7. David Grahame Shane’s theory on cities and urban entities [5] is probably the only theory in urbanism that is truly a relational theory. Additionally, his work is full of relational techniques and concepts, such as the “heterotopy” (a term coined by Foucault).
8. Bruno Latour developed the Actor-Network-Theory [6,7], while Clarke evolved “Grounded Theory” into the concept of “Situational Analysis” [8]. Latour, as well as Clarke, emphasize and focus the relation as a significant entity.
9. behavioral coating, and behavioral surfaces;
10. See Information & Causality about the relation between measurement, information and causality.
11. “Passivist” refers to the inadequate form of realism according to which things exist as-such, independently from interpretation. Of course, interpretation does not affect the material dimension of a thing. Yet, it changes its relations insofar as the relations of a thing, the Wittgensteinian “facts”, are visible and effective only if we actively assign significance to them. The “passivist” stance conceives itself as a re-construction instead of a construction (cf. Searle [9]).
12. In [10] we developed an image theory in the context of the discussion about the mediality of facades of buildings.
13. the nonsense of “non-supervised clustering”
14. In his otherwise quite readable book [11], though it may serve only as an introduction.
15. This can be accomplished by using a data segment for which the implied risk equals 0 (positive predictive value = 1). We described this issue in the preceding chapter.
16. hint to particle physics…
17. See our previous essay about the complementarity of the concepts of causality and information.
18. For an introduction of renormalization (in physics) see [12], and a bit more technical [13]
19. see the Wiki entry about so-called gravitational lenses.
20. Catastrophe theory is a concept invented and developed by French mathematician Rene Thom as a field of Differential Topology. cf. [14]
21. In their book, Witten & Frank [15] recognized the importance of transformation and included a dedicated chapter about it. They also explicitly mention the creation of synthetic variables. Yet, they just as explicitly retreat from it as a practical means, for reasons of computational complexity (here: the time needed to perform a calculation in relation to the amount of data). After all, their attitude towards transformation is somehow that towards an unavoidable evil; they do not recognize its full potential. As a cure for the selection problem they propose SVMs and their hyperplanes, which is definitely a poor recommendation.
22. Dorian Pyle [11]
23. see Benoit Mandelbrot [16].
24. By using almost meaningless labels, target-oriented modeling is often called “supervised modeling”, as opposed to “non-supervised modeling”, where no target variable is being used. Yet, the latter does not yield a model, since the pragmatics of the concept of “model” invariably requires a purpose.
25. About assignates: often called property, or feature… see about modeling
26. Stationarity is a concept in empirical system analysis or description, which denotes the expectation that the internal setup of the observed process will not change across time within the observed period. If a process is rated as “stationary” upon a dedicated test, one could select a particular, and only one particular method or model to reflect the data. Of course, we again meet the chicken-egg problem. We can decide about stationarity only by means of a completed model, that is after the analysis. As a consequence, we should not use linear methods, or methods that depend on independence, for checking the stationarity before applying the “actual” method. Such a procedure can not count as a methodology at all. The modeling approach should be stable against non-stationarity. Yet, the problem of the reliability of the available data sample remains, of course. As a means to “robustify” the resulting model against the unknown future one can apply surrogating. Ultimately, however, the only cure is a circular, or recurrent methodology that incorporates learning and adaptation as a structure, not as a result.
• [1] Robert Rosen, Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press, New York 1991.
• [2] Nature Insight: Epigenetics, Supplement Vol. 447 (2007), No. 7143 pp 396-440.
• [3] Evelyn Fox Keller, The Century of the Gene. Harvard University Press, Boston 2002. See also: E. Fox Keller, “Is There an Organism in This Text?”, in P. R. Sloan (ed.), Controlling Our Destinies. Historical, Philosophical, Ethical, and Theological Perspectives on the Human Genome Project, University of Notre Dame Press, Notre Dame (Indiana) 2000, pp. 288-289.
• [4] Michel Foucault, Archeology of Knowledge. 1969.
• [5] David Grahame Shane, Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design and City Theory. Wiley, Chichester 2005.
• [6] Bruno Latour. Reassembling The Social. Oxford University Press, Oxford 2005.
• [7] Bruno Latour (1996). On Actor-network Theory. A few Clarifications. in: Soziale Welt 47, Heft 4, p.369-382.
• [8] Adele E. Clarke, Situational Analysis: Grounded Theory after the Postmodern Turn. Sage, Thousand Oaks, CA 2005.
• [9] John R. Searle, The Construction of Social Reality. Free Press, New York 1995.
• [11] Dorian Pyle, Data Preparation for Data Mining. Morgan Kaufmann, San Francisco 1999.
• [12] John Baez (2009). Renormalization Made Easy. Webpage
• [13] Bertrand Delamotte (2004). A hint of renormalization. Am.J.Phys. 72: 170-184. available online.
• [14] Tim Poston & Ian Stewart, Catastrophe Theory and Its Applications. Dover Publ. 1997.
• [15] Ian H. Witten & Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques (2nd ed.). Elsevier, Oxford 2005.
• [16] Benoit Mandelbrot & Richard L. Hudson, The (Mis)behavior of Markets. Basic Books, New York 2004.
• [17] Patrizia Violi (2000). Prototypicality, typicality, and context. in: Liliana Albertazzi (ed.), Meaning and Cognition – A multidisciplinary approach. Benjamins Publ., Amsterdam 2000. p.103-122.
Prolegomena to a Morphology of Experience
May 2, 2012
Experience is a fundamental experience.
The very fact of this sentence demonstrates that experience differs from perception, much like phenomena are different from objects. It also demonstrates that there can't be an analytic treatment or even solution of the question of experience. Experience is not only related to sensual impressions, but also to affects, activity, attention1 and associations. Above all, experience is deeply linked to the impossibility of knowing anything for sure or, likewise, apriori. This insight is etymologically woven into the word itself: the Greek “peira” means “trial, attempt, experience” and also underlies the roots of “experiment” and “peril”.
In this essay we will focus on some technical aspects that underlie the capability to experience. Before we go in medias res, I have to make clear the rationale for doing so, since, quite obviously, experience cannot be reduced to those said technical aspects, to which for instance modeling belongs. Experience is more than the techné of sorting things out [1] and even more than the techné of the genesis of discernibility, but at the same time it plays a particular, if not foundational role in and for the epistemic process, its choreostemic embedding and their social practices.
Epistemic Modeling
As usual, we take the primacy of interpretation as one of the transcendental conditions, that is, a condition we can't go beyond, even on the “purely” material level. As a suitable operationalization of this principle, still a quite abstract one and hence calling for situative instantiation, we chose the abstract model. In the epistemic practice, modeling does not, indeed never could, refer to data that is supposed to “reflect” an external reality. If we perform modeling as a pure technique, we are just modeling; creating a model for whatsoever purpose, so to speak “modeling as such”, or purposed modeling, is not sufficient to establish an epistemic act, which would include the choice of the purpose and the choice of the risk attitude. Such a reduction is typical for functionalism, or for positions that claim a computability of epistemic autonomy in principle, as for instance the computational theory of mind does.
Quite in contrast, purposed modeling in epistemic individuals already presupposes the transition from probabilistic impressions to propositional, or say, at least symbolic representation. Without performing this transition from potential signals, that is, mediated “raw” physical fluctuations in the density of probabilities, to the symbolic, it is impossible to create a structure, let it be for instance a feature vector as a set of variably assigned properties, “assignates”, as we called them previously. Such a minimal structure, however, is mandatory for purposed modeling. Any (re)presentation of observations to a modeling method is thus already subsequent to prior interpretational steps.
Our abstract model that serves as an operationalization of the transcendental principle of the primacy of interpretation thus must also provide, or comprise, the transition from differences into proto-symbols. Proto-symbols are not just intensions or classes; they are, so to speak, non-empiric classes that have been derived from empiric ones by means of idealization. Proto-symbols are developed into symbols by means of the combination of naming and an associated practice, i.e. a repeating or reproducible performance, or, still in other words, by rule-following. Only on the level of symbols may we then establish a logic, or claim absolute identity. Here we also meet the reason for the fact that in any real-world context a “pure” logic is not possible, as there are always semantic parts serving as a foundation of its application. Speaking about “truth-values” or “truth-functions” is meaningless, at least. Clearly, identity as a logical form is a secondary quality and thus quite irrelevant for the booting of the capability of experience. Such extended modeling is, of course, not just a single instance; it is itself a multi-leveled thing. It even starts with those properties of the material arrangement known as the body that also allow an informational perspective. The most prominent candidate principle of such a structure is the probabilistic, associative network.
Epistemic modeling thus consists of at least two abstract layers: first, the associative storage of random contexts (see also the chapter “Context” for their generalization), where no purpose is imposed onto the materially pre-processed signals, and second, the purposed modeling. I am deeply convinced that such a structure is the only way to evade the fallacy of representationalism2. A working actualization of this abstract bi-layer structure may comprise many layers and modules.
Yet, once one accepts the primacy of interpretation, and there is little to say against it, if anything at all, then we are led directly to epistemic modeling as a mandatory constituent of any interpretive relationship to the world, for primitive operations as well as for the rather complex mental life we experience as humans, with regard to our relationships to the environment as well as with regard to our inner reality. Wittgenstein emphasized in his critical solipsism that the conception of reality as inner reality is the only reasonable one [3]. Epistemic modeling is the only way to keep meaningful contact with the external surrounds.
The Bridge
In its technical parts experience is based on an actualization of epistemic modeling. Later we will investigate the role and the usage of these technical parts in detail. Yet, the gap between modeling, even if conceived as abstract, epistemic modeling, and experience is so large that we first have to shed some light on the bridge between these concepts. There are other issues with experience besides the merely technical issues of modeling, and they are no less relevant for the technical side, too.
Experience comprises both more active and more passive aspects, with regard to performance as well as to structure. These dichotomies must not be taken as ideally separated categories, of course. Besides, the basic distinction into active and passive parts is not a new one either. Kant distinguished receptivity and spontaneity as two complementary faculties that combine in order to bring about what we call cognition. Leibniz, in contrast, emphasized the necessity of activity even in basic perception; nowadays his view has been greatly confirmed by the research on sensing in organic (animals) as well as in inorganic systems (robots). Obviously, the relation between activity and passivity is not a simple one, as soon as we are going to leave the bright spheres of language.3
In the structural perspective, experience unfolds in a given space that we could call the space of experiencibility4. That space is spanned, shaped and structured by open and dynamic collections of any kind of theory, model, concept or symbol, as well as by the mediality that is “embedding” those. Yet, experience also shapes this space itself. The situation is a bit reminiscent of relativistic space in physics, or the social space in humans, where the embedding of one space into another affects both participants, the embedded as well as the embedding space. These aspects we should keep in mind for our investigation of questions about the mechanisms that contribute to experience and the experience of experience. As you can see, we again refute any kind of ontological stance, even to the smallest degree.5
Now, when going to ask about experience and its genesis, there are two characteristics of experience that force us to avoid the direct path. First, there is the deep linkage of experience to language. We must get rid of language for our investigation in order to avoid the experience of finding just language behind the language, or behind what we upfront call “experience”; yet, we should not forget about language either. Second, there is the self-referentiality of the concept of experience, which actually renders it into a strongly singular term. Once there are even only tiny traces of the capability for experience, the whole game changes, burying the initial roots and mechanisms that are necessary for the booting of the capability.
Thus, our first move consists in a reduction and linearization, which we have to catch up with later again, of course. We will achieve that by setting everything into motion, so to speak. The linearized question thus is heading towards the underlying mechanisms6:
How do we come to believe that there are facts in the world? 7
What are—now viewed from the outside of language8—the abstract conditions and the practiced moves necessary and sufficient for the actualization of such statements?
Usually, the answer will refer to some kind of modeling. Modeling provides the possibility for the transition from the extensional epistemic level of particulars to the intensional epistemic level of classes, functions or categories. Yet, modeling does not provide sufficient reason for experience. Sure, modeling is necessary for it, but it is more closely related to perception, though not equivalent to it either. Experience as a kind of cognition thus can't be conceived as a kind of “high-level perception”, quite contrary to the suggestion of Douglas Hofstadter [4]. Instead, we may conceive experience, in a first step, as the result of and the activity around the handling of the conditions of modeling.
Even in his earliest writings, Wittgenstein prominently emphasized that it is meaningless to conceive of the world as consisting of “objects”. The Tractatus starts with the proposition:
The world is everything that is the case.
Cases, in the Tractatus, are states of affairs that could be made explicit into a particular (logical) form by means of language. From this perspective one could derive the radical conclusion that without language there is no experience at all. Though we won't agree to such a thesis, language is a major factor contributing to some often unrecognized puzzles regarding experience. Let us very briefly return to the issue of language.
Language establishes its own space of experiencibility, basically through its unlimited expressibility, which induces hermeneutic relationships. Probably owing mainly to this particular experiential sphere, language blurs or even blocks the clear sight onto the basic aspects of experience. Language can make us believe that there are phenomena as some kind of original stuff, existing “independently” out there, that is, outside of human cognition.9 Yet, there is no such thing as a phenomenon or even an object that would “be” before experience, and for us humans not even before or outside of language. It is not even reasonable to speak about phenomena or objects as if they would exist before experience. De facto, it is almost non-sensical to do so.
Both objects as specified entities and phenomena at large are consequences of interpretation, in turn deeply shaped by cultural imprinting, and thus heavily depending on language. Refuting that consequence would mean to refute the primacy of interpretation, which would fall into one of the categories of either naive realism or mysticism. Phenomenology as an ontological philosophical discipline is nothing but a misunderstanding (as ontology is henceforth); since phenomenology without ontological parts must turn into some kind of Wittgensteinian philosophy of language, it simply vanishes. Indeed, when already teaching in Cambridge, Wittgenstein once told a friend to report his position to the visiting Schlick, whom he refused to meet on this occasion, as “You could say of my work that it is phenomenology.” [5] Yet, what Wittgenstein called “phenomenology” is completely situated inside language and its practicing, and though there might be a weak Kantian echo in his work, he never supported Husserl's position of synthetic universals apriori. There is even some likelihood that Wittgenstein, strongly feeling constantly misunderstood by the members of the Vienna Circle, put this forward in order to annoy Schlick (a bit), at least to pay him back in kind.
Quite in contrast, in a Wittgensteinian perspective facts are a sort of collectively compressed beliefs about relations. If everybody believes in a certain model of whatever reference and of almost arbitrary expectability, then there is a fact. This does not mean, however, that we get drowned by relativism. There are still the constraints implied by the (unmeasured and unmeasurable) utility of anticipation, both in its individual and its collective flavor. On the other hand, yes, this indeed means that the (social) future is not determined.
More accurately, there is at least one fact, since the primacy of interpretation generates at least the collectivity as a further fact. Since facts take place in language, they do not just “consist” of content (please excuse such awful wording); there is also a pragmatics, and hence there are also at least two different grammars, etc.
How do we, then, individually construct concepts that we share as facts? Even if we need the mediation by a collective, a large deal of the associative work takes place in our minds. Facts are identifiable, thus distinguishable and enumerable. Facts are almost digitized entities; they are constructed from percepts through a process of intensionalization or even idealization, and they sit on the verge of the realm of symbols.
Facts are facts because they are considered as being valid, let it be among a collective of people, across some period of time, or a range of material conditions. This way they turn into kind of an apriori from the perspective of the individual, and there is only that perspective. Here we find the locus situs of several related misunderstandings, such as direct realism, Husserlean phenomenology, positivism, the thing as such, and so on. The fact is even synthetic, either by means of “individual”10 mental processes or by the working of a “collective reasoning”. But, of course, it is by no means universal, as Kant concluded on the basis of Newtonian science, or even as Schlick did in 1930 [6]. There is neither a universal real fact, nor a particular one. It does not make sense to conceive the world as existing from independent objects.
As a consequence, when speaking about facts we usually studiously avoid the fact of risk. Participants in the “fact game” implicitly agree on the abandonment of negotiating affairs of risk. Despite the fact that empirical knowledge can never be considered “safe” or “secured”, during the fact game we always behave as if it could. Doing so is the more or less hidden work of language, which removes the risk (associated with predictive modeling) and replaces it by metaphorical expressibility. Interestingly, here we also meet the source field of logic. It is obvious (see Waves & Words) that language is neither an extension of logic, nor is it reasonable to consider it as a vehicle for logic, i.e. for predicates. Quite to the contrary, the underlying hypothesis is that (practicing) language and (weaving) metaphors are the same thing.11 Such a language becomes a living language that (as Gier writes [5])
“[…] grows up as a natural extension of primitive behavior, and we can count on it most of the time, not for the univocal meanings that philosophers demand, but for ordinary certainty and communication.”
One might just modify Gier's statement a bit by specifying “philosophers” as idealistic, materialistic or analytic philosophers.
In “On Certainty” (OC, §359), Wittgenstein speaks of language as expressing primitive behavior and contends that ordinary certainty is “something animal”. This now we may take as a bridge that provides the possibility to extend our asking about concepts and facts towards the investigation of the role of models.
Related to this there is a pragmatist aspect worth mentioning. Experience is a historicizing concept, much like knowledge. Both concepts are meaningful only in hindsight. As soon as we consider their application, we see that both of them refer only to one half of the story that is about the epistemic aspects of “life”. The other half of the epistemic story, directly implied by the inevitable need to anticipate, is predictive or, equivalently, diagnostic modeling. Abstract modeling in turn implies theory, interpretation and orthoregulated rule-following.
Epistemology thus should not be limited to “knowledge”, the knowable and its conditions. Epistemology has to explicitly include the investigation of the conditions of what can be anticipated.
In a still different way we thus may repose the question about experience as the transition from epistemic abstract modeling to the conditions of that modeling. This would include the instantiation of practicable models as well as the conditions for that instantiation, and also the conditions of the application of models. In technical terms this transition is represented by a problematic field: the model selection problem, or in more pragmatic terms, the model (selection) risk.
These two issues, the prediction task and the condition of modeling now form the second toehold of our bridge between the general concept of experience and some technical aspects of the use of models. There is another bridge necessary to establish the possibility of experience, and this one connects the concept of experience with languagability.
The following list provides an overview of the chapters to come. [In the source this overview was given as a list of links, which has not been preserved.]
These topics are closely related to each other, indeed so closely that other sequences would be justifiable too. Their interdependencies also demand a bit of patience from you, the reader, as the picture will be complete only when we arrive at the results of modeling.
A last remark may be allowed before we start to delve into these topics. It should be clear by now that any kind of phenomenology is deeply incompatible with the view developed here. There are several related stances, e.g. the various shades of ontology, including the objectivist conception of substance. They are all rendered as irrelevant and inappropriate for any theory about episteme, whether in its machine-based form or regarding human culture, whether as practice or as reflecting exercise.
The Modeling Statement
As the very first step we have to clearly state the goal of modeling. From the outside that goal is pretty clear: given a set of observations and the respective outcomes, or targets, create a mapping function such that the observed data allow for a reconstruction of the outcome in an optimized manner. Finding such a function can be considered a simple form of learning if the function is “invented”. In most cases it is not learning but just the estimation of pre-defined parameters.12 In a more general manner we could also say that any learning algorithm is a map L from data sets to a ranked list of hypothesis functions. Note that accuracy is only one of the possible aspects of that optimization. Let us call this for convenience the “outer goal” of modeling. Were such a mapping perfect within reasonable boundaries, we would have automatically found a possible transition from probabilistic presentation to propositional representation. We could consider the induction of a structural description from observations as completed. So far the secret dream of Hans Reichenbach, Carl Hempel, Wesley Salmon and many of their colleagues.
The said mapping function will never be perfect. The reasons for this comprise the complexity of the subject, noise in the measured data, unsuitable observables or any combinations of these. This induces a wealth of necessary steps and, of course, a lot of work. In other words, a considerable amount of apriori and heuristic choices have to be taken. Since a reliable, say analytic mapping can’t be found, every single step in the value chain towards the model at once becomes questionable and has to be checked for its suitability and reliability. It is also clear that the model does not comprise just a formula. In real-world situations a differential modeling should be performed, much like in medicine a diagnosis is considered to be complete only if a differential diagnosis is included. This comprises the investigation of the influence of the method’s parameterization onto the results. Let us call the whole bunch of respective goals the „inner goals“ of modeling.
So, being faced with the challenge of such an empirical mess, how does the statement about the goals of the “inner modeling” look? We could for instance demand to remove the effects of the shortfalls mentioned above, which cause the imperfect mapping: complexity of the subject, noise in the measured data, or unsuitable observables.
To make this more concrete we could say, that the inner goals of modeling consist in a two-fold (and thus synchronous!) segmentation of the data, resulting in the selection of the proper variables and in the selection of the proper records, where this segmentation is performed under conditions of a preceding non-linear transformation of the embedding reference system. Ideally, the model identifies the data for which it is applicable. Only for those data then a classification is provided. It is pretty clear that this statement is an ambitious one. Yet, we regard it as crucial for any attempt to step across our epistemic bridge that brings us from particular data to the quality of experience. This transition includes something that is probably better known by the label „induction“. Thus, we finally arrive at a short statement about the inner goals of modeling:
How to conclude and what to conclude from measured data?
Obviously, if our data are noisy and if our data include irrelevant values, any further conclusion will be unreliable. Yet, for any suitable segmentation of the data we need a model first. From this it directly follows that a suitable procedure for modeling can't consist of just a single algorithm, or a “one-shot procedure”. Any single-step approach suffers from lots of hidden assumptions that influence the results and their properties in unforeseeable ways. Modeling that could be regarded as more than just an estimation of parameters by running an algorithm is necessarily a circular and, dependent on the number of variables, possibly open-ended process.
Predictability and Predictivity
Let us assume a set of observations S obtained from an empirical process P. This process P should then be called “predictable” if the results of the mapping function f(m), serving as an instance of a hypothesis h from the space of hypotheses H, coincide with the outcome of the process P in such a way that f(m) forms an expectation with a deviation d&lt;ε across all observations. In this case we may say that f(m) predicts P. This deviation is also called the “empirical risk”, and the purpose of modeling is often regarded as minimizing the empirical risk (ERM).
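In more conventional notation the same statement, together with the ERM principle, reads as follows; the loss function L is my addition, since the text speaks only of a deviation d.

```latex
\[
P \text{ is predictable} \;\Longleftrightarrow\; \exists\, f_m \in H:\;
\big| f_m(x_i) - y_i \big| < \varepsilon \quad \text{for all } (x_i, y_i) \in S,
\]
\[
R_{\mathrm{emp}}(f_m) = \frac{1}{|S|} \sum_{(x_i, y_i) \in S} L\big(f_m(x_i), y_i\big),
\qquad
\hat{f} = \operatorname*{arg\,min}_{f_m \in H} R_{\mathrm{emp}}(f_m).
\]
```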
There are then two important questions. Firstly, can we trust f(m), since f(m) has been built on a limited number of observations? Secondly, how can we make f(m) more trustworthy, given the limitation regarding the data? Usually, these questions are handled under the label of validation. Yet, validation procedures are not the only possible means to get an answer here. It would be a misunderstanding to think that it is the building or construction of a model that is problematic.
The first question can be answered only by considering different models. For obtaining a set of different models we could apply different methods. That would be o.k. if prediction were our sole interest. Yet, we also strive for structural insight, and from that perspective we should, of course, not use different methods to get different models. The second possibility for addressing the first question is to use different sub-samples, which turns simple validation into cross-validation. Cross-validation provides an expectation for the error (or the risk). Yet, in order to compare across methods one should actually describe the expected decrease in “predictive power”13 for different sample sizes (independent cross-validation per sample size). The third possibility for answering question (1) is related to the former and consists in adding noisy, surrogated (or simulated) data. This prevents the learning mechanism from responding to empirically consistent, but nevertheless irrelevant noisy fluctuations in the raw data set. The fourth possibility is to look for models of equivalent predictive power, which are, however, based on a different set of predicting variables. This possibility is not accessible for most statistical approaches such as Principal Component Analysis (PCA). Whatever method is used to create different models, models may be combined into a “bag” of models (called “bagging”), or, following an even more radical approach, into an ensemble of small and simple models. This is employed for instance in the so-called Random Forest method.
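A sketch of the second and the last possibility in scikit-learn (the library choice, the data and all parameters are mine): cross-validation repeated per sample size to trace the expected decrease in predictive power, with a Random Forest as the ensemble of small and simple models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

for n in (100, 400, 1600):              # independent cross-validation per size
    idx = rng.choice(len(y), size=n, replace=False)
    scores = cross_val_score(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X[idx], y[idx], cv=5)
    print(n, scores.mean().round(3), scores.std().round(3))
```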
Commonly, if a model passes cross-validation successfully, it is considered to be able to “generalize”. In contrast to the common practice, Poggio et al. [7] demonstrated that standard cross-validation has to be extended in order to provide a characterization of the capability of a model to generalize. They propose to augment
CVloo stability with stability of the expected error and stability of the empirical error to define a new notion of stability, CVEEEloo stability.
This makes clear that Poggio et al.'s approach addresses the learning machinery, no longer just the space of hypotheses. Yet, they do not take the free parameters of the method into account. We conclude that their proposed approach still remains an uncritical one. Thus I would consider such a model as not completely trustworthy. Of course, Poggio et al. are definitely pointing in the right direction. We recognize a move away from naive realism and positivism, towards a critical methodology of the conditional. Maybe philosophy and the natural sciences will find common ground again by riding the information tiger.
Checking the stability of the learning procedure leads to a methodology that we called “data experiments” elsewhere. The data experiments do NOT explore the space of hypotheses, at least not directly. Instead they create a map for all possible models. In other words, instead of just asking about predictability we now ask about the differential predictivity in the space of models.
From the perspective of a learning theory, the significance of Poggio's move can't be overestimated. Statistical learning theory (SLT) [8] explicitly assumes that a direct access to the world is possible (via the identity function, the perfectness of the model). Consequently, SLT focuses (only) on the reduction of the empirical risk. Any learning mechanism following the SLT is hence uncritical about its own limitation. SLT is interested in the predictability of the system-as-such, thereby, rather unsurprisingly, committing the mistake of pre-19th-century idealism.
The Independence Assumption
The independence assumption [I.A.], or linearity assumption, acts mainly on three different targets. The first of them is the relationship between observer and observed, while its second target is the relationship between observables. The third target finally regards the relation between individual observations. This last aspect of the I.A. is the least problematic one. We will not discuss this any further.
Yet, the first and the second one are the problematic ones. The I.A. is deeply buried into the framework of statistics and from there it made its way into the field of explorative data analysis. There it can be frequently met for instance in the geometrical operationalization of similarity, the conceptualization of observables as Cartesian dimensions or independent coefficients in systems of linear equations, or as statistical kernels in algorithms like the Support Vector Machine.
Of course, the I.A. is just one possible stance towards the treatment of observables. Yet, taking it as an assumption, we will not include any parameter into the model that reflects the dependency between observables. Hence we will never detect the most suitable hypothesis about the dependency between observables. Instead of assuming the independence of variables throughout an analysis, it would be methodologically much sounder to address the degree of dependency as a target. Linearity should not be an assumption; it should be a result of an analysis.
The linearity or independence assumption carries another assumption with it under its hood: the assumption of the homogeneity of variables. Variables, or assignates, are conceived as black-boxes, with unknown influence onto the predictive power of the model. Yet, usually they exert very different effects on the predictive power of a model.
Basically, it is very simple. The predictive power of a model depends on the positive predictive value AND the negative predictive value, of course; we may also use the closely related terms sensitivity and specificity. Accordingly, some variables contribute more to the positive predictive value, while others help to increase the negative predictive value. This easily becomes visible if we perform a detailed type-I/II error analysis. Thus, there is NO way to avoid testing those combinations explicitly, even if we assume the initial independence of variables.
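To make this concrete, a small sketch with made-up counts, computing the four quantities from a 2x2 confusion table:

```python
# Confusion counts of a binary model (illustrative numbers).
tp, fp, fn, tn = 80, 20, 10, 90

sensitivity = tp / (tp + fn)   # share of positives that are found
specificity = tn / (tn + fp)   # share of negatives that are rejected
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

# Two variables may act very differently: one may mainly raise the PPV,
# another mainly the NPV; only testing combinations reveals this.
print(sensitivity, specificity, ppv, npv)
```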
As we already mentioned above, the I.A. is just one possible stance towards the treatment of observables. Yet, its status as a methodological sine qua non that additionally is never reflected upon renders it into a metaphysical assumption. It is in fact an irrational assumption, which induces serious costs in terms of the structural richness of the results. Taken together, the independence assumption represents one of the most harmful habits in data analysis.
The Model Selection Problem
In the section “Predictability and Predictivity” above we already emphasized the importance of the switch from the space of hypotheses to the space of models. The model space unfolds as a condition of the available assignates, the size of the data set and the free parameters of the associative (“modeling”) method. The model space supports a fundamental change of attitude towards the model. Based on the denial of the apriori assumption of the independence of observables, we identified the idea of a singular best model as an ill-posed phantasm. We thus move onwards from the concept of a model as a mapping function towards ensembles of structurally heterogeneous models that together, as a distinguished population, form a habitat, a manifold in the sphere of the model space. With such a structure we no longer need to arrive at a single model.
Methods, Models, Variables
The model selection problem addresses two sets of parameters that are actually quite different from each other. Model selection should not be reduced to the treatment of the first set, of course, as it happens at least implicitly for instance in [9]. The first set refers to the variables as known from the data, sometimes also called the „predictors“. The selection of the suitable variables is the first half of the model selection problem. The second set comprises all free parameters of the method. From the methodological point of view, this second set is much more interesting than the first one. The method’s parameters are apriori conditions to the performance of the method, which additionally usually remain invisible in the results, in contrast to the selection of variables.
For associative methods like the SOM or other clustering methods, the effect of de-/selecting variables can be easily described. Just take all the objects in front of you, for instance on the table, or in your room. Now select an arbitrary purpose and assign this purpose as a degree of support to those objects. For now, we have constructed the target. Now we go “into” the objects, that is, we describe them by a range of attributes that are present in most of the objects. Dependent on the selection of a subset from these attributes we will arrive at very different groups. The groups represent the target more or less well; that is the quality of the model. Obviously, this quality differs across the various selections of attributes. It is also clear that it does not help to just use all attributes, because some of the attributes just destroy the intended order: they add noise to the model and decrease its quality.
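The thought experiment can be replayed in a few lines of Python; the clustering method (k-means), the scoring and the synthetic data are my illustrative choices. Subsets containing the two informative attributes recover the intended grouping, while noise attributes destroy it.

```python
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Objects described by 6 attributes; only the first two carry the order.
X, target = make_blobs(n_samples=300, n_features=2, centers=3, random_state=1)
noise = np.random.default_rng(1).normal(size=(300, 4))
X = np.hstack([X, noise])                 # attributes 2..5 are pure noise

for subset in combinations(range(6), 2):  # de-/selecting attributes
    labels = KMeans(n_clusters=3, n_init=10,
                    random_state=0).fit_predict(X[:, list(subset)])
    print(subset, round(adjusted_rand_score(target, labels), 2))
```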
As George observes [10], since its first formulation in the 1960s a considerable, if not large, number of proposals for dealing with the variable selection problem have been made. Although George himself seems to distinguish the two sets of parameters, throughout the discussion of the different approaches he refers only to the first set, the variables as included in the data. This is not a failure of the said author, but a problem of the statistical approach. Usually, the parameters of statistical procedures are not accessible; as with any analytic procedure, they work as they work. In contrast to Self-organizing Maps, and even to Artificial Neural Networks (ANN) or genetic procedures, analytic procedures can't be modified in order to achieve a critical usage. In some way, with their mono-bloc design they fit perfectly into the representationalist fallacy.
Thus, using statistical (or other analytic) procedures, the model selection problem consists of the variable selection problem and the method selection problem. The consequences are catastrophic: if statistical methods are used in the context of modeling, the whole statistical framework turns into a black box, because the selection of a particular method can't be justified in any respect. In contrast to that quite unfavorable situation, methods like the Self-Organizing Map provide access to any of their parameters. Data experiments are only possible with methods like the SOM or ANN. It is not the SOM or the ANN that are “black boxes”; it is the statistical framework that must be regarded as such. Precisely this is also the reason for the still ongoing quarrels about the foundations of the statistical framework. There are two parties, the frequentists and the Bayesians. Yet, both are struck by the reference class problem [11]. From our perspective, the current dogma of empirical work in science needs to be changed.
The conclusion is that statistical methods should not be used at all to describe real-world data, i.e. for the modeling of real-world processes. They are suitable only within a fully controlled setting, that is, within a data experiment. The first step in any kind of empirical analysis thus must consist of a predictive modeling that includes the model selection task.14
The Perils of Universalism
Many people dealing with the model selection task are mislead by a further irrational phantasm, caused by a mixture of idealism and positivism. This is the phantasm of the single best model for a given purpose.
Philosophers of science long ago recognized, starting with Hume and ultimately expressed by Quine, that empirical observations are underdetermined. The actual challenge posed by modeling is given by the fact of empirical underdetermination. Goodman felt obliged to construct a paradox from it. Yet, there is no paradox, there is only the phantasm of the single best model. This phantasm is a relic from the Newtonian period of science, where everybody thought the world is made by God as a miraculous machine, everything had to be well-defined, and persisting contradictions had to be rated as evil.
Secondly, this moults into the affair of (semantic) indetermination. Plainly spoken, there are never enough data. Empirical underdetermination results in the actuality of strongly diverging models, which in turn gives rise to conflicting experiences. For a given set of data, in most cases it is possible to build very different models (ceteris paribus, choosing different sets of variables) that yield the same utility, or say predictive power, as far as this predictive power can be determined by the available data sample at all. Such ceteris paribus differences will not only give rise to quite different tracks of unfolding interpretation; they are also certainly in the close vicinity of Derrida’s deconstruction.
Empirical underdetermination thus results in a second-order risk, the model selection risk. Actually, the model selection risk is the only relevant risk. We can’t change the available data, and data are always limited, sometimes just by their puniness, sometimes by the restrictions on dealing with them. Risk is not attached to objects or phenomena, because objects “are not there” before interpretation and modeling. Risk is attached only to models. Risk is a particular state of affairs, and indeed a rather fundamental one. If a particular model told us that there is an uncertainty regarding the outcome, we could take measures to deal with that uncertainty. For instance, we hedge it, or organize some other kind of insurance for it. But hedging has to rely on the estimation of the uncertainty, which depends on the expected predictive power of the model, not just the accuracy of the model given the available data from a limited sample.
Different, but equivalent selections of variables can be used to create a group of models as “experts” for a given decision task. Yet, the selection of such “experts” is not determinable on the basis of the given data alone. Instead, further knowledge about the relation of the variables to further contexts or targets needs to be consulted.
Universalism is usually unjustifiable, and claiming it nevertheless usually comes at huge costs, caused by undetectable blindnesses once we accept it. In contemporary empiricism, universalism, and the respective blindness, is abundant also with regard to the role of the variables. What I am talking about here is context, mediality and individuality, which, from a more traditional formal perspective, is often approximated by conditionality. Yet, it becomes more and more clear that the Bayesian mechanisms are not sufficient to cover the complexity of the concept of variables. Regarding current developments in the field of probability theory I would like to refer to Brian Weatherson, who favors and develops the so-called dynamic Keynesian models of uncertainty [10]. Yet, we regard this only as a transitional theory, despite the fact that it will have a strong impact on the way scientists will handle empirical data.
The mediating individuality of observables (as deliberately chosen assignates, of course) is easy to observe once we drop the universalism qua independence of variables. Concerning variables, universalism manifests in an indistinguishability of the choices made to establish the assignates with regard to their effect on the system of preferences. Some criterion C will induce the putative objects as distinguished ones only if another assignate A has pre-sorted them. Yet, it would be a simplification to consider the situation in the Bayesian way as P(C|A). The problem with it is that we can’t say anything about the condition itself. Yet, we need to “play” (actually not “control”) with the conditionability, the inner structure of these conditions. As with the “relation,” which we already generalized into randolations, making it thereby measurable, we also have to go into the condition itself in order to defeat idealism even on the structural level. An appropriate perspective onto variables would hence treat them as a kind of media. This mediality is not externalizable, though, since observables themselves precipitate from the mediality, then as assignates.
What we can experience here is nothing else than the first advents of a real post-modernist world, an era where we emancipate from the compulsive apriori of independence (this does not deny, of course, its important role in the modernist era since Descartes).
Optimizing a model means to select a combination of suitably valued parameters such that the preferences of the users in terms of risk and implied costs are served best. The model selection problem is thus the link between optimization problems, learning tasks and predictive modeling. There are indeed countless procedures for optimization. Yet, the optimization task in the context of model selection is faced with a particular challenge: its mere size. George begins his article in the following way:
A distinguishing feature of variable selection problems is their enormous size. Even with moderate values of p, computing characteristics for all 2^p models is prohibitively expensive and some reduction of the model space is needed.
Assume for instance a data set that comprises 50 variables. From that, 2^50 ≈ 1.13e15 models are possible. Assume further that we could test 10‘000 models per second; then we would still need more than 3‘500 years to check all models. Usually, however, building a classifier on a real-world problem takes more than 10 seconds, which would result in about 3.6e8 years in the case of 50 variables. And there are many instances where one is faced with many more variables, typically 100+, sometimes going even into the thousands. That’s what George means by „prohibitively“.
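The arithmetic can be checked directly:

```python
# Back-of-envelope calculation for the size of the model space (2^p subsets).
p = 50
n_models = 2 ** p                       # about 1.13e15 candidate variable subsets
seconds_per_year = 365.25 * 86400

print(n_models / 1e4 / seconds_per_year)   # ~3.6e3 years at 10'000 models/s
print(n_models * 10 / seconds_per_year)    # ~3.6e8 years at 10 s per model
```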
There are many proposals to deal with that challenge. All of them fall into three classes: they use either (1) some information-theoretic measure (AIC, BIC, CIC etc. [11]), or (2) likelihood estimators, i.e. they conceive of the parameters themselves as random variables, or (3) they are based on probabilistic measures established upon validation procedures. Particularly the instances from the first two of those classes are hit by the linearity and/or the independence assumption, and also by unjustified universalism. Of course, linearity should not be an assumption, it should be a result, as we argued above. Hence, there is no way to avoid the explicit calculation of models.
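For orientation, a minimal sketch of the first class, i.e. the information-theoretic criteria criticized here; the formulas are the standard textbook definitions, and the code is illustrative only:

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: penalizes the parameter count linearly."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: the penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_likelihood

# Both are computed per candidate model and the minimum is selected,
# which presupposes exactly the assumptions criticized in the text.
```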
Given the vast number of combinations of symbols, it appears straightforward to conceive of the model selection problem from an evolutionary perspective. Evolution always creates appropriate and suitable solutions from the available „evolutionary model space“. That space is of size 2^30‘000 in the case of humans, a „much“ larger number than the number of species ever existent on this planet. Not a single viable configuration could have been found by pure chance. Genetics-based alignment and navigation through the model space is much more effective than chance. Hence, the so-called genetic algorithms might appear on the radar as the method of choice.
Genetics, revisited
Unfortunately, for the variable selection problem genetic algorithms15 are not suitable. The main reason for this is again the expensive calculation of single models. In order to set up the genetic procedure, one needs at least 500 instances to form the initial population. Any solution for the variable selection problem should arrive at a useful solution with fewer than 200 explicitly calculated models. The great advantage of genetic algorithms is their capability to deal with solution spaces that contain local extrema. They can handle even solution spaces that are inhomogeneously rugged, simply for the reason that recombination in the realm of the symbolic does not care about numerical gradients and criteria. Genetic procedures are based on combinations of symbolic encodings. The continuous switch between the symbolic (encoding) and the numerical (effect) is nothing else than the precursor of the separation between genotypes and phenotypes, without which there would not be even simple forms of biological life.
For that reason we developed a specialized instantiation of the evolutionary approach (implemented in SomFluid). Described very briefly we can say that we use evolutionary weights as efficient estimators of the maximum likelihood of parameters. The estimates are derived from explicitly calculated models that vary (mostly, but not necessarily ceteris paribus) with respect to the used variables. As such estimates, they influence the further course of the exploration of the model space in a probabilistic manner. From the perspective of the evolutionary process, these estimates represent the contribution of the respective parameter to the overall fitness of the model. They also form a kind of long-term memory within the process, something like a probabilistic genome. The short-term memory in this evolutionary process is represented by the intensional profiles of the nodes in the SOM.
For the first initializing step, the evolutionary estimates can themselves be estimated by linear procedures like PCA, or by non-parametric procedures (Kruskal-Wallis, Mann-Whitney, etc.), and are available after only a few explicitly calculated models (model here means „ceteris paribus selection of variables“).
These evolutionary weights reflect the changes of the predictive power of the model when adding variables to, or removing them from, the model. If the quality of the model improves, the evolutionary weight increases a bit, and vice versa. In other words, not the apriori parameters of the model are considered, but just the effect of the parameters. The procedure is an approximating repetition: fix the parameters of the model (method-specific settings, sampling, variables), calculate the model, and record the change of the predictive power as compared to the previous model.
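The SomFluid implementation itself is not reproduced here; the following is merely a schematic sketch of the loop just described, with all names hypothetical. A full implementation would additionally bias the choice of the toggled variable by the weights themselves:

```python
import random

def explore(variables, build_model, n_rounds=200, step=0.05):
    """Schematic evolutionary-weight loop as described in the text.

    variables   -- list of variable names.
    build_model -- callable: subset (sorted list) -> predictive power in [0, 1],
                   i.e. one explicitly calculated model per call.
    The weights act as a probabilistic long-term memory of the exploration.
    """
    weights = {v: 0.5 for v in variables}              # neutral initial estimates
    subset = set(random.sample(variables, k=max(2, len(variables) // 2)))
    best = build_model(sorted(subset))

    for _ in range(n_rounds):
        v = random.choice(variables)                   # toggle one variable (c.p.)
        trial = subset ^ {v}
        delta = build_model(sorted(trial)) - best
        # credit or blame the toggled variable for the change in power
        direction = 1 if v in trial else -1            # added vs. removed
        sign = 1 if delta > 0 else -1 if delta < 0 else 0
        weights[v] = min(1.0, max(0.0, weights[v] + step * direction * sign))
        if delta > 0:                                  # keep improvements only
            subset, best = trial, best + delta
    return weights, subset, best
```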
Upon the probabilistic genome of evolutionary weights there are many different ways one could take to implement the “evo-devo” mechanisms, be it the issue of how to handle the population (e.g. mixing genomes, aspects of virtual ecology, etc.), or the translational mechanisms, so to speak the “physiologies” that are used to proceed from the genome to an actual phenotype.
Since many different combinations are being calculated, the evolutionary weight represents the expectable contribution of a variable to the predictive power of the model, under whatsoever selection of variables that constitutes a model. Usually, a variable will not improve the quality of the model irrespective of the context. Yet, if a variable indeed did so, we would not only say that its evolutionary weight equals 1, we could also conclude that this variable is a so-called confounder. Including a confounder into a model means that we use information about the target which will not be available when applying the model for the classification of new data; hence the model will fail disastrously. Usually, and that’s just a further benefit of dropping the independence-universalism assumption, it is not possible for a procedure to identify confounders by itself. It is also clear that the capability to do so is one of the cornerstones of autonomous learning, which includes the capability to set up the learning task.
Noise, and Noise
Optimization raises its own follow-up problems, of course. The most salient of these is so-called overfitting. This means that the model gets suitably fitted to the available observations by including a large number of parameters and variables, but it will return wrong predictions when used on data that are even only slightly different from the observations used for learning and estimating the parameters of the model. The model then represents noise: random variations without predictive value.
As described above, Poggio believes that his criterion of stability overcomes the defects of the model as a generalization from observations. Poggio might be too optimistic, though, since his method still remains confined to the available observations.
In this situation, we apply a methodological trick. The trick consists in turning the problem into a target of investigation, which ultimately translates the problem into an appropriate rule. In this sense, we consider noise not as a problem, but as a tool.
Technically, we destroy the relevance of the differences between the observations by adding noise of a particular characteristic. If we add a small amount of normally distributed noise, probably nothing will change, but if we add a lot of noise, perhaps even of secondarily changing distribution, it will become simply impossible to create a stable model at all. The scientific approach is to describe the dependency between those two unknowns, so to speak, to set up a differential between the noise (a model for the unknown) and the model (of the unknown). The rest is straightforward: create various data sets that have been changed by imposing different amounts of noise of a known structure, and plot the predictive power against the amount of noise. This technique can be combined with surrogating the actual observations via a Cholesky decomposition.
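A sketch of such a noise experiment, under assumptions of my own (the data X, y, the scoring callable, and all names are hypothetical):

```python
import numpy as np

def noise_profile(X, y, fit_and_score, noise_levels, n_repeats=5, seed=0):
    """Predictive power as a function of imposed noise of known structure.

    fit_and_score -- callable (X, y) -> predictive power, e.g. a validated
                     sensitivity; one explicitly calculated model per call.
    Returns the mean score per noise level; plotted against noise_levels,
    a steep decay indicates a fragile model, a flat profile a stable one.
    """
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0, keepdims=True)        # noise relative to each variable
    profile = []
    for level in noise_levels:
        scores = [fit_and_score(X + rng.normal(0, level, X.shape) * scale, y)
                  for _ in range(n_repeats)]
        profile.append(np.mean(scores))
    return profile
```

The Cholesky-based surrogating mentioned above would, analogously, generate synthetic observations with the same covariance structure as the originals, e.g. by applying np.linalg.cholesky of the correlation matrix to white noise.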
From all available models, those are then preferred that combine a suitable predictive power with a suitable degree of stability against noise.
In this section we have dealt with the problematics of selecting a suitable subset from all available observables (neglecting for the time being that model selection involves the method’s parameters, too). Since we mostly have more observables at our disposal than we actually presume to need, the task could simply be described as simplification, aka Occam’s Razor. Yet, it would be terribly naive to first assume linearity and then to select the “most parsimonious” model. It is even cruel to state [9, p.1]:
It is said that Einstein once said
Make things as simple as possible, but not simpler.
I hope that I succeeded in providing some valuable hints for accomplishing that task, which above all is not a quite simple one. (etc.etc. :)
Describing Classifiers
The gold standard for describing classifiers is believed to be the Receiver Operating Characteristic, or short, ROC. Particularly, the area under the curve is compared across models (classifiers). Figure 1 demonstrates the mechanics of the ROC plot.
Figure 1: Basic characteristics of the ROC curve (reproduced from Wikipedia)
Figure 2: Realistic ROC curves, though these are typical for approaches that are NOT based on sub-group structures or ensembles (for instance ANN or logistic regression). Note that models should not be selected on the basis of the area under the curve. Instead, the true positive rate (sensitivity) at a false positive rate FPR=0 should be used for that. As a further criterion, indicating the stability of the model, one could use the slope of the curve at FPR=0.
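A minimal sketch of this selection criterion, using scikit-learn’s roc_curve (the helper names are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_zero_fpr(y_true, y_score):
    """True positive rate at a false positive rate of 0 (leftmost ROC point)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return tpr[fpr == 0].max()       # best sensitivity achievable without FPs

def initial_slope(y_true, y_score, eps=0.01):
    """Rough slope of the ROC curve near FPR=0, as a stability indicator."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return np.interp(eps, fpr, tpr) / eps
```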
Utilization of Information
There is still another harmful aspect of the universalistic stance in data analysis as compared to a pragmatic stance. This aspect considers the „reach“ of the models we are going to build.
Let us assume that we accept a sensitivity of approx. 80%, but expect a specificity of >99%. In other words, the costs for false positives (FP) are defined as very high, while the costs for false negatives (FN, unrecognized preferred outcomes) are relatively low. The ratio of error costs, in short the error cost ratio err(FP)/err(FN), is high.
Table 1a: A Confusion matrix for a quite performant classifier.
Symbols: test=model; TP=true positives; FP=false positives; FN=false negatives; TN=true negatives; ppv=positive predictive value, npv=negative predictive value. FP corresponds to the type-I error (rejecting the null hypothesis when it is true), while FN corresponds to the type-II error (accepting the null hypothesis when it is false); FN/(TP+FN) is the type-II error rate, usually labeled β, where (1-β) is called the “power” of the test or model.
              condition Pos    condition Neg
test Pos      100 (TP)           3 (FP)
test Neg       28 (FN)        1120 (TN)
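The derived measures for Table 1a can be checked directly:

```python
TP, FP, FN, TN = 100, 3, 28, 1120

sensitivity = TP / (TP + FN)   # 0.781 -> approx. 80%
specificity = TN / (TN + FP)   # 0.997 -> > 99%
ppv = TP / (TP + FP)           # 0.971
npv = TN / (TN + FN)           # 0.976

print(sensitivity, specificity, ppv, npv)
```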
Let us further assume that there are observations of our preferred outcome that we can‘t distinguish well from cases of the opposite outcome, which we try to avoid. They are too similar, and due to that similarity they form a separate group in our Self-Organizing Map. Let us assume that the specificity of these clusters is only 86% and the sensitivity is 94%.
Table 1b: Confusion matrix describing a sub-group formed inside the SOM, for instance as it could be derived from the extension of a “node”.
              condition Pos        condition Neg
test Pos       0 (50)  TP           0 (39)  FP       ppv: 0.0 (0.56)
test Neg      50 (0)   FN          39 (0)   TN       npv: 0.44 (1.0)
              sens: 0.0 (1.0)      spec: 1.0 (0.0)

Figures outside parentheses refer to denial of the sub-group as a contributor to positive predictions, figures in parentheses to its acceptance (see the note below).
Yet, this cluster would not satisfy our risk attitude. If we used the SOM as a model for the classification of new observations, and a new observation fell into that group (by means of similarity considerations), the implied risk would violate our attitude. Hence, we have to exclude such clusters. In the ROC, this cluster represents a value further to the right on the false-positive-rate (X-) axis, i.e. towards lower specificity.
Note that in the case of acceptance of the sub-group as a contributor to positive predictions, the false negatives are always 0 a posteriori, and in the case of denial the true positives are always set to 0 (and accordingly the figures for the negative condition).
There are now several important points to that, which are related to each other. Actually, we should be interested only in such sub-groups with specificity close to 1, such that our risk attitude is well served. [13] Likewise, we should not try to optimize the quality of the model across the whole range of the ROC, but only for the subgroups with acceptable error cost ratio. In other words, we use the available information in a very specific manner.
As a consequence, we have to set the ECR before calculating the model. Setting the ECR after the selection of a model results in a waste of information, time and money. For this reason it is strongly indicated to use methods that are based on building a representation by sub-groups. This again rules out statistical methods as they always take into account all available data. Zytkow calls such methods empirically empty [14].
The possibility to build models of a high specificity is a huge benefit of sub-group based methods like the SOM.16 To understand this better let us assume we have a SOM-based model with the following overall confusion matrix.
              condition Pos    condition Neg
test Pos
test Neg
That is, the model recognizes around 35% of all preferred outcomes. It does so on the basis of sub-groups that all satisfy the respective ECR criterion. Thus we know that the implied risk of any classification is very low too. In other words, such models recognize whether it is allowed to apply them. If we apply them and get a positive answer, we also know that it is justified to apply them. Once the model identifies a preferred outcome, it does so without risk. This lets us miss opportunities, but we won’t be trapped by false expectations. Such models we could call auto-consistent.
In a practical project aiming at an improvement of the post-surgery risk classification of patients (n>12’000) in a hospital, we were able to demonstrate that the achievable validated rate of implied risk can be lower than 1e-4 [15]. Such a low rate is not achievable by statistical methods, simply because there are far too few incidents of wrong classifications. The subjective cut-off points in logistic regression are not quite suitable for such tasks.
At the same time, and that’s probably even more important, we get a suitable segmentation of the observations. All observations that can be identified as positive do not suffer from any risk. Thus, we can investigate the structure of the data for these observations, e.g. as particular relationships between variables, such as correlations etc. But, hey, that job is already done by the selection of the appropriate set of variables! In other words, we not only have a good model, we also have found the best possibility for a multi-variate reduction of noise, with full consideration of the dependencies between variables. Such models can be conceived as a reversed factorial experimental design.
The property of auto-consistency offers a further benefit as it is scalable, that is, “auto-consistent” is not a categorical, or symbolic, assignment. It can be easily measured as sensitivity under the condition of specificity > 1-ε, ε→0. Thus, we may use it as a random measure (it can be described by its density) or as a scale of reference in case of any selection task among sub-populations of models. Additionally, if the exploration of the model space does not succeed in finding a model of a suitable degree of auto-consistency, we may conclude that the quality of the data is not sufficient. Data quality is a function of properly selected variables (predictors) and reproducible measurement. We know of no other approach that would be able to inform about the quality of the data without referring to extensive contextual “knowledge”. Needless to say that such knowledge is never available and encodable.
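A sketch of how this scalable measure could be computed; the sub-group representation and all names are assumptions of this sketch, not the author’s implementation:

```python
def auto_consistency(subgroups, eps=1e-3):
    """Scalable auto-consistency: sensitivity subject to specificity > 1 - eps.

    subgroups -- list of (n_pos, n_neg) extension counts, one per SOM
                 sub-group (node). Sub-groups are admitted as positive
                 predictors, purest first, as long as the accumulated
                 false-positive rate stays below eps; all other positives
                 count as misses (the model stays silent on them).
    """
    total_pos = sum(p for p, _ in subgroups) or 1
    total_neg = sum(n for _, n in subgroups) or 1
    tp = fp = 0
    for p, n in sorted(subgroups, key=lambda g: g[1] / (g[0] + g[1] or 1)):
        if (fp + n) / total_neg <= eps:      # keeps specificity > 1 - eps
            tp, fp = tp + p, fp + n
    return tp / total_pos                    # sensitivity under the constraint
```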
There are only weak conditions that need to be satisfied. For instance, the same selection of variables needs to be used within a single model for all similarity considerations. This rules out all ensemble methods insofar as different selections of variables are used for each item in the ensemble, for instance decision tree methods (a SOM with its sub-groups is already “ensemble-like”, yet all sub-groups are affected by the same selection of variables). It is further required to use a method that performs the transition from extensions to intensions on a sub-group level, which rules out analytic methods, and even Artificial Neural Networks (ANN): the way to establish auto-consistent models is not open to ANN. Further, the error cost ratio must be set before calculating the model, and the models have to be calculated explicitly, which removes linear methods from the list, such as Support Vector Machines with linear kernels (regression, ANN, Bayes). If we want to access the rich harvest of auto-consistent models, we have to drop the independence hypothesis and refute any kind of universalism. But these costs are rather low, indeed.
Observations and Probabilities
Here we developed a particular perspective onto the transition from observations to intensional representations. There are of course some interesting relationships of our point of view to the various possibilities of “interpreting” probability (see [16] for a comprehensive list of “interpretations” and interesting references). We also provide a new answer to Hume’s problem of induction.
Hume posed the question of how often we should observe a fact until we could consider it as lawful. This question, called the “problem of induction”, points in the wrong direction and will trigger only irrelevant answers. Hume, still living in times of absolute monarchism, in a society deeply structured by religious beliefs, established a shortcut between the frequency of an observation and its propositional representation. The actual question, however, is how to achieve what we call an “observation”.
In very simple, almost artificial cases like the die there is nothing to interpret. The die and its values are already symbols. It is in some way inadequate to conceive of a die or of dicing as an empirical issue. In fact, we know beforehand what could happen. The universe of the die consists of precisely 6 singular points.
Another extreme are so-called single-case observations of structurally rich events, or processes. An event, or a setting, should be called structurally rich if there are (1) many different outcomes, and (2) many possible assignates to describe the event or the process. Such events or processes will not produce any outcome that could be expected from symbolic or formal considerations. Obviously, it is not possible to assign a relative frequency to a unique, a singular, or a non-repeatable event. Unfortunately, however, as Hájek points out [16], any actual sequence can be conceived of as a singular event.
The important point now is that single-case observations are also not sufficiently describable as an empirical issue. Ascribing propensities to objects-in-the-world demands a wealth of modeling activities and classifications, which have to be completed apriori to the observation under scrutiny. So-called single-case propensities are not a problem of probabilistic theory, but one of the application of intensional classes and their usage as means for organizing one’s own expectations. As we said earlier, probability as it is used in probability theory is not a concept that could be applied meaningfully to observations, where observations are conceived of as primitive “givens”. Probabilities are meaningful only in the closed world of available subjectively held concepts.
We thus have to distinguish between two areas of application for the concept of probability: the observational part, where we build up classes, and the anticipatory part, where we are interested in a match of expectations and actual outcomes. The problem obviously arises by mixing them through the notion of causality.17 Yet, there is absolutely no necessity between the two areas. The concept of risk probably allows for a resolution of the problems, since risk always implies a preceding choice of a cost function, which necessarily is subjective. Yet, the cost function and the risk implied by a classification model are also the pivot for any kind of negotiation, whether this takes place on a material, hence evolutionary, scale, or within a societal context.
The interesting, if not salient point is that the subjectively available intensional descriptions and classes depend on one’s risk attitude. We may observe the same thing only if we have acquired the same system of related classes and the same habits of using them. Only if we apply extreme risk aversion will we achieve a common understanding about facts (in the Wittgensteinian sense, see above). This, then, is called science, for instance. Yet, it still remains a misunderstanding to equate this common understanding with objects as objects-out-there.
The problem of induction thus must be considered a seriously ill-posed problem. It is a problem only for idealists (who then solve it in a weird way), or for realists that are naive about the epistemological conditions of acting in the world. Our proposal for the transition from observations to descriptions is based on probabilism on both sides; yet, on either side there is a distinct flavor of probabilism.
Finally, a methodological remark shall be allowed, closely related to what we already described in the section about “noise” above. The perspective onto “making experience” that we have been proposing here demonstrates a significant twist.
Above we already mentioned Alan Hájek’s diagnosis that the frequentist and the Bayesian interpretations of probability suffer from the reference class problem. In this section we extended Hájek’s concerns to the concept of propensity. Yet, if the problem shows a high prevalence we should not conceive of it as a hurdle, but should try to treat it dynamically, as a rule. The reference class is only a problem as long as (1) either the actual class is required as an external constant, or (2) the abstract concept of the class is treated as a fixed point. According to the rule of Lagrange-Deleuze, any constant can be rewritten into a procedure (read: rules) and less problematic constants. Constants, or fixed points, on a higher abstract level are less problematic, because the empirically grounded semantics vanishes.
Indeed, the problem of the reference class simply disappears if we take the concept of the class, together with all the related issues of modeling, as the embedding frame, the condition under which any notion of probability can make sense at all. The classes themselves are results of “rule-following”, which admittedly is blind, but whose parameters are also transparently accessible. In this way, probabilistic interpretation is always performed in a universe that is closed and in principle fully mapped. We need the probabilistic methods just because that universe is of a huge size. In other words, the space of models is a Laplacean Universe.
Since statistical methods and similar interpretations of probability are analytical techniques, our proposal for a re-positioning of statistics into such a Laplacean Universe is also well aligned with the general habit of Wittgenstein’s philosophy, which puts practiced logic (quasi-logic) second to performance.
The disappearance of the reference class problem should be expected if our relations to the world are always mediated through the activity with abstract, epistemic modeling. The usage of probability theory as a “conceptual game” aiming for sharing diverging attitudes towards risks appears as nothing else than just a particular style of modeling, though admittedly one that offers a reasonable rate of success.
The Result of Modeling
It should be clear by now that the result of modeling is much more than just a single predictive model. Regardless of whether we take the scientific perspective or a philosophical vantage point, we need to include operationalizations of the conditions of the model which reach beyond the standard empirical risk expressed as “false classification”. Appropriate modeling provides not only a set of models with well-estimated stability and of different structures; a further goal is to establish models that are auto-consistent.
If the modeling employs a method that exposes its parameters, we can even avoid the „method hell“; that is, the results are not only reliable, they are also valid.
It is clear that only auto-consistent models are useful for drawing conclusions and in building up experience. If variables are just weighted without actually being removed, as for instance in approaches like the Support Vector Machines, the resulting methods are not auto-consistent. Hence, there is no way towards a propositional description of the observed process.
Given the population of explicitly tested models it is also possible to describe the differential contribution of any variable to the predictive power of a model. The assumption of neutrality or symmetry of that contribution, as it is for instance applied in statistical learning, is a simplistic perspective onto the variables and the system represented by them.
In this essay we described some technical aspects of the capability to experience. These technical aspects link the possibility for experience to the primacy of interpretation that gets actualized as the techné of anticipatory, i.e. predictive or diagnostic, modeling. This techné does not address the creation or derivation of a particular model by means of employing one or several methods; the process of building a model could be fully automated anyway. Quite differently, it focuses on the parametrization, validation, evaluation and application of models, particularly with respect to the task of extracting a rule from observational data. This extraction of rules must not be conceived as a “drawing of conclusions” guided by logic. It is a constructive activity.
The salient topics in this practice are the selection of models and the description of the classifiers. We emphasized that the goal of modeling should not be conceived as the task of finding a single best model.
Methods like the Self-Organizing Map, which are based on a sub-group segmentation of the data, can be used to create auto-consistent models, which also represent an optimally de-noised subset of the measured data. This data sample could be conceived as if it had been found by a factorial experimental design. Thus, auto-consistent models also provide quite valuable hints for the setup of the Taguchi method of quality assurance, which could be seen as a precipitation of organizational experience.
In the context of exploratory investigation of observational data one first has to determine the suitable observables (variables, predictors) and, by means of the same model(s), the suitable segment of observations before drawing domain-specific conclusions. Such conclusions are often expressed as contrasts in location or variation. In the context of designed experiments as e.g. in pharmaceutical research one first has to check the quality of the data, then to de-noise the data by removing outliers by means of the same data segmentation technique, before again null hypotheses about expected contrasts could be tested.
As such, auto-consistent models provide a perfect basis for learning and for extending the “experience” of an epistemic individual. According to our proposals, this experience does not suffer from the various problems of traditional Humean empiricism (the induction problem), or contemporary (defective) theories of probabilism (mainly the problem of reference classes). Nevertheless, our approach remains fully empirico-epistemological.
1. Like many other philosophers, Lyotard emphasized the indisputability of an attention for the incidental, not as a perception-as, but as an aisthesis, as a forming impression. See: Dieter Mersch, ›Geschieht es?‹ Ereignisdenken bei Derrida und Lyotard [“Does it happen?” Thinking the event in Derrida and Lyotard], available online, last accessed May 1st, 2012. Another recent source arguing in the same direction is John McDowell’s “Mind and World” (1996).
2. The label “representationalism” has been used by Dreyfus in his critique of symbolic AI, the thesis of the “computational mind” and any similar approach that assumes (1) that the meaning of symbols is given by their reference to objects, and (2) that this meaning is independent of actual thoughts, see also [2].
3. It would be inadequate to represent such a two-fold “almost” dichotomy as a 2-axis coordinate system, even if such a representation were only a metaphorical one; rather, it should be conceived as a tetrahedral space, given by two vectors passing nearby without intersecting each other. Additionally, the structure of that space must not be expected to be flat; it looks much more like an inhomogeneous hyperbolic space.
4. “Experiencibility” here not understood as an individual capability to witness or receptivity, but as the abstract possibility to experience.
5. In the same way we reject Husserl’s phenomenology. Phenomena, much like the objects of positivism or the thing-as-such of idealism, are not “out there”; they are results of our experiencibility. Of course, we do not deny that there is a materiality that is independent from our epistemic acts, but that does not explain or describe anything. In other words, we propose to go subjective (see also [3]).
6. Again, mechanism here should not be misunderstood as a single deterministic process as it could be represented by a (trivial) machine.
7. This question refers to the famous passage in the Tractatus that “The world is everything that is the case.“ Cases, in the terminology of the Tractatus, are facts as the existence of states of affairs. We may say, there are certain relations. In the Tractatus, Wittgenstein excluded relations that could not be explicated by the use of symbols, as expressed by the 7th proposition: „Whereof one cannot speak, thereof one must be silent.“
8. We must step outside of language in order to see the working of language.
9. We just have to repeat it again, since many people develop misunderstandings here. We do not deny the material aspects of the world.
10. “individual” is quite misleading here, since our brain and even our mind is not in-divisible in the atomistic sense.
11. thus, it is also not reasonable to claim the existence of a somehow dualistic language, one part being without ambiguities and vagueness, the other one establishing ambiguity deliberately by means of metaphors. Lakoff & Johnson started from a similar idea, yet they developed it into a direction that is fundamentally incompatible with our views in many ways.
12. Of course, the borders are not well defined here.
13. “predictive power” could be operationalized in quite different ways, of course….
14. Correlational analysis is not a candidate to resolve this problem, since it can’t be used to segment the data or to identify groups in the data. Correlational analysis should be performed only subsequent to a segmentation of the data.
15. The so-called genetic algorithms are not algorithms in the narrow sense, since there is no well-defined stopping rule.
16. It is important to recognize that Artificial Neural Networks are NOT belonging to the family of sub-group based methods.
17. Here another circle closes: the concept of causality can’t be used in a meaningful way without considering its close amalgamation with the concept of information, as we argued here. For this reason, Judea Pearl’s approach towards causality [17] is seriously defective, because he completely neglects the epistemic issue of information.
• [1] Geoffrey C. Bowker, Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. MIT Press, Boston 1999.
• [2] William Croft, Esther J. Wood, Construal operations in linguistics and artificial intelligence. in: Liliana Albertazzi (ed.), Meaning and Cognition. Benjamins Publ, Amsterdam 2000.
• [3] Wilhelm Vossenkuhl. Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin 2009.
• [4] Douglas Hofstadter, Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought. Basic Books, New York 1996.
• [5] Nicholas F. Gier, Wittgenstein and Deconstruction, Review of Contemporary Philosophy 6 (2007); first publ. in Nov 1989. Online available.
• [6] Henk L. Mulder, B.F.B. van de Velde-Schlick (eds.), Moritz Schlick, Philosophical Papers, Volume II: (1925-1936), Series: Vienna Circle Collection, Vol. 11b, Springer, Berlin New York 1979. with Google Books
• [7] Tomaso Poggio, Ryan Rifkin, Sayan Mukherjee & Partha Niyogi (2004). General conditions for predictivity in learning theory. Nature 428, 419-422.
• [8] Vladimir Vapnik, The Nature of Statistical Learning Theory (Information Science and Statistics). Springer 2000.
• [9] Herman J. Bierens (2006). Information Criteria and Model Selection. Lecture notes, mimeo, Pennsylvania State University. available online.
• [10] Brian Weatherson (2007). The Bayesian and the Dogmatist. Aristotelian Society Vol. 107, Issue 1pt2, 169–185. draft available online
• [11] Edward I. George (2000). The Variable Selection Problem. J Am Stat Assoc, Vol. 95 (452), pp. 1304-1308. available online, as research paper.
• [12] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156(3): 563-585. draft available online.
• [13] Lori E. Dodd, Margaret S. Pepe (2003). Partial AUC Estimation and Regression. Biometrics 59(3), 614–623.
• [14] Zytkow J. (1997). Knowledge=concepts: a harmful equation. 3rd Conference on Knowledge Discovery in Databases, Proceedings of KDD-97, p. 104-109. AAAI Press.
• [15] Thomas Kaufmann, Klaus Wassermann, Guido Schüpfer (2007). Beta error free risk identification based on SPELA, a neuro-evolution method. presented at ESA 2007.
• [16] Alan Hájek, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), Edward N. Zalta (ed.), available online, or forthcoming.
• [17] Judea Pearl, Causality – Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, Cambridge 2008 [2000].
PowerPedia:Quantum Ring Theory
Quantum Ring Theory (QRT) is a theory developed by Wladimir Guglinski between 1993 and 2004, published in book form by the Bäuu Institute Press in August 2006, two years after Dr. Eugene Mallove had encouraged Guglinski to collect his several papers in a book. The book presents 24 scientific papers, in which the author argues that some principles and models of Modern Physics must be replaced.
At the atomic level, QRT follows the interpretation advocated by Schrödinger regarding the successes of Bohr’s hydrogen atom. Schrödinger stated that “It is difficult to believe that this result is merely an accidental mathematical consequence of the quantum conditions, and has no deeper physical meaning”.1 He believed that Bohr’s successes were a consequence of unknown mechanisms, and he tried to find them.
That’s why Schrödinger discovered the zitterbewegung in Dirac’s equation, and interpreted it as a helical trajectory of the electron.
A rival interpretation was supported by Heisenberg, who believed that Theoretical Physics cannot be developed in dependence on the discovery of “metaphysical” mechanisms not suitable to be measured (observed) in experiments. He stated that only “observable” variables are of interest to science.
The interpretation of the zitterbewegung from the Heisenbergian viewpoint was proposed in 2004 by Krekora et al.2
So, we realize that in the 20th Century there was a divergence between Schrödinger and Heisenberg on the question of what the aim of the scientific method is.
It seems that the confrontation between the Schrödingerian and Heisenbergian viewpoints will be decided by the cold fusion experiments. See: Cold fusion theories.
Some principles and models of Modern Physics seemed very strange to the author. That’s why in 1991 he wrote a book in which he proposed a new theory according to which the neutron must be composed of proton and electron (n=p+e), and space must be filled with the aether, which would be responsible for the equilibrium of the electrons within the electrospheres of the atoms. He submitted the book to several publishing houses in Brazil, but no editor had interest in it.
Nine years later, in 2001 Guglinski discovered that Don Borghi et al. had published a paper3 in 1993, describing an experiment that confirms the model n=p+e.
In 1992 he registered the manuscript of his book in the Brazilian National Library, where the typewritten manuscript remains in the archives to this day.
Because no publisher accepted his book for publication, he decided to prove that his new theory was correct. That’s why in 1993, as a self-taught student, he started to study in depth the foundations of Quantum Mechanics, in order to prove that some principles of QM must be replaced (see Cold fusion theories).
For the proposal of a new theory, he considered as a point of departure the following seven fundamental points:
1- The accepted model of the neutron seemed impossible to work, since it violates two fundamental laws of Physics: Newton’s action-reaction law and the energy-matter conservation law. So Guglinski felt that it would be necessary to prove that from the model n=p+e one could explain all the properties of the neutron inferred from the experiments.
2- Another question that worried him was the absence of the aether in the current theories of Modern Physics. Something was wrong with Einstein’s interpretation. A new theory replacing empty space by a space filled with the aether would be required.
3- The model of the hydrogen atom in Quantum Mechanics could not be entirely correct, for several reasons:
3.1- Bohr’s model is not correct, since it is unable to explain the fine structure. However, his model has many spectacular successes: for instance, one calculates results from Bohr’s model with an accuracy that cannot be accidental, because it is impossible to consider it accidental from the laws of probability. Nevertheless, nowadays the quantum theorists claim that Bohr’s successes are accidental. Such a hypothesis is unacceptable, and one has to consider that there is something true (at least partially) in his model, while from the concepts of Quantum Mechanics one needs to consider Bohr’s model as totally wrong. But as mathematical probability makes it necessary to consider Bohr’s model at least partially correct, this implies that the QM model cannot be entirely correct.
3.2- It is hard to believe that there is no trajectory of elementary particles, as proposed in QM, since everybody can see that the electron’s trajectory exists within the fog of a cloud chamber.
3.3- The hydrogen atom of QM is undulatory. But there are phenomena that require a corpuscular model to be explained. And it’s hard to believe in the absurd principle of complementarity proposed by Bohr, according to which incompatible models must be used for explaining the phenomena. Indeed, it’s hard to believe that Nature sometimes uses a corpuscular model, and sometimes an undulatory model.
4- There is no unique nuclear model in current Nuclear Physics. There are several models, and they are incompatible. Besides, the current nuclear theory is unable to explain some nuclear properties and much of the behavior of the nuclei. It’s hard to believe that Nature works by using several incompatible models for the production of nuclear phenomena. Thus it was indispensable to look for a unique nuclear model capable of explaining all the nuclear phenomena.
5- Nowadays the theorists consider that light is a wave-particle duality, and there is no model of the photon for explaining the behavior of light. Light in Modern Physics is described by purely abstract mathematical equations, and it’s hard to believe that mathematical equations can produce physical phenomena like those produced by light. So, Guglinski felt the need of looking for a physical model of the photon, capable of generating the Maxwell equations and of reproducing theoretically all the phenomena of light, such as its duality, its polarization, etc.
6- It’s hard to believe that the wave-particle duality is a property of matter. There is an alternative solution for explaining the duality: to consider the zitterbewegung (helical trajectory) of the elementary particles. The zitterbewegung appears in Dirac’s equation of the electron, and therefore the duality can be considered as a property of the helical trajectory of the elementary particles. Such a new interpretation of the duality is used in the new hydrogen atom proposed in Quantum Ring Theory.
7- But Quantum Mechanics and the Theory of Relativity are two successful theories. And it’s hard to believe that they are completely wrong. So Guglinski felt the need of discovering why they are so successful, in spite of the fact that they cannot be entirely correct. In other words: where does the cause of the success of these two theories lie?
The answers to these questions are proposed in Quantum Ring Theory:
A new model of the neutron, n=p+e, is proposed. A reviewer of a journal wrote about the paper The Stern-Gerlach Experiment and the Helical Trajectory:
“The basic question here is: can a classical model (which postulates a trajectory for the electron) cast any light on the inner workings of the nucleus? Most physicists would respond with a resounding NO. However, it generally happens that classical models have quantum analogs and thus can prove suggestive in at least a qualitative way. For instance, without the classical Hamiltonian energy expression there would be no clue to how to write the Schrödinger equation. And the classical energy expression would not exist without trajectory pictorization. Therefore one cannot reject Guglinski’s ‘helical trajectory’ model (or similar models due to Bergman and others) out of hand as useless to physics. We don’t know what the final physics will be, if any. Moreover, Guglinski’s model may solve the problem of spin of the electron in the nucleus.”
A new hydrogen atom is proposed that reconciles Bohr’s model with the Schrödinger equation, showing why QM is successful in explaining the phenomena. The new hydrogen atom of QRT has a property unknown to the quantum theorists: there is a dilation of the aether within the electrospheres of the proton and the electron.
A model of the photon is proposed that generates the Maxwell equations and explains the behavior of light, such as its duality, its polarization, etc.
Concerning the nucleus, a nuclear model is proposed that explains all the nuclear phenomena; Guglinski also claims that the nucleus has some behavior unknown to the nuclear theorists, as for instance the Accordion-Effect.
In 1995 he tried to publish his paper New Model of Neutron in the journal Speculations in Science and Technology. The reviewer rejected it.
At the end of 1998 he submitted his paper A Model of Photon to the journal Frontier Perspectives. The paper was rejected for publication, because the reviewers of that journal are sure that the duality is a property of matter (as proposed originally by de Broglie), and they do not accept replacing the original de Broglie interpretation with a new one that considers the duality as a property of the zitterbewegung.
But the editor Nancy Kolenda sent Guglinski a copy of the journal in which Mike Carrell talks about cold fusion, and so for the first time in his life Guglinski learned of the occurrence of that phenomenon.
He sent a letter to Mike, who sent Guglinski a copy of the Infinite Energy magazine in which Elio Conte published an article4 describing his experiment. So Guglinski realized that Conte’s experiment had confirmed his new model of neutron n=p+e.
As he had earlier arrived at a new nuclear model that explains all the ordinary nuclear phenomena, a question obviously arose in his mind: would his new nuclear model be able to explain the occurrence of cold fusion?
And he had another strong reason to believe that the explanation of cold fusion would require a new theory with new fundamental principles missing in Quantum Mechanics. Indeed, he knew that current Nuclear Physics is unable to explain many properties of the nuclei. Therefore, as the theory is unable to explain ordinary phenomena, it would be hard to believe that Nuclear Physics could explain the occurrence of cold fusion, since it defies the foundations of QM.
So he started to read some papers on cold fusion experiments, in order to try to understand the occurrence of cold fusion from the viewpoint of the nuclear properties of his new nuclear model.
In 2000 his paper New Model of Neutron was published by the Journal of New Energy.
In the beginning of 2001 Guglinski discovered the existence of Borghi’s experiment. In the same year he sued two universities in Brazil, trying to oblige them to repeat the Don Borghi experiment in their laboratories. The Brazilian Constitution prescribes that the universities must support any experimental research that is in the interest of the development of science, and he used this argument to support his request. Unfortunately, the judge decided that there is no judicial support that obliges a university to perform any experiment. That was not true, because the support was given by the Brazilian Constitution. But it is known that there is a conspiracy against the prevalence of the scientific method when it defies the current theories.
In 2002 the Infinite Energy magazine published his paper What is Missing in Les Case’s Catalytic Fusion5, where he suggested some improvements to be adopted in Case’s experiment, and proposed a hypothesis on why it is often hard to get replicability in cold fusion experiments.
At the end of 2002 he submitted seven more papers to the Infinite Energy magazine.
In 2003 Dennis Letts and Dennis Cravens exhibited at ICCF-10 their experiment, in which the suggestions proposed in Guglinski’s paper published by IE in 2002 had been adopted.
In the same year he wrote the paper Letts-Cravens Experiment and the Accordion-Effect, in which he proposes that cold fusion can occur under special conditions, when there is resonance between the oscillation of a nucleus due to its accordion-effect and the oscillation of a deuteron due to the zero-point energy. The alignment of the deuterons with the nucleus by applying an external magnetic field (Letts and Cravens used a magnet) helps the resonance, which is also reinforced by a suitable frequency of an oscillating electromagnetic field (the laser used in their experiment).
In January 2004 Dr. Eugene Mallove said that Guglinski’s ideas “are intriguing and interesting”, and encouraged him to put all of his more than 20 papers into book form. Infinite Energy would advertise and sell the book.
As Dr. Mallove died in May 2004, he had to look for another publisher.
In August 2006 his theory was published in book form, entitled Quantum Ring Theory-Foundations for Cold Fusion, by the Bäuu Press.
Reviews of QRT posted on Barnes & Noble and the Bäuu Press website:
Claudio Nassif, PhD, theoretical physicist
I am the author of Symmetrical Special Relativity, whose first paper was published by the journal Pramana in July 2008 under the title 'Deformed special relativity with an invariant minimum speed and its cosmological implications'. We theoretical physicists develop theories by using mathematics, some theorems, many axioms, supporting fundamental principles, but there is no physical reality underlying our theories. Actually, one of the achievements of the 20th Century is the view that a physical reality is unattainable in Modern Physics. But Guglinski's theory supplies just such physical models to Theoretical Physics. In his theory, physical models are proposed for the photon, the fermions, the neutron, the hydrogen atom, the nucleus, and the aether, and his QRT proposes the fundamental principles from which those physical models work. My SSR and Guglinski's QRT are complementary. A future consistent agglutination of SSR and QRT will form a New Grand Unified Theory which, if confirmed by experiments, will constitute the New Physics of the 21st Century.
Nancy Kolenda, editor, Frontier Perspectives (Temple University)
In Quantum Ring Theory Guglinski presents a new theory concerning the fundamental nature of physics. Here, the author argues that the current understanding of physics does not showcase an accurate model of the world. Instead, he argues that we must consider the “aether”, a notion originally developed by Greek philosophers; by considering the nature of the “aether” and its role in physical processes, Guglinski is able to create a theory that reconciles quantum physics with the Theory of Relativity. As part of his new theory, Guglinski showcases a new model of the neutron, and this model has been confirmed by contemporary physical experiments.
Paul W. Schoening (Mechanical Engineer)
A new interpretation of elementary particles and the atom's nuclear structure. Guglinski provides an entirely new understanding of the structure and the mechanics of the atom and the atomic nucleus, which in the presented way requires the interaction with the ether to explain, for instance, the quantum weirdness of the behavior of single, isolated photons and electrons. I highly recommend it.
Naveen, A Reviewer
WHOA!!... we have a breakthrough here!!! Hi, I just came across this book 'Quantum Ring Theory' by Wladimir Guglinski and found it quite exhilarating and thrilling. The thrill is in the way Quantum Theory is being treated in this book, which is a totally new approach to physics. The proposed structure of the neutron in terms of n=p+e, the ZOOM Effect, the helical trajectory, and a completely new interpretation of DUALITY are some of the most original works of the author. I don't think I have seen any of the modern physicists as original as Wladimir. I must say that any serious physicist must go through this book, and I would be glad if some of the universities came out with funds to perform certain experiments to establish Guglinski's Quantum Ring Theory. WLADIMIR.......HATS OFF MAN!!!!!!
See also
Aether Structure for unification between gravity and electromagnetism (2015)
Cold fusion mystery finally deciphered
Physical mechanism behind the quantum entanglement
Why a new physics theory could rewrite the textbooks
Lawsuit against European Physical Journal
Latest: Directory:Magnets / PowerPedia:Quantum Ring Theory > Article:Magnetic monopole - new experiment corroborates Quantum Ring Theory - Researchers from the Helmholtz Centre Berlin, in cooperation with colleagues from Dresden, St. Andrews, La Plata and Oxford, have for the first time observed magnetic monopoles and how they emerge in a real material. They published their results in the journal Science. W. Guglinski provides commentary. (PESWiki Sept. 12, 2009)
PowerPedia:Quantum Ring Theory > Directory:Grand Unified Theories > Article:The Successor of Quantum Mechanics - Wladimir Guglinski compares his Quantum Ring Theory with Dr. Cláudio Nassif's Symmetrical Special Relativity, purporting that these "will constitute the New Physics of the 21st Century." (PESWiki August 9, 2008)
Directory:Cold Fusion > PowerPedia:Foundations for Cold Fusion - W. Guglinski argues for the need for new foundational premises for cold fusion because otherwise there is no way to surpass some theoretical troubles such as are shown in PowerPedia:Don Borghi's experiment. One such remedy is supposedly found in the Quantum Ring Theory. (PESWiki Nov. 23, 2007)
Directory:Nuclear > Directory:Cold Fusion > Article:Cold Fusion and Gamow's Paradox - Wladimir Guglinski provides an additional argument for the need for new foundations for nuclear fusion physics, saying that the existing theories are unable to explain the alpha decay of U238 ("unsatisfactorily explained by Gamow’s theory"). (PESWiki Mar. 15, 2008)
Similarity between Wave Structure of Matter and Quantum Ring Theory
PowerPedia:Successes of the Bohr atom
PowerPedia:Quantum Ring Theory at Temple University
PowerPedia:Quantum Ring Theory burnt in a Brazilian university
PowerPedia:Foundations for Cold Fusion
Heisenberg's Paradox
Article:Cold Fusion and Gamow's Paradox
PowerPedia:on the indistinguishability of Quantum Mechanics
PowerPedia:magnetic monopole - new experiment corroborates Quantum Ring Theory
PowerPedia:quantum computer will never be constructed
PowerPedia:... and Schrödinger wins the duel with Heisenberg
PowerPedia:the mystery of Andrea Rossi's catalyzer
PowerPedia:z-axis of atomic nuclei predicted in Quantum Ring Theory
PowerPedia:Mechanism for the entanglement in Gabriela’s experiment
PowerPedia:Collapse of Heisenberg’s Uncertainty and Bohr’s Complementarity
Can Quantum Mechanics be saved by Queen Elizabeth II?
Article:The Impossible Beryllium
PowerPedia:Don Borghi's experiment
PowerPedia:Cold Fusion Theories
PowerPedia:Cold fusion, Don Borghi's Experiment, and hydrogen atom
PowerPedia:Einstein and entanglement: Guglinski interviews Dr. John Stachel
PowerPedia:Are there five fundamental forces in Nature?
Repulsive gravity within the hydrogen atom
Script on the film Quantum Ring Theory:
PowerPedia:Guglinski’s Model of the Photon
PowerPedia:Guglinski on the De Broglie Paradox
PowerPedia:Demystifying the EPR Paradox
PowerPedia:Zitterbewegung Hydrogen Atom of Quantum Ring Theory
PowerPedia:New model of neutron: explanation for cold fusion
Article: How magnet motors work
Article: New nuclear model of Quantum Ring Theory corroborated by John Arrington’s experiment
Article:Quantum Field Theory is being developed in the wrong way
Directory:Quantum Particles
Site:LRP:Quantum Physics - Quantum Mechanics (Qualified and Quantified)
Site:LRP:The Quantum Potential & Overunity Asymmetric Systems
Directory:Fix the World Organization's Quantum Energy Generator
Directory:Chukanov Quantum Energy LLC
Directory:Paintable plastic solar cells using quantum dots
Reviews:Book:The Quantum Key
Article:The Successor of Quantum Mechanics
PowerPedia:Quantum Ring Theory
Article:Magnetic monopole - new experiment corroborates Quantum Ring Theory
Article:Stability of light nuclei isotopes according to Quantum Ring Theory
Paper:A New Foundation for Physics, by Quantum Aether Dynamics Institute
Article:Quantum Field Theory is being developed in the wrong way |
e749b9e683a1d825 | 3D (left and center) and 2D (right) representations of the terpenoid molecule atisane.
In chemistry, a molecule is defined as a sufficiently stable electrically neutral group of at least two atoms in a definite arrangement held together by strong chemical bonds.[1][2] In organic chemistry and biochemistry, the term molecule is used less strictly and also is applied to charged organic molecules and biomolecules. Molecules are distinguished from polyatomic ions in the strict sense.
This definition has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties.[3] This definition often breaks down, since many substances in ordinary experience, such as rocks, salts, and metals, are composed of atoms or ions but are not made of molecules.
In the kinetic theory of gases the term molecule is often used for any gaseous particle regardless of its composition.[4] According to this definition, noble gas atoms would also be considered molecules despite being composed of a single non-bonded atom.
The term "molecule", from the French molécule meaning "extremely minute particle," was coined by French philosopher Rene Descartes in the 1620s. Although the existence of molecules was accepted by many chemists since the early 19th century as a result of Dalton's laws of Definite and Multiple Proportions (1803-1808) and Avogadro's law (1811), there was some resistance among positivists and physicists such as Mach, Boltzmann, Maxwell, and Gibbs, who saw molecules merely as convenient mathematical constructs. The work of Perrin on Brownian motion (1911) is considered to be the final proof of the existence of molecules.
In a molecule, at least two atoms are joined by shared pairs of electrons in a covalent bond. It may consist of atoms of the same chemical element, as with oxygen (O2), or of different elements, as with water (H2O). Atoms and complexes connected by non-covalent bonds such as hydrogen bonds or ionic bonds are generally not considered single molecules.
No typical molecule can be defined for ionic (salts) and covalent crystals (network solids) which are composed of repeating unit cells that extend either in a plane (such as in graphite) or three-dimensionally (such as in diamond or sodium chloride).
The science of molecules is called molecular chemistry or molecular physics, depending on the focus. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) comprising two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose-Einstein condensates.
Molecular size
Most molecules are far too small to be seen with the naked eye, but there are exceptions. DNA, a macromolecule, can reach macroscopic sizes, as can molecules of many polymers. The smallest molecule is the diatomic hydrogen (H2), with an overall length of roughly twice the 74 picometres (0.74 Å) bond length. Molecules commonly used as building blocks for organic synthesis have a dimension of a few Å to several dozen Å. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules.
Molecular formula
The empirical formula of a molecule is the simplest integer ratio of the chemical elements that constitute the compound. For example, water in its pure form is always composed of a 2:1 ratio of hydrogen to oxygen, and ethyl alcohol (ethanol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely; dimethyl ether, for instance, has the same ratio as ethanol. Molecules with the same atoms in different arrangements are called isomers. The empirical formula is often, but not always, the same as the molecular formula. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular formula reflects the exact number of atoms that compose a molecule.
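For illustration, reducing a molecular formula to an empirical formula is simply a greatest-common-divisor reduction of the atom counts. A minimal Python sketch (the function name empirical is illustrative, not from any particular library):

    from math import gcd
    from functools import reduce

    def empirical(counts):
        # Reduce the atom counts of a molecular formula to their
        # simplest integer ratio (the empirical formula).
        divisor = reduce(gcd, counts.values())
        return {element: n // divisor for element, n in counts.items()}

    print(empirical({"C": 2, "H": 2}))          # acetylene C2H2 -> {'C': 1, 'H': 1}, i.e. CH
    print(empirical({"C": 2, "H": 6, "O": 1}))  # ethanol C2H6O -> unchanged, already simplest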
The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units, equal to 1/12 of the mass of a neutral carbon-12 (¹²C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations.
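As a worked example of that calculation, the sketch below sums standard atomic masses over a formula; the rounded mass values are assumptions chosen for brevity:

    # Approximate standard atomic masses in unified atomic mass units (u),
    # where 1 u is 1/12 of the mass of a neutral carbon-12 atom.
    ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}

    def molecular_mass(counts):
        # Sum atomic masses over a formula given as {element: count}.
        return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

    print(molecular_mass({"H": 2, "O": 1}))          # water: ~18.015 u
    print(molecular_mass({"C": 2, "H": 6, "O": 1}))  # ethanol: ~46.069 u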
Molecular geometry
Molecules have fixed equilibrium geometries—bond lengths and angles—about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time very different biochemical activities.
Molecular Theory
There are four statements of fact concerning molecules. These facts are:
1. All matter is composed of tiny particles called molecules.
2. There are spaces between molecules.
3. Molecules are constantly moving.
4. Molecules attract one another.
These statements together form the Molecular Theory.
Molecular spectroscopy
Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to Planck's formula). Scattering theory provides the theoretical background for spectroscopy.
The probing signal used in spectroscopy can be an electromagnetic wave or a beam of particles (electrons, positrons, etc.) The molecular response can consist of signal absorption (absorption spectroscopy), the emission of another signal (emission spectroscopy), fragmentation, or chemical changes.
Spectroscopy is recognized as a powerful tool in investigating the microscopic properties of molecules, in particular their energy levels. In order to extract maximum microscopic information from experimental results, spectroscopy is often coupled with chemical computations.
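Planck's formula mentioned above, E = hν, is the conversion between a probing signal's frequency and its energy. A minimal sketch of the arithmetic:

    PLANCK_H = 6.62607015e-34  # Planck constant, J·s (exact by SI definition)
    EV = 1.602176634e-19       # one electron-volt in joules

    def photon_energy_joules(frequency_hz):
        # E = h * nu
        return PLANCK_H * frequency_hz

    # A green optical photon (~5.45e14 Hz) carries ~3.6e-19 J, about 2.25 eV.
    energy = photon_energy_joules(5.45e14)
    print(energy, energy / EV)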
Theoretical aspects
The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron; since the system contains only one electron, the Schrödinger equation for it can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry.
When trying to define rigorously whether an arrangement of atoms is "sufficiently stable" to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state".[1] This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly-bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state but is so loosely bound that it is only likely to be observed at very low temperatures.
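A rough way to picture the IUPAC criterion is a harmonic-oscillator estimate: a potential well confines at least one vibrational state only if the zero-point energy (1/2)ħω fits below the well depth. The sketch below uses illustrative, assumed parameters of the right order for a weakly bound van der Waals dimer; they are not measured He2 data, and for a species that loosely bound the harmonic approximation is itself crude:

    import math

    HBAR = 1.054571817e-34  # reduced Planck constant, J·s

    def confines_vibrational_state(well_depth_j, force_constant_n_per_m, reduced_mass_kg):
        # Harmonic approximation: omega = sqrt(k/mu); the well confines a
        # vibrational state if the zero-point energy lies below the well depth.
        omega = math.sqrt(force_constant_n_per_m / reduced_mass_kg)
        zero_point = 0.5 * HBAR * omega
        return zero_point < well_depth_j

    # Hypothetical parameters: well depth ~1.5e-22 J, k ~0.01 N/m, mu ~3.3e-27 kg.
    print(confines_vibrational_state(1.5e-22, 0.01, 3.3e-27))  # True: one state fits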
References
1. 1.0 1.1 International Union of Pure and Applied Chemistry (1994). "molecule". Compendium of Chemical Terminology Internet edition.
2. Pauling, Linus (1970). General Chemistry. New York: Dover Publications, Inc. ISBN 0-486-65622-5.
Ebbin, Darrell, D. (1990). General Chemistry, 3rd Ed. Boston: Houghton Mifflin Co. ISBN 0-395-43302-9.
Brown, T.L. (2003). Chemistry – the Central Science, 9th Ed. New Jersey: Prentice Hall. ISBN 0-13-066997-0.
Chang, Raymond (1998). Chemistry, 6th Ed. New York: McGraw Hill. ISBN 0-07-115221-0.
Zumdahl, Steven S. (1997). Chemistry, 4th ed. Boston: Houghton Mifflin. ISBN 0-669-41794-7.
3. Molecule Definition (Frostburg State University)
4. E.g. see [1] |
93c3466b9078d8d3 | Wednesday, February 10, 2010
“It is easy to explain something to a layman. It is easier to explain the same thing to an expert. But even the most knowledgeable person cannot explain something to one who has limited half-baked knowledge.” ------------- (Hitopadesha).
“To my mind there must be, at the bottom of it all, not an equation, but an utterly simple idea. And to me that idea, when we finally discover it, will be so compelling, so inevitable, that we will say to one another: ‘Oh, how wonderful! How could it have been otherwise?’” ----------- (John Wheeler).
“All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken”. --------------- Einstein, 1954
The twentieth century was a marvel of technological advancement. But except for the first quarter, there is not much to be written about the advancement of theoretical physics. The principle of mass-energy equivalence, which is treated as the corner-stone principle of all nuclear interactions, binding energies of atoms and nucleons, etc., enters physics only as a corollary of the transformation equations between frames of reference in relative motion. Quantum Mechanics (QM) cannot justify this equivalence principle on its own, even though it is the theory concerned with the energy exchanges and interactions of fundamental particles. Quantum Field Theory (QFT) is the extension of QM (dealing with particles) over to fields. In spite of the reported advancements in QFT, there is very little experimental proof to back up many of its postulates, including the Higgs mechanism, bare mass/charge, infinite charge, etc. It seems almost impossible to think of QFT without thinking of particles which are accelerated and scattered in colliders. But interestingly, the particle interpretation has the best arguments against QFT. Till recently, the Big Bang hypothesis held the center stage in cosmology. Now Loop Quantum Cosmology (LQC), with its postulate of the “Big Bounce”, is taking over. Yet there are two distinctly divergent streams of thought on this subject also. The confusion surrounding the interpretation of quantum physics is further compounded by the modern proponents, who often search historical documents of discarded theories and come up with new meanings to back up their own theories. For example, the cosmological constant, first proposed and subsequently rejected by Einstein as the greatest blunder of his life, has made a comeback in cosmology. Bohr’s complementarity principle, originally central to his vision of quantum particles, has been reduced to a corollary and is often identified with the frameworks in Consistent Histories.
There are a large number of different approaches or formulations to the foundations of Quantum Mechanics: Heisenberg’s Matrix Formulation, Schrödinger’s Wave-function Formulation, Feynman’s Path Integral Formulation, the Second Quantization Formulation, Wigner’s Phase Space Formulation, the Density Matrix Formulation, Schwinger’s Variational Formulation, de Broglie-Bohm’s Pilot Wave Formulation, the Hamilton-Jacobi Formulation, etc. There are several quantum mechanical pictures based on the placement of time-dependence: the Schrödinger Picture (time-dependent wave-functions), the Heisenberg Picture (time-dependent operators), and the Interaction Picture (time-dependence split). The different approaches are, in fact, modifications of the theory. Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are the Realistic Interpretation (the wave-function describes reality), the Positivistic Interpretation (the wave-function contains only the information about reality), and the famous Copenhagen Interpretation, which is the orthodox one. Then there is Bohm’s Causal Interpretation, Everett’s Many Worlds Interpretation, Mermin’s Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but truly weird.
General relativity breaks down when gravity is very strong: for example, when describing the big bang or the heart of a black hole. And the standard model has to be stretched to the breaking point to account for the masses of the universe’s fundamental particles. The two main theories, quantum theory and relativity, are also incompatible, resting on entirely different notions of, for example, time. This incompatibility has made it difficult to unite the two in a single “Theory of Everything”. There is an almost infinite number of proposed “Theories of Everything” or “Grand Unified Theories”, but none of them is free from contradictions. There is a vertical split between those pursuing the superstrings route and others who follow the little Higgs route.
The energy “uncertainty” introduced in quantum theory combines with the mass-energy equivalence of special relativity to allow the creation of particle/anti-particle pairs by quantum fluctuations when the theories are merged. As a result there is no self-consistent theory which generalizes the simple, one-particle Schrödinger equation into a relativistic quantum wave equation. Quantum Electro-Dynamics began not with a single relativistic particle, but with a relativistic classical field theory, such as Maxwell’s theory of electromagnetism. This classical field theory was then “quantized” in the usual way and the resulting quantum field theory is claimed to be a combination of quantum mechanics and relativity. However, this theory is inherently a many-body theory with the quanta of the normal modes of the classical field having all the properties of physical particles. The resulting many-particle theory can be relatively easily handled if the particles are heavy on the energy scale of interest or if the underlying field theory is essentially linear. Such is the case for atomic physics where the electron-volt energy scale for atomic binding is about a million times smaller than the energy required to create an electron positron pair and where the Maxwell theory of the photon field is essentially linear.
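To make the pair-creation point concrete, here is a back-of-the-envelope estimate (a sketch, not a rigorous QFT calculation): the energy-time uncertainty relation permits an electron-positron pair of rest energy 2mc^2 to exist for a time of roughly ħ divided by that energy.

    HBAR = 1.054571817e-34  # reduced Planck constant, J·s
    M_E = 9.1093837015e-31  # electron mass, kg
    C = 2.99792458e8        # speed of light, m/s

    pair_rest_energy = 2 * M_E * C**2   # ~1.64e-13 J, i.e. ~1.022 MeV
    lifetime = HBAR / pair_rest_energy  # ~6.4e-22 s
    print(pair_rest_energy, lifetime)

On such time scales particle number is not fixed, which is the point being made about the failure of a simple one-particle relativistic wave equation.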
However, the situation is completely reversed for the theory of the quarks and gluons that compose the strongly interacting particles in the atomic nucleus. While the natural energy scale of these particles (the proton, the ρ meson, etc.) is on the order of hundreds of millions of electron volts, the quark masses are about one hundred times smaller. Likewise, the gluons are quanta of a Yang-Mills field which obeys highly non-linear field equations. As a result, strong interaction physics has no known analytical approach, and numerical methods are said to be the only possibility for making predictions from first principles and developing a fundamental understanding of the theory. This theory of the strongly interacting particles is called quantum chromodynamics or QCD, where the non-linearities in the theory have dramatic physical effects. One coherent, non-linear effect of the gluons is to “confine” both the quarks and gluons so that none of these particles can be found directly as excitations of the vacuum. Likewise, a continuous “chiral symmetry”, normally exhibited by a theory of light quarks, is broken by the condensation of chirally oriented quark/anti-quark pairs in the vacuum. The resulting physics of QCD is thus entirely different from what one would expect from the underlying theory, with the interaction effects having a dominant influence.
It is known that the much celebrated Standard Model of Particle Physics is incomplete, as it relies on certain arbitrarily determined constants as inputs - as “givens”. The new formulations of physics such as Super String Theory and M-theory do allow mechanisms where these constants can arise from the underlying model. However, the problem with these theories is that they postulate the existence of extra dimensions that are said to be either “extra-large” or “compactified” down to the Planck length, where they have no impact on the visible world we live in. In other words, we are told to blindly believe that extra dimensions must exist, but on a scale that we cannot observe. The existence of these extra dimensions has not been proved. However, they are postulated not to be fixed in size. Thus, the ratio between the compactified dimensions and our normal four space-time dimensions could cause some of the fundamental constants to change! If this could happen, then it might lead to physics that is in contradiction with the universe we observe.
The concept of “absolute simultaneity” – an off-shoot of quantum entanglement and non-locality – poses the gravest challenge to Special Relativity. But here also, a different interpretation is possible for the double-slit experiment, Bell’s inequality, entanglement and decoherence, which can strip them of their mystic character. The Ives–Stilwell experiment, conducted by Herbert E. Ives and G. R. Stilwell in 1938, is considered to be one of the fundamental tests of the special theory of relativity. The experiment was intended to use a primarily longitudinal test of light wave propagation to detect and quantify the effect of time dilation on the relativistic Doppler effect of light waves received from a moving source. It also intended to indirectly verify and quantify the more difficult to detect transverse Doppler effect associated with detection at a substantial angle to the path of motion of the source - specifically the effect associated with detection at a 90° angle to the path of motion of the source. In both respects it is believed that a longitudinal test can be used to indirectly verify an effect that actually occurs at a 90° transverse angle to the path of motion of the source.
Based on recent theoretical findings on the relativistic transverse Doppler effect, some scientists have shown that such a comparison between longitudinal and transverse effects is fundamentally flawed and thus invalid, because it assumes compatibility between two different mathematical treatments. The experiment was designed to detect the predicted time-dilation-related red-shift effect (increase in wavelength with corresponding decrease in frequency) of special relativity at the fundamentally longitudinal angles at or near 0° and 180°, even though the time dilation effect is based on the transverse angle of 90°. Thus, the results of the said experiment do not prove anything. More specifically, it can be shown that the mathematical treatment of special relativity for the transverse Doppler effect is invalid and thus incompatible with the longitudinal mathematical treatment at distances close to the moving source. Any direct comparisons between the longitudinal and transverse mathematical predictions under the specified conditions of the experiment are invalid.
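The longitudinal and transverse predictions at issue can both be read off the standard relativistic Doppler formula. The sketch below reproduces the textbook relation (not the disputed reanalysis), with the angle measured in the observer's frame; the value of beta is only of the order of the canal-ray ion speeds used in 1938:

    import math

    def doppler_factor(beta, theta_deg):
        # Ratio f_observed / f_source for a source moving at beta*c, where
        # theta is the angle between the velocity and the line of sight
        # in the observer's frame (0 deg = approaching head-on).
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        theta = math.radians(theta_deg)
        return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

    beta = 0.005
    print(doppler_factor(beta, 0))    # longitudinal, approaching: > 1 (blueshift)
    print(doppler_factor(beta, 180))  # longitudinal, receding: < 1 (redshift)
    print(doppler_factor(beta, 90))   # transverse: exactly 1/gamma (pure time dilation)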
Cosmic rays are particles - mostly protons but sometimes heavy atomic nuclei - that travel through the universe at close to the speed of light. Some cosmic rays detected on Earth are produced in violent events such as supernovae, but physicists still don’t know the origins of the highest-energy particles, which are the most energetic particles ever seen in nature. As cosmic-ray particles travel through space, they lose energy in collisions with the low-energy photons that pervade the universe, such as those of the cosmic microwave background radiation. The special theory of relativity dictates that any cosmic rays reaching Earth from a source outside our galaxy will have suffered so many energy-shedding collisions that their maximum possible energy cannot exceed 5 × 10^19 electron-volts. This is known as the Greisen-Zatsepin-Kuzmin (GZK) limit. Over the past decade, the University of Tokyo’s Akeno Giant Air Shower Array, a system of 111 particle detectors, has detected several cosmic rays above the GZK limit. In theory, they could only have come from within our galaxy, avoiding an energy-sapping journey across the cosmos. However, astronomers cannot find any source for these cosmic rays in our galaxy. One possibility is that there is something wrong with the observed results. Another possibility is that Einstein was wrong. His special theory of relativity says that space is the same in all directions, but what if particles found it easier to move in certain directions? Then the cosmic rays could retain more of their energy, allowing them to beat the GZK limit. A recent report (Physics Letters B, Vol. 668, p. 253) suggests that the fabric of space-time is not as smooth as Einstein and others have predicted.
During 1919, Eddington started his much publicised eclipse expedition to observe the bending of light by a massive object (here the Sun) to verify the correctness of General Relativity. The experiment in question concerned the problem of whether light rays are deflected by gravitational forces, and took the form of astrometric observations of the positions of stars near the Sun during a total solar eclipse. The consequence of Eddington’s theory-led attitude to the experiment, along with alleged data fudging, was claimed to favor Einstein’s theory over Newton’s when in fact the data supported no such strong construction. In reality, both predictions were based on Einstein’s own calculations, made in 1908 and again in 1911 using Newton’s theory of gravitation. In 1911, Einstein wrote: “A ray of light going past the Sun would accordingly undergo deflection to an amount of 4×10^-6 = 0.83 seconds of arc”. He did not clearly explain which fundamental principle of physics used in that paper, which gave the value of 0.83 seconds of arc (dubbed the half deflection), was wrong. He revised his calculation in 1916 to hold that light coming from a star far away from the Earth and passing near the Sun will be deflected by the Sun’s gravitational field by an amount that is inversely proportional to the star’s radial distance from the Sun (1.745″ at the Sun’s limb, dubbed the full deflection). Einstein never explained why he revised his earlier figures. Eddington was testing which of the two values calculated by Einstein was correct.
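Both predicted values follow from the standard formulas: the 1911 "Newtonian" deflection at the solar limb is δ = 2GM/(c^2 R), and the 1916 general-relativistic value is δ = 4GM/(c^2 R). With modern constants the half deflection comes out near 0.88 arcseconds (Einstein's 1911 paper, using the solar data of his day, quoted 0.83):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg
    C = 2.998e8        # speed of light, m/s
    R_SUN = 6.957e8    # solar radius, m
    ARCSEC_PER_RAD = 180.0 * 3600.0 / math.pi

    half = 2 * G * M_SUN / (C**2 * R_SUN)  # the 1911 "half deflection"
    full = 4 * G * M_SUN / (C**2 * R_SUN)  # the 1916 full deflection
    print(half * ARCSEC_PER_RAD)  # ~0.88 arcseconds
    print(full * ARCSEC_PER_RAD)  # ~1.75 arcseconds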
Specifically, it has been alleged that a sort of data fudging took place when Eddington decided to reject the plates taken by the one instrument (the Greenwich Observatory’s Astrographic lens, used at Sobral) whose results tended to support the alternative “Newtonian” prediction of light bending (as calculated by Einstein). Instead, the data from the inferior (because of cloud cover) plates taken by Eddington himself at Principe and from the inferior (because of a reduced field of view) 4-inch lens used at Sobral were promoted as confirming the theory. While he claimed that the result proved Einstein right and Newton wrong, an objective analysis of the actual photographs shows no such clear-cut result. Both theories are consistent with the data obtained. It may be recalled that when someone said that there were only two persons in the world besides Einstein who understood relativity, Eddington had replied that he did not know who the other person was. This arrogance clouded his scientific acumen, as was confirmed by his distaste for the theories of Dr. S. Chandrasekhar, who subsequently won the Nobel Prize.
Heisenberg’s Uncertainty relation is still a postulate, though many of its predictions have been verified and found to be correct. Heisenberg never called it a principle. Eddington was the first to call it a principle, and others followed him. But as Karl Popper pointed out, the uncertainty relation cannot be granted the status of a principle, because theories are derived from principles, and the uncertainty relation does not lead to any theory. We can never derive an equation like the Schrödinger equation or the commutation relation from the uncertainty relation, which is an inequality. Einstein’s distinction between “constructive theories” and “principle theories” does not help, because this classification is not a scientific one. Serious attempts to build up quantum theory as a full-fledged Theory of Principle on the basis of the uncertainty relation have never been carried out. At best it can be said that Heisenberg created “room” or “freedom” for the introduction of some non-classical mode of description of experimental data. But these do not uniquely lead to the formalism of quantum mechanics.
There is a plethora of other postulates in Quantum Mechanics, such as the Operator postulate, the Hermitian property postulate, the Basis set postulate, the Expectation value postulate, the Time evolution postulate, etc. The list goes on and on, and includes such undiscovered entities as strings and such exotic particles as the Higgs particle (dubbed the “God particle”) and the graviton, not to speak of squarks et al. Yet, till now it is not clear what quantum mechanics is about. What does it describe? It is said that quantum mechanical systems are completely described by their wave-functions. From this it would appear that quantum mechanics is fundamentally about the behavior of wave-functions. But do the scientists really believe that wave-functions describe reality? Even Schrödinger, the founder of the wave-function, found this impossible to believe! He writes (Schrödinger 1935): “That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message”. Rather, he was worried about the “blurring” suggested by the spread-out character of the wave-function, which he describes as, “affects macroscopically tangible and visible things, for which the term ‘blurring’ seems simply wrong”.
Schrödinger goes on to note that it may happen in radioactive decay that “the emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however, does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot …”. He observed further that one can easily arrange, for example by including a cat in the system, “quite ridiculous cases” with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Resorting to epistemology cannot save such doctrines.
The situation was further complicated by Bohr's interpretation of quantum mechanics. But how many scientists truly believe in his interpretation? Apart from the issues relating to the observer and observation, it usually is believed to address the measurement problem. Quantum mechanics is fundamentally about micro-particles such as quarks and strings, and not about the macroscopic regularities associated with measurement of their various properties. But if these entities are somehow not to be identified with the wave-function itself, and if the description is not about measurements, then where is their place in the quantum description? Where is the quantum description of the objects that quantum mechanics should be describing? This question has led to the issues raised in the EPR argument. As we will see, this question has not been settled satisfactorily.
The formulations of quantum mechanics describe the deterministic unitary evolution of a wave-function. This wave-function is never observed experimentally. The wave-function allows computation of the probability of certain macroscopic events being observed. However, there are no events and no mechanism for creating events in the mathematical model. It is this dichotomy between the wave-function model and observed macroscopic events that is the source of the various interpretations in quantum mechanics. In classical physics, the mathematical model relates to the objects we observe. In quantum mechanics, the mathematical model by itself never produces observation. We must interpret the wave-function in order to relate it to experimental observation. Often these interpretations are related to the personal and socio-cultural bias of the scientist, which gets weightage based on his standing in the community. Thus, the arguments of Einstein against Bohr’s position have roots in Lockean notions of perception, which oppose the Kantian metaphor of the “veil of perception” that pictures the apparatus of observation as a pair of spectacles through which a highly mediated sight of the world can be glimpsed. According to Kant, “appearances” simply do not reflect an independently existing reality. They are constituted through the act of perception in such a way as to conform to the fundamental categories of sensible intuition. Bohr maintained that “measurement has an essential influence on the conditions on which the very definition of physical quantities in question rests” (Bohr 1935, 1025).
In modern science, there is no unambiguous and precise definition of the words time, space, dimension, numbers, zero, infinity, charge, quantum particle, wave-function etc. The operational definitions have been changed from time to time to take into account newer facts that facilitate justification of the new “theory”. For example, the fundamental concept of the quantum mechanical theory is the concept of “state”, which is supposed to be completely characterized by the wave-function. However, till now it is not certain “what” a wave-function is. Is the wave-function real - a concrete physical object or is it something like a law of motion or an internal property of particles or a relation among spatial points? Or is it merely our current information about the particles? Quantum mechanical wave-functions cannot be represented mathematically in anything smaller than a 10 or 11 dimensional space called configuration space. This is contrary to experience and the existence of higher dimensions is still in the realm of speculation. If we accept the views of modern physicists, then we have to accept that the universe’s history plays itself out not in the three dimensional space of our everyday experience or the four-dimensional space-time of Special Relativity, but rather in this gigantic configuration space, out of which the illusion of three-dimensionality somehow emerges. Thus, what we see and experience is illusory! Maya?
The measurement problem in quantum mechanics is the unresolved problem of how (or if) wave-function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. If it is postulated that a particle does not have a value before measurement, there has to be conclusive evidence to support this view. The wave-function in quantum mechanics evolves according to the Schrödinger equation into a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was “discovered” to be in when the measurement was made, implying that the measurement “did something” to the process under examination. Whatever that “something” may be does not appear to be explained by the basic theory. Further, quantum systems described by linear wave-functions should be incapable of non-linear behavior. But chaotic quantum systems have been observed. Though chaos appears to be probabilistic, it is actually deterministic. Further, if the collapse causes the quantum state to jump from superposition of states to a fixed state, it must be either an illusion or an approximation to the reality at quantum level. We can rule out illusion as it is contrary to experience. In that case, there is nothing to suggest that events in quantum level are not deterministic. We may very well be able to determine the outcome of a quantum measurement provided we set up an appropriate measuring device!
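The dichotomy described here, deterministic evolution of the wave-function versus definite observed outcomes, can be illustrated with a toy simulation of the Born rule for a two-state system. This is a sketch of the statistical postulate only; it does not model any collapse mechanism:

    import random

    def measure(amp0, amp1, trials=10000):
        # Born rule: for a normalized state amp0|0> + amp1|1>, the outcome
        # probabilities are |amp0|^2 and |amp1|^2; each single run is definite.
        p0 = abs(amp0) ** 2
        counts = [0, 0]
        for _ in range(trials):
            counts[0 if random.random() < p0 else 1] += 1
        return counts

    # Equal superposition (1/sqrt(2))(|0> + |1>): roughly 5000/5000.
    print(measure(2 ** -0.5, 2 ** -0.5))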
The operational definitions and the treatment of the term wave-function used by researchers in quantum theory progressed through intermediate stages. Schrödinger viewed the wave-function associated with the electron as the charge density of an object smeared out over an extended (possibly infinite) volume of space. He did not regard the waveform as real nor did he make any comment on the waveform collapse. Max Born interpreted it as the probability distribution in the space of the electron’s position. He differed from Bohr in describing quantum systems as being in a state described by a wave-function which lives longer than any specific experiment. He considered the waveform as an element of reality. According to this view, also known as State Vector Interpretation, measurement implied the collapse of the wave function. Once a measurement is made, the wave-function ceases to be smeared out over an extended volume of space and the range of possibilities collapse to the known value. However, the nature of the waveform collapse is problematic and the equations of Quantum Mechanics do not cover the collapse itself.
The view known as “Consciousness Causes Collapse” regards measuring devices also as quantum systems for consistency. The measuring device changes state when a measurement is made, but its wave-function does not collapse. The collapse of the wave-function can be traced back to its interaction with a conscious observer. Let us take the example of measurement of the position of an electron. The waveform does not collapse when the measuring device initially measures the position of the electron. Human eye can also be considered a quantum system. Thus, the waveform does not collapse when the photon from the electron interacts with the eye. The resulting chemical signals to the brain can also be treated as a quantum system. Hence it is not responsible for the collapse of the wave-form. However, a conscious observer always sees a particular outcome. The wave-form collapse can be traced back to its first interaction with the consciousness of the observer. This begs the question: what is consciousness? At which stage in the above sequence of events did the wave-form collapse? Did the universe behave differently before life evolved? If so, how and what is the proof for that assumption? No answers.
Many-worlds Interpretation tries to overcome the measurement problem in a different way. It regards all possible outcomes of measurement as “really happening”, but holds that somehow we select only one of those realities (or in their words - universes). But this view clashes with the second law of thermodynamics. The direction of the thermodynamic arrow of time is defined by the special initial conditions of the universe which provides a natural solution to the question of why entropy increases in the forward direction of time. But what is the cause of the time asymmetry in the Many-worlds Interpretation? Why do universes split in the forward time direction? It is said that entropy increases after each universe-branching operation – the resultant universes are slightly more disordered. But some interpretations of decoherence contradict this view. This is called macroscopic quantum coherence. If particles can be isolated from the environment, we can view multiple interference superposition terms as a physical reality in this universe. For example, let us consider the case of the electric current being made to flow in opposite directions. If the interference terms had really escaped to a parallel universe, then we should never be able to observe them both as physical reality in this universe. Thus, this view is questionable.
Transactional Interpretation accepts the statistical nature of waveform, but breaks it into an “offer” wave and an “acceptance” wave, both of which are treated as real. Probabilities are assigned to the likelihood of interaction of the offer waves with other particles. If a particle interacts with the offer wave, then it “returns” a confirmation wave to complete the transaction. Once the transaction is complete, energy, momentum, etc., are transferred in quanta as per the normal probabilistic quantum mechanics. Since Nature always takes the shortest and the simplest path, the transaction is expected to be completed at the first opportunity. But once that happens, classical probability and not quantum probability will apply. Further, it cannot explain how virtual particles interact. Thus, some people defer the waveform collapse to some unknown time. Since the confirmation wave in this theory is smeared all over space, it cannot explain when the transaction begins or is completed and how the confirmation wave determines which offer wave it matches up to.
Quantum decoherence, which was proposed in the context of the many-worlds interpretation, but has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories, allows physicists to identify the fuzzy boundary between the quantum micro-world and the world where the classical intuition is applicable. But it does not describe the actual process of the wave-function collapse. It only explains the conversion of the quantum probabilities (that are able to interfere) to the ordinary classical probabilities. Some people have tried to reformulate quantum mechanics as probability or logic theories. In some theories, the requirements for probability values to be real numbers have been relaxed. The resulting non-real probabilities correspond to quantum waveform. But till now a fully developed theory is missing.
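The conversion decoherence is said to accomplish, from interfering quantum probabilities to ordinary classical ones, amounts to the loss of the cross term in P = |ψ1 + ψ2|^2. A minimal sketch with two complex path amplitudes:

    import cmath

    def two_path_probability(a1, a2, coherent=True):
        # Coherent case keeps the interference cross term; the decohered
        # case adds the two path probabilities classically.
        if coherent:
            return abs(a1 + a2) ** 2
        return abs(a1) ** 2 + abs(a2) ** 2

    a1 = cmath.rect(0.5, 0.0)           # amplitude 0.5, phase 0
    a2 = cmath.rect(0.5, cmath.pi / 3)  # amplitude 0.5, relative phase 60 degrees
    print(two_path_probability(a1, a2, coherent=True))   # 0.75: cross term present
    print(two_path_probability(a1, a2, coherent=False))  # 0.50: classical sum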
Hidden Variables Theories treat Quantum Mechanics as incomplete. Until a more sophisticated theory underlying Quantum Mechanics is discovered, it is not possible to make any definitive statement. This view regards quantum objects as having properties with well-defined values that exist separately from any measuring devices. According to this view, chance plays no role at all and everything is fully deterministic. Every material object invariably does occupy some particular region of space. This theory takes the form of a single set of basic physical laws that apply in exactly the same way to every physical object that exists. The waveform may be a purely statistical creation or it may have some physical role. The Causal Interpretation of Bohm and its later development, the Ontological Interpretation, emphasize “beables” rather than “observables”, in contradistinction to the predominantly epistemological approach of the standard model. This interpretation is causal, but non-local and non-relativistic, while being capable of being extended beyond the domain of the current quantum theory in several ways.
There are divergent views on the nature of reality and the role of science in dealing with reality. Measuring a quantum object was supposed to force it to collapse from a waveform into one position. According to quantum mechanical dogma, this collapse makes objects “real”. But new verifications of “collapse reversal” suggest that we can no longer assume that measurements alone create reality. It is possible to take a “weak” measurement of a quantum particle, partially collapsing the quantum state, then “unmeasure” it by altering certain properties of the particle and performing the same weak measurement again. In one such experiment, reported in Nature News, the particle was found to have returned to its original quantum state just as if no measurement had ever been taken. This implies that we cannot assume that measurements create reality, because it is possible to erase the effects of a measurement and start again.
Newton gave his laws of motion in the second chapter, entitled “Axioms, or Laws of Motion”, of his book Principles of Natural Philosophy, published in Latin in 1687. The second law says that the change of motion is proportional to the motive force impressed. Newton relates the force to the change of momentum (not to the acceleration, as most textbooks do). Momentum is accepted as one of two quantities that, taken together, yield the complete information about a dynamic system at any instant. The other quantity is position, which is said to determine the strength and direction of the force. Since then the earlier ideas have changed considerably. The pairing of momentum and position is no longer viewed in the Euclidean space of three dimensions. Instead, it is viewed in phase space, which is said to have six dimensions, three for position and three for momentum. But here the term dimension has actually been used for direction, which is not a scientific description.
In fact most of the terms used by modern scientists have not been precisely defined - they have only an operational definition, which is not only incomplete but also does not stand scientific scrutiny, though it is often declared “reasonable”. This has been done not by chance, but by design, as modern science is replete with such instances. For example, we quote from the paper of Einstein and his colleagues Boris Podolsky and Nathan Rosen, known as the EPR argument (Phys. Rev. 47, 777 (1935)):
“A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which, we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.”
Prima facie, what Einstein and his colleagues argued was that under ideal conditions, observation (including measurement) functions like a mirror reflecting an independently existing, external reality. The specific criterion for describing reality characterizes it in terms of objectivity understood as independence from any direct measurement. This implies that, when a direct measurement of physical reality occurs, it merely passively reflects rather than actively constitutes the object under observation. It further implies that ideal observations reflect not only the state of the object during observation, but also its state before and after observation, just like a photograph. A photograph has an identity separate and fixed from that of the object photographed: while the object may be evolving in time, the photograph depicts a time-invariant state. Bohr and Heisenberg opposed this notion, based on the Kantian view, by describing acts of observation and measurement more generally as constitutive of phenomena. More on this will be discussed later.
The fact that our raw sense impressions and experiences are compatible with widely differing concepts of the world has led some philosophers to suggest that we should dispense with the idea of an “objective world” altogether and base our physical theories on nothing but direct sense impressions only. Berkeley expressed the positivist identification of sense impressions with objective existence by the famous phrase “esse est percipi” (to be is to be perceived). This has led to the changing idea of “objective reality”. However, if we can predict with certainty “the value of a physical quantity”, it only means that we have partial and not complete “knowledge” – which is the “total” result of “all” measurements - of the system. It has not been shown that knowledge is synonymous with reality. We may have the “knowledge” of mirage, but it is not real. Based on the result of our measurement, we may have knowledge that something is not real, but only apparent.
The partial definition of reality is not correct, as it talks about “the value of a physical quantity” and not “the value of all physical quantities”. We can predict with certainty “the value of a physical quantity” such as position or momentum, which are classical concepts, without in any way disturbing the system. This has been accepted for past events by Heisenberg himself, which is discussed in later pages. Further, measurement is a process of comparison between similars, not of bouncing light off something to disturb it. This has been discussed in detail while discussing the measurement problem. We cannot classify an object being measured (observed) separately from the apparatus performing the measurement (though there is a lot of confusion in this area). They must belong to the same class. This is clearly shown in the quantum world, where it is accepted that we cannot divorce the property we are trying to measure from the type of observation we make: the property is dependent on the type of measurement, and the measuring instrument must be designed to use that particular property. However, this interpretation can be misleading and may not have anything to do with reality, as described below. Such limited treatment of the definition of “reality” has given the authors the freedom to manipulate the facts to suit their convenience. Needless to say, the conclusions arrived at in that paper have been successively proved wrong by John S. Bell, Alain Aspect, etc., though for a different reason.
In the double slit experiment, it is often said that whether the electron has gone through hole No. 1 or hole No. 2 is meaningless. The electron, till we observe which hole it goes through, exists in a superposition state of equal probability amplitude for going through hole 1 and through hole 2. This is a highly misleading notion, as after it went through, we can always see its imprint on the photographic plate at a particular position, and that is real. Before such observation we do not know which hole it went through, but there is no reason to presume that it went through a mixed state of both holes. Our inability to measure or know cannot change physical reality. It can only limit our knowledge of such physical reality. This aspect and the interference phenomenon have been discussed elaborately in later pages.
If we accept the modern view of superposition of states, we land in many complex situations. Suppose Schrödinger’s cat is somewhere in deep space and a team of astronauts is sent to measure its state. According to the Copenhagen interpretation, the astronauts, by opening the box and performing the observation, have now put the cat into a definite quantum state; say they find it alive. For them, the cat is no longer in a superposition state of equal probability of being alive or dead. But for their Earth-bound colleagues, the cat and the astronauts on board the space shuttle who know the state of the cat (did they change to a quantum state?) are still in a probability-wave superposition state of live cat and dead cat. Finally, when the astronauts communicate with a computer down on Earth, they pass on the information, which is stored in the magnetic memory of the computer. After the computer receives the information, but before its memory is read by the Earth-bound scientists, the computer is part of the superposition state for the Earth-bound scientists. Finally, in reading the computer output, the Earth-bound scientists reduce the superposition state to a definite one. Reality springs into being, or rather from being to becoming, only after we observe it. Is the above description sensible?
What really happens is that the cat interacts with the particles around it – protons, electrons, air molecules, dust particles, radiation, etc. – which has the effect of “observing” it. The state is accessed by each of the conscious observers (as well as by the other particles) by intercepting on its/our retina a small fraction of the light that has interacted with the cat. Thus, in reality, the field set up by the retina is perturbed and the impulse is carried to the brain, where it is compared with previous similar impressions. If the impression matches any previous impressions, we cognize it as such. Only thereafter do we cognize the result of the measurement: the cat is alive or dead at the moment of observation. Thus, the process of measurement is carried out constantly without disturbing the system, and the evolution of the observed has nothing to do with the observation. This has been elaborated while discussing the measurement problem.
Further, someone put the cat and the deadly apparatus in the box in the first place. Thus, according to the generally accepted theory, the wave-function had collapsed for him at that time, and the information is available to us. Only afterwards is the evolutionary state of the cat – whether living or dead – unknown to us, including to the person who put the cat in the box. But according to the above description, the cat, whose wave-function has collapsed for the person who put it in the box, again goes into a “superposition of states of both alive and dead” and needs another observation – directly or indirectly through a set of apparatus – to describe its proper state at any subsequent time. This implies that after the second observation, the cat again goes into a “superposition of states of both alive and dead” till it is again observed, and so on ad infinitum till it is found dead. But then the same story repeats for the dead cat – this time about its state of decomposition!
The cat example shows three distinct aspects: the state of the cat, i.e., dead or alive at the moment of observation (which information is time-invariant, as it is fixed), the state of the cat prior to and after the moment of observation (which information is time-variant, as the cat will die at some unspecified time due to unspecified reasons), and the cognition of this information by a conscious observer, which is time-invariant but concerns the time evolution of the states of the cat. In his book “Popular Astronomy”, Prof. Bigelow says: Force, Mass, Surface, Electricity, Magnetism, etc., “are apprehended only during instantaneous transfer of energy”. He further adds: “Energy is the great unknown quantity, and its existence is recognized only during its state of change”. This is an eternal truth. We endorse the above view. It is well known that the Universe is so called because everything in it is ever moving. Thus the view that observation describes not only the state of the object during observation, but also the state before and after it, is misleading. The result of measurement is the description of a state frozen in time, thus a fixed quantity. Its time evolution is not self-evident in the result of measurement. It has meaning only after it is cognized by a conscious agent, as consciousness is time-invariant. Thus, the observable, observation and observer depict three aspects – confined mass, displacing energy and revealing radiation – of a single phenomenon depicting reality. Quantum physics has to explain these phenomena scientifically. We will discuss it later.
When one talks about what an electron is "doing", one implies what sort of wave function is associated with it. But the wave function is not a physical object in the sense that a proton, an electron or a billiard ball is. In fact, the rules of quantum theory do not even allot a unique wave function to a given state of motion, since multiplying the wave function by a factor of modulus unity does not change any physical consequence. Thus, Heisenberg opined that "the atoms or elementary particles are not as real; they form a world of potentialities or possibilities rather than one of things or facts". This shows the helplessness of physicists to explain quantum phenomena in terms of the macro world. The activities of the elementary particles appear essential as long as we believe in the independent existence of fundamental laws that we can hope to understand better.
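The invariance mentioned above can be stated in one line. For any real phase θ, the wave function ψ and its rescaled version e^{iθ}ψ yield identical probabilities:

$$\left|e^{i\theta}\psi(x)\right|^{2} = e^{-i\theta}\psi^{*}(x)\,e^{i\theta}\psi(x) = \left|\psi(x)\right|^{2},$$

so no measurement can distinguish the two, which is why quantum theory does not allot a unique wave function to a state of motion.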
Reality cannot differ from person to person or from measurement to measurement, because it has existence independent of these factors. The elements of our "knowledge" are actually derived from our raw sense impressions by automatically interpreting them in conventional terms based on our earlier impressions. Since these impressions vary, our responses to the same data also vary. Yet, unless an event is observed, it has no meaning by itself. Thus, it can be said that while observables have a time evolution independent of observation, any meaningful description of them in relation to others depends upon observation. For this reason individual responses/readings to the same object may differ based on earlier experience/environment (at a different time, and maybe a different place). As the earlier example of the cat shows, measurement requires a definite link between the observer and the observed – a split (from time evolution) and a link (between the measurement representing its state and the consciousness of the observer, for describing such state in communicable language). This link varies from person to person. At every interaction, reality is not "created"; rather, the "presently evolved state" of the same reality gets described and communicated. Based on our earlier experiences/experimental set-up, it may return different responses/readings.
There is no proof to show that a particle does not have a value before measurement. The static attributes of a proton or an electron, such as its charge or its mass, are well defined and remain so before and after observation, even though the particle may change its position or composition due to the forces acting on it – spatial translation. The dynamical attributes will continue to evolve – temporal translation. The life cycles of stars and galaxies will continue till we notice their extinction in a supernova explosion. The moon will exist even when we are not observing it. The proof for this is that their observed position after a given time matches our theoretical calculation. Before measurement, we do not know the "present" state. Since the present is a dynamical entity describing the time evolution of the particle, it evolves continuously from past to future. This does not mean that the observer creates reality – after observation at a given instant he only discovers the spatial and temporal state of its static and dynamical aspects.
The prevailing notion of superposition (an unobserved proposition) only means that we do not know how the actual fixed value found on measurement has been arrived at (described elaborately in later pages), as the same value could be arrived at in an infinite number of ways. We superimpose our ignorance on the particle and claim that the value of that particular aspect is undetermined, whereas in reality the value might already have been fixed (the cat might have died). The observer cannot influence the state of the observed (the moment of death of the cat) before or after observation. He can only report the "present state". Quantum mechanics has failed to describe the collapse mechanism satisfactorily. In fact many models (such as the Copenhagen interpretation) treat the concept of collapse as nonsense. The few models that accept collapse as real are incomplete and fail to come up with a satisfactory mechanism to explain it. In 1932, John von Neumann argued that if electrons are ordinary objects with inherent properties (which would include hidden variables), then the behavior of those objects must contradict the predictions of quantum theory. Because of his stature in those days, no one contradicted him. But in 1952, David Bohm showed that hidden-variables theories are plausible if super-luminal velocities are possible. Bohmian mechanics returns predictions equivalent to other interpretations of quantum mechanics; thus, it cannot be discarded lightly. If Bohm is right, then the Copenhagen interpretation and its extensions are wrong.
There is no proof to show that the characteristics of particle states are randomly chosen instantaneously at the time of observation/measurement. Since the value remains fixed after measurement, it is reasonable to assume that it remained so before measurement also. For example, when we measure the temperature of a particle by a thermometer, it is generally assumed that a little heat is transferred from the particle to the thermometer, thereby changing the state of the particle. This is an absolutely wrong assumption. No particle in the Universe is perfectly isolated. A particle inevitably interacts with its environment, and the environment might very well be a man-made measuring device.
Introduction of the thermometer does not change the environment, as all objects in the environment are either isothermal or heat is flowing from higher concentration to lower concentration. In the former case there is no effect. In the latter case also nothing changes, as the thermometer is isothermal with the environment. Thus the rate of heat flow from the particle to the thermometer remains constant – the same as that from the particle to its environment. When exposed to heat, the expansion of mercury shows a uniform gradient in proportion to the temperature of its environment. This is sub-divided over an arbitrarily chosen range and taken as the unit. The expansion of mercury, when exposed to the heat flow from a particle till both become isothermal, is compared with this unit and we get a scalar quantity, which we call the result of measurement at that instant. The heat flow to the thermometer does not affect the object, as it was in any case continuing with the heat flow at a steady rate and continues to do so even after measurement. This is proved by the fact that the thermometer reading does not change after some time (all other conditions being unchanged). This is common to all measurements. Since the scalar quantity returned as the result of measurement is a number, it is sometimes said that numbers are everything.
While there is no proof that measurement determines reality, there is proof to the contrary. Suppose we have a random group of people and we measure three of their properties: sex, height and skin-color. They can be male or female, tall or short, and their skin-color can be fair or brown. If we take 30 people at random and measure sex and height first (male and tall), and then skin-color (fair) for the same sample, we get one result (how many tall men are fair). If we measure sex and skin-color first (male and fair), and then height (tall), we get a different result (how many fair males are tall). If we measure skin-color and height first (fair and tall), and then sex (male), we get yet another result (how many fair and tall persons are male). The order of measurement apparently changes the result of measurement. But the result of measurement really does not change anything. The tall will continue to be tall and the fair will continue to be fair. The male and female will not change sex either. This proves that measurement does not determine reality, but only exposes selected aspects of reality in a desired manner – depending upon the nature of measurement. It is also wrong to say that whenever any property of a microscopic object affects a macroscopic object, that property is observed and becomes physical reality. We have experienced situations where an insect bite is not really felt (a measure of pain) immediately even though it affects us. A viral infection does not affect us immediately.
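The classical point can be illustrated with a minimal sketch (the sample data below are made up, and the property names are purely illustrative): however the three questions are ordered, the attributes of the people are untouched by the questioning, and the joint tally is the same.

```python
# A minimal sketch with made-up sample data: classical attributes are
# fixed, so the order in which the three questions are asked never
# changes anyone's attributes, and the joint tally comes out the same.
import random

random.seed(1)
people = [
    {
        "sex": random.choice(["male", "female"]),
        "height": random.choice(["tall", "short"]),
        "skin": random.choice(["fair", "brown"]),
    }
    for _ in range(30)
]

def count(sample, order):
    """Filter the sample property by property, in the given order."""
    wanted = {"sex": "male", "height": "tall", "skin": "fair"}
    for prop in order:
        sample = [p for p in sample if p[prop] == wanted[prop]]
    return len(sample)

# Every ordering returns the same number, and `people` is untouched.
print(count(people, ["sex", "height", "skin"]))
print(count(people, ["sex", "skin", "height"]))
print(count(people, ["skin", "height", "sex"]))
```

The differing tallies in the paragraph above come from asking different questions of the sample, not from the questioning altering anyone's sex, height or skin-color.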
We measure position, which is the distance from a fixed reference point along different coordinates, by a tape of unit distance from one end point to the other, or its sub-divisions. We measure mass by comparing it with another unit mass. We measure time, which is the interval between events, by a clock whose ticks are repetitive events of equal duration (interval), which we take as the unit; and so on. There is no proof to show that this principle is not applicable to the quantum world. These measurements are possible when both the observer with the measuring instrument and the object to be measured are in the same frame of reference (state of motion), thus without disturbing anything. For this reason results of measurement are always scalar quantities – multiples of the unit. Light is only an accessory for knowing the result of measurement and not a pre-condition for measurement. Simultaneous measurement of both position and momentum is not possible, which is correct, though for different reasons explained in later pages. Incidentally, both position and momentum are regarded as classical concepts.
In classical mechanics and electromagnetism, properties of a point mass or properties of a field are described by real numbers or functions defined on two- or three-dimensional sets. These have a direct, spatial meaning, and in these theories there seems to be less need to provide a special interpretation for those numbers or functions. The accepted mathematical structure of quantum mechanics, on the other hand, is based on fairly abstract mathematics, such as Hilbert spaces (the quantum-mechanical counterpart of the classical phase-space) and operators on those Hilbert spaces. Here again, there is no precise definition of space. The proof for the existence, and the justification, of the different classifications of "space" and "vacuum" are left unexplained.
When developing new theories, physicists tend to assume that quantities such as the strength of gravity, the speed of light in vacuum or the charge on the electron are all constant. These so-called universal constants are neither self-evident in Nature nor have they been derived from fundamental principles (though there are some claims to the contrary, each with some problem). They have been deduced mathematically, and their values have been determined by actual measurement. For example, the fine structure constant has been postulated in QED, but its value has been derived only experimentally (we have derived the measured value from fundamental principles). Yet, the regularity with which such constants of Nature have been discovered points to some important principle underlying them. But are these quantities really constant?
The velocity of light varies according to the density of the medium. The acceleration due to gravity "g" varies from place to place. We have measured the value of "G" from Earth, but we do not know whether the value is the same beyond the solar system. The current value of the distance between the Sun and the Earth has been pegged at 149,597,870.696 kilometers. A recent (2004) study shows that the Earth is moving away from the Sun at about 15 cm per annum. Since this value is 100 times greater than the measurement error, something must really be pushing the Earth outwards. While one possible explanation for this phenomenon is that the Sun is losing enough mass via fusion and the solar wind, alternative explanations include the influence of dark matter and a changing value of G. We will explain it later.
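To put the figure in perspective, here is a rough, hedged estimate using standard textbook values (not taken from this book) of how much recession the Sun's mass loss alone could supply; the shortfall is one reason the alternative explanations are entertained.

```python
# A rough estimate with standard textbook values (not from this book):
# for a slowly shrinking central mass, a Kepler orbit widens as
# da/a = -dM/M. How much recession can solar mass loss alone supply?
L_SUN = 3.8e26    # solar luminosity, W
C     = 3.0e8     # speed of light, m/s
WIND  = 1.3e9     # approximate solar-wind mass loss, kg/s
M_SUN = 2.0e30    # solar mass, kg
AU    = 1.496e11  # mean Earth-Sun distance, m
YEAR  = 3.156e7   # seconds per year

dm = (L_SUN / C**2 + WIND) * YEAR  # total mass lost per year, ~1.7e17 kg
da = AU * dm / M_SUN               # expected recession per year, m
print(f"{da * 100:.1f} cm per year")  # ~1.3 cm/yr, well short of 15 cm/yr
```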
Einstein proposed the Cosmological Constant to allow static homogeneous solutions to his equations of General Relativity in the presence of matter. When the expansion of the Universe was discovered, it was thought to be unnecessary, forcing Einstein to declare that it was his greatest blunder. There have been a number of subsequent episodes in which a non-zero cosmological constant was put forward as an explanation for a set of observations and later withdrawn when the observational case evaporated. Meanwhile, particle theorists postulate that the cosmological constant can be interpreted as a measure of the energy density of the vacuum. This energy density is the sum of a number of apparently unrelated contributions: potential energies from scalar fields and zero-point fluctuations of each field-theory degree of freedom, as well as a bare cosmological constant λ0, each of magnitude much larger than the upper limits on the cosmological constant as measured now. However, the observed vacuum energy is minuscule in comparison to the theoretical prediction: a discrepancy of some 120 orders of magnitude between the theoretical and observational values of the cosmological constant. This has led some people to postulate an unknown mechanism that would set it precisely to zero. Others postulate a mechanism to suppress the cosmological constant by just the right amount to yield an observationally accessible quantity. However, all agree that this elusive quantity does play an important dynamical role in the Universe. The confusion can be settled if we accept the changing value of G, which can be related to the energy density of the vacuum. Thus, the so-called constants of Nature could also be thought of as equilibrium points, where different forces acting on a system in different proportions balance each other.
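The size of the mismatch can be sketched with standard order-of-magnitude estimates (not derived in this book). A quantum field theory with a Planck-scale cutoff suggests a vacuum energy density of order

$$\rho_{\text{vac}}^{\text{theory}} \sim \frac{c^{7}}{\hbar G^{2}} \approx 10^{113}\ \text{J/m}^{3},$$

whereas the observed dark-energy density is only about $6 \times 10^{-10}\ \text{J/m}^{3}$ – a ratio of roughly $10^{122}$, loosely quoted as the 120-orders-of-magnitude discrepancy.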
For example, let us consider the Libration points called L4 and L5, which are said to be places that gravity forgot. They are vast regions of space, sometimes millions of kilometers across, in which celestial forces cancel out gravity and trap anything that falls into them. The Libration points, known as mandochcha (मन्दोच्च) and pāta (पात) in earlier times, were rediscovered in 1772 by the mathematician Joseph-Louis Lagrange. He calculated that the Earth's gravitational field neutralizes the gravitational pull of the Sun at five regions in space, making them the only places near our planet where an object is truly weightless. Astronomers call them Libration points, also Lagrangian points, or L1, L2, L3, L4 and L5 for short. Of the five Libration points, L4 and L5 are the most intriguing.
Two such Libration points sit in the Earth's orbit also, one marching ahead of our planet, the other trailing along behind. They are the only ones that are stable. While a satellite parked at L1 or L2 will wander off after a few months unless it is nudged back into place (like the American satellite SOHO), any object at L4 or L5 will stay put due to a complex web of forces (like the asteroids). Evidence for such gravitational potholes appears around other planets too. In 1906, Max Wolf discovered an asteroid outside of the main belt between Mars and Jupiter, and recognized that it was sitting at Jupiter's L4 point. The mathematics for L4 uses a "brute force approach", making it approximate. Lying 150 million kilometers away along the line of Earth's orbit, L4 circles the Sun about 60 degrees (slightly more, according to our calculation) in front of the planet, while L5 lies at the same angle behind. Wolf named the asteroid Achilles, leading to the tradition of naming these asteroids after characters from the Trojan wars.
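For the collinear points, the distance follows from a standard first-order balance of gravity against the orbital centripetal demand – a textbook approximation, not this book's method. A minimal sketch:

```python
# A minimal sketch using the standard first-order approximation
# r ~ R * (m / 3M)^(1/3) for the distance of the collinear points
# L1 and L2 from the smaller body (a textbook result, not this
# book's method).
M_SUN   = 1.989e30  # kg
M_EARTH = 5.972e24  # kg
R_AU_KM = 1.496e8   # Earth-Sun distance, km

r = R_AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"Sun-Earth L1/L2 distance from Earth: {r:.2e} km")  # ~1.5e6 km
# L4 and L5 need no such expansion: each lies on the Earth's own orbit,
# 60 degrees ahead of or behind the planet, completing an equilateral
# triangle with the Sun and the Earth.
```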
The realization that Achilles would be trapped in its place and forced to orbit with Jupiter, never getting much closer or farther away, started a flurry of telescopic searches for more examples. There are now more than 1000 asteroids known to reside at each of Jupiter's L4 and L5 points. Of these, about ⅔ reside at L4 while the remaining ⅓ are at L5. Perturbations by the other planets (primarily Saturn) cause these asteroids to oscillate around L4 and L5 by about 15–20° and at inclinations of up to 40° to the orbital plane. These oscillations generally take between 150 and 200 years to complete. Such planetary perturbations may also be the reason why so few Trojans have been found around other planets. Searches for "Trojan" asteroids around other planets have met with mixed results. Mars has five of them, at L5 only. Saturn seemingly has none. Neptune has two.
The asteroid belt surrounds the inner Solar system like a rocky, ring-shaped moat, extending out from the orbit of Mars to that of Jupiter. But there are voids in that moat at distinct locations, called Kirkwood gaps, that are associated with orbital resonances with the giant planets – where the orbital influence of Jupiter is especially potent. Any asteroid unlucky enough to venture into one of these locations will follow a chaotic orbit and will be perturbed and ejected from the cozy confines of the belt, often winding up on a collision course with one of the inner, rocky planets (such as the Earth) or the Moon. But Jupiter's pull cannot account for the extent of the belt's depletion seen at present, or for the spotty distribution of asteroids across the belt – unless there was a migration of planets early in the history of the solar system. According to a report (Nature 457, 1109–1111, 26 February 2009), the observed distribution of main-belt asteroids does not uniformly fill even those regions that are dynamically stable over the age of the Solar System. There is a pattern of excess depletion of asteroids, particularly just outward of the Kirkwood gaps associated with the 5:2, 7:3 and 2:1 Jovian resonances. These features are not accounted for by planetary perturbations in the current structure of the Solar System, but are consistent with dynamical ejection of asteroids by the sweeping of gravitational resonances during the migration of Jupiter and Saturn.
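Where these resonances fall in the belt can be located with nothing more than Kepler's third law; a minimal sketch (the resonance list is the one named above):

```python
# A minimal sketch: Kepler's third law (a^3 proportional to T^2) locates
# the p:q Jovian resonances named above. An asteroid completing p orbits
# for every q orbits of Jupiter has period (q/p) * T_Jupiter.
A_JUPITER = 5.20  # Jupiter's semi-major axis, AU

for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    a = A_JUPITER * (q / p) ** (2 / 3)
    print(f"{p}:{q} resonance -> a = {a:.2f} AU")
# 3:1 -> 2.50 AU, 5:2 -> 2.82 AU, 7:3 -> 2.96 AU, 2:1 -> 3.28 AU,
# matching the positions of the observed Kirkwood gaps.
```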
Some researchers designed a computer model of the asteroid belt under the influence of the outer "gas giant" planets, allowing them to test the distribution that would result from changes in the planets' orbits over time. A simulation wherein the orbits remained static did not agree with observational evidence: there were places where there should have been a lot more asteroids than we see. On the other hand, a simulation with an early migration of Jupiter inward and Saturn outward – the result of interactions with lingering planetesimals (small bodies) from the creation of the solar system – fit the observed layout of the belt much better. The uneven spacing of asteroids is readily explained by this planet-migration process, which others have also worked on. In particular, if Jupiter had started somewhat farther from the Sun and then migrated inward toward its current location, the gaps it carved into the belt would also have inched inward, leaving the belt looking much like it does now. The agreement between the simulated and observed asteroid distributions is quite remarkable.
One significant question not addressed in the paper is the pattern of migration – whether the asteroid belt can be used to rule out one of the presently competing theories of migratory patterns. The new study deals with the speed at which the planets' orbits have changed. The simulation presumes a rather rapid migration of one or two million years, but other models of Neptune's early orbital evolution tend to show that migration proceeds much more slowly, over many millions of years. We hold this period to be 4.32 million years for the Solar system. This example shows that the orbits of planets, which are stabilized by the balancing of the centripetal force and gravity, might be changing from time to time. This implies that either the masses of the Sun and the planets, or their distances from each other, or both, are changing over long periods of time (which is true). It can also mean that G is changing. Thus, the so-called constants of Nature may not be so constant after all.
Earlier, a cosmology with a changing value for the gravitational constant G was proposed by P.A.M. Dirac in 1937. Field theories applying this principle were proposed by P. Jordan and D.W. Sciama, and in 1961 by C. Brans and R.H. Dicke. According to these theories the value of G is diminishing; Brans and Dicke suggested a change of about 2 parts in 10¹¹ (0.00000000002) per year. This theory has not been accepted, on the ground that it would have a profound effect on phenomena ranging from the evolution of the Universe to the evolution of the Earth. For instance, stars evolve faster if G is greater; thus the stellar evolutionary ages computed with G constant at its present value would be too great. The Earth, compressed by gravitation, would expand, having a profound effect on surface features. The Sun would have been hotter than it is now and the Earth's orbit would have been smaller. No one bothered to check whether such a scenario existed or is possible. Our studies in this regard show that the above scenario did happen. We have data to prove the point.
Precise measurements in 1999 gave values of G so divergent from the currently accepted value that the result had to be swept under the carpet, as otherwise most theories of physics would have tumbled. Presently, physicists are measuring gravity by bouncing atoms up and down off a laser beam (arXiv:0902.0109). The experiments have been modified to perform atom interferometry, whereby quantum interference between atoms can be used to measure tiny accelerations. Those still using the earlier value of G in their calculations land in trajectories much different from their theoretical predictions. Thus, modern science is based on a value of G that has been shown to be wrong. The Pioneer and fly-by anomalies and the change of direction of Voyager 2 after it passed the orbit of Saturn have cast a shadow on the authenticity of the theory of gravitation. Till now these have not been satisfactorily explained. We have discussed these problems and explained a different theory of gravitation in later pages.
According to reports published in several scientific journals, precise measurements of the light from distant quasars, and of the only known natural nuclear reactor, which was active nearly 2 billion years ago at what is now Oklo in Gabon, suggest that the value of the fine-structure constant may have changed over the history of the universe (Physical Review D, vol. 69, p. 121701). If confirmed, the results will be of enormous significance for the foundations of physics. Alpha is an extremely important constant that determines how light interacts with matter – and it shouldn't be able to change. Its value depends on, among other things, the charge on the electron, the speed of light and Planck's constant. Could one of these really have changed?
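For reference, the standard definition makes this dependence explicit:

$$\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036},$$

so a drift in α would imply a drift in the electron charge e, the speed of light c, Planck's constant ħ, or some combination of them.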
If the fine-structure constant changes over time, it allows the postulate that the velocity of light might not be constant. This would explain the flatness, horizon and monopole problems in cosmology. Recent work has shown that the universe appears to be expanding at an ever faster rate, and there may well be a non-zero cosmological constant. There is a class of theories where the speed of light is determined by a scalar field (the force making the cosmos expand, the cosmological constant) that couples to the gravitational effect of pressure. Changes in the speed of light convert the energy density of this field into energy. One off-shoot of this view is that in a young and hot universe, during the radiation epoch, this prevents the scalar field from dominating the universe. As the universe expands, pressure-less matter dominates and variations in c decrease, making α (alpha) fixed and stable. The scalar field then begins to dominate, driving a faster expansion of the universe. Whether the claimed variation of the fine-structure constant exists or not, putting bounds on its rate of change places tight constraints on new theories of physics.
One of the most mysterious objects in the universe is what is known as the black hole – a derivative of the general theory of relativity. It is said to be the ultimate fate of a super-massive star that has exhausted the fuel that sustained it for millions of years. In such a star, gravity overwhelms all other forces and the star collapses under its own weight to the size of a pinprick. It is called a black hole because nothing – not even light – can escape it. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible, the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting. It is generally believed that a large star eventually collapses to a black hole. Roger Penrose conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon. According to him, Nature forbids us from ever seeing a singularity, because a horizon always cloaks it. Penrose's conjecture is termed the cosmic censorship hypothesis. It is only a conjecture. But some theoretical models suggest that instead of a black hole, a collapsing star might become a naked singularity.
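For a non-rotating mass, the event horizon lies at the Schwarzschild radius, the standard general-relativistic result:

$$r_{s} = \frac{2GM}{c^{2}},$$

about 3 km per solar mass, so a collapsed three-solar-mass star would have a horizon radius of only about 9 km.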
Most physicists operate under the assumption that a horizon must indeed form around a black hole. What exactly happens at a singularity – what becomes of the matter after it is infinitely crushed into oblivion – is not known. By hiding the singularity, the event horizon isolates this gap in our knowledge. General relativity does not account for the quantum effects that become important for microscopic objects, and those effects presumably intervene to prevent the strength of gravity from becoming truly infinite. Whatever happens in a black hole stays in a black hole. Yet researchers have found a wide variety of stellar-collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. In such a case, matter and radiation can both fall in and come out, whereas matter falling into the singularity inside a black hole would be on a one-way trip.
In principle, we can come as close as we like to a naked singularity and return. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of so-called space-time on its finest scales. The results of simulations by different scientists show that most naked singularities are stable to small variations of the initial setup. Thus, these situations appear to be generic and not contrived. These counterexamples to Penrose's conjecture suggest that cosmic censorship is not a general rule.
The discovery of naked singularities would transform the search for a unified theory of physics, not least by providing direct observational tests of such a theory. It has taken so long for physicists to accept the possibility of naked singularities because they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. Unpredictability is actually common in general relativity, and not always directly related to the cosmic censorship violation described above. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of space-time around the hole radically changes and is no longer predictable. A similar situation holds when the black hole is rotating.
Specifically, what happens is that space-time no longer neatly separates into space and time, so that physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable. The loss of predictability and other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not. Cosmologists dread the singularity because at this point gravity becomes infinite, along with the temperature and density of the universe. As its equations cannot cope with such infinities, general relativity fails to describe what happens at the big bang.
In the mid 1980s, Abhay Ashtekar rewrote the equations of general relativity in a quantum-mechanical framework to show that the fabric of space-time is woven from loops of gravitational field lines. The theory is called loop quantum gravity. If we zoom out far enough, space appears smooth and unbroken, but a closer look reveals that it comes in indivisible chunks, or quanta, about 10⁻³⁵ m across. In 2000, some scientists used loop quantum gravity to create a simple model of the universe, known as loop quantum cosmology (LQC). Unlike general relativity, the physics of LQC did not break down at the big bang. Others developed computer simulations of the universe according to LQC. Early versions of the theory described the evolution of the universe in terms of quanta of area, but a closer look revealed a subtle error. After this mistake was corrected, it was found that the calculations involved tiny volumes of space. That made a crucial difference. Now the universe according to LQC agreed brilliantly with general relativity when expansion was well advanced, while still eliminating the singularity at the big bang. When the simulations ran time backwards, instead of becoming infinitely dense at the big bang, the universe stopped collapsing and reversed direction. The big bang singularity had disappeared (Physical Review Letters, vol. 96, p. 141301). The era of the Big Bounce has arrived. But the scientists are far from explaining all the conundrums.
Often it is said that the language of physics is mathematics. In a famous essay, Wigner wrote about the "unreasonable effectiveness of mathematics". Most physicists resonate with the perplexity expressed by Wigner and with Einstein's dictum that "the most incomprehensible thing about the universe is that it is comprehensible". They marvel at the fact that the universe is not anarchic – that atoms obey the same laws in distant galaxies as in the lab. Yet Gödel's Theorem implies that we can never be certain that mathematics is consistent: it leaves open the possibility that a proof exists demonstrating that 0 = 1. Quantum theory tells us that, on the atomic scale, nature is intrinsically fuzzy. Nonetheless, atoms behave in precise mathematical ways when they emit and absorb light, or link together to make molecules. Yet, is Nature mathematical?
Language is a means of communication. Mathematics cannot communicate in the same manner as a language. Mathematics on its own does not lead to a sensible universe; a mathematical formula has to be interpreted in communicable language to acquire meaning. Thus, mathematics is a tool for describing some, but not all, ideas. For example, the "observer" has an important place in quantum physics. Everett addressed the measurement problem by making the observer an integral part of the system observed: introducing a universal wave function that links observers and objects as parts of a single quantum system. But there is no equation for the "observer".
We have not come across any precise and scientific definition of mathematics. The Concise Oxford Dictionary defines mathematics as "the abstract science of numbers, quantity, and space studied in its own right", or "as applied to other disciplines such as physics, engineering, etc.". This is not a scientific description, as the definition of number itself leads to circular reasoning. Even mathematicians do not have a common opinion on the content of mathematics. There are at least four views among mathematicians on what mathematics is. John D. Barrow labels these views as follows:
Platonism: This is the view that concepts like groups, sets, points, infinities, etc., are "out there" independent of us – "the pie is in the sky". Mathematicians discover them and use them to explain Nature in mathematical terms. There is an offshoot of this view called "neo-Platonism", which likens mathematics to the composition of a cosmic symphony by independent contributors, each moving it towards some grand final synthesis. The proof offered: completely independent mathematical discoveries by mathematicians working in different cultures so often turn out to be identical.
Conceptualism: This is the antithesis of Platonism. According to this view, scientists create an array of mathematical structures, symmetries and patterns and force the world into this mould because they find it so compelling. The so-called constants of Nature, which arise as theoretically undetermined constants of proportionality in the mathematical equations, are solely artifacts of the peculiar mathematical representation chosen for different purposes.
Formalism: This was developed during the last century, when a number of embarrassing logical paradoxes were discovered. There were proofs which established the existence of particular objects but offered no way of constructing them explicitly in a finite number of steps. Hilbert's formalism belongs to this category; it defines mathematics as nothing more than the manipulation of symbols according to specified rules (not natural, but sometimes un-physical, man-made rules). The resulting paper edifice has no special meaning at all. If the manipulations are done correctly, the result is a vast collection of tautological statements: an embroidery of logical connections.
Intuitionism: Prior to Cantor's work on infinite sets, mathematicians had not made use of actual infinities, but had only exploited the existence of quantities that could be made arbitrarily large or small – the concept of a limit. To avoid founding whole areas of mathematics upon the assumption that infinite sets share the "obvious" properties possessed by finite ones, it was proposed that only quantities that can be constructed from the natural numbers 1, 2, 3, …, in a finite number of logical steps should be regarded as proven true.
None of the above views is complete, because none is a description derived from fundamental principles, nor does any conform to a proper definition of mathematics, whose foundation is built upon logical consistency. The Platonic view arose from the fact that mathematical quantities transcend human minds and manifest the intrinsic character of reality. A number, say three or five, is coded differently in various languages, but conveys the same concept in all civilizations. Numbers are abstract entities, and mathematical truth means correspondence between the properties of these abstract objects and our system of symbols. We associate transitory physical objects, such as three worlds or five sense organs, with these immutable abstract quantities as a secondary realization. These ideas are somewhat misplaced. Numbers are a property of all objects by which we distinguish between similars. If there is nothing similar to an object, it is one. If there are similars, the number is decided by the number of times we perceive such similars (we may call it a set). Since perception is universal, the concept of numbers is also universal.
Believers in eternal truth often point to mathematics as a model of a realm with timeless truths. Mathematicians explore this realm with their minds and discover truths that exist outside of time, in the same way that we discover the laws of physics by experiment. Mathematics is not only self-consistent, but also plays a central role in formulating the fundamental laws of physics – what the physics Nobel laureate Eugene Wigner once referred to as the "unreasonable effectiveness of mathematics" in physics. One way to explain this success within the dominant metaphysical paradigm of the timeless multiverse is to suppose that physical reality is mathematical, i.e. that we are creatures within the timeless Platonic realm. The cosmologist Max Tegmark calls this the mathematical universe hypothesis. A slightly less provocative approach is to posit that, since the laws of physics can be represented mathematically, not only is their essential truth outside of time, but there is in the Platonic realm a mathematical object, a solution to the equations of the final theory, that is "isomorphic" in every respect to the history of the universe. That is, any truth about the universe can be mapped into a theorem about the corresponding mathematical object. If nothing exists or is true outside of time, then this description is void. However, if mathematics is not the description of a different, timeless realm of reality, what is it? What are the theorems of mathematics about, if numbers, formulas and curves do not exist outside of our world?
Let us consider the game of chess. It was invented at a particular time, before which there is no reason to speak of any truths of chess. But once the game was invented, a long list of facts became demonstrable. These are provable from the rules and can be called the theorems of chess. These facts are objective, in that any two minds that reason logically from the same rules will reach the same conclusions about whether a conjectured theorem is true or not. Platonists would say that chess always existed timelessly in an infinite space of mathematically describable games. By such an assertion, we achieve nothing except a feeling of doing something elevated. Further, we would have to explain how we, finite beings embedded in time, can gain knowledge about this timeless realm. It is much simpler to think that at the moment the game was invented, a large set of facts became objectively demonstrable as a consequence of the invention of the game. There is no need to think of these facts as eternally existing truths that suddenly become discoverable. Instead we can say they are objective facts that are evoked into existence by the invention of the game of chess. The bulk of mathematics can be treated the same way, even if the subjects of mathematics, such as numbers and geometry, are inspired by our most fundamental observations of nature. Mathematics is no less objective, useful or true for being evoked by, and dependent on, discoveries of living minds in the process of exploring the time-bound universe.
The Mandelbrot Set is often cited as a mathematical object with an independent existence of its own. The Mandelbrot Set is produced by a remarkably simple mathematical formula – a few lines of code implementing the recursive feed-back loop f(z) = z² + c – yet it can be used to produce beautiful colored computer plots. It is possible to zoom endlessly into the set, revealing ever more beautiful structures which never seem to repeat themselves. Penrose called it "not an invention of the human mind: it was a discovery". It was just out there. On the other hand, fractals – geometrical shapes found throughout Nature – are self-similar: no matter how far you zoom into them, they still resemble the original structure. Some people use these facts to plead that mathematics, and not evolution, is the sole factor in designing Nature. They miss the deep inner meaning of these structures, which will be described later while describing the structure of the Universe.
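The entire feed-back loop can indeed be written in a few lines. The following minimal sketch (in Python, purely illustrative) tests whether a point c belongs to the set by iterating f(z) = z² + c from z = 0 and checking whether the iterates stay bounded:

```python
# A minimal, purely illustrative sketch of the feed-back loop named
# above: c belongs to the Mandelbrot Set if iterating f(z) = z^2 + c
# from z = 0 never escapes the disc |z| <= 2.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c        # the whole recursive rule
        if abs(z) > 2:       # escaped: c is outside the set
            return False
    return True

# Coarse ASCII rendering of the familiar shape.
for im in range(12, -13, -2):
    print("".join(
        "#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
        for re in range(-40, 21)
    ))
```

Coloring points by how quickly they escape, instead of printing characters, yields the well-known computer plots.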
The opposing view reflects the ideas of Kant regarding the innate categories of thought whereby all our experience is ordered by our minds. Kant pointed out the difference between the internal mental models we build of the external world and the real objects that we know through our sense organs. The views of Kant have many similarities with those of Bohr. The Consciousness of Kant is described as intelligence by Bohr; the sense organs of Kant are described as measuring devices by Bohr; Kant's mental models are Bohr's quantum mechanical models. This view of mathematics stresses "mathematical modeling" more than mathematical rules or axioms. In this view, the so-called constants of Nature that arise as theoretically undetermined constants of proportionality in our mathematical equations are solely artifacts of the particular mathematical representation we have chosen to use for explaining different natural phenomena. For example, we use G as the Gravitational constant because of our inclination to express the gravitational interaction in a particular way. This view is misleading, as the large number of the so-called constants of Nature points to some underlying reality behind them. We will discuss this point later.
The debate over the definition of "physical reality" led to the notion that it should be external to the observer – an observer-independent objective reality. The statistical formulation of the laws of atomic and sub-atomic physics has added a new dimension to the problem. In quantum mechanics, the experimental arrangements are treated in classical terms, whereas the observed objects are treated in probabilistic terms. In this way, the measuring apparatus and the observer are effectively joined into one complex system with no distinct, well-defined parts, and the measuring apparatus does not have to be described as an isolated physical entity.
As Max Tegmark puts it in his External Reality Hypothesis: if we assume that reality exists independently of humans, then for a description to be complete, it must also be well defined according to non-human entities that lack any understanding of human concepts like "particle", "observation", etc. A description of objects in this external reality, and of the relations between them, would have to be completely abstract, forcing any words or symbols to be mere labels with no preconceived meanings whatsoever. To understand the concept, one has to distinguish between two ways of viewing reality. The first is from outside, like the overview of a physicist studying its mathematical structure – a bird's eye view. The second is the inside view of an observer living in the structure – the view of a frog in a well.
Though Tegmark's view is nearer the truth (it will be discussed later), it has been contested by others on the ground that it contradicts logical consistency. Tegmark relies on a quote of David Hilbert: "Mathematical existence is merely freedom from contradiction". This implies that mathematical structures simply do not exist unless they are logically consistent. The critics cite Russell's paradox (discussed in detail in later pages) and the devices needed to avoid it – such as Zermelo-Fraenkel set theory – to point out that mathematics on its own does not lead to a sensible universe. We seem to need to apply constraints in order to obtain consistent physical reality from mathematics. Unrestricted axioms lead to Russell's paradox.
Conventional bivalent logic is assumed to be based on the principle that every proposition takes exactly one of two truth values: "true" or "false". This is a wrong conclusion based on the European tradition, for in ancient times students were advised to observe, listen (to the teachings of others), analyze and test with practical experiments before accepting anything as true. Till a proposition was conclusively proved or disproved, it remained "undecided". The so-called discovery of multi-valued logic is thus nothing new. And if we extend modern logic this way, why stop at ternary truth values? It could be four-valued or more-valued logic. But then what are these values? We will discuss this later.
Though Euclid, with his Axioms, appears to be a Formalist, his Axioms were abstracted from the real physical world. But the focus of attention of modern Formalists is upon the relations between entities and the rules governing them, rather than the question of whether the objects being manipulated have any intrinsic meaning. The connection between the Natural world and the structure of mathematics is totally irrelevant to them. Thus, when they thought that Euclidean geometry is not applicable to curved surfaces, they had no hesitation in accepting the view that the sum of the three angles of a triangle need not be equal to 180°: it could be more or less depending upon the curvature. This is a wholly misguided view. The lines or sides drawn on a curved surface are not straight lines. Hence the Axioms of Euclid are not violated, but wrongly applied. Riemannian geometry, which led to the chain of non-Euclidean geometries, was developed out of Riemann's interest in the problem of the distortion of metal sheets when they are heated. Einstein used this idea to suggest curvature of space-time without precisely defining space, time or space-time. But such curvature is a temporary phenomenon due to the application of heat energy: the moment the external heat energy is removed, the metal plate is restored to its original shape and Euclidean geometry is applicable. If gravity changes the curvature of space, then it should be like the external energy that distorts the metal plate. Then who applies gravity to mass, or what is the mechanism by which gravity is applied to mass? If no external agency is needed and it acts perpetually, then all mass should be changing perpetually, which is contrary to observation. This has been discussed elaborately in later pages.
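For the record, the standard result the non-Euclidean argument rests on is Girard's theorem: on a sphere of radius R, a triangle of area A bounded by great-circle arcs has angle sum

$$\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}},$$

the excess over 180° vanishing as the curvature 1/R² goes to zero. The dispute above is not with this formula, but with calling its great-circle sides "straight lines".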
Once the notion of a minimum distance scale was firmly established, questions were raised about infinity and irrational numbers. Feynman raised doubts about the relevance of infinitely small scales as follows: "It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space?" Paul Davies asserted: "the use of differential equations assumes the continuity of space-time on arbitrarily small scales.
The frequent appearance of π implies that their numerical values may be computed to arbitrary precision by an infinite sequence of operations. Many physicists tacitly accept these mathematical idealizations and treat the laws of physics as implementable in some abstract and perfect Platonic realm. Another school of thought, represented most notably by Wheeler and Landauer, stresses that real calculations involve physical objects, such as computers, and take place in the real physical universe, with its specific available resources. In short, information is physical. That being so, it follows that there will be fundamental physical limitations to what may be calculated in the real world". Thus, Intuitionism or Constructivism divides mathematical structures into the "physically relevant" and the "physically irrelevant". It says that mathematics should only include statements which can be deduced by a finite sequence of step-by-step constructions starting from the natural numbers. According to this view, infinity and irrational numbers cannot be part of mathematics.
Infinity is qualitatively different from even the largest number. Finite numbers, however large, obey the laws of arithmetic: we can add, multiply and divide them, and put different numbers unambiguously in order of size. But infinity is the same as a part of itself, and the mathematics of other numbers is not applicable to it. Often "Hilbert's hotel" is used as a metaphor to describe infinity. Suppose a hotel is full and each guest wants to bring a colleague who would need another room. This would be a nightmare for the management, who could not double the size of the hotel instantly. In an infinite hotel, though, there is no problem. The guest from room 1 goes into room 2, the guest in room 2 into room 4, and so on. All the odd-numbered rooms are then free for new guests. This is a wrong analogy. Numbers are divided into two categories based on whether there is similar perception or not. If after the perception of one object there is further similar perception, they are many, which can range from 2, 3, 4, … n depending upon the sequence of perceptions. If there is no similar perception after the perception of one object, then it is one. In the case of infinity, neither of the above conditions applies. However, infinity is more like the number "one" – without a similar – except for one characteristic: while one object has a finite dimension, infinity has infinite dimensions. The perception of higher numbers is generated by repeating "one" that many times, but the perception of infinity is ever incomplete.
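The relabeling behind the metaphor, stated explicitly: every guest moves according to

$$n \mapsto 2n, \qquad n = 1, 2, 3, \ldots,$$

leaving every odd-numbered room 2k − 1 vacant. Such a one-to-one map of a set onto a proper part of itself is possible only for an infinite set, which is precisely why finite arithmetic intuitions fail here.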
Since interaction requires a perceptible change somewhere in the system under examination or measurement, normal interactions are not applicable in the case of infinity. For example, space and time in their absolute terms are infinite. Space and time cannot be measured, as they are not directly perceptible through our sense organs, but are deemed to be perceived. Actually, what we measure as space is the interval between objects or points on objects. These intervals are mental constructs and have no physical existence apart from the objects, which are used to describe space through alternative symbolism. Similarly, what we measure as time is the interval between events. Space and time do not and cannot interact with each other or with other objects or events, as no mathematics is possible between infinities. Our measurements of an arbitrary segment of space or time (which are really the intervals) do not affect space or time in any way. We have explained the quantum phenomena with real numbers derived from fundamental principles and correlated them to the macro world. Quantities like π and φ have other significances, which will be discussed later.
The fundamental "stuff" of the Universe is the same, and the differences arise only from the manner of its accumulation and reduction – magnitude and sequential arrangement. Since number is a property of all particles, physical phenomena have some associated mathematical basis. However, the perceptible structures and processes of the physical world are not the same as their mathematical formulations, many of which are neither perceptible nor feasible. Thus the relationship between physics and mathematics is that of a map and the territory. A map facilitates study of the territory, but it does not tell all about the territory; knowing all about the territory from the map is impossible. This creates the difficulty. Science is becoming less and less objective. Scientists present data as if it were absolute truth, merely liberated by their able hands for the benefit of lesser mortals. Thus, it has to be presented to the lesser mortals in a language that they do not understand – and thus do not question. This leads to misinterpretations, to the extent that some classic experiments become dogma even when they are fatally flawed. One example is Olbers' paradox.
In order to understand our environment and interact effectively with it, we engage in counting the total effect of each of the systems around us. Such counting is called mathematics. It covers all aspects of life. We are central to everything in a mathematical way. As Barrow points out, "While Copernicus's idea that our position in the universe should not be special in every sense is sound, it is not true that it cannot be special in any sense". If we consider our positioning, as opposed to our position, in the Universe, we will find our special place. For example, if we plot a graph of the mass of the star relative to the Sun (with the Sun at 1) against the radius of the orbit relative to the Earth's (with the Earth at 1), and consider the scale of the planets, their distance from the Sun, their surface conditions, the positioning of the neighboring planets, etc., treating these variables in a mathematical space, we will find that the Earth's positioning is very special indeed. It lies in a narrow band called the habitable zone (for details, refer to the Wikipedia article on planetary habitability).
If we imagine the complex structure of the Mandelbrot Set as representative of the Universe (since it is self-similar), then we could say that we are right in the border region of the fractal structure. If we consider the relationship between different dimensions of space (or of a bubble), we find their exponential nature. If we take the center of the bubble as 0 and the edge as 1 and map it on a logarithmic scale, we find an interesting zone at 0.5. From the Galaxy to the Sun to the Earth to the atoms, everything comes in this zone. For example, if we consider the galactic core as the equivalent of the S orbital of the atom, the bars as the equivalent of the P orbital and the spiral arms as the equivalent of the D orbital, and apply the logarithmic scale, we find the Sun at the 0.5 position. The same is true for the Earth. It is known that both fusion and fission push atoms towards iron; that element finds itself in the middle group of the middle period of the periodic table – again 0.5. Thus, there can be no doubt that Nature is mathematical. But the structures and processes of the world are not the same as mathematical formulations. The map is not the territory. Hence there are various ways of representing Nature, and mathematics is one of them. However, mathematics alone cannot describe Nature in any meaningful way.
Even modern mathematicians and physicists do not agree on many concepts. Mathematicians insist that zero has existence but no dimension, whereas physicists insist that since the minimum possible length is the Planck scale, the concept of zero has vanished! The Lie algebra corresponding to SU(n) is a real and not a complex Lie algebra; physicists introduce the imaginary unit i to make it complex, which differs from the convention of the mathematicians. Mathematicians treat any operation involving infinity as void, since infinity does not change by addition or subtraction of, or multiplication or division by, any number. The history of science shows that whenever infinity appears in an equation, it points to some novel phenomenon or some missing parameters. Yet physicists use renormalization, by manipulation, to generate another infinity on the other side of the equation and then cancel both! Certainly that is not mathematics!
Modern scientists claim to depend solely on mathematics. But most of what is called "mathematics" in modern science fails the test of logical consistency that is a cornerstone for judging the truth content of a mathematical statement. For example, the mathematics for a multi-body system, like a lithium or higher atom, is done by treating the atom as a number of two-body systems. Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation, as it contains a term in x², which is in two dimensions and mathematically implies an area) is converted to three dimensions by the addition of two similar factors for the y and z axes. Three dimensions mathematically imply volume; the addition of three areas does not generate volume, and x² + y² + z² ≠ x·y·z. Similarly, mathematically all operations involving infinity are void; hence renormalization is not mathematical. Thus, the so-called mathematics of modern physicists is not mathematical at all!
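For reference, the textbook form under discussion: the one-dimensional time-dependent Schrödinger equation

$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}} + V\psi$$

is extended to three dimensions by replacing ∂²/∂x² with the Laplacian ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²; it is this replacement that the above argument challenges.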
In fact, some recent studies appear to hint that perception is mathematically impossible. Imagine a black-and-white line drawing of a cube on a sheet of paper. Although this drawing looks to us like a picture of a cube, there are actually infinitely many other three-dimensional objects that could have produced the same set of lines when collapsed on the page. But we don't notice any of these alternatives. The reason is that our visual systems have more to go on than just the bare perceptual input. They are said to use heuristics and short cuts, based on the physics and statistics of the natural world, to make "best guesses" about the nature of reality. Just as we interpret a two-dimensional drawing as representing a three-dimensional object, we interpret the two-dimensional visual input of a real scene as indicating a three-dimensional world. Our perceptual system makes this inference automatically, using educated guesses to fill in the gaps and make perception possible. Our brains use the same intelligent guessing process to reconstruct the past and help us perceive the world.
Memory functions differently from a video-recording with a moment-by-moment sensory image. In fact, it is more like a puzzle: we piece together our memories based on both what we actually remember and what seems most likely given our knowledge of the world. Just as we make educated guesses – inferences – in perception, our minds' best inferences help "fill in the gaps" of memory, reconstructing the most plausible picture of what happened in our past. The most striking demonstration of the mind's guessing game occurs when we find ways to fool the system into guessing wrong. When we trick the visual system, we see a "visual illusion" – a static image might appear as if it is moving, or a concave surface will look convex. When we fool the memory system, we form a false memory – a phenomenon made famous by the researcher Elizabeth Loftus, who showed that it is relatively easy to make people remember events that never occurred. As long as the falsely remembered event could plausibly have occurred, all it takes is a bit of suggestion, or even exposure to a related idea, to create a false memory.
Earlier, visual illusions and false memories were studied separately. After all, they seem qualitatively different: visual illusions are immediate, whereas false memories were thought to develop over an extended period of time. A recent study blurs the line between these two phenomena. The study reveals an example of false memory occurring within 42 milliseconds – about half the time it takes to blink an eye. It relied upon a phenomenon known as "boundary extension", an example of false memory found when recalling pictures. When we see a picture of a location – say, a yard with a garbage can in front of a fence – we tend to remember the scene as though more of the fence were visible around the garbage can. In other words, we extend the boundaries of the image, believing that we saw more fence than was actually present. This phenomenon is usually interpreted as a constructive memory error: our memory system extrapolates the view of the scene to a wider angle than was actually presented. The new study, published in the November 2008 issue of the journal Psychological Science, asked how quickly this boundary extension happens.
Although it is still possible that boundary extension is purely a result of our memory system, the incredible speed of this phenomenon suggests a more parsimonious explanation: that boundary extension may in part be caused by the guesses of our visual system itself. The new dataset thus blurs the boundaries between the initial representation of a picture (via the visual system) and the storage of that picture in memory. This raises the question: is boundary extension a visual illusion or a false memory? Perhaps these two phenomena are not as different as previously thought. False memories and visual illusions both occur quickly and easily, and both seem to rely on the same cognitive mechanism: the fundamental property of perception and memory to fill in gaps with educated guesses, information that seems most plausible given the context. The work adds to a growing movement that suggests that memory and perception may be simply two sides of the same coin. This, in turn, implies that mathematics, which is based on perception of numbers and other visual imagery, could be misleading for developing theories of physics.
The essence of creation is the accumulation and reduction of the number of particles in each system in various combinations. Thus, Nature has to be mathematical. But then physics should obey the laws of mathematics, just as mathematics should comply with the laws of physics. We have shown elsewhere that not all of mathematics can be physics: we may have a mathematical equation without a corresponding physical explanation. Accumulation or reduction can be linear or non-linear. If they are linear, the mathematics is addition and subtraction. If they are non-linear, the mathematics is multiplication and division. Yet this principle is violated in a large number of equations. For example, the Schrödinger equation in one dimension has been discussed earlier. Then there are unphysical combinations: certain combinations of protons and neutrons are prohibited physically, though there is no restriction on devising a mathematical formula for them. There is no equation for the observer. Thus, sole dependence on mathematics for discussing physics is neither desirable nor warranted.
We accept "proof" – mathematical or otherwise – to validate the reality of any physical phenomenon. We depend on proof to validate a theory as long as it corresponds to reality. The modern system of proof takes five stages: observation/experiment; developing a hypothesis; testing the hypothesis; accepting, rejecting, or modifying the hypothesis based on the additional information; and lastly, reconstructing the hypothesis if it was not accepted. We also adopt a five-stage approach to proof. First we observe/experiment and hypothesize. Then we look for corroborative evidence. In the third stage we try to prove that the opposite of the hypothesis is wrong. In the fourth stage we try to prove whether the hypothesis is universally valid or has limitations. In the last stage we try to prove that any theory other than this one is wrong.
Mathematics is one of the tools of "proof" because of its logical consistency. It is a universal law that tools are selected based on the nature of the operations, and not vice versa; the tools can only restrict the choice of operations. Hence mathematics by itself does not provide proof, though a proof may use mathematics as a tool. We also depend on symmetry, as it is a fundamental property of Nature. In our theory, different infinities co-exist and do not interact with each other. Thus, we agree that the evolutionary process of the Universe can be explained mathematically, as it is basically a process of non-linear accumulation and corresponding reduction of particles and energies in different combinations. But we differ on the interpretation of an equation. For us, the left-hand side of an equation represents the cause and the right-hand side the effect, which is reversible only in the same order. If the magnitudes of the parameters on one side are changed, the other side changes correspondingly. But such changes must be according to natural laws, not arbitrary. For example, we agree that e/m = c² or m/e = 1/c², which we derive from fundamental principles. But we do not agree that e = mc². This is because we treat mass and energy as inseparable conjugates with variable magnitude, not as interchangeable quantities, since each has characteristics not found in the other. Thus, they are not fit to be used in an equation as cause and effect. At the same time, we accept the factor c², since energy flow is perceived in fields, which are represented by second-order quantities.
If we accept the equation e = mc², then by modern principles it leads to m = e/c². In that case, we land in many self-contradictory situations. For example, if the photon has zero rest mass, then m₀ = 0/c² (at rest, the external energy that moves a particle has to be zero; internal energy is not relevant, as a stable system has zero net energy). This implies that m₀c² = 0, or e = 0, which makes c² = 0/0, which is meaningless. But if we accept e/m = c², with the two sides of the equation as cause and effect, then there is no such contradiction. As we have proved in our book "Vaidic Theory of Numbers", all operations involving zero except multiplication are meaningless. Hence, if either e or m becomes zero, the equation becomes meaningless, and in all other cases it matches the modern values. Here we may point out that the statement that the rest mass of matter is determined by its total energy content is not susceptible to a simple test, since there is no independent measure of the latter quantity. This supports our view that mass and energy are inseparable conjugates.
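A rough numeric check of the ratio e/m = c² for the electron (the constants below are approximate textbook values and are our own insertion, not part of the original argument):

    # Approximate values: electron rest mass, electron rest energy, speed of light.
    m_e = 9.109e-31        # kg
    e_rest = 8.187e-14     # J
    c = 2.998e8            # m/s
    print(e_rest / m_e)    # ~8.99e16
    print(c**2)            # ~8.99e16 -- the ratio e/m indeed comes out as c^2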
The domain that astronomers call "the universe" - the space extending more than 10 billion light years around us and containing billions of galaxies, each with billions of stars and billions of planets (and maybe billions of biospheres) - could be an infinitesimal part of the totality. There is a definite horizon to direct observations: a spherical shell around us such that no light from beyond it has had time to reach us since the big bang. However, there is nothing physical about this horizon. If we were in the middle of an ocean, it would be conceivable that the water ends just beyond our horizon - except that we know it doesn't. Likewise, there are reasons to suspect that our universe - the aftermath of our big bang - extends hugely further than we can see.
An idea called eternal inflation, suggested by some cosmologists, envisages big bangs popping off endlessly in an ever-expanding substratum. Or there could be other space-times alongside ours, all embedded in a higher-dimensional space; ours could be but one universe in a multiverse. Other branches of mathematics may then become relevant. This has encouraged the use of exotic mathematics, such as the transfinite numbers. It may require a rigorous language to describe the number of possible states that a universe could possess and to compare the probability of different configurations. It may simply be too hard for human brains to grasp. A fish may be barely aware of the medium in which it lives and swims; certainly it has no intellectual powers to comprehend that water consists of interlinked atoms of hydrogen and oxygen. The microstructure of empty space could, likewise, be far too complex for unaided human brains to grasp. Can we guarantee that with present mathematics we can overcome all obstacles and explain all the complexities of Nature? Should we then resort to so-called exotic mathematics? Let us see where it lands us.
The manipulative mathematical nature of the descriptions of quantum physics has created difficulties in its interpretation. For example, the mathematical formalism used to describe the time evolution of a non-relativistic system proposes two somewhat different kinds of transformations:
· Reversible transformations described by unitary operators on the state space. These transformations are determined by solutions to the Schrödinger equation.
· Non-reversible and unpredictable transformations described by mathematically more complicated transformations. Examples of these transformations are those that are undergone by a system as a result of measurement.
The truth content of a mathematical statement is judged from its logical consistency. We agree that mathematics is a way of representing and explaining the Universe in a symbolic way, because evolution is logically consistent. This is because everything is made up of the same "stuff"; only the quantities (number or magnitude) and their ordered placement or configuration create the variation. Since number is a property by which we differentiate between similar objects, and all natural phenomena are essentially accumulation and reduction of the fundamental "stuff" in different permissible combinations, physics has to be mathematical. But then mathematics must conform to natural laws - not the unphysical manipulations or the brute-force approach of arbitrarily reducing some parameters to zero to get a result, which now go in the name of mathematics. We suspect that the over-dependence on mathematics is due not to its being unexceptionable, but to another reason, described below.
In his book "The Myth of the Framework", Karl R. Popper, acknowledged as a major influence on modern philosophy and political thought, wrote: "Many years ago, I used to warn my students against the wide-spread idea that one goes to college in order to learn how to talk and write 'impressively' and incomprehensibly. At that time many students came to college with this ridiculous aim in mind, especially in Germany… They unconsciously learn that highly obscure and difficult language is the intellectual value par excellence… Thus arose the cult of incomprehensibility, of 'impressive' and high-sounding language. This was intensified by the impenetrable and impressive formalism of mathematics…" It is unfortunate that even now many professors, not to speak of their students, remain devotees of this cult.
Modern scientists justify the cult of incomprehensibility in the garb of research methodology - how "big science" is really done. "Big science" presents a big opportunity for methodologists. With their constant meetings and exchanges of e-mail, collaboration scientists routinely put their reasoning on public display (not to the general public, but only to those who subscribe to similar views) long before they write up their results for publication in a journal. In reality, this is done to test the reaction of others, as often bitter debate takes place over such ideas. Further, when particle physicists try to find a particular set of events among the trillions of collisions that occur in a particle accelerator, they focus their search by ignoring data outside a certain range. Clearly, there is a danger in admitting a non-conformist to such raw material, since a lack of acceptance of their reasoning and conventions can easily lead to very different conclusions, which may contradict their theories. Thus, they offer their own theory of "error-statistical evidence", as in the statement: "The distinction between the epistemic and causal relevance of epistemic states of experimenters may also help to clarify the debate over the meaning of the likelihood principle". Frequently they refer to ceteris paribus (other things being equal) without specifying which other things are equal (and then face a challenge to justify their statement).
The cult of incomprehensibility has been used even by the most famous scientists, with devastating effect. Even obvious mistakes in their papers have been blindly accepted by the scientific community and remained unnoticed for generations. Here we quote from an article by W. H. Furry of the Department of Physics, Harvard University, published in the March 1, 1936 issue of Physical Review, Volume 49. The paper, "Note on the Quantum-Mechanical Theory of Measurement", was written in response to the famous EPR argument and Bohr's reply to it. The quote relates to the differentiation between a "pure state" and a "mixture":
"Our statistical information about a system may always be expressed by giving the expectation values of all observables. Now the expectation value of an arbitrary observable F, for a state whose wave function is φ, is

⟨F⟩ = (φ, Fφ). (1)

If we do not know the state of the system, but know that wᵢ are the respective probabilities of its being in states whose wave functions are φᵢ, then we must assign as the expectation value of F the weighted average of its expectation values for the states φᵢ. Thus,

⟨F⟩ = Σᵢ wᵢ (φᵢ, Fφᵢ). (2)

This formula for ⟨F⟩ is the appropriate one when our system is one of an ensemble of systems of which numbers proportional to wᵢ are in the states φᵢ. It must not be confused with any such formula as

⟨F⟩ = (Σᵢ aᵢφᵢ, F Σⱼ aⱼφⱼ), with |aᵢ|² = wᵢ,

which corresponds to the system's having a wave function which is a linear combination of the φᵢ. This last formula is of the type of (1), while (2) is of an altogether different type.

An alternative way of expressing our statistical information is to give the probability that measurement of an arbitrary observable F will give as result an arbitrary one of its eigenvalues, say δ. When the system is in the state φ, this probability is

P(δ) = |(χ_δ, φ)|², (1′)

where χ_δ is the eigenfunction of F corresponding to the eigenvalue δ. When we know only that wᵢ are the probabilities of the system's being in the states φᵢ, the probability in question is

P(δ) = Σᵢ wᵢ |(χ_δ, φᵢ)|². (2′)

Formula (2′) is not the same as any special case of (1′) such as

P(δ) = |(χ_δ, Σᵢ aᵢφᵢ)|², with |aᵢ|² = wᵢ.

It differs generically from (1′) as (2) does from (1).

When such equations as (1), (1′) hold, we say that the system is in the "pure state" whose wave function is φ. The situation represented by Eqs. (2), (2′) is called a "mixture" of the states φᵢ with the weights wᵢ. It can be shown that the most general type of statistical information about a system is represented by a mixture. A pure state is a special case, with only one non-vanishing wᵢ. The term mixture is usually reserved for cases in which there is more than one non-vanishing wᵢ. It must again be emphasized that a mixture in this sense is essentially different from any pure state whatever."
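Furry's distinction between formulas (1) and (2) can be checked numerically. The sketch below is our own illustration in Python: the observable F, the basis states, and the weights 0.36/0.64 are arbitrary choices (kept real for simplicity), not anything from Furry's paper:

    import numpy as np

    # Two orthogonal basis states and an arbitrary Hermitian observable F.
    F = np.array([[1.0, 0.5],
                  [0.5, -1.0]])
    phi1 = np.array([1.0, 0.0])
    phi2 = np.array([0.0, 1.0])
    w1, w2 = 0.36, 0.64                    # mixture weights

    # Eq. (2): mixture -- weighted average of the two expectation values.
    mixture = w1 * (phi1 @ F @ phi1) + w2 * (phi2 @ F @ phi2)

    # Eq. (1) applied to a superposition with |a_i|^2 = w_i.
    psi = np.sqrt(w1) * phi1 + np.sqrt(w2) * phi2
    pure = psi @ F @ psi

    print(round(mixture, 2))   # -0.28
    print(round(pure, 2))      # 0.2

The two numbers differ precisely by the cross term 2√(w₁w₂)(φ₁, Fφ₂), which is Furry's point: a mixture is not equivalent to any pure state.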
Now we quote from a recent Quantum Reality web site a similar description of the "pure state" and the "mixture state":
“The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the vector.
So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the tensor products over all the different states:

ρ = p│ψ><ψ│ + q│φ><φ│

where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the number of particles in state │ψ>, and q the number of particles in state │φ>.
Let's imagine we have a number of qubits in a box (these can take the value │0> or │1>).
Let's say all the qubits are in the following superposition state: 0.6│0> + 0.8i│1>.
In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states │0> and│1>. As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to: │ψ><ψ│
This state can be written as a column ("ket") vector; note the imaginary component (the expansion coefficients are in general complex numbers):

│ψ> = (0.6, 0.8i)ᵀ

In order to generate the density matrix we need the Hermitian conjugate (or adjoint) of this column vector, i.e. the transpose of the complex conjugate of │ψ>. So in this case the adjoint is the following row ("bra") vector:

<ψ│ = (0.6, −0.8i)

The density matrix for this pure state is then:

│ψ><ψ│ = [ 0.36    −0.48i ]
         [ 0.48i    0.64  ]
What does this density matrix tell us about the statistical properties of our pure state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding the particle in the │0> or │1> eigenstate. For example, the 0.36 component informs us that there will be a 36% probability of the particle being found in the │0> state after measurement. Of course, that leaves a 64% chance that the particle will be found in the │1> state (the 0.64 component).
The way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produce interference (for more details, see the book Quantum Mechanics Demystified). The off-diagonal elements are characteristic of a pure state. A mixed state is a classical statistical mixture and therefore has no off-diagonal terms and no interference.
So how do the off-diagonal elements (and related interference effects) vanish during decoherence?
The off-diagonal (imaginary) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phases of these terms are not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalised (all off-diagonal terms become zero), and interference effects vanish. The quantum state of the ensemble system is then apparently "forced" into one of the diagonal eigenstates (the overall state of the system becomes a mixture state), with the probability of a particular eigenstate selection predicted by the value of the corresponding diagonal element of the density matrix.
Consider the following density matrix for a pure state ensemble in which the off-diagonal terms have a phase factor of θ:

│ψ><ψ│ = [ 0.36          0.48e^(−iθ) ]
         [ 0.48e^(iθ)    0.64        ]
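The dephasing account quoted above is easy to illustrate numerically. The sketch below is ours, built on the 0.6│0> + 0.8i│1> example; modelling the "completely unknown relative phase factor" as a uniformly random θ is our assumption:

    import numpy as np

    rng = np.random.default_rng(0)

    def rho(theta):
        # Pure-state density matrix for 0.6|0> + 0.8i*exp(i*theta)|1>.
        psi = np.array([0.6, 0.8j * np.exp(1j * theta)])
        return np.outer(psi, psi.conj())

    print(np.round(rho(0.0), 2))
    # [[0.36+0.j   0.  -0.48j]
    #  [0.  +0.48j 0.64+0.j  ]]  <- off-diagonal (interference) terms present

    # Averaging over a random relative phase kills the off-diagonal terms:
    avg = np.mean([rho(t) for t in rng.uniform(0.0, 2.0 * np.pi, 100_000)], axis=0)
    print(np.round(avg, 2))
    # [[0.36+0.j 0.  +0.j]
    #  [0.  +0.j 0.64+0.j]]      <- diagonal mixture; interference gone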
The quoted statement can be put in a simplified manner as follows. Selection of a particular eigenstate is governed by a purely probabilistic process, which requires a large number of readings. For this purpose, we must consider an ensemble – a large number of quantum particles in a similar state – and treat them as a single quantum system. Then we measure each particle to ascertain a particular value, say color. We tabulate the results in a statement called the density matrix. Before measurement, each of the particles is in the same state with the same state vector; in other words, they are all in the same superposition state. Hence this is called a pure state. After measurement, all particles are in different classical states – the state (color) of each particle is known. Hence it is called a mixed state.
In common-sense language, what this means is the following. Suppose we take a box of, say, 100 billiard balls of random colors – blue and green. Before counting the balls of each color, we could not say what percentage of the balls are blue and what percentage green. But after we count the balls of each color and tabulate the results, we know that (in the above example) 36% of the balls belong to one color and 64% to the other. If we have to describe the balls after counting, we will give the above percentages, or say that 36 balls are blue and 64 balls are green. That would be a pure statement. But before such measurement, we can only describe the balls as 100 balls of blue and green color. That would be a mixed state.
As can be seen, our common-sense description is the opposite of the quantum-mechanical classification – written by two scientists about 75 years apart and accepted by all scientists unquestioningly. Thus, it is no wonder that one scientist jokingly said: "A good working definition of quantum mechanics is that things are the exact opposite of what you thought they were. Empty space is full, particles are waves, and cats can be both alive and dead at the same time."
We quote another example from the famous EPR argument of Einstein and others (Phys. Rev. 47, 777 (1935)): "To illustrate the ideas involved, let us consider the quantum-mechanical description of the behavior of a particle having a single degree of freedom. The fundamental concept of the theory is the concept of state, which is supposed to be completely characterized by the wave function ψ, which is a function of the variables chosen to describe the particle's behavior. Corresponding to each physically observable quantity A there is an operator, which may be designated by the same letter.
If ψ is an eigenfunction of the operator A, that is, if ψ’ ≡ Aψ = aψ (1)
where a is a number, then the physical quantity A has with certainty the value a whenever the particle is in the state given by ψ. In accordance with our criterion of reality, for a particle in the state given by ψ for which Eq. (1) holds, there is an element of physical reality corresponding to the physical quantity A”.
We can write the above statement and the concept behind it in various ways that would be far easier for the common man to understand. We can also give various examples to demonstrate the physical content of the above statement. However, such statements and examples would be difficult to twist and interpret differently when necessary. Putting the concept in an ambiguous format helps in its subsequent manipulation, as is explained below, citing from the same example:
"Since this probability is independent of a, but depends only upon the difference b − a, we see that all values of the coordinate are equally probable."
The above statement is highly misleading. The law of commutation is a special case of non-linear accumulation, as explained below. All interactions involve the application of force, which leads to accumulation and corresponding reduction. Where such accumulation is between similars, it is linear accumulation, and its mathematics is called addition. If such accumulation is not fully between similars, but between partial similars (partially similar and partially dissimilar), it is non-linear accumulation, and its mathematics is called multiplication. For example, 10 cars and another 10 cars are twenty cars through addition. But if there are 10 cars in a row and there are two rows of cars, then "cars in a row" is common to both statements, while one statement gives the number of cars in a row and the other the number of rows. Because of this partial dissimilarity, the mathematics has to be multiplication: 10 × 2 or 2 × 10. We are free to use either of the two sequences, and the result will be the same. This is the law of commutation. However, no multiplication is possible if the two factors are not partially similar; in such cases, the two factors are said to be non-commutable. If the two terms are mutually exclusive, i.e., if one of the terms will always be zero, the result of their multiplication will always be zero. Hence they may be said to be non-commutable, though in reality they are commutable; it is just that the result of their multiplication is always zero. This implies that the knowledge of one precludes the knowledge of the other. Commutability or otherwise depends on the nature of the quantities – whether or not they are partially related to each other.
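For reference, the two senses of commutation in play here can be shown in a few lines of Python; the matrices A and B below are arbitrary illustrative choices of ours (they are not the position and momentum operators):

    import numpy as np

    # Ordinary numbers always commute (the cars example):
    print(10 * 2 == 2 * 10)                 # True

    # Matrix products in general do not:
    A = np.array([[0, 1],
                  [1, 0]])
    B = np.array([[1, 0],
                  [0, -1]])
    print(np.array_equal(A @ B, B @ A))     # False -- AB != BA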
Position is a fixed co-ordinate in a specific frame of reference. Momentum is a mobile co-ordinate in the same frame of reference. Fixity and mobility are mutually exclusive. If a particle has a fixed position, its momentum is zero. If it has momentum, it does not have a fixed position. Since "particle" is common to both statements, i.e., since both are related to the particle, they can be multiplied and are hence commutable. But since one or the other factor is always zero, the result will always be zero, and the relation AB ≠ BA does not hold. In other words, while uncertainty is established for other reasons, the equation Δx·Δp ≥ h is a mathematically wrong statement, as mathematically the answer will always be zero. The validity of a physical statement is judged by its correspondence to reality, or, as Einstein and others put it, "by the degree of agreement between the conclusions of the theory and human experience". Since in this case the degree of agreement between the conclusions of the theory and human experience is zero, it cannot be a valid physical statement either. Hence it is no wonder that Heisenberg's uncertainty relation is still a hypothesis and not proven. In later pages we discuss this issue elaborately.
In modern science there is a tendency toward generalization, or the extension of one principle to others. For example, the Schrödinger equation in so-called one dimension (actually it contains a second-order term, hence cannot be an equation in one dimension) is "generalized" to three dimensions by adding two more terms for the y and z dimensions - mathematically and physically a wrong procedure, as we discuss in later pages. While position and momentum are specific quantities, the generalizations are made by replacing these quantities with A and B. When a particular statement is changed to a general statement by following algebraic principles, the relationship between the quantities of the particular statement must not change. However, physicists often bypass or overlook this mathematical rule. A and B could be any set of two quantities. Since they are not specified, it is easy to use them in any way one wants. Even if two quantities are commutable, since they are not precisely described, one has the freedom to manipulate by claiming that they are not commutable, and vice versa. Modern science is full of such manipulations.
Here we give another example to prove that physics and modern mathematics are not always compatible. Bell's inequality is one of the important relations used by all quantum physicists. We will discuss it repeatedly for different purposes. Briefly, the theorem holds that if a system consists of an ensemble of particles having three Boolean properties A, B, and C; if there is a reciprocal relationship between the values obtained when A is measured on two particles, and the same type of relationship exists between the particles with respect to the quantity B; and if the value for one particle is measured and found to be a, and the value for another particle is measured and found to be b, then the first particle must have started in the state (A = a, B = b). In that event, the theorem says that P(A, C) ≤ P(A, B) + P(B, C). In the case of classical particles, the theorem appears to be correct.
Quantum mechanically, P(A, C) = ½ sin²(θ), where θ is the angle between the analyzers. Let an apparatus emit entangled photons that pass through separate polarization analyzers, and let A, B, and C be the events that a single photon passes through analyzers with axes set at 0°, 22.5°, and 45° to the vertical, respectively. The pairwise probabilities can then be computed directly.
Thus, according to Bell's theorem: P(A, C) ≤ P(A, B) + P(B, C),
or ½ sin²(45°) ≤ ½ sin²(22.5°) + ½ sin²(22.5°),
or 0.25 ≤ 0.1464, which is clearly absurd.
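The arithmetic in the last step is easy to verify (a minimal Python check of the three numbers above):

    import numpy as np

    def half_sin2(deg):
        # The quantum prediction P = 1/2 * sin^2(theta) for analyzer angle theta.
        return 0.5 * np.sin(np.radians(deg)) ** 2

    lhs = half_sin2(45.0)                       # P(A, C)
    rhs = half_sin2(22.5) + half_sin2(22.5)     # P(A, B) + P(B, C)
    print(round(lhs, 4), round(rhs, 4), lhs <= rhs)
    # 0.25 0.1464 False -- the quantum values violate the inequality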
This inequality has been used by quantum physicists to prove entanglement and to distinguish quantum phenomena from classical phenomena. We will discuss it in detail to show that the above interpretation is wrong and that the same mathematics is applicable to both the macro and the micro world. The real reason for such deviation from common sense is that, because of the nature of measurement, measuring one quantity affects the measurement of another. The order of measurement becomes important in such cases; even in the macro world, the order of measurement leads to different results. However, the real implication of Bell's original mathematics is much deeper and points to one underlying truth that will be discussed later.
A wave function is said to describe all possible states in which a particle may be found. To describe probability, some people give the example of a large, irregular thundercloud that fills up the sky. The darker the thundercloud, the greater the concentration of water vapor and dust at that point. Thus, by simply looking at a thundercloud, we can rapidly estimate the probability of finding large concentrations of water and dust in certain parts of the sky. The thundercloud may be compared to a single electron's wave function. Like a thundercloud, it fills up all space. Likewise, the greater its value at a point, the greater the probability of finding the electron there. Similarly, wave functions can be associated with large objects, like people. As a man sits in his chair, he has a Schrödinger probability wave function. If we could somehow see his wave function, it would resemble a cloud very much in the shape of his body. However, some of the cloud would spread out all over space, out to Mars and even beyond the solar system, although it would be vanishingly small there. This means that there is a very large likelihood that he is, in fact, sitting in his chair and not on the planet Mars. Although part of his wave function has spread even beyond the Milky Way galaxy, there is only an infinitesimal chance that he is sitting in another galaxy. This description is highly misleading.
The mathematics behind the above assumption is funny. Suppose we choose a fixed point A and walk in the north-eastern direction by 5 steps. We mark that point as B. There are an infinite number of ways of reaching the point B from A. For example, we can walk 4 steps to the north of A and then walk 3 steps to the east, and we will reach B. Similarly, we can walk 6 steps north, 3 steps east, and 2 steps south, and we will reach B. Alternatively, we can walk 8 steps north, 6 steps east, 3 steps west, and 4 steps south, and we will again reach B. It is presumed that, since the vector addition or "superposition" of these paths, which are of different sorts from the straight path, leads to the same point, the point B could be thought of as a superposition of paths of different sorts from A. Since we are free to choose any of these paths, at any instant we could be "here" or "there". This description is highly misleading.
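The three walks can be checked in a couple of lines; in this sketch (our own), each step is written as an (east, north) displacement and B = (3, 4):

    import numpy as np

    paths = [
        [(3, 0), (0, 4)],                     # 3 east, 4 north
        [(0, 6), (3, 0), (0, -2)],            # 6 north, 3 east, 2 south
        [(0, 8), (6, 0), (-3, 0), (0, -4)],   # 8 north, 6 east, 3 west, 4 south
    ]
    for p in paths:
        print(np.sum(np.array(p), axis=0))    # [3 4] every time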
To put the walking example mathematically, we take a vector V which can be resolved into two vectors V₁ and V₂ along the directions 1 and 2, so that we can write V = V₁ + V₂. If a unit of displacement along direction 1 is represented by ê₁, then V₁ = V₁ê₁, wherein V₁ denotes the magnitude of the displacement V₁. Similarly, V₂ = V₂ê₂. Therefore:

V = V₁ + V₂ = V₁ê₁ + V₂ê₂. [ê₁ and ê₂ are also denoted as (1,0) and (0,1) respectively.]

This equation is also written as V = λ₁ê₁ + λ₂ê₂, where λᵢ is treated as the magnitude of the displacement along direction i. Here V is treated as a superposition of the standard vectors (1,0) and (0,1), with coefficients given by the ordered pair (V₁, V₂). This is the concept of a vector space. Here the vector has been represented in two dimensions. For three dimensions, the equation is written as V = λ₁ê₁ + λ₂ê₂ + λ₃ê₃. For an n-tuple in n dimensions, it is written as V = λ₁ê₁ + λ₂ê₂ + … + λₙêₙ.
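A minimal sketch of the decomposition just described, in the two-dimensional case (the vector (3, 4) is an arbitrary example of ours):

    import numpy as np

    V = np.array([3.0, 4.0])
    e1 = np.array([1.0, 0.0])                 # the standard vector (1, 0)
    e2 = np.array([0.0, 1.0])                 # the standard vector (0, 1)
    lam1, lam2 = V @ e1, V @ e2               # coefficients lambda_1, lambda_2
    print(lam1, lam2)                         # 3.0 4.0
    print(np.allclose(V, lam1 * e1 + lam2 * e2))   # True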
It is said that the choice of dimensions appropriate to a quantum-mechanical problem depends on the number of independent possibilities the system possesses. In the case of the polarization of light, there are only two possibilities. The same is true for electrons, but in the case of electrons it is not dimension but spin. If we choose a direction and look at the electron's spin in relation to that direction, then either its axis of rotation points along that direction or it points wholly in the reverse direction. Thus, electron spin is described as "up" or "down". Scientists describe the spin of the electron as something like that of a top, but different from it. In reality, it is something like the nodes of the Moon: at one node the Moon always appears to be going in the northern direction, and at the other node it always appears to be going in the southern direction. It is said that the value of "up" or "down" for an electron's spin is always valid irrespective of the direction we may choose. There is no contradiction here, as direction is not important in the case of nodes; it is only the layout of the two intersecting planes that is relevant. In many problems, the number of possibilities is said to be unbounded. Thus, scientists use infinite-dimensional spaces to represent them. For this they use something called Hilbert space. We will discuss these later.
Any intelligent reader would have seen through the fallacy of the vector space. Still, we describe it again. Firstly, as we describe in the wave phenomena in later pages, superposition is a merger of two waves, which lose their own identity to create something different. What we see is the net effect, which is different from the individual effects. There are many ways in which it could occur at one point. But not all waves stay in superposition; the superposition is momentary, as the waves submit themselves to the local dynamics. Thus, only because there is a probability of two waves joining to cancel each other's effect and merging to give a different picture, we cannot formulate a general principle such as the equation V = λ₁ê₁ + λ₂ê₂ to cover all cases, because the resultant wave or flat surface is also transitory.
Secondly, the generalization of the equation V = λ₁ê₁ + λ₂ê₂ to V = λ₁ê₁ + λ₂ê₂ + … + λₙêₙ is mathematically wrong, as explained below. Even though initially we mentioned 1 and 2 as directions, they are essentially dimensions, because they are perpendicular to each other. Direction is the information contained in the relative position of one point with respect to another point, without the distance information. Directions may be either relative to some indicated reference (the violins in a full orchestra are typically seated to the left of the conductor) or absolute according to some previously agreed-upon frame of reference (Kolkata lies due north-east of Puri). Direction is often indicated manually by an extended index finger or written as an arrow. On a vertically oriented sign representing a horizontal plane, such as a road sign, "forward" is usually indicated by an upward arrow. Mathematically, direction may be uniquely specified by a unit vector in a given basis, or equivalently by the angles made by the most direct path with respect to a specified set of axes. These angles can have any value, and their inter-relationships can take an infinite number of values. But dimensions have to be at right angles to each other, and this remains invariant under mutual transformation.
According to Vishwakarma, the perception that arises from length is the same as that which arises from the perception of breadth and height – thus they belong to the same class, so that the shape of the particle remains invariant under directional transformations. There is no fixed rule as to which of the three spreads constitutes length, breadth, or height; they are exchangeable in re-arrangement. Hence, they are treated as belonging to one class. These three directions have to be mutually perpendicular on the consideration of the equilibrium of forces (for example, an electric field and the corresponding magnetic field) and of symmetry. Thus, these three directions are equated with "forward-backward", "right-left", and "up-down", which remain invariant under mutual exchange of position. Thus, dimension is defined as the spread of an object in mutually perpendicular directions, which remains invariant under directional transformations. This definition leads to only three spatial dimensions with ten variants. For this reason, the general equation in three dimensions uses x, y, and z (and/or c) co-ordinates, or at least third-order terms (such as a³ + 3a²b + 3ab² + b³), which implies that, with regard to any frame of reference, they are not arbitrary directions but fixed frames at right angles to one another, making them dimensions. A one-dimensional geometric shape is impossible. A point has imperceptible dimension, but not zero dimension. The modern definition of a one-dimensional sphere or "one-sphere" is not in conformity with this view. It cannot be exhibited physically, as anything other than a point or a straight line has a minimum of two dimensions.
While mathematicians insist that a point has existence but no dimensions, theoretical physicists insist that the minimum perceptible dimension is the Planck length; thus they differ from the mathematicians over the dimension of a point. For a straight line, the modern mathematician uses the first-order equation ax + by + c = 0, which uses two co-ordinates besides a constant. A second-order equation always implies area in two dimensions. A three-dimensional structure has volume, which can be expressed only by an equation of the third order. This is the reason why Born had to use the term d³r to describe the differential volume element in his equations.
The Schrödinger equation was devised to find the probability of finding the particle in the narrow region between x and x + dx, which is denoted by P(x)dx. The function P(x) is the probability distribution function or probability density, which is found from the wave function ψ(x) through the relation P(x) = [ψ(x)]². The wave function is determined by solving Schrödinger's differential equation: d²ψ/dx² + (8π²m/h²)[E − V(x)]ψ = 0, where E is the total energy of the system and V(x) is the potential energy of the system. By using a suitable energy-operator term, the equation is written as Hψ = Eψ. The equation is also written as iħ ∂/∂t │ψ> = H │ψ>, where the left-hand side represents iħ times the rate of change with time of a state vector, and the right-hand side equates this with the effect of an operator, the Hamiltonian, which is the observable corresponding to the energy of the system under consideration. The symbol ψ indicates that it is a generalization of Schrödinger's wave function. The equation appears to be an equation in one dimension, but in reality it is a second-order equation signifying a two-dimensional field, as the original equation and the energy operator contain a term in x². A third-order equation implies volume, and three areas cannot be added to create volume. Thus, the Schrödinger equation described above is an equation not in one but in two dimensions, and the method of generalizing it to the three spatial dimensions does not stand mathematical scrutiny.
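For comparison, the one-dimensional equation itself is straightforward to solve numerically. The sketch below is ours (not the author's method): a finite-difference solution for a particle in a box with V = 0 inside, in assumed natural units ħ = m = 1 and box length 1:

    import numpy as np

    N = 1000                       # interior grid points
    dx = 1.0 / (N + 1)

    # Discretize -(1/2) d^2(psi)/dx^2 = E psi with psi = 0 at the walls:
    diag = np.full(N, 1.0 / dx**2)
    off = np.full(N - 1, -0.5 / dx**2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)[:3]             # three lowest energy levels
    exact = np.array([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])
    print(np.round(E, 2))       # [ 4.93 19.74 44.41]
    print(np.round(exact, 2))   # [ 4.93 19.74 44.41] -- E_n = n^2 pi^2 / 2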
Three areas cannot be added to create volume; any simple mathematical model will prove this. Hence, the Schrödinger equation could not be solved exactly for atoms other than hydrogen. For many-electron atoms, the so-called solutions simply treat them as many one-electron atoms, ignoring the electrostatic energy of repulsion between the electrons and treating the electrons as point charges frozen at some instantaneous position. Even then, the problem remains unsolved. The first ionization potential of helium is theorized to be 20.42 eV, against the experimental value of 24.58 eV. Further, the atomic spectra show that for every series of lines (Lyman, Balmer, etc.) found for hydrogen, there is a corresponding series found at shorter wavelengths for helium, as predicted by theory. But in the spectrum of helium there are two series of lines observed for every single series observed for hydrogen. Not only does helium possess the normal Balmer series, but it also has a second "Balmer" series starting at λ = 3889 Å. This shows that, for the helium atom, the whole series repeats at shorter wavelengths.
For the lithium atom, it is even worse, as the total energy of repulsion between the electrons is more complex. Here it is assumed that, as in the case of hydrogen and helium, the most stable energy of the lithium atom will be obtained when all three electrons are placed in the 1s atomic orbital, giving the electronic configuration 1s³, even though this is contradicted by experimental observation. Following the same basis as for helium, the ionization potential of lithium is theorized to be 20.4 eV, against the experimental value of 202.5 eV to remove all three electrons and only 5.4 eV to remove one electron from lithium. Experimentally, it requires less energy to ionize lithium than it does to ionize hydrogen, but the theory predicts an ionization energy one and a half times larger. More serious than this is the fact that the theory should never predict the system to be more stable than it actually is; the method should always predict an energy less negative than is actually observed. If this is not found to be the case, it means that an incorrect assumption has been made or that some physical principle has been ignored.
Further, it contradicts the principle of periodicity, as the calculation places each succeeding electron in the 1s orbital as the nuclear charge increases by unity. It must be remembered that, with every increase in n, all the preceding values of l are repeated and a new l value is introduced. The reason why more than two electrons cannot be placed in the 1s orbital has not been explained. Thus, the mathematical formulations are contrary to the physical conditions based on observation. To overcome this problem, scientists take the help of operators. An operator is something which turns one vector into another. Scientists often describe robbery as an operator that transforms a state of wealth into a state of penury for the robbed, and vice versa for the robber. Another example of an operator often given is the operation that rotates a frame clockwise or anticlockwise, changing motion in the northern direction to motion in the eastern or western direction. The act of passing light through a polarizer is called an operator, as it changes the physical state of the photon's polarization. Thus, the use of a polarizer is described as a measurement of polarization, since the transmitted beam has to have its polarization in the direction perpendicular to it. We will come back to operators later.
The probability does not refer (as is commonly believed) to whether the particle will be observed at any specific position at a specific time. Similarly, the description of different probabilities of finding the particle at any point of space is misleading. A particle will be observed only at a particular position at a particular time and nowhere else. Since a mobile particle does not have a fixed position, the probability actually refers to the state in which the particle is likely to be observed. This is because all the forces acting on it, and their dynamics, which influence the state of the particle, may not be known to us. Hence we cannot predict with certainty whether the particle will be found here or elsewhere. After measurement, the particle is said to acquire a time-invariant "fixed state" by "wave-function collapse". This is referred to as the result of measurement, which is an arbitrarily frozen, time-invariant, non-real state (since in reality it continues to change). This is because the actual state, with all influences on the particle, has been measured at "here-now", which is a perpetually changing state. Since all mechanical devices are subject to time variance in their operational capacities, they have to be "operated" by a "conscious agent" – directly or indirectly – because, as will be shown later, only consciousness is time-invariant. This transition from a time-variant initial state to a time-invariant hypothetical "fixed state" through "now" or "here-now" is the dividing line between quantum physics and classical physics, as well as between conscious actions and mechanical actions. To prove the above statement, we examine in later pages what "information" is, because only conscious agents can cognize information and use it to achieve desired objects. Before that, however, we will briefly discuss the chaos prevailing in this area among scientists.
Modern science fails to answer the question "why" on many occasions; in fact, it avoids such inconvenient questions. Here we may quote an interesting anecdote from the lives of two prominent persons. Once, Arthur Eddington was explaining the theory of the expanding universe to Bertrand Russell. Eddington told Russell that the expansion was so rapid and powerful that even the most powerful dictator would not be able to control the entire universe, because even if his orders were sent at the speed of light, they would not reach the farthest parts of the universe. Bertrand Russell asked, "If that is so, how does God supervise what is going on in those parts?" Eddington looked keenly at Russell and replied, "That, dear Bertrand, does not lie in the province of the physicists." This begs the question: what is physics? We cannot take the stand that the role of physics is not to explain but only to describe reality. Description is also an explanation - otherwise, why and to whom do you describe? If the validity of a physical statement is judged by its correspondence to reality, we cannot hide behind the veil of reductionism, but must explain scientifically the theory behind the seeming "acts of God".
There is a general belief that we can understand all physical phenomena if we can relate them to the interactions of atoms and molecules. After all, the Universe is made up of these particles only; their interactions – in different combinations – create everything in the Universe. This is called a reductionist approach, because it is claimed that everything else can be reduced to this supposedly more fundamental level. But this approach runs into problems with thermodynamics and its arrow of time. In the microscopic world, no such arrow of time is apparent, irrespective of whether it is being described by Newtonian, relativistic, or quantum mechanics. One consequence of this description is that there can be no state of microscopic equilibrium. Time-symmetric laws do not single out a special end-state where all potential for change is reduced to zero, since all instants in time are treated as equivalent.
The apparent time-reversibility of motion within the atomic and molecular regimes, in direct contradiction to the irreversibility of thermodynamic processes, constitutes the celebrated irreversibility paradox put forward in 1876 by Loschmidt, among others (L. Boltzmann: Lectures on Gas Theory – University of California Press, 1964, page 9). The paradox suggests that the two great edifices – thermodynamics and mechanics – are at best incomplete. It represents a very clear problem in need of an explanation, which should not be swept under the carpet. As Lord Kelvin put it: if the motion of every particle of matter in the Universe were precisely reversed at any instant, the course of Nature would be simply reversed for ever after. The bursting bubble of foam at the foot of a waterfall would reunite and descend into the water. The thermal motions would reconcentrate energy and throw the mass up the fall in drops, reforming into a close column of ascending water. Living creatures would grow backwards – from old age to infancy, till they are unborn again – with conscious knowledge of the future but no memory of the past. We will solve this paradox in later pages.
The modern view of reductionism is faulty. Reductionism is based on the concept of differentiation. When an object is perceived as a composite that can be reduced to different components having perceptibly different properties - components which can be differentiated from one another and from the composite as a whole - the process of such differentiation is called reductionism. Some objects may generate a similar perception of some properties, or the opposite of some properties, across a group of substances. In such cases, the objects with similar properties are grouped together and the objects with opposite properties are grouped together. The only universally perceived aspect common to all objects is physical existence in space and time, as the radiation emitted or the field set up by all objects creates a perturbation in our sense organs in identical ways. Since intermediate particles exhibit some properties in common with other particles, and are perceived together with such objects rather than differentiated from them, reductionism properly applies only to the fundamental particles. This principle is violated in most modern classifications.
To give one example, x-rays and γ-rays exhibit exclusive characteristics that are not shared by the other rays of the electromagnetic spectrum, or between themselves – such as the place of their origin. Yet they are clubbed under one category. If wave nature of propagation is the criterion for such categorisation, then sound waves, which travel through a medium such as air or other gases as well as liquids and solids of all kinds, should also have been added to the classification. Then there are mechanical waves, such as the waves that travel through a vibrating string or other mechanical object or surface, and waves that travel through a fluid or along the surface of a fluid, such as water waves. If electromagnetic properties are the criteria for such categorisation, then it is not scientific, as these rays do not interact with electromagnetic fields. If they have been clubbed together on the ground that theoretically they do not require any medium for their propagation, then firstly there is no true vacuum, and secondly, they are known to travel through various mediums such as glass. There are many such examples of wrong classification due to reductionism and developmental history.
The cults of incomprehensibility and reductionism have led to another deficiency. Both cosmology and elementary particle physics share the same theory of plasma and radiation. These have an independent existence that is seemingly eternal and may be cyclic. Their combinations lead to the sub-atomic particles that belong to the micro world of quantum physics. The atoms are a class by themselves, whose different combinations lead to the perceivable particles and bodies that belong to the macro world of so-called classical physics. The two worlds merge in the stars, which contain the plasma of the micro world and the planetary systems of the macro world. Thus, the study of the evolution of stars can reveal the transition from the micro world to the macro world. For example, the internal structures of the planet Jupiter and of the proton are identical, and like protons, Jupiter-like bodies are abundant among the stars. Yet, instead of a unification of all branches of science, cosmology and nuclear physics have been fragmented into several "specialized" branches.
Here we are reminded of an anecdote related to Lord Chaitanya. During his southern sojourn, a debate was arranged between him and a great scholar of yore. The scholar went on explaining many complex doctrines while Lord Chaitanya sat quietly and listened with rapt attention, without any response. Finally the scholar told Lord Chaitanya that he was not responding at all to the discourse - was it too complex for him? The scholar was sure from the look on Lord Chaitanya's face that he had not understood anything. To this, Lord Chaitanya replied: "I fully understand what you are talking about. But I was wondering why you are making simple things look so complicated." Then he explained the same theories in plain language, after which the scholar fell at his feet.
There have been very few attempts to distil the essence of all branches and develop "one" science. Each branch has its huge data bank, with specialized technical terms glorifying some person at the cost of a scientific nomenclature, thereby enhancing incomprehensibility. Even if we read the descriptions of all six proverbial blind men repeatedly, one who has not seen an elephant cannot visualize it. This leaves the students with little opportunity to get a macro view of all theories and evaluate their inter-relationships. The educational system, with its examination method emphasizing "memorization and reproduction at a specific instant", compounds the problem. Thus, students have to accept many statements and theories as "given", without questioning them even in the face of ambiguities. Further, we have never come across any book on science that does not glorify the discoveries in superlative terms while leaving out the uncomfortable and ambiguous aspects, often with an assurance that they are correct and should be accepted as such. This creates an impression on the minds of young students that the theories are to be accepted unquestioningly, making them superstitious. Thus, whenever deficiencies have been noticed in any theory, there has been an attempt at patchwork within the broad parameters of the same theory; there have been few attempts to review the theories ab initio. Thus, the scientists cannot relate the tempest in a distant land to the flapping of the wings of the butterfly elsewhere.
Till now, scientists do not know "what" electrons, photons, and the other subatomic objects that have made the amazing technological revolution possible actually are. Even the modern description of the nucleus and the nucleons leaves many aspects unexplained. The photoelectric effect, for which Einstein got his Nobel Prize, deals with electrons and photons, but it does not clarify "what" these particles are. The scientists who framed the current theories were not gifted with the benefit of the presently available data. Thus, without undermining their efforts, it is necessary to reformulate the theories ab initio based on the presently available data. Only in this way can we develop a theory whose correspondence resembles reality. Here is an attempt in this regard from a different perspective. Like the child revealing the secret of the Emperor's clothes, we, a novice in this field, are attempting to point the lamp in the direction of the Sun.
Thousands of papers are read every year in various forums on as-yet-undiscovered particles. This reminds us of the saying: after taking a bath in the water of the mirage, wearing the flower of the sky on his head, holding a bow made of the horns of a rabbit, here goes the son of the barren woman! Modern scientists are making precisely similar statements. This is a sheer waste not only of valuable time but also of public money worth trillions, for the pleasure of a few. In addition, it amounts to misguiding the general public for generations. This is unacceptable, because a scientific theory must stand up to experimental scrutiny within a reasonable time period. Till it is proved or disproved, it cannot be accepted, though not rejected either. We cannot continue for three-quarters of a century and more to develop "theories" based on such unproven postulates in the hope that we may succeed someday – maybe after a couple of centuries! We cannot continue research on the properties of the "flowers of the sky" on the ground that someday they may be discovered.
Experiments with subatomic phenomena show effects that have not been reconciled with our normal view of an objective world. Yet they cannot be treated separately. This implies the existence of two different states – classical and quantum – with different dynamics, but linked to each other in some fundamentally similar manner. Since the validity of a physical statement is judged by its correspondence to reality, there is a big question mark over the direction in which theoretical physics is moving. Technology has acquired a pre-eminent position in the global epistemic order. However, engineers and technologists, who progress by trial-and-error methods, have projected themselves as experimental scientists. Their search for new technology has been touted as the progress of science, and questioning its legitimacy is projected as sacrilege. Thus, everything that exposes the hollowness or deficiencies of science is consigned to defenestration. The time has come to seriously consider the role, the ends, and the methods of scientific research. If we are to believe that the sole objective of the scientists is to make their impressions mutually consistent, then we lose all motivation in theoretical physics. These impressions are not of the kind that occur in our daily life; they are extremely special and are produced at great cost, time, and effort. Hence it is doubtful whether the mere pleasure their harmony gives to a select few can justify the huge public spending on such "scientific research".
A report published in the October 2005 issue of the Notices of the American Mathematical Society shows that the Theory of Dynamical Systems, which is used for calculating the trajectories of space flights, and the Theory of Transition States for chemical reactions share the same mathematics. This is proof of the universally true statement that the microcosm and the macrocosm replicate each other. The only problem is to find the exact correlations. For example, as we have repeatedly pointed out, the internal structure of a proton and that of the planet Jupiter are identical. We will frequently use this and other similarities between the microcosm and the macrocosm (from astrophysics) in this presentation to prove the above statement. We will also frequently refer to the definitions of technical terms as defined precisely in our book “Vaidic Theory of Numbers”.
let noble thoughts come to us from all around |
4d3140afc3071743 | Course Descriptions
First Year
Physics 125-1 Mechanics
• A. Vector kinematics, dynamics, free body problems, work-energy theorem, angular momentum, torque, conservation of energy and momentum, rigid body motion, rotating coordinate systems, central force fields and plane motion, two-body problems, gravity, Kepler's laws, harmonic motion.
• B. Laboratory in particle motions and dynamics, collisions, and oscillations.
Physics 125-2 Electricity and Magnetism
• A. Electrostatics, electric field, flux and Gauss' Law, electric potential, gradient of potential, divergence theorem, differential form of Gauss' Law, DC circuits and Kirchhoff's Laws, conductors, capacitors, RC circuits.
• B. Fields of moving charges, magnetic field, vector potential, Hall effect, electromagnetic induction, self-inductance, displacement current, Maxwell's equations, alternating current circuits, electric fields in matter, dipole distributions, polarizability tensor, polarized matter, electric susceptibility, dielectrics, magnetic fields in matter, field of a current loop, field of a permanent magnet, relativistic invariance and transformations.
• C. Laboratory in electrostatics, DC circuits, oscilloscope, e/m ratio of electron, RC circuits.
Physics 125-3 Waves and Oscillations
• A. Damped and driven oscillations, the superposition principle, coupled oscillators, resonance, traveling waves, refraction and dispersion, energy flux, reflection and transmission, wave packets, group velocity, waves in two and three dimensions, wave guides, polarization, geometrical optics, interference and diffraction, Huygens' principle, apertures and Fresnel integrals.
• B. Waves in solids and fluids, intro to materials physics, basic concepts of structure of matter from smallest to largest size scale.
• C. Concepts of quantum mechanics, Schrödinger equation, wave equation, particle in a box, particle waves in atoms.
Math 281-1 Multidimensional Calculus
• A. Vectors in 3-space, vector functions and their calculus, dot and cross products, lines, planes, line integrals.
• B. Graphing, quadric surfaces, functions of several variables, partial and directional derivatives, gradients, tangent planes, chain rule, cylindrical and spherical coordinates, double and triple integrals, improper integrals.
• C. Parametric surfaces, surface area, surface integrals, vector fields, conservative fields.
Math 281-2 Vector Operators and Ordinary Differential Equations
• A. Gauss', Green's, and Stokes' theorems, ordinary differential equations (exact, first order linear, second order linear), vector operators, existence and uniqueness theorems, graphical and numerical methods.
• B. Sequences and series, convergence tests, power series, Taylor series in one and several variables, error estimates, critical points, Lagrange multipliers.
Math 281-3 Systems of Differential Equations, Linear Algebra, and Infinite Series
• A. Series solutions of differential equations at regular singular points, Bessel's equation.
• B. Linear algebra: matrices, Gaussian elimination, rank, vector spaces, linear independence and bases.
• C. Linear systems of differential equations and related linear algebra: determinants, eigenvalues and eigenvectors, normal modes, principal axis theorem.
• D. Nonlinear systems.
Chemistry 171-0 Accelerated General Inorganic Chemistry
• A. Chemistry of solids, liquids and gases.
• B. Gas laws and stoichiometry.
• C. Chemical periodicity and atomic structure.
• D. Atoms, molecules, and continuous structures.
• E. Lewis theory of chemical bonding, valence bond and molecular orbital descriptions of bonding.
• F. Chemical processes in past, present and future technologies.
Chemistry 172-0 Accelerated General Physical Chemistry
• A. Chemical equilibrium.
• B. Acid-base equilibria.
• C. Dissolution and precipitation equilibria.
• D. Thermodynamic processes and thermochemistry.
• E. Spontaneous change and equilibrium.
• F. Electrochemistry.
• G. Chemical kinetics.
• H. How it all fits together: hemoglobin.
Second Year
Math 381-0 Fourier Analysis and Boundary Value Problems for ISP
• A. Orthogonal functions, Sturm-Liouville theory, Fourier series, convergence in mean, Parseval theorem, heat equation, initial-BVP for the heat equation, numerical methods for the heat equation.
• B. Fourier transforms, Fourier inversion formula and the normal density function, heat equation for the infinite rod, Gauss-Weierstrass convolution.
• C. Vibrating string, perturbed and struck string: initial-BVP for the finite string, infinite string, and d'Alembert's formula.
• D. Sturm-Liouville problems in two dimensions, vibrating membrane, cylindrical and spherical coordinates, Bessel functions, perturbed and struck drumhead: initial-BVP for the circular membrane.
• E. Steady-state (Laplace's equation) in rectangular, circular, and spherical geometry, Poisson integral representation theorem, Maximum principle, spherical harmonics and Legendre functions.
Math 382-0 Complex Analysis and Group Theory for ISP
A. Types of groups, symmetric groups, matrix representations, homomorphisms and isomorphisms, reducible and irreducible representations, oscillating particle problem.
B. Complex numbers and the complex plane, polar form and roots, complex functions and differentiation, Cauchy-Riemann equations and harmonic functions, line integrals and the Cauchy-Goursat theorem, analytic functions, Cauchy's theorem and Cauchy's formula, antiderivatives, Cauchy inequalities, maximum modulus theorem, Laurent series and the residue theorem, isolated singularities, residue theorem and application to evaluation of integrals.
C. Analytic functions as mappings, conformal mappings and linear fractional transformations, analytic and harmonic functions in applications, residue calculation of the Gauss-Weierstrass kernel, Harmonic functions as solutions to steady-state problems, finite groups and their matrix representations.
Earth and Planetary Sciences 350 Physics of the Earth for ISP
A. Basic facts about the Earth as a planet and the plate tectonics system, plate kinematics, composition of earth layers.
B. Gravitational attraction in the solar system, motion of planets and satellites, Kepler's laws, influence of rotation on the Earth's gravity field, shape of the Earth, evolution of the Earth-Moon system, gravity anomalies, isostasy.
C. Seismology, elementary continuum mechanics, strain and stress, waves in musical instruments: strings and winds, seismic waves: body waves, surface waves, normal modes, reflection and refraction of seismic waves, the case of a spherical Earth, seismic profiles inside the Earth, Adams-Williamson's model.
D. The Earth's magnetic field, principles of paleomagnetism, field reversals, simple models of dynamos, magnetic anomalies on the ocean floor, dating of oceanic lithosphere.
E. Heat transfer, radiation: greenhouse effect, conduction: Heat equation, simple models of the cooling of the oceanic lithosphere, Lord Kelvin's estimate of the age of the Earth, the failure of conductive models, convection: simple models.
F. Radiometric dating and geochemistry, elementary radioactive systems, radioactive dating: 14-C; 40-K; 87-Rb; the Pb series, initial ratios and reservoirs, mantle partitioning, Patterson's measurement of the age of the Earth.
Chemistry 212-1 Organic Chemistry
A. Basic concepts of organic chemistry including the structure and properties of organic molecules, acid-base reactions, SN1, SN2, E1, and E2 reactions, reaction mechanisms, condensed and line-angle structures.
B. Structure, naming, and stereochemistry of alkanes and cycloalkanes, chemical reactions of alkanes, conformational analysis, structure, synthesis and reactions of alkyl halides, alkenes and alcohols, nomenclature and stereochemistry of organic molecules, chiral and achiral compounds, optical activity.
C. Laboratory: thin layer chromatography, distillations, extractions, filtrations, synthesis of organic compounds, JOC-style lab reports.
Chemistry 348-0 Physical Chemistry for ISP
A. Work and heat, first, second, and third laws, bond energies, state functions, entropy, free energy functions.
B. Fundamental equations, Maxwell Relations, chemical potential, phase equilibrium, ideal mixtures, colligative properties, real mixtures, phase rule, phase diagrams, reaction equilibrium.
C. Statistical mechanics: postulates, distributions, canonical ensemble, partition and thermodynamic functions, heat capacities, equilibrium constants, elementary kinetics.
D. Reaction mechanisms, complex reactions, collision theory, activated complex theory, molecular reaction dynamics.
Biology 241 ISP Biochemistry
A. Cellular environments: polar and nonpolar interactions, hydrophobic driving force, properties of water.
B. Structure of proteins: primary, secondary, tertiary, quaternary, amino acid structures, motifs, domains, dynamics, and folding, Rasmol, structural/functional relationships, enzyme function and kinetics, biochemistry of active sites.
C. Nucleic acid structure/function, biological membranes: lipid and protein components, transport proteins.
D. Metabolic energy generation, glycolysis, gluconeogenesis, citric acid cycle, biological oxidation: electron and proton transport, energy coupling: ATP synthesis, energy capture: photosynthesis.
E. Lab: gel electrophoresis, SDS polyacrylamide gel electrophoresis (proteins), DNA transformation, restriction enzyme mapping, Western blots, chimeric proteins, yeast two-hybrid assay, enzyme kinetics, recombinant DNA technology.
Biology 240 ISP Molecular and Cell Biology
A. Chromosomes, mitosis and meiosis, heredity and genetics, nucleic acids, recombinant DNA technology, DNA replication and repair, transcription, translation, control of gene expression.
B. Membrane structure and transport, cells, protein sorting and transport, cell signaling, cytoskeleton, cell cycle control, cell division, tissues, cancer.
Physics 339-1 Quantum Mechanics
A. Need for a quantum theory, review of classical wave theory, introduction to quantum mechanics, Schrödinger’s equation, particle flux and number conservation.
B. Simple systems, one-dimensional systems, step potential, rectangular barrier, Dirac delta function, function barrier, particle in a box, introduction to the harmonic oscillator.
C. Formalism of quantum mechanics, operators and functions, commutation relations, observables and operators, physical postulates: correspondence principle and complementarity principle, eigenfunctions and eigenvalues, Hermitian operators and observables, review of Fourier series and Fourier integrals, Dirac bracket notation, relation between classical and quantum mechanics, review of Lagrangian and Hamiltonian formulation of classical mechanics: Poisson brackets, Heisenberg uncertainty principle, harmonic oscillator: ladder operators.
D. Systems of several particles, multiple particle wavefunctions, identical particles, permutation and exchange operators, fermions and bosons, Fermi and Bose statistics, Slater determinants.
E. Electrons in metals, free electron model, Born-von Karman boundary conditions, Fermi sphere, thermodynamic properties of free electrons, electrons in a periodic potential: Bloch functions.
F. Angular momentum, general properties of angular momentum operators, commutation relations, eigenfunctions and eigenvalues of angular momentum operators.
Physics 339-2 Quantum Mechanics
A. Central force states and the hydrogen atom.
B. Spin, addition of angular momenta, time-independent perturbation theory for nondegenerate and degenerate states.
C. WKB approximation, scattering theory, and time-dependent perturbation theory.
Third Year
Statistics 383-0 Probability and Statistics for ISP
A. Probability: Sample space, event and probability, combinations and permutations, conditional probability and Bayes' theorem, independence and the binomial distribution.
B. Random variables: the distribution of a random variable, joint and conditional distributions, expectation and variance, propagation of error.
C. The Central Limit Theorem: Chebyshev's inequality and the law of large numbers, De Moivre-Laplace and central limit theorems, Poisson distribution and Poisson approximation.
D. Interval estimation: Confidence intervals, chi-squared, F, and t distributions, confidence intervals for μ and σ².
E. Two-sample comparison: t-test, F test, comparison of two binomial proportions.
F. The method of maximum likelihood: method of moments and the MLE, score function, Fisher information, the Cramer-Rao inequality, asymptotic efficiency of the MLE.
G. Goodness-of-fit tests: parameters known and estimated, contingency tables, homogeneity and independence.
H. Regression: method of least squares, Gauss-Markov theorem, estimation in the linear model.
I. Correlation: Galton and "regression towards the mean”, bivariate normal distribution, estimation of the correlation coefficient.
Biology 323-0 OR 341-0 OR 361-0 OR 390-0 OR 354-0
Neuroscience 311-0 ISP Neurobiology
A. An in-depth examination of neuronal ion channels, membrane properties, synaptic transmission, and transduction.
Physics 339-3 Particle and Nuclear Physics
A. Neutrons and protons: properties and interactions, Yukawa particle.
B. u and d quarks: baryons, mesons, decay modes.
C. Heavier quarks: particles and decays.
D. Weak interactions: standard model.
E. Deuteron: wave function, magnetic moment.
F. Complex nuclei: mass formula, shell model, other models.
G. Applied nuclear physics: fission, fusion, nuclear astrophysics.
Astronomy 331-0 Astrophysics
A. Introduction and survey of observations: distance scales, color brightness diagrams, luminosities, masses, time scales, stellar populations, star clusters.
• B. Equations of stellar structure: hydrostatic equilibrium, estimates of interior conditions, virial theorem.
• C. Properties of matter: ideal gas law and radiation pressure, excitation, ionization, Fermi-Dirac statistics and equation of state for degenerate matter; absorptive properties of matter: electron scattering, bound-free absorption, free-free absorption.
• D. Fundamentals of radiative and convective energy transport.
E. Nuclear energy production: hydrogen burning: pp chain and CNO cycle, helium and heavy element burning; neutrino loss processes.
• F. Simple stellar models: polytropes, Lane-Emden equation, Eddington standard model, homologous transformations.
G. Evolution: star formation, protostars, Hayashi phase, T Tauri stars, red giant branch, horizontal branch, variable stars, mass loss, asymptotic giant branch, planetary nebulae.
H. Final stages of evolution: nucleosynthesis, stellar collapse, white dwarf structures, white dwarf cooling, supernovae, neutron stars and black holes.
I. Binary stars: fundamentals of the Roche model, evolution. |
434fe7f778a25ff9 | Wednesday, April 12, 2006
Shamanic travels and p-adic physics as physics of cognition and intentionality
Below is an email sent this morning. I realized that I could add it to my blog as such (apart from correcting some typos, adding some clarifications, etc.) rather than wasting time rewriting it and losing some of the spontaneity of the response.
Dear X and Y,
I read the chapter of the forthcoming book by you and Z. This kind of book answers a social demand. I myself am frustrated at not having seen any clear analysis, written in language understandable by a layman, demonstrating the problems of the materialistic view and showing that a spiritual world view is by no means in conflict with the basic tenets of science. The text also happened to resonate with what I have been working on just now. I add some subtitles to the comments below to make the common thread clear.
1. World view induced depression
The observation that scientists are people suffering from world-view-induced depression is to the point. As I told Luis, when I was younger my attempt to believe in the world view that I was taught made me literally sick. I have followed discussions in physics blogs and have found that the tone is pathologically negative: crackpot, idiot, imbecile, moron,...: these words are thrown again and again at the opponent. The language used is the language of power and violence. I see this too as a side effect of this world-view-induced depression and an attempt to overcome it by aggression. What is in question is a kind of monoculture of consciousness, a sticking to a theory/world view without the ability to detach from it.
2. Perennial philosophy and the new number concept
I liked very much the presentation of the basic ideas of perennial philosophy. I think that the basic challenge for theories of consciousness is to understand mathematically the division of reality into the sensory world, which we can study by doing experiments, and the spiritual world, which we can approach by various spiritual practices.
Cognition and intentionality (I use "cognitive" somewhat loosely, but I do not know any better word!) should have physical and space-time correlates if the notion of physics is properly extended. Even more: we should be able to show that the physics of the spiritual world is visible in the physics of the material world, just as directly invisible quarks are visible via the physics of hadrons.
Here I find strong resonance, since TGD in its recent form involves a generalization of the number concept involving a fusion of the real numbers with the various p-adic number fields, one for each prime p=2,3,5,7,... This fusion is along common rational numbers (very roughly): genuinely p-adic numbers are infinite as real numbers and are analogous to transcendental real numbers, representing a different manner of completing the rational numbers to a continuum.
The point is that one can also extend the notion of space-time, of the 8-dimensional space containing space-times as 4-surfaces, and speak about p-adic space-time sheets as correlates for intentionality and, there are strong indications, also for cognition.
First point: what is remarkable is that the non-rational points of these space-time sheets are literally at infinity, and only rational points belong to the physical universe. The interpretation is that our thoughts and intentions are literally a cosmic or even super-cosmic phenomenon: the cognitive body somehow looks at the material universe from outside. This fits very nicely with the idea of cosmic consciousness as consciousness in which sensory input is minimal and the cosmic cognitive and intentional component dominates.
Second point: that cognitive space-time sheets have a discrete rational projection to the real imbedding space and intersect real space-time sheets at a discrete set of rational points conforms with the fact that all physical representations of thoughts are necessarily discrete and based on rational numbers. Consider only numerical computation, which is bound to satisfy this constraint, although a cognizing mathematician can perform exact calculations.
p-Adically infinitesimal means infinite in the real sense: what is very short p-adically is very long in the real sense. Therefore the continuity and smoothness of local p-adic physics at infinity means that real space-time sheets, having a discrete set of rational intersection points with p-adic space-time sheets, obey p-adic fractality, meaning a very special kind of long-range correlations. Local randomness with long-range spatial and temporal correlations can be seen as a direct physical correlate of the existence of cognition and intentionality.
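To make this inversion of scales concrete, here is a minimal sketch of the p-adic norm (standard number theory, not specific to TGD; the prime and the sample powers are arbitrary choices):

```python
# p-adic norm |x|_p = p^(-v_p(x)), where v_p(x) is the power of the prime p
# dividing the rational x. High powers of p are huge as real numbers but
# p-adically tiny, which is the inversion of scales described above.
from fractions import Fraction

def p_adic_valuation(x: Fraction, p: int) -> int:
    """Exponent of p in the factorization of a nonzero rational x."""
    if x == 0:
        raise ValueError("v_p(0) is +infinity by convention")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    return Fraction(1, p) ** p_adic_valuation(x, p)

p = 7
for n in (1, 3, 6):
    x = Fraction(p) ** n             # real size grows like 7^n ...
    print(x, p_adic_norm(x, p))      # ... while |x|_7 = 7^(-n) shrinks
```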
Intentional behavior is indeed characterized by temporal long-range correlations. Hence we can measure the immediate implications of something which as such is not measurable! The spiritual and non-local intuitive mind would reflect itself in the properties and behavior of the material world. Without it, the material world would indeed be just a random soup of particles, as materialists try hard to believe.
Even better: these predictions are very specific. In particular, they lead to successful elementary particle mass calculations and to a quantitative understanding of basic spatial and temporal scales in nuclear, atomic and molecular physics, biology, and cosmology. This is something completely new.
A possible model for the realization of intentional action is as a quantum jump transforming a p-adic space-time sheet into a real one. This is possible if the real space-time sheet has vanishing conserved charges such as energy, momentum, electromagnetic charge,... In the TGD framework this is possible, since conserved inertial energy can be both negative and positive. In principle it would seem that we could really create our physical universe by this kind of intentional action, so that the Eastern view of reality as a purely mental construct would be correct. I have even proposed an S-matrix describing this process and in principle predicting the probabilities of different intentional actions.
A question which just occurred to me is how reversible the transition from intention to action is: it might be that transitions from action to intention (from matter to thought) are very rare, since the initial system must have vanishing net quantum numbers, in particular energy, and this is extremely difficult to arrange. This could mean that our geometric future is mostly p-adic and the past rather stably real: dreams would be the stuff that reality is made of! If so, the flow of experienced time would correspond to the front of the p-adic-to-real phase transition propagating towards the geometric future, as I have proposed. Of course, an infinite number of wave fronts of this kind would exist, and the direction of the geometric future could also be non-standard.
3. Brahman=Atman and infinite primes
This idea is the second element of Perennial Philosophy. Infinite primes, integers, and rationals represent a further extension of the number concept, besides the fusion of the p-adic and real number fields. What is fascinating from the point of view of physics is that the construction of infinite primes is structurally equivalent to a repeated second quantization of a super-symmetric arithmetic quantum field theory. Furthermore, just as 0-dimensional points represent numbers, 4-D space-time surfaces represent infinite primes, integers,...
This generalization also leads to a generalization of finite numbers: one can construct an infinite number of ratios of infinite rationals which are equal to 1 as real numbers but p-adically finite for any prime p. Hence the number 1, and obviously all other numbers and also space-time points, have an infinitely rich number-theoretical anatomy not detectable by any physical measurement. A single point of space-time can represent in its structure the quantum state of the entire material Universe! Brahman=Atman in the most literal and maximal sense that one can imagine!
4. Limits of quantum theory
My view concerning the capacity of standard quantum theory to solve the riddle of consciousness should already be clear. I think that wave mechanics is far too simplistic to allow one to understand consciousness. Quantum measurement theory, where the quantum jump, a good candidate for the moment of consciousness as an elementary act of creation/re-creation, is taken as a fact, is a set of mere phenomenological rules in conflict with the Schrödinger equation.
My own proposal is basically simple: quantum states are actually entire time evolutions of the Schrödinger equation, and quantum jumps occur between these (or their generalizations) and thus outside the realm of space-time and a given quantum state. A quantum jump means a re-creation of the entire time evolution of the cosmology, meaning in particular that both the geometric past and future are re-created, but in accordance with the field equations. The experienced time, identified as a sequence of quantum jumps, is something different from geometric time, and the two coincide only in certain states of consciousness. The Western mode of consciousness is this kind of mode, but also in this case long-term memories are actually communications with the geometric past: classically, as in the case of declarative memories, and by quantum entanglement making possible the sharing of mental images, as in the case of sensory memories.
Second point. The Planck constant hbar is the symbol of quantum mechanics and is usually taken to be an absolute constant which can be put to hbar=1 by a suitable choice of units. Quantum-classical correspondence in TGD, however, predicts that space-time sheets, which can be arbitrarily large, define quantum coherence regions. This is in conflict with standard quantum mechanics, which predicts that macroscopic quantum coherence regions should not exist. The resolution of the problem is that the Planck constant is actually dynamical and quantized: the larger the value of hbar, the larger the Compton length, so that for instance an electron can be zoomed up to an arbitrarily large size, and these zoomed-up electrons can overlap and form Cooper pairs and a superconductor.
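The arithmetic behind the "zooming" can be checked directly; a minimal sketch (the Compton-length formula is standard, while the dynamical hbar and the scale factor below are purely illustrative of the author's hypothesis):

```python
# Reduced Compton length lambda = hbar / (m * c). The formula is standard;
# the idea that hbar takes larger quantized values is the author's hypothesis,
# and the scale factor below is an assumed illustration only.
hbar = 1.0546e-34    # J s
c = 2.998e8          # m / s
m_e = 9.109e-31      # kg, electron mass

lam = hbar / (m_e * c)
print(lam)           # ~3.86e-13 m for the ordinary electron
print(1e12 * lam)    # ~0.39 m if hbar were scaled up by 1e12 (assumed factor)
```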
The implications are rather dramatic: there is an entire hierarchy of values of the Planck constant, and these correspond to dark matter phases which are macroscopically and even astrophysically quantum coherent. TGD can "predict" the value spectrum of the Planck constant, and this has led to a surprisingly precise model for living matter, including the band and resonance structure of EEG.
This also justifies the notion of a magnetic body (actually an onion-like hierarchy of them) having astrophysical size in the case of the brain. These magnetic bodies carry dark matter and act as intentional agents having biological bodies as sensory receptors and motor instruments. For instance, the time delays of consciousness found by Libet can be understood in this framework.
5. Microtubuli and what shamans do during their travels?
I believe that microtubuli are involved in the realization of long-term memories and neural communications: for instance, it is very difficult to understand how high-frequency sounds (higher than a kHz) could be communicated by nerve pulse patterns, since the characteristic time scale is about a millisecond. Microtubular conformational and em field patterns are ideal for this purpose. I however think that microtubuli represent only one important level in the hierarchy, and that the magnetic bodies carrying the dark matter are the star players in the real sector.
At the top of the hierarchy would be p-adic space-time sheets, p-adic/spiritual bodies representing us as eternal cosmic beings in the real sense. The travels of shamans could result from the ability of the shaman's p-adic/spiritual body to partially detach from the biological body and direct attention to other parts of the infinite universe.
The directing of attention could mean that the shaman, as a master of intentional action, transforms part of his infinite p-adic body at some distant corner of the universe into a real zero-energy space-time sheet, which can then sensorily perceive the environment. Remote mental interactions would quite generally be based on this mechanism.
Best Regards,
At 7:30 AM, Anonymous Anonymous said...
Would it be at all possible to get popularized material in Finnish about these ideas of yours? I believe many would be interested in these wonders of quantum physics and other achievements of the new physics.
Thanks in advance
At 8:20 PM, Blogger Matti Pitkanen said...
Thank you for your interest. The same has often been asked. My problem is time, the ridiculous shortness of a human life.
I have to handle material on the order of ten thousand pages, and already a worryingly large share of my time goes to the bureaucracy related to this, instead of concentrating all resources on developing the ideas.
My working language is of necessity English, and translating into Finnish would take an unreasonable amount of time and energy. On the other hand, the Finnish-speaking readership is rather small.
A choice has to be made. Let us hope that someday there will be time, and also interest, for instance from Finnish science magazines.
|
e71a9457a7b36177 | MS 1 Computer algebra
Computer algebra, or symbolic computation, is devoted to solving mathematical problems using exact manipulations and algebraic algorithms. This also involves their implementation in software and hardware, as well as applications. Modern computer algebra systems make a wide range of areas of pure mathematics accessible to experiment (in particular algebra, discrete mathematics, geometry, and number theory). Research in these areas has benefited immensely from this new approach over the past few years. The minisymposium provides an overview of some of these recent developments.
Organizers: Bettina Eick (Braunschweig), Anne Frühbis-Krüger (Hannover), Sandra Klinge (Dortmund), Eva Zerz (Aachen)
Speakers: Winfried Bruns (Osnabrueck), Tobias Moede (Braunschweig), Daniel Robertz (Plymouth), Martin Scheicher (Innsbruck), Werner Seiler (Kassel)
MS 2 Immersed boundary methods
Immersed boundary methods are an attractive way to represent arbitrarily shaped surfaces within a flow simulation. The formulation of immersed boundaries within (usually) Cartesian grids involves accurate numerical approximations in the fluid domain and efficient solvers. This usually goes hand in hand with ease of generating the numerical grids. However, there are disadvantages concerning near-wall resolution and smoothness of the solution at the wall. Accuracy, conservation of variables, and moving walls are current challenges addressed in this minisymposium.
Organizers: Stefan Hickel (München), Michael Manhart (München)
Speakers: Fabian Kurz (München), Vito Pasquariello (München), Christoph Lehrenfeld (Münster), Jochen Fröhlich (Dresden), Björn Müller (Darmstadt)
MS 3 Linear and multilinear methods for electronic structure calculations
Electronic structure calculations simulate the quantum mechanical properties of electrons moving around clamped atomic cores. The underlying many-body Schrödinger equation is usually extremely high dimensional. However, effective single-particle models, like the widely used DFT (density functional theory), still require the solution of large-scale non-linear eigenvalue problems.
Linear response theory, for computing excited states etc., results in Bethe-Salpeter and related equations; these problems have to be solved by numerical linear algebra techniques for large-scale spectral problems.
Coupled cluster methods are non-linear, and tensor network states are multi-linear, parametrizations of the original many-body wave functions; both avoid the prohibitive combinatorial (exponential) scaling with the size of the system.
Organizers: Chao Yang (Berkeley, USA), Reinhold Schneider (Berlin)
Speakers: Gero Friesecke (TU Munich), Benjamin Stamm (Paris VI), Huajie Chen (Warwick), Anil Damon (Stanford), Amartya Banerjee (Lawrence Berkeley), Agnieszka Miedlar (Minnesota)
MS 4 Mechanics and model based control
Mechanics and Model Based Control are both rapidly expanding scientific fields and fundamental disciplines of engineering. They share demanding mathematical and/or system-theoretic formulations and methods. One of the challenges in Mechanics and Model Based Control is utilising the ever increasing computer power with respect to both the simulation of complex physical phenomena in mechanics and the design and real-time implementation of novel control systems. Further challenges follow from the availability of efficient multi-functional materials, so-called smart materials, and new fast and reliable communication techniques allowing the design and implementation of new types of actuator/sensor fields, distributed control and virtual networks. The mechanical and mathematical topics of this challenging development will be addressed by this minisymposium.
Organizers: Hans Irschik (Linz), Michael Krommer (Wien), Kurt Schlacher (Linz)
Speakers: A.K. Belyaev (St. Petersburg), R. Findeisen (Magdeburg), U. Gabbert (Magdeburg), S. Jakubek (Wien), M. Schöberl (Linz), Y. Vetyukov (Wien).
MS 5 Space-time methods for parabolic and hyperbolic PDEs
Modern discretizations of time-dependent PDEs consider the full problem in the space-time cylinder and aim to overcome limitations of classical approaches such as the method of lines (first discretize in space and then solve the resulting ODE system) and the Rothe method (first discretize in time and then solve the PDEs). A main advantage of a holistic space-time method is direct access to space-time adaptivity and to the backward problem. Moreover, this allows for parallel solution strategies simultaneously in time and space. (A minimal sketch of the method of lines follows this entry.)
Organizers: Stefan Sauter (Zürich), Olaf Steinbach (Graz)
Speakers: Roman Andreev (Paris), Matthias Maischak (Uxbridge), Martin Neumueller (Linz), Martin Schanz (Graz), Johannes Tausch (Austin), Christian Wieners (Karlsruhe)
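For readers unfamiliar with the classical baseline these space-time methods aim to improve on, here is a minimal method-of-lines sketch (the grid size, time step and initial data are illustrative assumptions):

```python
# Method-of-lines sketch for the 1D heat equation u_t = u_xx on [0,1]
# with homogeneous Dirichlet boundary conditions: discretize in space first,
# then integrate the resulting ODE system in time (explicit Euler here).
import numpy as np

nx, dx = 51, 1.0 / 50
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)            # assumed initial condition
dt = 0.4 * dx**2                 # respects the explicit limit dt <= dx^2 / 2

for _ in range(1000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # second difference
    u += dt * lap                # Euler step; boundary values stay at zero

print(u.max())                   # decays roughly like exp(-pi^2 * t)
```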
MS 6 Robust simulation of mechanical systems with uncertainties
In the process of product development, the simulation of systems has gained increasing importance, and product design is primarily based on simulation results. To guarantee meaningful results, the parameters of the models have to be well known, which, however, is often not the case. In fact, the model parameters can exhibit a high level of uncertainty, arising from different sources, such as natural variability or scatter, simplification and idealization in modeling, or deficiencies in identification. To overcome this limitation, sophisticated strategies for the well-directed inclusion of uncertainties in the simulation process should be applied, resulting in a robust simulation with added value for the decision process.
Organizers: Jörg Fehr (Stuttgart), Michael Hanss (Stuttgart)
Speakers: Andrea Barth (Stuttgart), Matteo Broggi (Hannover), Jörg Fehr (Stuttgart), Carsten Proppe (KIT Karlsruhe), Matthias Faes (Leuven), Steffen Freitag (Bochum)
MS 7 Control of partial differential equations
Mathematical modeling leads to a description in terms of partial differential equations (PDEs) whenever time and space have to be considered as independent coordinates to properly represent the system dynamics. The control of PDE systems hence requires combining control-theoretic concepts with sophisticated mathematical tools, including numerical techniques for simulation, optimization and implementation. This minisymposium will provide an overview of recent developments and their application.
Organizers: Thomas Meurer (Kiel), Frank Woittenek (Dresden)
Speakers: Nicole Gehring (München), Martin Gubisch (Konstanz), Yann Le Gorrec (femto-st, Besançon), Philippe Martin (Paris), Paul Kotyczka (Lyon), Simon Kerschbaum (Erlangen)
7ad62b2ef700cfd5 | Schrödinger representation
From Encyclopedia of Mathematics
One of the basic possible (together with the Heisenberg representation and the interaction representation (cf. Interaction, representation of)) equivalent representations of the dependence on time $t$ of operators and wave functions in quantum mechanics and quantum field theory. In the Schrödinger representation the operators corresponding to physical dynamical quantities do not depend on $t$; thus, the solution of the Schrödinger equation

$$i\hbar\,\frac{\partial\psi(t)}{\partial t}=H\,\psi(t)\tag{1}$$

can be formally expressed by the Hamilton operator $H$, which is independent of $t$, in the form

$$\psi(t)=e^{-itH/\hbar}\,\psi(0),\tag{2}$$

where $\psi(0)$, being the initial value, does not depend on time, and the wave function $\psi(t)$ in the Schrödinger representation depends on $t$ and contains all information with respect to changes in the state of the system when $t$ changes. The mean value of an operator $A$ in the Schrödinger representation,

$$\bar A(t)=(\psi(t),A\,\psi(t)),\tag{3}$$

depends on $t$ as a result of the dependence on $t$ of the wave functions $\psi(t)$. $\bar A(t)$ can also be considered as the mean value of the time-dependent operator

$$A(t)=e^{itH/\hbar}\,A\,e^{-itH/\hbar}\tag{4}$$

over the wave functions $\psi(0)$, which do not depend on $t$:

$$\bar A(t)=(\psi(0),A(t)\,\psi(0)),$$

i.e. as the mean value of an operator in the Heisenberg representation. The invariance property of the mean value (which should be observable and have physical meaning) under unitary transformations of type (4) means that the Schrödinger representation, the Heisenberg representation and the interaction representation are equivalent.
The Schrödinger representation is named after E. Schrödinger, who introduced it in 1926 when formulating an equation in quantum mechanics that was later called the Schrödinger equation.
Instead of "Schrödinger representation" one sometimes uses the term "Schrödinger picture".
Equation (2) is correct for time-independent Hamiltonian operators only (cf. Schrödinger equation).
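The equivalence of the pictures asserted above can be verified numerically in finite dimensions; a minimal sketch, with random Hermitian matrices standing in for $H$ and $A$ and $\hbar=1$:

```python
# For a finite-dimensional Hermitian "Hamiltonian" H, the Schrödinger-picture
# mean value (psi(t), A psi(t)) equals the Heisenberg-picture mean value
# (psi(0), A(t) psi(0)) with A(t) = exp(iHt) A exp(-iHt)   (hbar = 1).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (H + H.conj().T) / 2                       # Hermitian Hamiltonian
A = rng.normal(size=(4, 4))
A = (A + A.conj().T) / 2                       # Hermitian observable
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

t = 1.7
U = expm(-1j * t * H)                          # time-evolution operator
schrodinger = np.vdot(U @ psi0, A @ (U @ psi0)).real
heisenberg = np.vdot(psi0, (U.conj().T @ A @ U) @ psi0).real
print(np.isclose(schrodinger, heisenberg))     # True
```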
This article was adapted from an original article by V.D. Kukin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article |
71af65722b16ce9c | Let's question 'it'?
All the resource persons who came to the science conference kept saying 'shaastra, shaastra' for everything: bhoutha shaastra (physics), rasaayana shaastra (chemistry), khagola shaastra (astronomy), and so on. Somehow the very word 'shaastra' seems inappropriate to me, because the fundamental freedom of science is to 'question'.
If some subject is added to the list of shaastras, will we still have the authority to 'question' it?
We do have a tradition of questioning the shaastras. But, as you say, perhaps an alternative word could be coined! Go ahead and start a new tradition!
-Guru Prasad (owner of the Aakruti book store)
Earlier, science subjects were indeed called shaastras. That practice lapsed long ago. Science subjects are now named 'vijnana' and humanities subjects 'shaastra'. For example, chemistry, biology, physics, medicine and astronomy are called rasaayana vijnana, jeeva vijnana, bhoutha vijnana, vaidya vijnana and khagola vijnana respectively, while economics, political science, sociology and the like are called artha shaastra, raajya shaastra and samaaja shaastra. There is nothing wrong in calling those 'vijnana' as well; it just has to become customary.
-K. Puttaswamy (Kannada translator of Darwin's 'Jeeva Vikasa' (evolution) and film critic)
Dear Viswa: Quite often, words that came into usage are retained even when our understanding of them changes over a period of time. Words should not be interpreted literally. For example: the term 'Black Body' (in physics) was coined with some meaning in the 1850s. Now, we say that the Sun also is a Black Body even though it is anything but black! Well, people could have changed the term 'Black Body' to some word that encompasses ALL objects that radiate EM radiation by virtue of their surface temperature. But it was not done, for historical reasons. I think one should be trained to interpret the meanings of words contextually. So, while 'shaastra' as used in a religious context would mean doing things without questioning, as in a tradition or practice, in the context of subjects like social 'shaastra' or political 'shaastra' it must be interpreted as 'discipline'. Having said that, I now see a problem with the usage of the word 'discipline'! At a deeper level it has the connotation of 'shaastra'! Now, to the use of the word 'science' as a suffix. The so-called 'pure scientist' would have a problem with 'political science' as well. Is social science REALLY a science? Well, highly debatable. In one sense yes, and in another NO. In fact, my daughter, who is studying psychology for her Master's degree, is in an M Sc course! It is a long-drawn debate whether psychology qualifies to be a 'science'. In summary, I would say that there should be no reservation about the usage of words that have come to have a certain meaning among the populace. Alternately, we must educate people about the broader meaning and contextual meaning of such 'confusing' words.
-H. R. Madhusudhan (Senior Scientific Officer, Jawaharlal Nehru Planetarium, Bangalore)
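The Black Body example above can be made quantitative; a minimal sketch using Wien's displacement law (the solar temperature is the standard textbook figure):

```python
# Wien's displacement law, lambda_max = b / T: the Sun, at ~5778 K, radiates
# most strongly near 500 nm (green light), yet it is still called a
# "Black Body", illustrating the point about words outliving their
# literal meaning.
b = 2.898e-3        # m K, Wien's displacement constant
T_sun = 5778.0      # K, effective surface temperature of the Sun

lambda_max = b / T_sun
print(f"{lambda_max * 1e9:.0f} nm")   # ~501 nm
```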
Madhusudhan sir has put it rightly. To add to that: we can question anything. In the history of Karnataka there is a long list of people who questioned, at the appropriate time, what was said in all kinds of shaastras. It is precisely because of such questioning that changes have come about, from time to time, in every way and in every field.
Questioning is not limited to the field of science alone. Questioning from a scientific point of view is one among many ways of questioning.
And do we, or those considered ordinary people, have the authority to question? Certainly we do. But before whom a question is asked, and how, matters. Before open-minded scholars any thought can be voiced. If there is substance in our thinking, such people will surely have the capacity to accept it and work in the right direction. If our question lacks substance, such ideas will not survive.
But questioning before stubborn-minded scholars who declare the shaastras, or certain ideas, to be beyond question is like knowingly banging one's head against a stone!
-G. K. Geetha (science research student)
What's your opinion/'thought' on this?
Comment on this article below…
Happy National Science Day 2018
This year theme: Science and Technology for Sustainable Future
Raman Effect and Raman Works
1. The Raman Effect
2. C. V. Raman’s Nobel Lecutre, December 11, 1930
3. C. V. Raman and His Works- Digital Repository from RRI, Bangalore
4. A new class of spectra due to secondary radiation (1928)
Viswa Keerthy S
Feb 28, 2018
(The article is an extract from one of my Facebook post)
Light-and-Shadow Play : Total Lunar Eclipse
Eclipses are natural phenomena that occur in the sky and were recorded by almost all civilizations which left their mark on Earth. Some associated eclipses with demons invading the earth (in all religions) or with evil befalling humans; others made genuinely scientific observations, which helped to eradicate (?) the former beliefs. The saga between the 'two' still continues even today! We need both sides of the story to create awareness among those 'individuals' (fellow people who still believe the occurrence of eclipses is grim) and to un-teach their unquestioning, superstitious beliefs. Except for making people fear (or fear God) and practice 'fruitless observances', no other 'constructive development' has been recorded, reported, or researched from these attitudes. And the '_____ Culture (Samskruti)' ties the hands (the thinking process) of 'most of them' against having an open discussion on these thoughts. Today, 'fear' is the one and only synonym associated with the former word, I guess? And there are numerous 'organizations' all around us, growing like mushrooms, imprinting the 'fear' factor on people irrespective of religion, caste, race, country, etc. (?), while claiming to be the stakeholders of 'Samskruti'.
It is time for all of us not to go against 'common people's beliefs' but rather to un-teach their crude beliefs by creating awareness about what eclipses are, what is actually happening in the sky, how others have thought about eclipses, what science we can learn from them, and why they are not bad or evil; and finally to make 'them'selves realize the beauty of nature, becoming one with it (nature), and do what they love to do rather than what they practice out of the 'fear' spread by immature, self-proclaimed stakeholders of the 'almighty' and 'samskruti'. Let us not fight against the self-proclaimed '______'; let us fight against their thinking, their habit of dictating a twisted 'culture' to people, their notion of God, their notion of a godly person, their style of business, and the 'media' who support them. Let us not restrict 'this' to eclipse events; let us make it a 'habit of humanity' to embrace nature in every situation/event/celebration (etc.).
When it comes to science, there are no defined boundaries as such to describe it! Each individual has ample room to imagine (think) everything about nature, experiment on it, predict with it, and, if it works, move on with it until it is proved false! This is how science progresses! And the great minds of our past have contemplated this. [Here], nothing is absolute and nothing is obsolete (!?). The spirit of questioning 'everything' is the prime signature and beauty of science, as said by Prof. H. Narasimhaiah, physicist and educationist from Karnataka, India. Let us purge the pseudo-scientific approach from all our understanding and move along the path which makes science our way of life.
Eclipses are the light-and-shadow play of nature, showcased on an astronomical stage by the wanderers (planets) of our solar system. The second full moon of 2018, which happens on January 31st, is an eclipsed Moon (total lunar eclipse). This is a 'delicious' event for nature lovers: to observe, to create awareness among the public (not to practice superstitious beliefs/eclipse myths), to do science, and to lead life in harmony with nature by celebrating it. This eclipse is also portrayed as the "Super-Blue-Red Eclipse Moon". Like the phrase 'once in a blue moon' in English, this lunar eclipse is claimed to be a rare event culminating in all the 'flavoured colours' of the Moon (?). Why the Moon on Jan 31st, 2018 is called a Super-Blue Moon, along with many other resources, can be found in the Eclipse portal of IIA, Bangalore. The lunar eclipse is visible in most parts of North America, Europe, Asia and Australia.
Let us all rejoice in the eclipse!
Eclipse Handbook of January 31, 2018 – A Lunar Eclipse How-To for Everyone
To download click here.
Prepared by : Prajwal Shastri, IIA, Bangalore; Ajay Talwar, Amateur Astronomers, Delhi; Juny K Wilfred, ICTS, Bangalore
Lunar Eclipse at Bangalore:
Image Credit: http://www.coppermoon18.wordpress.com
Timings for Bangalore (IST):
Date: 31-01-2018
Sun: Rise = 06:46, Set = 18:20
Moon: Rise = 18:15, Set = 06:19 (on 01.02.2018)
Total Lunar Eclipse:
Penumbral Phase begins (P1) = 16:21:15 (Not Visible)
First Contact (U1) = 17:18:27 (Not Visible)
Second Contact (U2) = 18:21:27 (Visible)
Third Contact (U3) = 19:37:51 (Visible)
Fourth Contact (U4) = 20:41:11 (Visible)
Eclipse Ends (P4) = 21:38:27 (Visible)
Eclipse Duration
Penumbral = 05h 17m 44s
Umbral = 03h 22m 44s
Totality = 01h 16m 04s
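As a quick consistency check, the listed durations can be recomputed from the contact times above (a small sketch; the minor second-level mismatches are present in the source data):

```python
# Recompute the eclipse durations from the contact times listed above.
from datetime import datetime

t = lambda s: datetime.strptime(s, "%H:%M:%S")
P1, U1, U2, U3, U4, P4 = map(t, ["16:21:15", "17:18:27", "18:21:27",
                                 "19:37:51", "20:41:11", "21:38:27"])

print("Penumbral:", P4 - P1)   # 5:17:12 (table lists 05h 17m 44s)
print("Umbral   :", U4 - U1)   # 3:22:44, matching the table
print("Totality :", U3 - U2)   # 1:16:24 (table lists 01h 16m 04s)
```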
Eclipse Sketch
Viswa Keerthy S
Searching for Nature's goal…!?…
A small thought about 'Nature' that came and went while heavy rain was pouring down in Bangalore…
…Very often we think about nature in an Earth-centred (human-centred) way, and we try to find the relationship between us and nature, the profit and loss in it, and so on. But the question is: is that all nature is?
Within the nature of the universe, the nature we see on Earth is smaller than a mere speck of dust. We have scientifically estimated the origin of the universe (how many years ago it happened); since then, everything (atoms, molecules, clouds of gas, and so on) has been taking countless shapes through the force of gravity (a name we have given it) and the laws formed from it (some of which we have discovered): me, you, hills, ridges, rivers, buses, cars, planets, the Sun, stars, galaxies, black holes, the entire universe… Everything is one; that is, we are all one. We too are just one form of the universe's nature, nothing else. In the same way, billions upon billions of life forms (forms of nature) may be living on billions upon billions of planets (Earth-like ones, say), each in its own environment.
In a nature spread beyond all imagination like this, as Kuvempu said: 'no one is important, no one is unimportant, nothing is insignificant'… This nature has no profit, no loss, no purpose, no goal…
If strange thoughts like these have ever come to you, reply to this article…
Viswa Keerthy S
The story of photographic plates: NGC 1851
Unravelling the hidden data from the archival photographic plates of globular cluster- NGC 1851
This work was published in the 'BASE Line', a quarterly bulletin of JNP, Bangalore.
(This is a scanned copy of the BASE Line. The full copy of the bulletin can be downloaded here: http://www.taralaya.org/BASEline-August-2016.pdf)
Jawaharlal Nehru Planetarium, Bangalore (JNP)
Vision and Mission
• To popularise science amongst the general public
• Non-formal science education
• To train students to take up science as a career
BASE has established a Science Centre in the Planetarium which has become a nucleus for non-formal science education at all levels. The activities of the Science Centre and the Planetarium have made BASE a unique institution for the dissemination of science. A Science Park has been developed in the premises of the Planetarium.
Science popularisation activities of BASE viz., sky-theatre shows, monthly astronomy lectures, monthly science movies attract over two lakh visitors every year.
Our science education activities viz., Research Education Advancement Programme (REAP), Interactive Week-end Sessions for school children, Summer Programmes, Science Exhibitions and Workshops attract thousands of students every year. Over 35 students from BASE have got admission to Ph.D. courses in premier research institutes in India and abroad. They include the Raman Research Institute, Indian Institute of Science, Wild Life Research Institute, Oxford University, Michigan State University, Delft Institute, State University of New York, Buffalo, University of Florida, and Cornell University.
JNP Publications
JNP Website
JNP Address
Jawaharlal Nehru Planetarium
Bangalore Association for Science Education,
Sri T. Choudaiah Road, High Grounds,
Bangalore – 560 001
Land Mark: Opp. Indira Gandhi Musical Fountain, Near Raj Bhavan.
Viswa Keerthy S
‘YoU(r)’niversal Law!
Newton’s ‘Universal’ Law of Gravitation portrayed in an article -theguardian
Once there was a conversation on the topic 'What is a "Universal Law"?' with three persons whom I know very well; here are their views… (their actual WhatsApp conversation).
First Person : Is coulomb’s law universal law …. ?
Second Person: Vo nice… will think on this… it is difficult to term a law as universal.
Second Person: for me the concern is the constant present in the coulomb’s law….
Second Person: Change the equal sign to nearly equal, or say the force is proportional to the product of the two charges divided by the square of the distance between them… that is more or less a Universal Law….
Second Person: And I also do have a feeling that no law is universal law….
Second Person: it is nearly universal but not fully universal!!
First Person : Can u get the criterion to call a law universal
Second Person: the only condition I believe is that, it should be valid at all points in the universe…
Second Person: If this is a condition… Just a single law cannot become universal… (up to the physics that we know today)
First Person: What do u mean by just a single law
Second Person: one law cannot be valid at all places… anta
Second Person: Even Newton’s Universal Law of Gravitation is also not a universal law… the name still sticks to it.
First Person : Howda…. (Is it?)
First Person : Elli valid alla adu? (Where it is not valid?)
Second Person: Mercury ge valid agalla… (the law is not valid for mercury…)
Second Person: GPS system ge valid agalla… (it is not a suitable law for GPS…)
First Person : Every 2 particles having mass should obey
Second Person: Yes…
First Person : Mercury ge yake aagalla (Why the law is not valid for mercury?)
Second Person: I don’t remember where I read this…
Second Person: it says like this…
First Person: the light moving near the star bends due to SR or GR what ever? But the claim is that the star attracts (not a suitable word) light particles and in turn light also attracts the star at a very negligible force (which is not zero but negligible). According to Newton’s law how can light attract star, it does not have mass (rest mass!!)
Second Person: But these things are accounted for accurate measurement of many things… which is not the direct consequence of Newton’s Law!!
First Person: The concept is different from Newton’s law ashte
Second Person: Mercury has a special kind of motion called "Precession of its Orbit" which is not explained by Newton's law.
Second Person: Yes, that puts a question on Mass? Does mass is the main source of gravity in the nature (which is true with newton’s law)
First Person : According to khan academy lecture precession is boz of sun earth interaction
Second Person: interaction through Gravity…
Second Person: if newton’s law talks about Gravity, then it should explain… these interactions… why it fails?
First Person : Yes
Second Person: btw this is only for Mercury…
First Person : Y not
Second Person: Uranus and Neptune were discovered because of the power of Newton’s Law….
Second Person: what I am trying to say is that gravity is not just the way Newton's Law defines it. It may be true at some places but not at all places… and Newton's law does not give the full picture of it… So it is not a universal law (that is why I said "Universal Law" should not be restricted to just one law)
First Person : G is called universal constant … law is valid at every place
Second Person: humm,
Second Person: What is the source of G?
First Person: Cavendish experiment proof
Second Person: Newton’s law is not just G. It is a constant number that is present in our universe. But the value of G does not talk about the nature of gravity… that is spoken by other parameters in the newtons law… which is not always true at all points in the space…
First Person : Do u mean (M)(m)/r² is not true at certain places?
Second Person: Yes, it is nearly true but not always… Once C. Sivram sir took a beautiful class in M.Sc.
Second Person: That is why we have General Theory of Relativity…
Second Person: Combination of both Newton’s Law and GR could be universal (if we really need to use this word)….
First Person: I’m stunned. .. no words
Second Person: Yes, that is Nature!!
First Person : Students ge hogi Newton's law universal alla antha helbeka (Should we go and tell students that Newton's law is not universal?)
Second Person: Interesting part is Kepler described the motion of planets without using gravity…. It fits perfectly… Later Newton comes and gives the gravitational concept… Kepler did not even know about this gravity.
Second Person: In fact nothing is universal!! anta ankondiddini… (In fact, I think nothing is universal)
Third Person : Even I think so… that no law is universal "YET"
Second Person: Yet….? What next?
Third Person : May be in future we can get a universal law…
Second Person: Yes…
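The size of the Mercury effect discussed in the chat can be estimated from the standard general-relativistic perihelion-advance formula; a minimal sketch with textbook values:

```python
# General relativity predicts a perihelion advance per orbit of
# dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)), which Newton's law misses.
# For Mercury this famously accumulates to ~43 arcseconds per century.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30           # kg, mass of the Sun
c = 2.998e8            # m/s, speed of light
a = 5.791e10           # m, Mercury's semi-major axis
e = 0.2056             # Mercury's orbital eccentricity
T_years = 0.2408       # Mercury's orbital period in years

dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 / T_years
arcsec = math.degrees(dphi) * 3600 * orbits_per_century
print(f"{arcsec:.0f} arcsec/century")                  # ~43
```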
So, what are your thoughts on it? Reply to this article.
I mean ‘YoU(r)’niversal Law!!
Viswa Keerthy S
Elusive Particle of the Universe
Hi, welcome to the Year of Light, and welcome to this interesting lecture.
Celebrating IYL 2015
Yes, we are talking about light: the light which is responsible for the diverse life on Earth and which spreads through the unimaginably vast universe. In the last lecture we spoke about the first light from the Big Bang, called the CMBR. But in this lecture I won't talk about light directly. Instead, I will talk about atoms!!
Atoms are the building blocks of matter. There are more atoms in your eye than there are stars in a galaxy, and this is true of any object the size of an eye. In principle, we humans are all collections of atoms that have different names! The matter which surrounds us is also a collection of atoms. Suppose I am drinking water from a glass. If you filmed this with a video camera which could only detect atoms, regardless of their physical size, shape, and features, you would see one arrangement of atoms drinking other atoms in a different arrangement. This means that you can form an infinite number of structures by arranging atoms. That is what nature has been doing for all these billions upon billions of years, and it will continue to do so in the future. So if you look at animals, birds, plants, humans, mountains, the Earth, stars and galaxies, all of these are just arrangements of atoms.
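An order-of-magnitude check of the eye-versus-galaxy claim (the eye mass and the star count are assumed round numbers):

```python
# Treat the eye as ~7 g of water; even one gram of water contains vastly
# more atoms than the ~4e11 stars of a large galaxy.
N_A = 6.022e23            # Avogadro's number, per mole
eye_mass_g = 7.0          # assumed mass of a human eye, mostly water
molar_mass_water = 18.0   # g/mol
atoms_per_molecule = 3    # H2O

atoms = eye_mass_g / molar_mass_water * N_A * atoms_per_molecule
print(f"{atoms:.2e} atoms vs ~4e11 stars")   # ~7e23 >> 4e11
```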
[Image: Arrangement of atoms forms different structures (living and non-living things).]
So it is important to study the atom. Until the beginning of the 20th century, scientists thought that the atom was an indivisible particle of the universe and the fundamental constituent of all objects. Later, many observations and sensitive measurements by 20th-century scientists proved that the atom is divisible. Today we know that an atom is made up of protons, neutrons and electrons. Protons and neutrons make up the nucleus, which concentrates most of the atom's mass, while the electrons revolve around the nucleus. To give you a feel for how big an atom is, imagine this: if this room (the auditorium) were the size of an atom and this tennis ball (the author is holding a tennis ball) were the nucleus, then the smallest dust grain you could find in this room would be an electron! This means an atom is mostly empty space! All the mass is concentrated in the nucleus. This is a descriptive sketch of an atom. But you might ask me a question… Do all atoms show the same characteristics? The answer is no. An atom with some fixed number of protons, neutrons and electrons has certain physical and chemical characteristics. But if you change the number of constituent particles of an atom, its characteristics also change. This means that if you change the number of electrons in an atom, it exhibits different characteristics than before. Depending on the number of electrons in an atom, a specific name is given to it: hydrogen, which has one electron, helium, which has 2 electrons, and so on… These are called elements, and they are arranged in ascending order of the number of electrons in their atoms. This is called the Periodic Table. This table lets us easily identify the element which has a specific character depending on its electron number. So much for the atom, its constituent particles, their arrangement, and how its characteristics change. But why am I talking about this instead of light!
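To put rough numbers on the analogy (a back-of-the-envelope sketch; the hall size and the nuclear diameter are illustrative assumptions):

```python
# Rough scale check with typical textbook numbers: an atom is ~1e-10 m
# across and a light nucleus ~1e-14 m, so scaling the atom up to a 100 m
# hall shrinks the nucleus to about a centimetre. The room-and-tennis-ball
# picture above is the same idea, stated loosely.
atom_d = 1e-10      # m, typical atomic diameter
nucleus_d = 1e-14   # m, a few femtometres for a light nucleus
hall = 100.0        # m, assumed "auditorium" size

print(hall * nucleus_d / atom_d)   # -> 0.01 m, i.e. ~1 cm
```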
Yes, I will talk about light, but for a while let us go back to 1930. Wolfgang Pauli, an Austrian physicist, was trying to solve a mysterious violation of the law of conservation of energy in a process called beta decay. I know all of you know the law of conservation of energy; in simple words, it says that energy can neither be created nor destroyed, only transferred from one form to another, and that the total energy of the universe is constant. In any given process, the energy before and after should be equal. There is no room for violation of this law in nature. While doing the maths we may make errors in calculating the energies, but nature makes no errors. It never violates the law of conservation of energy! Yet something strange was happening with beta decay. To see this, let us learn what beta decay is. Since we already know about the atom, the process is easy to understand. In certain elements the nucleus becomes unstable because it contains more neutrons than protons. If something is unstable, it has to give out or accept something in order to reach a stable state, and here we have only protons and neutrons in the nucleus. So what happens to this unstable nucleus? An extra neutron in the nucleus decays into a proton, and in the process an electron is emitted from the nucleus, as shown in the nuclear reaction below. (Elements which show this behaviour are called radioactive elements, and the process is called radioactivity.) This process is called beta decay (beta particle = electron). It is an observable process, and experiments have confirmed this nuclear reaction. But what is strange about it? You might be surprised to hear this: this nuclear reaction appears to violate the law of conservation of energy! The question is, HOW?
n → p + e⁻
If you balance the energies in the beta decay process, you will see that the total energy of the emitted particles is less than the energy of the reactants (for now, think of them as reactants). Let me explain. In beta decay an extra neutron in the nucleus decays into a proton, which stays inside the nucleus, while an electron is ejected from the nucleus. This electron is not one of the orbital electrons; it is created in the nucleus and emitted from it. But the energy the electron carries out of the nucleus is less than the energy released in its creation. This tells us that some energy is missing in the process, and when we do the experiment, there is no signature of any other particle carrying away the missing energy. So where is the missing energy? This is the question that haunted many scientists in the early twentieth century. What do you conclude from the experiment: is nature violating the law of conservation of energy, or are our calculations wrong?
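(A quick numerical aside, not part of the original lecture: using standard rounded rest masses, one can work out how much energy the decay releases, and hence where the electron's energy would sit if no third particle were involved.)

```python
# Rest masses in MeV/c^2 (standard rounded values)
m_n = 939.565   # neutron
m_p = 938.272   # proton
m_e = 0.511     # electron

Q = m_n - m_p - m_e   # energy released in the decay
print(f"Energy available to the decay products: {Q:.3f} MeV")  # ~0.782 MeV
# In a two-body decay the electron would always carry essentially this
# fixed energy. Experiments instead see a continuous spectrum from zero
# up to ~0.78 MeV: exactly the "missing energy" puzzle described above.
```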
Pauli worked extensively on this problem of missing energy in beta decay and finally arrived at a conclusion. He believed that the universe is governed by an order of natural laws, and he firmly believed that nature does not violate these laws. More importantly, he also believed that his calculations were correct. Nature, he said, is not violating the law of conservation of energy: the missing energy in beta decay is carried away by a particle he called the 'neutrino', which has essentially zero mass but carries exactly the missing energy! He wrote down the mathematical calculations arguing for the existence of this particle and published the paper. Many scientists did not believe Pauli and claimed that no such particle exists. But Pauli, a theoretical physicist (one who uses only mathematics and calculation to build a theory of nature), had firm belief in his theory. He trusted his calculations so much that he always insisted the neutrino must exist in nature. This is the beauty of understanding nature. This is where we should appreciate our language of mathematics and equations: the numbers and equations hide the unseen exquisiteness of nature.
n → p + e⁻ + ν̄
Finally the beauty of Pauli's calculation unfolded in the year 1956, when two physicists, Frederick Reines and Clyde Cowan, announced that they had indeed detected the neutrinos predicted by Pauli in the beta decay process. It took about 26 years from prediction to detection of the particle foreseen by the great physicist Pauli. And you know what? Trillions upon trillions of neutrinos have passed through your body while you have been listening to this lecture; every second, some hundred trillion neutrinos pass through you. The Earth sits in an endless shower of neutrinos from the Sun, the stars, and the galaxies.
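(An order-of-magnitude check on that claim, with round numbers I am assuming rather than taking from the lecture: the Sun alone delivers roughly 6×10^10 neutrinos per square centimetre per second at Earth.)

```python
flux = 6e10        # solar neutrinos per cm^2 per second at Earth (round figure)
body_area = 7e3    # rough cross-section of a human body in cm^2 (assumed)
per_second = flux * body_area
print(f"~{per_second:.0e} neutrinos through your body per second")  # ~4e14
# "Hundreds of trillions per second" is indeed the right scale.
```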
Today I am here to talk about neutrinos. When Pauli predicted this particle, he said it has energy but its rest mass is zero, which means these particles never come to rest!! So, how do we detect such a particle?
Sir, I have a question.
Go Ahead (Author)
Sir, why do we need to detect neutrinos? Should we detect them only to explain beta decay, or is there some other use for them?
Very good question. The beta decay problem is settled: it is now confirmed that neutrinos are emitted from the nucleus, and the law of conservation of energy is successfully upheld with this particle. Let me reframe your question like this: why do we need to detect neutrinos on a large scale? Do they give us any information? The answer is yes, and let me give you one example. Let us talk about the Sun for a while. The thermonuclear reactions happening at the core power the Sun's entire energy output. During these reactions, photons (particles of light), neutrinos, and other particles are emitted. The light that warms the surface of the Earth, and you and me, comes from the photosphere of the Sun, the visible disc of the Sun. These photons are generated at the core of the Sun, and it takes tens of thousands of years (estimates vary, but certainly a very long time) for a photon to work its way out to the photosphere; from there the photon takes only about 8 minutes to travel to the Earth. So the photon warming you right now left the Sun's interior tens of thousands of years ago. Absolutely amazing, right? If you study this light, you are actually studying the Sun's interior as it was long ago; you cannot study the Sun's core in real time using visible light. On the other hand, the neutrinos created at the Sun's core reach the Earth in just over 8 minutes. This is because neutrinos hardly interact with matter: they just pass through it. A photon, by contrast, suffers an enormous number of collisions with matter, spending ages inside the Sun before emerging from the photosphere. So detecting these solar neutrinos is very helpful for monitoring the Sun's core in real time. And not only the Sun: all the highly energetic explosions in our universe (supernovae and others) emit neutrinos, and detecting them gives us a clearer picture of these events and of our universe.
How do we detect them? As I said, neutrinos interact with matter only very weakly. They can pass through humans, buildings, and even mountains!! That is why they are called the most elusive particles of the universe. But if we detect them, we get a lot of information about our universe. So here is an astronomical observatory built half a mile underground: the Super-Kamiokande Neutrino Observatory, built under Mt. Ikenoyama in Japan. It is a human eye for watching supernovae in our Milky Way galaxy.
Image 1
Super-Kamiokande Neutrino Observatory (screen grab from the documentary Cosmos hosted by Neil deGrasse Tyson)
Image 2
The observatory is built underneath the mountain because the instrument that detects the neutrinos is very sensitive. Apart from neutrinos, other elementary particles also reach the Earth, and if we did not shield the instrument, the detector would pick up signals from these unwanted particles. How do we shield against them? Go underneath the mountain and let nature itself do the shielding, since most other particles do interact with matter. Most neutrino observatories are built in abandoned mines. Since neutrinos barely interact with matter, they pass straight through the mountain to the detector. And because they have travelled for ages with almost no interaction with matter, they carry information from the distant past! If you detect them, you can obtain some of the oldest information about our universe. Detecting even a single neutrino requires big observatories like this one, and a lot of patience, because of the particle's elusive nature.
In this observatory, 50,000 tonnes of purified water are stored, surrounded by very sensitive light-detecting (photomultiplier) tubes which register the minutest flash of light. But how do you get a flash of light in purified water? In a minute, trillions upon trillions of neutrinos pass through this huge detector. If just one neutrino interacts with the nucleus of a water molecule, an electron is emitted; this fast-moving electron produces a faint flash of light in the water, which the sensitive tubes pick up. Each flash of light signifies the presence of a neutrino in the detector. If there is more activity in outer space, say a supernova in our galaxy, then the rate of flashes increases, indicating the presence of more neutrinos. This is what the earlier Kamiokande detector observed during Supernova 1987A.
Image 3
Astrophysicist Neil deGrasse Tyson on a boat journey inside the detector. (Screen grab from the documentary Cosmos hosted by Neil deGrasse Tyson)
A flash of light is used to detect one of the most elusive particles of the universe, a particle which hides rich information about our home. Once again, it is light that is doing our work!
Happy International Year of Light.
Thanks a lot for listening to me, that’s it for today.
Have a great evening.
Image Credits:
First Image: Google Images
Second, Third and Fourth Images: Screen grab from the documentary Cosmos hosted by Astrophysicist Neil deGrasse Tyson
The year 2015 is celebrated as the International Year of Light and Light-Based Technologies, a global event declared by the UN. The year marks the 150th anniversary of Maxwell's equations, by James Clerk Maxwell, the man who unfolded this secret of nature and answered what light is made of.
More Information about IYL 2015
UN Anniversaries
IYL – 2015 Home Page
IYL – 2015 Blog
Upcoming Topics for IYL 2015
• Biological Industry – Photosynthesis
• Principe, Africa – 1919 TSE
• Copy of Earth – Extraterrestrial Planet
• ದೀಪವು ನಿನ್ನದೆ ಗಾಳಿಯು ನಿನ್ನದೆ ಆರದಿರಲಿ ಬೆಳಕು (The lamp is yours, the wind is yours; may the light never go out)
• Let there be light – Newton, Maxwell, Hertz and Einstein
Viswa Keerthy S
First Light – CMBR
Celebrating IYL 2015
If you look at the modern periodic table, you will find one hundred and eighteen elements listed in it. Of these, ninety-four occur naturally, and all of them can be found on Earth (with some exceptions). Elements are the building blocks of the things you see around you. Whether it's a human, an animal, a tree, a building, a road, a bridge, a monument, or a mountain, everything is made up of elements. But the question that comes to mind is: where did all these elements on Earth come from? How were they formed? The answer is that all these elements were formed inside stars. Yes, in a sense, everything you see around you was cooked inside a star! Richard Feynman, a famous physicist, once said that the stuff we are made of was once cooked in a star and spat out.
The universe comprises trillions upon trillions of stars, which cook the matter that we see around us. The next question is: how do these stars form? Well, such questions will go on and on… it's a never-ending chain. So where do we begin?
Yes, we begin at the beginning.
We begin at the Big Bang!
Big Bang
Artist's view of the Big Bang before the explosion.
The universe, which includes stars, galaxies, nebulae, and so on, has been expanding for billions of years. If you run the clock backwards, somewhere in the past there must be a time when the entire universe was concentrated in a point of extremely high (formally infinite) density. According to the accepted theory, this high-density point expanded violently, creating the universe that we see today. This is the Big Bang theory, first proposed by Georges Lemaître and supported by Edwin Hubble's 1929 observations of the expansion. The 'explosion' in the Big Bang is not really a violent explosion at all; in fact it is one of the most silent explosions you can think of, and it is neither big nor a bang! That is just the name given to the theory. The reason is this: space, time, sound, matter, the elements, and, more importantly, the light that we feel and experience today were all created after the Big Bang. The Big Bang is just the expansion of a point of extreme density, nothing more than that. But then, how do we verify this theory?
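(A small illustrative calculation, with a present-day value of the Hubble constant that I am assuming rather than quoting from this article: if the universe had been expanding at a roughly constant rate, then 1/H0 gives a crude estimate of its age.)

```python
H0 = 70 * 1000 / 3.086e22     # Hubble constant: 70 km/s/Mpc converted to 1/s
seconds_per_year = 3.156e7
age = 1 / H0 / seconds_per_year
print(f"Rough age of the universe: {age:.1e} years")  # ~1.4e10, about 14 billion
```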
If this kind of expansion happened in the past, then we must be able to detect the light left over from it, which carries the information of the Big Bang. According to the theory, this relic light should exist uniformly all over space. In a sense, we have to detect the first light from the Big Bang. This is called the background radiation of the Big Bang. If we succeed, it confirms the theory.
In the early 1960s, two astrophysicists, Robert Dicke and Jim Peebles, were working on the challenge of finding the background radiation of the Big Bang. They came to the conclusion that, since the universe has been expanding for billions of years, the photons of the background radiation must have lost energy through the expansion of space. (Expanding universe: the fabric of space-time between the galaxies expands. Light travelling through this fabric is stretched along with it, and as its wavelength stretches, the photon's energy decreases; this is predicted by Einstein's General Theory of Relativity.) When they did the calculation using Wien's displacement law, they found that the photons' energy should lie in the microwave region of the spectrum.
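(Here is roughly what that calculation looks like. Wien's displacement law says the peak wavelength of blackbody radiation is b/T, with b = 2.898×10^-3 m·K; the numbers below are standard constants, not values taken from this article.)

```python
b = 2.898e-3            # Wien's displacement constant, metre-kelvin
for T in (5778, 3):     # Sun's surface vs. the cooled background radiation
    print(f"T = {T} K -> peak wavelength {b / T:.2e} m")
# 5778 K peaks near 5e-7 m (visible light); 3 K peaks near 1e-3 m,
# about a millimetre: squarely in the microwave region.
```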
A few miles away from this group of scientists, two researchers at Bell Laboratories, Penzias and Wilson, were working with a horn antenna designed for radio astronomy. They were trying to remove a background noise picked up by the antenna's detector. No matter what they tried, they were unable to remove it, even after replacing the detector several times. Interestingly, the detector registered the noise in every direction and at all times of day. After these unsuccessful attempts, they came to the conclusion that the signal was not noise at all: it was actually coming from outer space.
Later it turned out that the 'noise' in Penzias and Wilson's horn antenna was exactly the background radiation of the Big Bang that Dicke and Peebles were searching for! Yes, it was an accidental discovery by two researchers from Bell Laboratories who knew nothing about the background radiation. All they had done was tune their antenna to receive microwave radiation. The detector was picking up the first light from the Big Bang, but Penzias and Wilson thought it was noise!! The temperature of the background radiation was about 3 kelvin, in excellent agreement with the value predicted by the theory. This is famously called the 3-kelvin blackbody radiation curve, or the 3 K curve. Both researchers were awarded the Nobel Prize in Physics in 1978 for their accidental discovery of the background radiation. The press release from Bell Labs can be seen here. This observation confirms the Big Bang theory. (The Big Bang theory is still debated for other reasons.)
Horn Antenna
Penzias and Wilson in front of Horn Antenna
Today we have sophisticated scientific instruments on board satellites revolving around the Earth. WMAP (the Wilkinson Microwave Anisotropy Probe) has mapped the background radiation. The first light from the Big Bang is here….
CMBR – Cosmic Microwave Background Radiation by WMAP
The light which is responsible for life on Earth actually began its journey about 13.8 billion years ago.
Throw light on some atoms for billions of years, and eventually you have life!
Image Details:
First Image: International Year of Light and Light Based Applications – 2015 Logo
Second Image: S. Harris Cartoon
Third Image : Bell Labs
Fourth Image:
APOD Images, March 25, 2013
More Information about IYL 2015
UN Anniversaries
IYL – 2015 Home Page
IYL – 2015 Blog
Upcoming Topics for IYL 2015
• Elusive Particles of the Universe
• Biological Industry – Photosynthesis
• Principe, Africa – 1919 TSE
• Copy of Earth – Extraterrestrial Planet
• ದೀಪವು ನಿನ್ನದೆ ಗಾಳಿಯು ನಿನ್ನದೆ ಆರದಿರಲಿ ಬೆಳಕು (The lamp is yours, the wind is yours; may the light never go out)
Viswa Keerthy S
Wavefunction and Max Born
One of the most beautiful theories in physics, now more than a hundred years old, is Quantum Mechanics (QM). In 1900, when the blackbody curve was satisfactorily explained by Max Planck, quantum mechanics was born. Later, many great scientists of the 20th century, such as Einstein, Bohr, Heisenberg, Dirac, Schrödinger, Born, and de Broglie, developed quantum mechanics into the form we see today. Almost all of the people mentioned above received the Nobel Prize for their work in developing QM. But to me the important thing is the idea behind each stage of the development of QM. Some of the concepts and experimental results could not be explained with the knowledge of classical mechanics, which was already well developed at that time. The new ideas and concepts which arose to explain these strange results drove the development of the field. Here I want to focus on some ideas and concepts of QM developed by different people, and most importantly by a great mathematician and physicist, Max Born.
Max Planck was the first person to break the 'continuity' concept of classical mechanics and introduce 'discreteness' in energy, the so-called 'quanta' (packets) of energy, in order to explain the blackbody curve. The formula he gave was in fact a perfect fit for the observed experimental curve. His formula for the discrete energy involves the constant h = 6.626×10⁻³⁴ joule-seconds, where h is Planck's constant. He was awarded the Nobel Prize for this work in 1918. Later, in 1905, Einstein came up with a similar idea of the 'photon' (a packet of light energy) to explain the energetics of the photoelectric effect. The concept of discreteness (Einstein's 'photon', Planck's 'quanta') again fit the photoelectric effect perfectly. So it is the idea of the quantization principle (discreteness) that led to the satisfactory explanation of all these experimental results, and Einstein received his Nobel Prize for his work on the photoelectric effect in 1921. Another great scientist, Niels Bohr, a student of Rutherford, was working on the model of the atom. Rutherford's atomic model could not satisfactorily explain all the observed experimental results. Bohr applied the same quantization principle to angular momentum, and by using Planck's theory he was able to explain the observed phenomena through his new atomic model. Again the quantization principle fit perfectly. So, in the quantum world, every quantity is quantized; today we know that even space and spin are quantized! In the classical world this is not at all apparent: everything looks continuous. The idea of quantization, taken up by different people, led to the exploration of this unknown world, the 'quantum world'.
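(As a tiny worked example of Planck's discreteness, assuming a frequency for green light of about 5.4×10^14 Hz: each quantum carries an energy E = hf.)

```python
h = 6.626e-34      # Planck's constant, joule-seconds
f = 5.4e14         # approximate frequency of green light, hertz (assumed)
print(f"One quantum of green light: {h * f:.2e} J")  # ~3.6e-19 J
# A minuscule amount of energy, which is why the discreteness is
# invisible in everyday life.
```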
Image: Max Planck, W. Heisenberg, E. Schrödinger, Max Born.
In 1924 another great scientist, de Broglie, came up with the strange idea that every particle with mass is associated with a wave of wavelength h/mv, where v is the velocity of the particle, m is its mass, and h is Planck's constant. This is called wave-particle duality: every particle exhibits the properties of both a wave and a particle. This was a major turning point in the development of QM, and a strange concept when compared with classical mechanics. The theory predicted that a car of mass m moving at velocity v is associated with a wave of wavelength h/mv. Classically it is impossible for a car to move like a wave, but still the theory predicted the car's wave nature and gave the value of its wavelength. In reality, the effect of wave nature in the macroscopic world (where masses are large) is so tiny that it can be neglected: the car's wavelength is unimaginably small. But for an electron, which has a very small mass, the wave nature becomes significant and can be studied under suitable conditions. The business of QM is the dynamics of these tiny particles, such as electrons and the elementary particles. The theory of QM applies to essentially all particles, but for macroscopic bodies its effects are negligible; this, to some extent, makes QM a universal theory. For predicting the (classically) strange wave-particle duality of nature, de Broglie was awarded the Nobel Prize in 1929. Meanwhile, in 1925, W. Heisenberg set out the basic principles of QM in his paper 'Quantum-theoretical mechanics based exclusively on relationships between quantities observable in principle'. The very next year, E. Schrödinger came up with a different approach to QM through his wave equation, the famous 'Schrödinger equation'. In classical mechanics we write Newton's law as F = ma, where F is the applied force, m is the mass of an object, and a is its acceleration. By solving this equation for a particular system (e.g. a pendulum) we get the equations of motion, which predict the state of the system at any later time t. Newton's law is the foundation of classical mechanics, and it holds good for almost all classical systems. In QM, the corresponding role is played by the Schrödinger equation, which in one dimension reads iħ ∂Ψ/∂t = −(ħ²/2m) ∂²Ψ/∂x² + V(x)Ψ. Compared with Newton's law, the Schrödinger equation is not so simple, and it is difficult to solve.
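(A quick check of the car-versus-electron comparison above, with masses and speeds that I have assumed for illustration.)

```python
h = 6.626e-34                               # Planck's constant, J*s
cases = [("car", 1000.0, 30.0),             # ~1 tonne at ~30 m/s (assumed)
         ("electron", 9.11e-31, 1.0e6)]     # electron at 10^6 m/s (assumed)
for name, m, v in cases:
    print(f"{name}: wavelength = {h / (m * v):.1e} m")
# car: ~2e-38 m, hopelessly unobservable; electron: ~7e-10 m,
# comparable to atomic spacings, so its wave nature shows up.
```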
Meanwhile, in 1927, Heisenberg came up with another law, known as the uncertainty principle. He said it is impossible to measure the position and the momentum (mass × velocity) of a microscopic particle simultaneously with arbitrary precision (Δx·Δp ≥ ħ/2). It means we cannot state both the position and the momentum of a particle (e.g. an electron) at the same time; such a simultaneous measurement is impossible in QM. In classical mechanics, of course, we can easily state the position of a car and its momentum at any point of time simultaneously. In the macroscopic world everything looks simple and obvious, but the same is not true in the microscopic world. This restriction reveals a striking feature of nature at the microscopic level: we cannot pinpoint where the electron in a system is at a given point of time. So the traditional way of drawing the atomic model (fig. a) breaks down, and the electron does not revolve around the nucleus in a circular orbit as we learnt in school. Instead, the circular line becomes a disk of width 'a' (in a two-dimensional representation) (fig. b). The electron is present somewhere in the region of the disk, but where in that region it is remains unknown even today. One might argue that perhaps we merely lack the technology, that our instruments are incapable of such sensitive measurements. But this has been ruled out by scientists! It has nothing to do with our technology or the sensitivity of our instruments: the uncertainty principle is a law of nature. Neither future technology nor more sensitive future instruments can go beyond this limit. It is simply there in nature, and it becomes significant at the microscopic level.
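(To see the scale of the uncertainty principle, here is a rough estimate I am adding: confine an electron to a region the size of an atom and ask how uncertain its velocity must be, using Δx·Δp ≥ ħ/2.)

```python
hbar = 1.055e-34     # reduced Planck constant, J*s
m_e = 9.11e-31       # electron mass, kg
dx = 1e-10           # confinement region ~ one atomic diameter, m (assumed)
dv = hbar / (2 * m_e * dx)   # minimum velocity uncertainty
print(f"Minimum velocity spread: {dv:.1e} m/s")  # ~6e5 m/s
# For a 1000 kg car confined to the same region the formula gives
# ~5e-28 m/s: utterly negligible, which is why we never notice it
# in the macroscopic world.
```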
model of atom
As I said, the Schrödinger equation describes the dynamics of a particle (say an electron): how the electron behaves under a given condition (e.g. an infinite square well potential). In formulating this equation, Schrödinger assumed that all the required information about the particle is hidden in a quantity called the wavefunction, Ψ. By applying the given condition (the given potential) to the equation, one obtains the wavefunction, which is in general complex-valued. Once the wavefunction is obtained, the required information has to be extracted from it. And this wavefunction is not a localized function but a spread-out one! The uncertainty principle, which is present in nature, is a hidden property of the Schrödinger equation, and it shows up in the wavefunction as this spreading (unlike the localized states of classical mechanics). But how do we extract the required information about a particle from the wavefunction? Here enters Max Born, a mathematician and physicist, with his revolutionary statistical approach to QM. In 1926 he said that the modulus of psi squared, |Ψ(x,t)|², gives the probability of finding the particle at point x at time t. Note that the squared modulus of the wavefunction is a real quantity, not a complex number (though the wavefunction itself is complex), and Max Born was the first person to identify this quantity as the probability of finding the electron at x at time t. This is called Born's statistical interpretation. I feel this concept was a revolution in QM, because without it we cannot interpret the wavefunction at all. The normalization procedure, which is a direct product of this idea, is a very handy tool in handling wavefunctions. I feel the statistical approach is the most significant step in the development of QM, and all credit goes to Max Born, who identified |Ψ|² as a probability. Heisenberg and Schrödinger were awarded the Nobel Prize in 1932 and 1933 respectively. Max Born was awarded the Nobel Prize on 11 December 1954 for his statistical approach to QM, 29 years after his most important contribution. Coincidentally, his Nobel ceremony fell on his 72nd birthday (born 11 December 1882, died 5 January 1970)! In his Nobel lecture he began like this: "The work, for which I have had the honour to be awarded the Nobel Prize for 1954, contains no discovery of a fresh natural phenomenon, but rather the basis for a new mode of thought in regard to natural phenomena." What matters most is the way we think about observed results and phenomena; all great scientists have done just that. If you ask me what QM is, in simple words I would say: quantum mechanics is just solving the Schrödinger equation.
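(A small numerical sketch of Born's rule, using an illustrative Gaussian wavefunction of my own choosing: normalize it so that |Ψ|² integrates to 1, then read off a probability.)

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
psi = np.exp(-x**2 / 2)                  # un-normalized trial wavefunction
norm = np.trapz(np.abs(psi)**2, x)       # integral of |psi|^2 over all x
psi = psi / np.sqrt(norm)                # now |psi|^2 integrates to 1

mask = (x > -1) & (x < 1)
p = np.trapz(np.abs(psi[mask])**2, x[mask])
print(f"Probability of finding the particle in (-1, 1): {p:.3f}")  # ~0.843
```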
(Max Born visited Bangalore in the year 1935. He worked with Sir C V Raman at Indian Institute of Science for six months.)
At the application level, QM has a wide variety of uses. In the real world it is applied in lasers, MRI, quantum cryptography, transistors, and much more, and it underlies the study of atoms and molecules, nuclear physics, astrophysics, and solid state physics. Earlier I called QM a universal theory, but it still fails to explain some important things: even today there is debate about the fundamental concepts and principles of QM, and it is still not a complete theory after more than a hundred years. Nevertheless, it is one of the most celebrated theories in physics.
In its developing stages, the unique ideas of these scientists played a vital role in shaping QM properly. For students like us, the important thing is the idea behind each successful stage of development in QM, or in science generally. Ideas and concepts that depart from ordinary thinking can genuinely change the world; this has actually happened in history. When I met my friend Suraj at the Bangalore Planetarium, we were discussing the great scientists and their concepts. He said: "Look, Einstein is famous for E=mc², Stephen Hawking is famous for his chair, Feynman is famous for his teaching, but there are other great scientists whose ideas and concepts revolutionized the world, yet they are not famous among us (to some extent this is true even among science students and teachers). Everyone mentions Faraday, Einstein, Edison, and Hawking, but nobody remembers Maxwell, Tesla, Born, Dirac, Pauli, and many more who are equally important." The reason is that we never tell the stories of these great people to students. I feel it is the stories of these people and their ideas that matter to students, rather than the confined textbook chapters.
[If you are still interested in quantum mechanics and its development, I strongly recommend reading the preliminary chapters and the Nobel Lectures in the book "Quantum Mechanics: Theory and Applications" by A.K. Ghatak and S. Lokanathan. (S. Lokanathan was my Classical Mechanics teacher in the REAP Course at Bangalore Planetarium.)]
[I have shown pictures of only four scientists in this article, but all of them are equally important.]
Image credits: Google images.
Viswa Keerthy S
Six Easy Pieces from Feynman
For purchase…
-Viswa Keerthy S (21/07/2013)
“Symmetry, Symmetry, Symmetry”
When we see the word 'symmetry', we tend to think: oh, it is a topic in elementary particle physics, crystal structures, rotational symmetry, translational symmetry, and so on… all theory, and we decide it's a boring topic. And once we decide a topic is boring, no matter what happens we never even try to look at it. I feel this is the nature of the typical student. But now I am here to tell you about a book written on "Symmetry"! Believe it or not, once you read this book you will surely say: without symmetry, atoms, matter, the Earth, life, etc.… nothing would have existed!
Symmetry is one of the most general properties of nature, and it is not restricted to chemistry, crystal structures, and particle physics. It can be seen everywhere in nature: in animals, plants, birds, humans, snowflakes, non-living objects… what not! In the book "This Amazingly Symmetrical World" the author quotes a sentence by M. Gardner (mathematician and science writer): "On the earth life started out with spherical symmetry, then branched off in two major directions: the plant world with symmetry similar to that of a cone, and the animal world with bilateral symmetry." Yes, it is true! If you look at any plant from the top, its structure looks like a cone; the cone structure ensures an equal share of sunlight to all its parts. And all animals exhibit bilateral symmetry. Symmetry controls the structures of all living organisms. Amazing!
Throughout the book the author expresses the above idea this way: "Symmetry limits the diversity of structures in nature." He gives many examples, and one of the most interesting is this: many science fiction stories portray extraterrestrials as looking very different from the organisms found on Earth. But symmetry tells you that, whatever extraterrestrials look like, they must exhibit bilateral symmetry. The reason is that the laws of physics are universal: whatever the planet, it will have gravity, our physical laws work there just as they work here, and so do the conservation laws that follow from symmetry. If the conditions on another planet are like those here, then its extraterrestrials should exhibit bilateral symmetry. The creatures of fiction, with a single eye, a single ear, or some other weird anatomy, are not possible just like that: the symmetry of physical laws rules such structures out. For example, if an extraterrestrial has an ear, it should have two ears of the same size and shape on opposite sides. At the same time, this does not mean symmetry can predict which structures are possible in nature; rather, it predicts which structures are not possible. Whatever the topic in which you study symmetry, symmetry always tells you what is not possible in nature; symmetry laws are sometimes called 'prohibition laws'. These are some of the examples I liked, but the book covers many more aspects of symmetry found in nature, with lots of interesting examples. You will really enjoy reading this book.
"This Amazingly Symmetrical World" is a book written by L. Tarasov and published by MIR Publications. It is a general science book written in very lucid language, requiring only elementary school physics to read. The book begins with a conversation between the author and a reader (who is yet to discover the world of symmetry) and ends with another conversation between them, by which time the reader has seen the beauty of symmetry. The conversations are a joy to read. The book has two parts: 'Symmetry Around Us', which shows you the symmetry found in the nature we live in, and 'Symmetry at the Heart of Everything', which discusses the symmetry found among elementary particles and how our conservation laws come to be universal. I feel that reading about symmetry is like reading about nature's beautiful power, and this book really shows you that. I am sure that when you read it, you will say: symmetry is a beautiful and powerful tool of nature! Without it, nothing would have existed!
We often study topics in science within a narrow field and wonder: why do we need to study all this? What is so special about these things? But when we step back and look at them in the big picture, it amazes us and gives us more clarity about why we study them and what roles they play in nature. This is true of symmetry, or for that matter of any topic in any subject.
An e-copy of this book can be downloaded here…
(I would like to thank my teacher Mr. Madusudhan [Scientific Officer at Bangalore Planetarium], who showed me this book and inspired me to read it.)
-Viswa Keerthy S (06/07/2013)
717-2160/01 – Quantum Physics I (KFI)
Guarantor department: Department of Physics
Credits: 4
Subject guarantor: Mgr. Jana Trojková, Ph.D.
Subject version guarantor: Doc. Dr. RNDr. Petr Alexa
Study level: undergraduate or graduate
Study language: Czech
Year of introduction: 2016/2017
Year of cancellation: 2017/2018
Intended for the faculties: USP, FEI
Intended for study types: Bachelor, Follow-up Master
Instruction secured by
Login | Name | Tutor | Teacher giving lectures
ALE02 Doc. Dr.RNDr. Petr Alexa
TRO70 Mgr. Jana Trojková, Ph.D.
Extent of instruction for forms of study
Form of study | Way of completion | Extent
Full-time | Credit and Examination | 2+2
Subject aims expressed by acquired skills and competences
Explain the fundamental principles of quantum-mechanical approach to problem solving. Apply this theory to selected simple problems. Discuss the achieved results and their measurable consequences.
Teaching methods
The course introduces the most important aspects of non-relativistic quantum mechanics. It includes the fundamental postulates of quantum mechanics and their applications to square wells and barriers, the linear harmonic oscillator, spherical potentials, and the hydrogen atom. The remarkable properties of quantum particles and the resulting macroscopic effects are discussed.
Compulsory literature:
MERZBACHER, E.: Quantum mechanics, John Wiley & Sons, NY, 1998.
Recommended literature:
SAKURAI, J. J.: Modern Quantum mechanics, Benjamin/Cummings, Menlo Park, Calif. 1985 MERZBACHER, E.: Quantum mechanics, Wiley, New York 1970
Way of continuous check of knowledge in the course of semester
Further requirements on the student
Systematic off-class preparation.
The subject has no prerequisites.
The subject has no co-requisites.
Subject syllabus:
1. Introduction: historical context and the need for a new theory.
2. Postulates of quantum mechanics; the Schrödinger equation, time-dependent and stationary; the equation of continuity.
3. Operators: linear Hermitian operators, variables, measurability. Coordinate representation.
4. Basic properties of operators, eigenfunctions and eigenvalues, mean value, operators corresponding to selected physical variables and their properties.
5. Free particle waves, wavepackets. The uncertainty relation.
6. Model applications of the stationary Schrödinger equation: piece-wise constant potential, infinitely deep rectangular potential well; continuous and discrete energy spectrum.
7. Other applications: step potential, rectangular potential well, square barrier potentials; the tunneling effect.
8. Approximations of selected real-life situations by rectangular potentials.
9. The harmonic oscillator in the coordinate representation and the Fock representation.
10. Spherically symmetric field, the hydrogen atom. Spin.
11. Indistinguishable particles, the Pauli principle. Atoms with more than one electron. Optical and X-ray spectra.
12. The basic approximations in the theory of chemical bonding.
13. Interpretation of quantum mechanics.
Conditions for subject completion
Occurrence in study plans
Academic year | Programme | Field of study | Form | Study language | Tut. centre | Year | Type of duty
2017/2018 | (N2658) Computational Sciences | (2612T078) Computational Sciences | P | Czech | Ostrava | 1 | Choice-compulsory
2016/2017 | (N2658) Computational Sciences | (2612T078) Computational Sciences | P | Czech | Ostrava | 1 | Choice-compulsory
Occurrence in special blocks
Block name | Academic year | Form of study | Study language | Year | Type of block | Block owner
Statistical Mechanics/The Foundations
The goal of statistical mechanics is to bridge the gap that exists between the microscopic and the macroscopic world. Take for example the equation governing the behaviour of microscopic particles: the quantum mechanical Schrödinger equation. Based on this equation, a quantum physicist will tell you everything you want to know about a particle by itself. If you ask, he will even be able to give you a solution for the wave function of a system of two particles. However, once you add a third or more, things start to get more complicated. There is no longer an analytic solution, and one must turn to computers to solve the problem numerically - and the results the computer will spit out are quite accurate for systems of 3, 4 or more particles. Even when considering classical Hamiltonian mechanics, there is no analytical solution for many-body problems such as the motion of the planets, although we can simulate them accurately numerically.
But what happens when the number of particles you have is much larger - not just ten or twenty, or even a thousand - what happens when you have a cup of water for example, with ~10^25 particles? Each of the 10^25 particles interacts with every single one of the 10^25 other particles - that's a total number of interactions of the order of 10^50 that would have to be computed at every instant! Even a computer that could perform a trillion calculations per second would take 10^20 times longer than the age of the universe to compute the exact state of your cup of water for a single instant in time. Clearly such a computation is not possible. It is therefore not possible, in practice, to solve the equations governing macroscopic systems.
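(The arithmetic behind that estimate, as a sketch with the same round numbers used above.)

```python
N = 10**25                    # particles in a cup of water, order of magnitude
pairs = N * (N - 1) // 2      # ~5e49 pairwise interactions per instant
rate = 1e12                   # a trillion calculations per second
age_universe = 4e17           # age of the universe in seconds, roughly
print(f"Computation time: ~{pairs / rate:.0e} s")
print(f"That is ~{pairs / rate / age_universe:.0e} times the age of the universe")
```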
Statistical mechanics provides the tools required to take the information given by quantum physics and use it to describe macroscopic systems and predict how they will evolve in time. By far the most important of these tools is probability theory. Instead of saying that a physical system is in exactly one or the other configuration, we will talk about the probability of it being in a certain configuration. For example, in a room filled with gas, it is far more probable that the gas is spread evenly rather than being bunched up in one corner. This may seem to be nothing more than common sense, but it has profound implications, especially when the probabilities involved are studied quantitatively. Through the use of this and other tools, statistical mechanics enables physicists to gain fundamental insight into the workings of the macroscopic world.
path integral
under construction
The notion of path integral originates in and is mainly used in the context of quantum mechanics and quantum field theory, where it is a certain operation supposed to model the notion of quantization.
The idea is that the quantum propagator – in FQFT the value of the functor $U : Cob \to Vect$ on a certain cobordism – is given by an integral kernel $U : \psi \mapsto \int K(-,y) \psi(y) d\mu$, where $K(x,y)$ is something like the integral of the exponentiated action functional $S$ over all field configurations $\phi$ with prescribed boundary data $x$ and $y$. Formally one writes
$$K(x,y) = \int \exp(i S(\phi))\; D\phi$$
and calls this the path integral. Here the expression $D\phi$ is supposed to allude to a measure integral on the space of all $\phi$. The main problem with the path integral idea is that it is typically unclear what this measure should be, or, worse, it is typically clear that no suitable such measure does exist.
The name path integral originates from the special case where the system is the sigma model describing a particle on a target space manifold $X$. In this case a field configuration $\phi$ is a path $\phi : [0,1] \to X$ in $X$, hence the integral over all field configurations is an integral over all paths.
The idea of the path integral famously goes back to Richard Feynman, who motivated the idea in quantum mechanics. In that context the notion can typically be made precise and shown to be equivalent to various other quantization prescriptions.
The central impact of the idea of the path integral however is in its application to quantum field theory, where it is often taken in the physics literatire as the definition of what the quantum field theory encoded by an action functional should be, disregarding the fact that in these contexts it is typically quite unclear what the path integral actually means, precisely.
Notably the Feynman perturbation series summing over Feynman graphs is motivated as one way to make sense of the path integral in quantum field theory and in practice usually serves as a definition of the perturbative path integral.
We start with stating the elementary description of the Feynman-Kac formula as traditional in physics textbooks in
Then we indicate the more abstract formulation of this in terms of integration against the Wiener measure on the space of paths (for the Euclidean path integral) in
Then we indicate a formulation in perturbation theory and BV-formalism in
Elementary description in quantum mechanics
A simple form of the path integral is realized in quantum mechanics, where it was originally dreamed up by Richard Feynman and then made precise using the Feynman-Kac formula. (Most calculations in practice are still done using perturbation theory, see the section Perturbatively in BV-formalism below).
The Schrödinger equation says that the rate at which the phase of an energy eigenvector rotates is proportional to its energy:
(1) $i \hbar \frac{d}{dt} \psi = H \psi.$
Therefore, the amplitude for the system to evolve to the final state $\psi_F$ after evolving for time $t$ from the initial state $\psi_I$ is
(2) $\langle \psi_F|e^{-iHt}|\psi_I\rangle.$
Chop this up into time steps $\Delta t = t/N$ and use the fact that
(3) $\int_{-\infty}^{\infty}|q\rangle\langle q| \, dq = 1$
to get
(4) $\langle \psi_F| e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_{N-1} \rangle \langle q_{N-1}| dq_{N-1}\right) e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_{N-2} \rangle \langle q_{N-2}| dq_{N-2}\right) e^{-iH\Delta t} \cdots e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_1 \rangle \langle q_1| dq_1\right) e^{-iH\Delta t} |\psi_I\rangle$
(5) $= \int_{q_1} \cdots \int_{q_{N-2}} \int_{q_{N-1}} \langle \psi_F| e^{-iH\Delta t} |q_{N-1} \rangle \langle q_{N-1}| e^{-iH\Delta t} |q_{N-2} \rangle \langle q_{N-2}| e^{-iH\Delta t} \cdots e^{-iH\Delta t} |q_1 \rangle \langle q_1| e^{-iH\Delta t} |\psi_I\rangle \, dq_{N-1} dq_{N-2} \cdots dq_1$
Assume we have the free Hamiltonian $H = p^2/2m$. Looking at an individual term $\langle q_{n+1}| e^{-iH\Delta t} |q_{n} \rangle$, we can insert a factor of 1 and solve to get
(6) $\array{\langle q_{n+1}| e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} \frac{dp}{2\pi}|p\rangle \langle p|\right)|q_{n} \rangle &=& \int_{-\infty}^{\infty} \frac{dp}{2\pi} e^{-ip^2\Delta t/2m} \langle q_{n+1}|p\rangle \langle p|q_{n} \rangle \\ &=& \int_{-\infty}^{\infty} \frac{dp}{2\pi} e^{-ip^2\Delta t/2m} e^{ip(q_{n+1}-q_n)} \\ &=& \left(\frac{-i 2\pi m}{\Delta t}\right)^{\frac{1}{2}} e^{i \Delta t (m/2)[(q_{n+1}-q_n)/\Delta t]^2}.}$
Writing (7) $\int Dq = \lim_{N \to \infty} \left(\frac{-i 2\pi m}{\Delta t}\right)^{\frac{N}{2}} \prod_{n=0}^{N-1} \int dq_n,$
and letting $\Delta t \to 0$, $N \to \infty$, we get
(8) $\langle \psi_F|e^{-iHt}|\psi_I\rangle = \int Dq \, e^{i \int_0^t dt \frac{1}{2}m \dot{q}^2}.$
For arbitrary Hamiltonians $H = \frac{p^2}{2m} + V(x)$, we get
(9) $\array{\langle \psi_F|e^{-iHt}|\psi_I\rangle &=& \int Dq \, e^{i \int_0^t dt \left(\frac{1}{2}m \dot{q}^2 - V(x)\right)} \\ &=& \int Dq \, e^{i\int_0^t\mathcal{L}(\dot{q},q) dt} \\ &=& \int Dq \, e^{iS(q)}, }$
where $S(q)$ is the action functional.
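As a concrete illustration of the time-slicing in (4)-(8), here is a small numerical sketch (my own, not from the literature cited below), done in Euclidean signature to avoid oscillatory integrals: composing $N$ short-time Gaussian kernels reproduces the exact free-particle heat kernel.

```python
import numpy as np

# Time-sliced Euclidean path integral for a free particle (hbar = m = 1).
# Composing N short-time kernels exp(-(x-y)^2/(2 dt))/sqrt(2 pi dt) should
# reproduce the exact propagator for total time tau = N*dt.
x = np.linspace(-10, 10, 801)
dx = x[1] - x[0]
dt, N = 0.05, 20
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * dt)) / np.sqrt(2 * np.pi * dt)

U = np.eye(len(x)) / dx          # discretized identity (delta-function) kernel
for _ in range(N):
    U = (U @ K) * dx             # each matrix product inserts one time slice

tau = N * dt
exact = np.exp(-x**2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
err = np.max(np.abs(U[len(x) // 2] - exact))   # row at x = 0
print(f"max deviation from exact heat kernel: {err:.2e}")  # small
```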
Is there an easy way to see how the Hamiltonian transforms into the Lagrangian in the exponent?
As an integral against the Wiener measure
More abstractly, the Euclidean path integral for the quantum mechanics of a charged particle may be defined by integrating the gauge-coupling action against the Wiener measure on the space of paths.
Consider a Riemannian manifold $(X,g)$ – hence a background field of gravity – and a connection $\nabla : X \to \mathbf{B}U(1)_{conn}$ – hence an electromagnetic background gauge field.
The gauge-coupling interaction term is given by the parallel transport of this connection
$$\exp(i S) \coloneqq \exp\left(2\pi i \int_{(-)} [(-),\nabla] \right) \colon [I, X]_{x_0,x_1} \to Hom(E_{x_0}, E_{x_1}) \,,$$
where $E \to X$ is the complex line bundle which is associated to $\nabla$.
The Wiener measure $d\mu_W$ on the space of stochastic paths in $X$ we may write suggestively as
$$d\mu_W = [\exp(-S_{kin})D\gamma]$$
for it combines what in the physics literature is the kinetic action and a canonical measure on paths.
(This is a general phenomenon in formalizations of the process of quantization: the kinetic action (the free field theory-part of the action functional) is absorbed as part of the integration measure against which the remaining interaction terms are integrated.)
Then one has (e.g. Norris92, theorem (34), Charles 99, theorem 6.1):
the integral kernel for the time evolution propagator is
$$U(x_0,x_1) = \int_{\gamma} tra(\nabla)(\gamma) \, [\exp(-S_{kin}(\gamma)) D\gamma] \,,$$
hence the integration of the parallel transport/holonomy against the Wiener measure.
(To make sense of this one first needs to extend the parallel transport from smooth paths to stochastic paths, see the references below.)
This “holonomy integrated against the Wiener measure” is the path integral in the form in which it notably appears in the worldline formalism for computing scattering amplitudes in quantum field theory. See (Strassler 92, (2.9), (2.10)). Notice in particular that by the discussion there this is the correct Wick rotated form: the kinetic action is not a complex phase but a real exponential $\exp(- S_{kin})$ while the gauge interaction term (the holonomy) is a complex phase (locally $\exp(i \int_\gamma A)$).
From the point of view of higher prequantum field theory this means that the path integral sends a correspondence in the slice (infinity,1)-topos of smooth infinity-groupoids over the delooping groupoid $\mathbf{B}U(1)$
$$\array{ && [I,X] \\ & {}^{(-)|_0}\swarrow && \searrow^{(-)|_1} \\ X && \swArrow_{\exp(i S)} && X \\ & {}_{\mathllap{\chi(\nabla)}}\searrow && \swarrow_{\mathrlap{\chi(\nabla)}} \\ && \mathbf{B}U(1) }$$
(essentially a prequantized Lagrangian correspondence) to another correspondence, now in the slice over the stack (now an actual 2-sheaf) $\mathbb{C}\mathbf{Mod}$ of modules over the complex numbers, hence of complex vector bundles:
$$\array{ && X \times X \\ & {}^{p_1}\swarrow && \searrow^{p_2} \\ X && \swArrow_{\int_{\gamma}\exp(i S(\gamma)) [\exp(-S_{kin}(\gamma))D\gamma]} && X \\ & {}_{\mathllap{\rho(\chi(\nabla))}}\searrow && \swarrow_{\mathrlap{\rho(\chi(\nabla))}} \\ && \mathbb{C}\mathbf{Mod} \,. }$$
For more discussion along these lines see at motivic quantization.
Perturbatively for free field theory in BV-formalism
BV-BRST formalism is a means to formalize the path integral in perturbation theory as the passage to cochain cohomology in a quantum BV-complex. See at The BV-complex and homological integration for more details.
| action functional | kinetic action | interaction | path integral measure |
|---|---|---|---|
| $\exp(-S(\phi)) \cdot \mu =$ | $\exp(-(\phi, Q \phi)) \cdot$ | $\exp(I(\phi)) \cdot$ | $\mu$ |
| BV differential $d_q =$ | elliptic complex $Q$ | $+$ antibracket with interaction $\{I,-\}$ | $+$ BV-Laplacian $\hbar \Delta$ |
The path integral in the bigger picture
Ours is the age whose central fundamental theoretical physics question is:
What is quantum field theory?
A closely related question is:
What is the path integral ?
After its conception by Richard Feynman in the middle of the 20th century, it was notably Edward Witten's achievement in the late 20th century to make clear the vast potential for fundamental physics and pure math underlying the concept of the quantum field theoretic path integral.
And yet, among all the aspects of QFT, the notion of the path integral is the one that has resisted attempts at formalization the most.
While functorial quantum field theory is the formalization of the properties that the locality and the sewing law of the path integral is demanded to have – whatever the path integral is, it is a process that in the end yields a functor on a (infinity,n)-category of cobordisms – by itself, this sheds no light on what that procedure called “path integration” or “path integral quantization” is.
The single major insight into the right higher categorical formalization of the path integral is probably the idea indicated in
which says that
• it is wrong to think of the action functional that the path integral integrates over as just a function: it is a higher categorical object;
• accordingly, the path integral is not something that just controls the numbers or linear maps assigned by a $d$-dimensional quantum field theory in dimension $d$: also the assignment to higher codimensions is to be regarded as part of the path integral;
• notably: the fact that quantum mechanics assigns a (Hilbert) space of sections of a vector bundle to codimension 1 is to be regarded as due to a summing operation in the sense of the path integral, too: the space of sections of a vector bundle is the continuum equivalent of the direct sum of its fibers
More recently, one sees attempts to formalize this observation of Freed’s, notably in the context of the cobordism hypothesis:
based on material (on categories of “families”) in On the Classification of Topological Field Theories .
The original textbook reference is
• Richard Feynman, A. R. Hibbs, Quantum Mechanics and Path Integrals, New York: McGraw-Hill (1965)
Lecture notes include
Textbook accounts include
• G. Johnson, M. Lapidus, The Feynman integral and Feynman’s operational calculus, Oxford University Press, Oxford, 2000.
• Barry Simon, Functional integration and quantum physics AMS Chelsea Publ., Providence, 2005
• Joseph Polchinski, String theory, part I, appendix A
Discussion in constructive quantum field theory includes
• James Glimm, Arthur Jaffe, Quantum physics -- A functional integral point of view, 535 pages, Springer
• Simon, Functional Integration in Quantum Physics (AMS, 2005)
• Sergio Albeverio, Raphael Høegh-Krohn, Sonia Mazzucchi, Mathematical theory of Feynman path integrals - An Introduction, 2nd corrected and enlarged edition, Lecture Notes in Mathematics, Vol. 523, Springer, Berlin, 2008 (ZMATH)
• Sonia Mazzucchi, Mathematical Feynman Path Integrals and Their Applications, World Scientific, Singapore, 2009.
The worldline path integral as a way to compute scattering amplitudes in QFT was understood in
Stochastic integration theory
The following articles use the integration over Wiener measures on stochastic processes for formalizing the path integral.
• James Norris, A complete differential formalism for stochastic calculus in manifolds, Séminaire de probabilités de Strasbourg, 26 (1992), p. 189-209 (NUMDAM)
• Vassili Kolokoltsov, Path integration: connecting pure jump and Wiener processes (pdf)
• Bruce Driver, Anton Thalmaier, Heat equation derivative formulas for vector bundles, Journal of Functional Analysis 183, 42-108 (2001) (pdf)
For charged particle/path integral of holonomy functional
The following articles discuss (aspects of) the path integral for the charged particle coupled to a background gauge field, in which case the path integral is essentially the integration of the holonomy/parallel transport functional against the Wiener measure.
• Marc Arnaudon and Anton Thalmaier, Yang–Mills fields and random holonomy along Brownian bridges, Ann. Probab. Volume 31, Number 2 (2003), 769-790. (Euclid)
• Mikhail Kapranov, Noncommutative geometry and path integrals, in Algebra, Arithmetic and Geometry, Birkhäuser Progress in Mathematics 27 (2009) (arXiv:math/0612411)
• Christian Bär, Frank Pfäffle, Path integrals on manifolds by finite dimensional approximation, J. reine angew. Math., (2008), 625: 29-57. (arXiv:math.AP/0703272)
• Dana Fine, Stephen Sawin, A Rigorous Path Integral for Supersymmetric Quantum Mechanics and the Heat Kernel (arXiv:0705.0638)
A discussion for phase spaces equipped with a Kähler polarization and a prequantum line bundle is in
• Laurent Charles, Feynman path integral and Toeplitz Quantization, Helv. Phys. Acta 72 (1999) 341., (pdf)
following Norris 92, theorem (34).
Other references on mathematical aspects of path integrals include
Detailed rigorous discussion for quadratic Hamiltonians and for phase space paths is in
Discussion of quantization of Chern-Simons theory via a Wiener measure is in
• Adrian P. C. Lim, Chern-Simons Path Integral on $\mathbb{R}^3$ using Abstract Wiener Measure (pdf)
Lecture notes on quantum field theory, emphasizing mathematics of the Euclidean path integrals and the relation to statistical physics are at
MathOverflow questions: mathematics-of-path-integral-state-of-the-art,path-integrals-outside-qft, doing-geometry-using-feynman-path-integral, path-integrals-localisation, finite-dimensional-feynman-integrals, the-mathematical-theory-of-feynman-integrals
• Theo Johnson-Freyd, The formal path integral and quantum mechanics, J. Math. Phys. 51, 122103 (2010) arxiv/1004.4305, doi; On the coordinate (in)dependence of the formal path integral, arxiv/1003.5730
We don’t usually think of chemicals as having shapes. But from the simple V of a water molecule to the intricate folds of a protein, the shape of a chemical compound plays a critical role in how it reacts with other molecules. For example, many drugs work by binding to specific receptors in cells, a process that depends on a precise match between the shape of the drug molecule and that of the receptor.
The shape of a molecule is determined by the interactions of the electrons in its constituent atoms, ultimately at the level of quantum physics. In the simple case of water, the physics causes the two hydrogen atoms to bond to oxygen at a 105° angle. Proteins, however, can contain thousands of atoms arranged in a helix that twists and turns into complex shapes.
As in so many other fields, the immense power of computers to complete in seconds computations that would take human beings lifetimes has revolutionized chemistry, giving rise to the collaboration of chemistry, physics, mathematics, and computer science known as computational chemistry. Pharmaceutical research, for example, used to be a hit-or-miss process of testing thousands of chemicals for pharmacological effects, with far more misses than hits. Now researchers are more likely to figure out what sort of molecule they need, set out to design it, and then figure out a way to synthesize it.
For large molecules (and proteins and other biologically active molecules are often very large) this becomes a daunting computational task. Fortunately, it is one that lends itself well to the efficiencies of parallel computing. Most of this work used to be carried out on supercomputers or custom-designed clusters of workstations and servers. More recently, the work is moving to massively multi-core graphics processing units, such as NVIDIA’s Tesla.
The results of this can be very impressive. TeraChem, from PetaChem LLC, is a quantum chemistry software package optimized for GPUs. Running analyses of several molecules on a workstation with four Tesla GPUs, TeraChem performed 8 to 50 times faster than the widely used General Atomic and Molecular Electronic Structure System (GAMESS) software running on a cluster of 256 quad-core CPUs. A quad Tesla workstation is hardly your garden-variety desktop (the 240-core Tesla C1060s go for about $1,300 apiece), but the setup outperformed far more expensive and complex hardware.
Harvard chemist Alán Aspuru-Guzik is a convert to GPU computing. His quantum chemistry research group analyzes molecules using electron correlation. This approach requires solutions to Schrödinger equations, differential equations that describe changes in the state of the system over time. An exact solution to a Schrödinger equation requires knowing all possible quantum states at the same time, a complicated version of the famous thought experiment of Schrödinger’s cat, which may be alive, dead, or both inside a sealed box. That’s a problem that can only be solved on a quantum computer, a device that unfortunately exists only in computer science labs, and there only in a primitive and not very usable state. Eventually, quantum computers will be available to computational chemists. But, says Aspuru-Guzik, “it will take a decade, maybe two decades. It’s hard to predict.”
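(For reference, and as standard textbook physics rather than anything specific to this group’s software, the time-dependent Schrödinger equation reads

$$ i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t), $$

where $\Psi$ is the wavefunction encoding the system’s quantum state and $\hat{H}$ is the Hamiltonian operator for the molecule’s kinetic and potential energy.)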
Lacking quantum computers, researchers have to settle for close approximations to the exact solutions, but even these require tremendous computational effort. “In the meantime we have the GPU,” says Aspuru-Guzik. “The GPU is a very attractive alternative because it is cheap. It’s the future of computing.”
Aspuru-Guzik’s toolkit includes Q-Chem 3.1, a commercial quantum chemistry program, and CUBLAS, a GPU-accelerated linear algebra library built on CUDA, NVIDIA’s technology for general computing on GPUs. His group found that using GPUs to assist in the multiplication of large matrices, another job ideally suited to parallel processing, sped the task by a factor of 13 over the use of the CPU alone. Most of us, of course, will never try to figure out the interactions of electrons in a molecule, nor will we have any use for a quantum computer. But the result of this work is important to all of us because it means better understanding of basic chemical processes and ultimately such things as faster development of new drugs. You can even get involved through a Stanford University project called Folding@home that uses idle time on thousands of computers to compute protein folding. And if your computer has a modern GPU, you’ll become part of the parallel revolution.
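To make the matrix-multiplication point concrete, here is a minimal sketch of routing a large multiply through cuBLAS. This is illustrative code, not taken from Q-Chem or the Aspuru-Guzik group, and the matrix size and contents are placeholder values:

```cpp
// Minimal cuBLAS sketch: compute C = A * B on the GPU.
// Build with: nvcc sgemm_sketch.cu -lcublas
// The n x n matrices of constants are placeholders standing in for the
// large dense matrices that arise in electron-correlation methods.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 2048;  // illustrative dimension
    const size_t bytes = (size_t)n * n * sizeof(float);
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n);

    // Stage the inputs on the device.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), bytes, cudaMemcpyHostToDevice);

    // The O(n^3) work happens entirely on the GPU in this one call:
    // C = alpha * A * B + beta * C. (cuBLAS assumes column-major storage,
    // which is immaterial here because the inputs are constant matrices.)
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    // Copy the result back and spot-check one entry: each entry of C is
    // a dot product of n ones with n twos, i.e. 2 * n = 4096.
    cudaMemcpy(hC.data(), dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The host code only stages data and reads back results; the arithmetic that dominates quantum-chemistry runtimes stays on the device, which is the pattern behind speedups like the factor of 13 reported above.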
This post is an entry in The World Isn’t Flat, It’s Parallel series running on nTersect, focused on the GPU’s importance and the future of parallel processing. Today, GPUs can operate faster and more cost-efficiently than CPUs in a range of increasingly important sectors, such as medicine, national security, natural resources and emergency services. For more information on GPUs and their applications, keep your eyes on The World Isn’t Flat, It’s Parallel.
245 Responses to Open Thread Non-Petroleum, March 22, 2017
1. Science has proven that Republican Conservatives (i.e., Right-Wing Ideologues) are just plain dumb. That is, they have much lower cognitive ability than their much smarter counterparts, Liberal Democrats.
The below is a very extensive paper published by the Association for Psychological Science. I think it just confirms what most of us already know.
[image: Right Wing Ideology]
John Stuart Mill, in a Parliamentary debate with the Conservative MP John Pakington (May 31, 1866).
• Dave Hillemann (Texan) says:
My conservative values stem from a deep love of the USA, including all it has been and all it can be. There’s no intelligence deficiency there, just a sensible pragmatic consideration of which candidates better align with my own beliefs and desires. As of now, I will say Trump has been aligned with me around 70-80% of the time, and I am in full agreement with him on most of the larger issues of importance.
• Nick G says:
Yeah, the study above is about a general correlation – it doesn’t say that smart people can’t be wrong too. I know plenty of smart people who have unrealistic ideas about how the world works (i.e., are conservative).
A big part of the problem is misinformation from bad journalism: Fox News, NewsCorp, talk radio, etc.
• GoneFishing says:
The plan is to pump a lot more CO2 into the air and dumb us all down.
• R2D2 says:
You might want to have your furnace checked out for leaks
• GoneFishing says:
Villagers don’t comprehend sarcasm.
• GoneFishing says:
Sorry, the term villager and comprehension in the same sentence is inexcusable.
• R2D2 says:
Stormwatcher, I understood your sarcasm just as much as you understood mine. You just don’t seem to like it when you’re on the receiving end. You should be more careful. Now man up and sleep in it.
Sarcasm- a sharp and often satirical or ironic utterance designed to cut or give pain
• GoneFishing says:
Villager, you do not comprehend the difference between a personal attack and a generalized statement. I have no expectations for you so you meet my expectations.
• R2D2 says:
Bullshit, you were directing your sarcasm at Dave Hillemann. You just did it behind your mother’s skirt. If you’ve got a problem with somebody’s statement, grow a pair and lay out your case. Your comment was just a cheap shot.
“man up”
“I have no expectations for you so you meet my expectations”
Careful, you’re going to fall off your high horse.
• GoneFishing says:
Well, you have just shown a proven ability to misinterpret, misconstrue and spew hatred. Having home problems or just not getting your prescriptions filled?
I was referring to a Harvard paper on the cognitive effects of CO2 which I had presented here at an earlier time.
But if you need to believe something else, then join the delusional masses.
Why have you named yourself after a star wars character that looks like a rolling garbage can and makes unintelligible beeps?
• R2D2 says:
“misinterpret, misconstrue and spew hatred”
FishBait, you’re the one who made the first sarcastic comment (refer to the definition). Now you’re referring to some study that you’re pulling out of your rectum from some past post, one that has no relationship to the prior conversation being discussed. Then you belittled me for not comprehending your now disclosed nonsense. You make less sense than Trump. I don’t know if you’re a conservative, but you’re stupid enough to be one.
• GoneFishing says:
The villagers are angry because they can’t understand even simple things.
• Oldfarmermac says:
I know plenty of liberals who THINK they are smart, and for the most part, they are, depending on the subject matter in question.
In some respects, they are as dumb as fence posts. In politics for instance they expect working class people to vote for politicians who are campaigning on globalism and (relatively speaking) open borders.
They believe they can piss and shit all over just about any cultural value held by working class people, and use sniffy nose in the air condescending language while they are doing it, and still expect working class people to vote their way.
They are often accomplished in math, but they can’t see something that is perfectly obvious, which is that there are more working class people in this country than any other class, and that the working classes INCLUDE most of the minority ethnic or racial population, and most of the various other politically sensitive classes as well such as gays and lesbians etc.
And after steadily losing power, politically, to the supposedly less intelligent so called conservative faction, until they are in the dog house, politically, quite a lot of them REFUSE to even consider the possibility that maybe they are in the dog house due to failing to understand reality at the fairly simple level of electoral politics.
Quite a few of these supposedly intelligent liberals are actually STUPID enough, when it comes to winning elections, to accuse the largest bloc of people they need to win of being racists, xenophobes, sexist, superstitious, etc, on a blanket basis.
And then they accuse anybody who tries to get them to THINK a little of being a Worm Tongue sent by the opposition.
Well folks, let this old redneck who likes to watch cars go around in circles while drinking beer with his friends remind these super smart liberals of a couple of things just one more time. In order to win at the track, you first have to FINISH the race.
In order to advance your grand agenda, which DOES include some absolutely critical and absolutely essential elements, such as protecting the environment, well you first have to WIN elections.
The Trumpsters aren’t interested in doing a goddamned thing for gays and queers and girls and boys who want to switch bathrooms. YOU can’t do a damned thing for them unless you are IN POWER.
If you want to return to power, well, you are going to have to adjust your politics to reflect the fact that MOST people are going to vote their own personal values, their own personal interpretation of the world as divided into two camps, US and THEM, and they are going to vote the way they THINK best represents their own economic interests.
Conservative people are not even half as stupid as liberals like to make them out to be.
Consider the issue of unions versus open shop or no unions for instance. The average or typical person who does not support closed shops understands a few things that pro union liberals are VERY CAREFUL to avoid mentioning when the topic is unions.
I will get back to this particular point later, the sun is out nice and warm, and I’m going out for a while to enjoy the spring sunshine and get a little exercise transplanting some walnut seedlings which will hopefully produce a few thousand bushels of super nutritious and tasty nuts over the next century or so.
In some respects, they are as dumb as fence posts.
Hell, I never met a conservative who thought differently. What conservatives don’t realize is that it has nothing to do with math or engineering. It is all about philosophy. Or more correctly, their philosophical world view.
And their view, that is the average conservative view goes something like this. (Though they may not agree with all the below, most conservatives would agree with most of what I have listed.)
We need to go back to the Bible. We should get God back into government. Prayer ought to be returned to public schools. Evolution is a myth. You can have my gun when you pry it from my cold dead hands. Queers ought to be shot. Amerka, love it or leave it. Professional wrestling is not fake. Most intellectuals are dumb as fence posts and they don’t know shit about how to make Amerka great again.
• Oldfarmermac says:
Hi Ron,
You are a prime example of precisely the sort of politically naive liberal individual I have been talking about.
I am not advocating defending the church, or god in government, or any of the shit you posted. You can search the whole internet, and you won’t find that I have supported even ten percent as many key Republican party policies and positions as I support in common with the D party.
I am talking about WINNING ELECTIONS, or at least not LOSING elections by unnecessarily insulting many tens of millions of people. ENOUGH of those people could be and would be voting YOUR way, rather than for the opposition, for the D party to WIN, if you would simply ditch the condescension, and the trash talk, and talk to them and about them respectfully while seeking common ground.
Holier than thou trash talking Democrats need to keep their goddamned mouths SHUT in certain respects if they want to win elections.
If you are trying to make friends with a new acquaintance or coworker, or trying to get a date, or get a JOB, or MAKE THE SALE if you are a salesman, well, you can bet your last can of beans that you do NOT have a snowball’s chance on a red hot stove of getting that date, or landing that job, or making that sale, if you GO OUT OF YOUR WAY TO INSULT THE WOMAN, INSULT your new potential employer, or INSULT that potential customer.
Why do you think things should be different in terms of winning votes ????
If you want the vote of the working class and people who take religion seriously, etc, you will find it helpful to display a little of the tolerance you (rhetorically speaking) supposedly hold sacred.
Even the dumbest hillbilly understands the meaning of the old saying that you can catch more flies with honey than you can with vinegar.
There are ENOUGH such culturally conservative and religious people in this country to elect so called conservatives aka REPUBLICANS such as Trump, and enough of them to elect Republicans to a substantial majority of all public offices nationwide.
That’s an incontestable fact, and there’s not a damned thing you can do about it, except wait for them to die, unless you are willing to think about winning them over to your side.
Keep up the UNNECESSARY and UNCALLED FOR condescending trash talk, and you can kiss the votes of ENOUGH of the culturally and politically conservative and middle of the road people nationwide good bye to STAY in the dog house.
Sure Clinton won the popular vote, but that’s JUST ONE election, whereas the R’s have been winning elections at upwards of twice the rates D’s have been winning, which is obvious from the fact that the R’s control so many state legislatures, governorships, mayors’ and sheriffs’ offices, etc nationwide.
It’s not at all unusual for a baseball team to do well one or two innings out of the nine while losing the game.
If you can’t comprehend that the culture war the liberals have been winning in the courts for the last couple of generations has been playing a MAJOR role in liberals losing ELECTIONS, to the point that the R’s now have firm control of government at every level, nationally, well, you just aren’t open minded enough to GET IT.
If you are incapable of understanding that if you want to win elections, then you must run candidates and pursue policies that are more appealing to the majority of the people than the candidates and policies the OTHER party runs and pursues, well, that LACK of understanding will KEEP you out of power.
Sometimes it’s NECESSARY to back off publicly supporting some portions of your overall agenda, and substitute portions of the oppositions agenda in order to win elections, and put MOST of YOUR OWN agenda into practice.
If you LOSE , you won’t be putting much of YOUR OWN agenda into practice.
I don’t know how to make it any plainer.
• Mac, let’s understand where I am coming from. I am not trying to win friends and influence people. I am not looking for converts. I have no need to spread honey over anything. Vinegar will do just fine, thank you.
Do you actually think I want to proselytize Trump supporters? First of all, I realize that my input is just a drop in the ocean and has little to no influence in the grand scheme of things. But most importantly I know it is impossible to convince Bible thumping ignoramuses to believe anything logical or scientific. Their world view was pounded into their heads while they were still children and it will never change.
Hey, all I am doing is telling it like it is. And if it tastes like vinegar then so be it.
• Oldfarmermac says:
Hi Ron,
MY POINT is that if you want to win elections, you MUST win friends and influence people to vote your way.
And while there are plenty of ignoramuses in this country, including some who are indeed bible thumpers, they ARE all entitled to vote, unless they’re convicted felons or something of that nature.
It IS possible to find common ground, and work with them, and get them to vote YOUR WAY.
The VERY FIRST STEP involved in doing so is to quit badmouthing them. Now if you are NOT willing to do that……… well, they will continue to vote against the D party, because until the badmouthing stops, they will never listen to any argument that can potentially result in their changing their political stripes.
I am sure you believe it is wrong to badmouth a person who is born physically deformed, or with less than normal intelligence, etc. And you are undoubtedly smart enough to know that we don’t choose our parents, or the station in life to which we are born. Either of us could have been born a black kid in Africa where we would as likely as not have starved or been killed in one of the many wars that plague that part of the world, or we could have been born with the proverbial silver spoon, and our surnames could have been Vanderbilt, or Getty.
SO, if you understand this, and I am absolutely sure you DO, where do you get off, how do you justify consistently and perpetually badmouthing your fellow citizens who happened to be born into a culture that has shaped their lives just as the culture they are born into shapes the lives of starving African kids and the lives of Vanderbilt rug rats?
Is it my old Daddy’s FAULT he was born to hard working and honest but barely literate parents, who taught him as best they could, to live right, according to what THEY knew themselves?
There was a ONE ROOM SCHOOL near here, back then, and life was pretty damned tough, and the one old woman who ran that school, single handed, and was a SAINT, if ever there was one, believed in the KJB. Daddy went to work, he never had a shot at a real education, nor did anybody else in this entire neighborhood, excepting the kids of the very small handful of wealthy people who lived here back then.
If you want to know WHY so many culturally conservative people are READY and EAGER to give culturally liberal people the middle finger, think about just how unjustified you are when you talk about people like my old Daddy the way you do.
He would give you the shirt off his back if you needed it, in a flash, although he doesn’t know you.
You are as guilty of stereotyping people for reasons of your own as any body from the opposite end of the political spectrum. Do you realize that?
Somehow I doubt you have even the foggiest idea what is actually taught in the VAST majority of churches these days, or not taught, because you focus exclusively on the worst case examples of religious people in term of their behavior and politics.
The adult lesson this last Sunday at the church where my folk are buried was NOT about denying evolution, or a flat earth, or any of that sort of dogma. It was about doing the right thing, when you run across somebody in a lot of trouble, thru no fault of their own. The lesson taught that you are not to turn your back, that you are not to be afraid of the person in need, that you WILL go out of your way, and spend your time and your money helping the person in need.
A few weeks back the lesson was about putting aside some of your earnings as savings, and living modestly, rather than extravagantly, and turning away from rich food, and strong wine , and that sort of thing.
The basic rules we live by, if we are civilized, are woven into every lesson, rules such as thou shalt not steal, etc.
I can’t even REMEMBER the last time I heard a preacher bring up the subject of evolution from the pulpit, although it happens. I can’t even remember a preacher saying anything against birth control, although it happens.
It never seems to occur to people like you that the various taboos enforced by most churches or religions serve very useful purposes, in terms of the smooth functioning of the lives of the members. If you don’t eat pork, you don’t get sick from eating it, and in times gone by, eating it was often a death sentence.
Not having extramarital sex solves ninety nine percent of the problems associated with sexually transmitted diseases. A woman who refuses to sleep with a man who does not wish to marry her doesn’t risk having to raise a child without the man around to help .
There’s a reason religions are pretty much universal across time and geography. They are evolved behavioral systems that confer FITNESS. The most fit individuals and societies tend to grow and eventually smother out competing societies. You know enough biology to know this is so.
Now having said all this, I recognize that modern societies, especially wealthy ones, have evolved in ways such that the role played by religion in times past can now be played, and IS played, frequently, by other institutions, and that in such highly evolved societies, religions tend to fade away, with nobody the worse in consequence.
Personally I am a hard core Darwinist, and believe in evolutionary theory, including the relatively new field of evolutionary psychology.
I don’t believe in eternal life, or Heaven or Hell, or any dogma as such, but otoh, I am not blind to the reasons why religions exist and persist.
I rarely set foot inside a church these days, except to attend funerals and weddings, but I still know what goes on inside, in general terms.
• The VERY FIRST STEP involved in doing so is to quit badmouthing them.
Fuck no I will not. They are dumb as dirt and I will continue to remind them of that fact.
They are dogmatic ideologues. They don’t listen to arguments…. period.
Those who know that their beliefs are founded in reason are willing to argue their way to victory and are willing to renounce opinions that do not survive such argument. Those who are aware that their beliefs are founded in faith, on the other hand, are unwilling to submit their beliefs to dispassionate discussion and do not expect to change their own beliefs ever. They are perfectly willing, if pressed, to resort to force to change other people’s beliefs by brainwashing children, persecuting heretics, and warring with “unenlightened” adversaries. Religious instruction manipulates the vulnerable psyches of young children before they are able to think for themselves, endeavoring to prevent them from ever acquiring this ability. They never attain an intellectual resistance sufficient to counter the influence of dogmatic precepts, to grow up as free individuals.
Bertrand Russell:
Human Society in Ethics and Politics.
• HuntingtonBeach says:
OldMacFarmer aka KGB says-
“aren’t interested in doing a goddamned thing for gays and queers and girls and boys who want to switch bathrooms”
Your statements make it clear you don’t have a problem with your behavior. You just don’t want to be called out for who you are. It’s not about your vote for Liberals. It’s about your mean selfish actions. Once conservatives understand they are uncivilized knuckle draggers, there will be no problem with Liberals winning the vote.
• Oldfarmermac says:
Hi HB,
It’s not at all uncommon to run across people who are so utterly wrapped up in their own righteousness and so intellectually blind that they are simply INCAPABLE of appreciating constructive criticism, whether presented diplomatically, or sarcastically, or in any OTHER fashion, but you are about as extreme an example of that sort of person as I have ever run across.
I DON’T go around insulting the working class people who are the real core of the Democratic voting coalition, calling them racists, xenophobes, superstitious, etc, on a blanket basis.
That’s the sort of talk you and countless holier than thou liberals, including some other regulars here, engage in on a regular basis, and then you go around blaming the people you talk about that way for voting for the opposition, repeating your insults once again.
Before the election, you accuse them of stupidity, and after the election, you blame losing on their stupidity.
What I actually SAID farther down was THIS.
WHAT YOU SAID, quoting me out of context so as to create the impression I’M a Trumpster: “aren’t interested in doing a goddamned thing for gays and queers and girls and boys who want to switch bathrooms”
Well, so long as the D party is out of power, it WON’T be doing much to protect gays and lesbians or transgender individuals or endangered species or what’s left of the natural world.
It takes a real idiot to deny such an obvious and simple observation.
It takes a partisan idiot, or cynical brazen hypocrite like you to take a few well selected words OUT OF CONTEXT and thereby try to make the speaker of those words look bad.
This technique works if you have a way of presenting your cherry picked, out of context quote to the public, when the person whose words you quote cannot reply due to lack of access to whatever media you use .
In a forum such as this one, it won’t work. I can repeat the words I wrote earlier, and RESTORE the context, leaving you looking like an even bigger fool.
And while I may not get many if any replies supporting the arguments I have been making here, well……. I have reason to believe that the LACK of counterarguments is evidence enough that I am getting thru to at least some people.
And it’s obvious enough, to anybody who cares to take a few hours reading the many sites devoted to internal D party politics that more and more big D Democrats are coming around to thinking the way I have been talking here.
I’m just repeating the message of this large and growing bloc of Democrats. This bloc is now large enough that the last election for party chairman was pretty damned close. The Republican Lite faction is on its way out, it’s just a matter of time.
This message is not my original creation by any means, and I have never claimed otherwise.
With a little luck, the rising Sanders and company faction will soon take control of the D party away from the declining Republican Lite faction that has controlled the party in recent times , and return it to its true roots, and start kicking Republican ass instead of the D party getting ITS ass kicked.
If my circumstances are such that I am able to do so , I will probably be out and about having a good time next election working the phones and maybe doing some door to door with some of the young people I met at Sanders rallies.
Being retired has some good points, such as being free to do what you please, once you have taken care of any personal obligations such as looking after family members.
But I will never go around pretending the D establishment is always right, and the R establishment is always wrong, or arguing that socially conservative and religious people do not have the right to believe in values of their own choosing.
• HuntingtonBeach says:
OldMacDonald aka KGB, Sanders had lost the election by mid March of 2016. But you insulted HRC every other day until Nov 8th by spewing Russian Conservative fake news. You’re a Trumpster, plain and simple.
“Even the dumbest hillbilly understands”
There you go, insulting millions. You’re a bigger conman than your Trump. You’re just a “stupid Conservative”.
• Oldfarmermac says:
Back to you one more time, HB
SURE I’m a stupid conservative.(SARCASM LIGHT ON FOR HB’S benefit. )
That’s why I post at least a couple of thousand comments a year in various forums in support of this country moving to a Western European style health care system, in support of pedal to the metal incentives for the renewable energy industries, in support of energy efficiency and conservation, in support of strong environmental rules and legislation, etc.
Please keep it coming, because you are helping me make the case that the D party needs to change direction to a significant extent in terms of what it places first and foremost in the overall party agenda.
Bankster buddies and globalism aren’t very effective arguments at a time when the people are scared for their jobs.
Sure people like you who have some money are happy when they can hire their yard work done for peanuts, because so many people are out of work now that the industry they USED TO work in has been shipped overseas.
But the catch is that such people generally fall into the class of people once described as “newly minted conservatives” by a conservative comedian or pundit. I don’t know who he was, but he nailed it.
His definition of a newly minted conservative is this: “a Democrat who has just been mugged”.
It’s as easy as falling off a log to believe in globalism for liberals who are government employees, retirees, welfare bums, professional people working in fields where their jobs are not threatened by immigrants or their industry being offshored, union members with seniority in industries that CAN’T be offshored, such as with the public utilities, etc etc.
But as for the REST of the people, academic arguments about “everybody ” EXCEPT THEM being better off due to globalism don’t cut much ice when they have already been shitcanned by globalism, or in FEAR of being shitcanned by globalism and immigrants taking their jobs.
The French have a saying about such thinking which goes something like this.
“Only a fool or an academic could possibly believe… (insert example argument).”
Only a fool or a liberal insulated from the effects of globalism and immigration could possibly believe it’s safe to campaign on globalism and relatively open borders at a time when people are worried sick about their jobs and life styles.
Now as it happens, I am not a hypocritical and cynical partisan, like you, and I don’t mind at all pointing out the TRUTH about globalism , when it comes to politics.
The Republicans are the real drivers of the globalism movement, in terms of American national politics. The Democrats, or more specifically the Clinton machine faction of the Democrats, are just “me too” Republican Lite types who are all too happy to take the banksters’ money, and corporate money, and sell out the people who are the core of the Democratic Party coalition, the working class people of this country.
And ENOUGH of them understand this reality, and gave the Republican Lite D HRC the finger in the states that put Trump in the White House.
Now WITHIN THE CONTEXT of this observation, it does not matter that Trump is a con man, who will happily sell out ANYBODY, and has habitually done so all his life.
He had sense enough to take advantage of the very real fears of these people for their jobs, and at least CAMPAIGNED on keeping the industry here in the USA.
Clinton was so unspeakably arrogant and condescending that she didn’t even put in an appearance in the last three big states where she in effect told generally reliable D voters to go fuck themselves, but please on the way don’t forget to stop off at your polling place and vote for me.
Now I realize that you and a lot of other hard core Democrats are incapable of understanding such simple observations, but there is no doubt in my mind that there are millions of others who have seen the light already, and millions more who will, in the near future.
• Oldfarmermac says:
Somebody here in this forum, I can’t remember who, recently posted a comment to the effect that Trump’s election would or might turn out to be the best thing that ever happened to the country, because it would finally force the people to realize what Trump type government is all about.
They may be right. In my opinion, there is an EXCELLENT chance they are right, and that the result of Trumps election is that there will be a powerful political backlash in favor of liberalism and Democratic Party politics, resulting in the return of the D party to power.
BUT BUT BUT BUT this observation is only likely to hold true if the D party evicts the Republican Lite faction that has controlled the party in recent times, and puts real Democrats and real Democratic policies first and foremost in future elections.
This is NOT to say that the D party should abandon very much of the current party agenda, other than the Republican Lite portions of that agenda.
The D party does not NEED to turn it’s back on gays and lesbians or transgendered people or racial or ethnic minorities, or the fight for strong environmental legislation, or any of the high moral ground it holds in such matters.
All it needs to do is understand that all the people mentioned in this comment will vote D ANYWAY, so long as they do not feel betrayed by the party. What other choice do they have, other than to vote R?
They’re on board, they’re in the bag, so to speak, down on the plantation, in the sarcastic language of the R party. They won’t be leaving the D party “plantation” because it’s their political home.
What the D party must do is put the economic security of the working class people of this country first and foremost in terms of selecting candidates and formulating policy, and communicating policy to the voting public.
OTHERWISE………. the R’s may well remain in power for quite some time, probably until the demographic trends in favor of the D party result in the D’s returning to power. In effect this means waiting for most of the boomer generation to die off, and be replaced at the polls by younger people who are on average more liberal than their grandparents and even their parents by a country mile.
• alan2102 says:
Oldfarmer: Off topic, but I just wanted to pass something along to you (and anyone else who might be interested).
You wrote, on an older thread:
Oldfarmermac says:
01/20/2017 at 10:44 pm
“I no longer have access to professional journals, which is a great disadvantage to me…. Hopefully within the next year or so, I will have my personal ducks in a straighter row, and get up to Blacksburg, and enroll in a course or two as a special grad student, which will give me access again”
You don’t need to enroll in courses. You just need to go to sci-hub.cc:
….enter PMID or DOI, and bingo! FREE FULL TEXT of just about anything academic. Works like a charm.
• Oldfarmermac says:
HI Alan,
THANK YOU!
I have heard this sort of thing mentioned, but was under the impression you must have a valid student id number to get access.
But now that I am more or less fully freed up to do as I please, excepting family obligations, I intend to enroll in a class or two anyway, because as an enrolled student, I can get an appointment to talk to just about any professor in the entire university during his student office hours.
Such face to face access is PRICELESS when you are researching a book.
• Fred Magyar says:
Well, you are definitely in the minority!
The whole world is laughing at the USA because of Trump! He is managing to destroy decades of hard diplomatic work and the USA is rapidly losing respect on the international stage!
Don’t forget that Trump lost the popular vote by about 2 million votes and was elected by less than 25% of eligible voters. Trump’s job approval rating stands at 37 percent with a whopping 56 percent of Americans disapproving of the job Trump’s doing.
• Songster says:
Hi Fred,
He won the election fair and square. Sorry, but your statistics really don’t mean much. He won. The electoral college exists for a very good reason. Would you prefer out and out North vs South or East vs West or Liberal vs Conservative with real weapons? Without the electoral college that probability is vastly increased (IMHO).
As to what the rest of the world thinks, again, so what? We have many times been very unpopular, even when doing the right thing. Their vote doesn’t mean much to me.
Trump is bizarre (and stupid about climate), for sure. But things did need to get shaken up, be it by Sanders or Trump or another (I voted another). The rotating Clinton/Bush door was really not working either.
• Fred Magyar says:
I take your points and agree that things were due for a shake up regardless and that, in and of itself was to be expected. Though Trump being stupid about climate is the least of my problems with him.
Claiming my statistics don’t mean much misses quite a few elephants in the room, one of which is that he and his administration actually have to govern. His approval ratings are the lowest of any president ever! Winning got him in but that was just a job offer. He is still in a probationary period and his performance so far has been very unimpressive with regard to actual governing.
As for suggesting that what the rest of the world thinks of us is unimportant, that is a very head in the sand kind of attitude. How the US interacts with the rest of the world has real consequences for all Americans.
• The electoral college exists for a very good reason.
Bullshit! The electoral college existed for a very good reason. Existed, in the past tense. In the early days when communication was poor and horse powered travel was the only kind of travel, the electoral college system was enacted to help solve that problem. Each state would vote for representatives to go to Washington and vote for them.
Then these representatives would all travel to Washington and cast their vote for President. But now in the days of instant communication, the electoral college system is an antiquated system.
• Oldfarmermac says:
Horse and buggy travel had quite a bit to do with early days political arrangements, no question.
But Ron is wrong about the REAL reason we have an electoral college, and why we have, IN ADDITION to the electoral college, TWO SENATORS per state, regardless of population.
The smaller states would not have been willing to enter into long term or permanent political alliances with the larger and more powerful states without the safeguard of the two-senators-per-state arrangement.
And while I am not especially fond of the electoral college myself, I don’t have any trouble seeing it as being a restraint on what can be or could be referred to as a runaway majority that might steamroller the interests of the people of some of the smaller or less economically powerful states.
I want to be clear about my opinion of Trump. He is the worst president we have ever had, by a mile, and I have a hard time thinking of any particular policy of his which I agree with, for the same reasons, or to the same extent.
He’s so bad I’m even just the tiniest bit hopeful he will have to resign, even with the R’s in control of Congress to cover his sorry ass.
But the people of the three big rust belt states sent the majority that voted for Clinton a MESSAGE. The message will most likely be forgotten, but it was clear enough. Piss and shit on us enough, and we will rebel, and even vote for the opposition, when you force feed us social change we aren’t all that interested in to begin with, and when you run a globalist bankster when we are worried sick about losing our jobs.
The electoral college and the two senators serve useful purposes, in terms of allowing the country to BE a country. They serve to protect the interests of people who are MINORITIES as measured or defined by the PLACE they live, or the cultural and economic class they belong to.
There is a real possibility that some states would actually try to secede if these arrangements were seriously questioned. Without them, there would not likely even BE a USA as we know it.
• Mac, the electoral college has absolutely nothing to do with minorities. We only had one minority when the electoral college was enacted, blacks. And at that time they were not even allowed to vote.
Also, minorities, of that time, had nothing to do with where you lived. Small states are better represented by two senators and representatives based on population. But the president is elected by the people, not by states. Or at least that’s the way it should be.
Oh good gravy, you know better than that. The last time some states tried to secede we went to war. It is impossible for states to secede though some right wing wingnuts give lip service to it.
• Hickory says:
OFM- someone loses however the system is designed. Currently the Wyoming population wins big at the expense of Washington. Why is the vote of someone in Casper Wyo more valuable than one person’s vote in Spokane Wa?
One person one vote is how a democracy should work.
I’m pretty sure if the country could vote on it in that manner, that is what the outcome would be.
Also, many people in states that did not vote for Bush or Trump (but nonetheless voted for the popular vote winner) would like to exit the union, and for good reason with a schmo like Trump as the so-called president.
• Oldfarmermac says:
Hi Hickory,
I don’t dispute your comment at all.
All I’m saying is that there are REASONS we have the political institutions we do, and that they are GOOD reasons, or at least they WERE good reasons at the time these institutions were put into place.
I am also saying that given the advantage the people in such states as Wyoming have now, due to the two Senator rule, they can be expected to fight like hell if anybody tries to change that rule.
Personally I do not defend the electoral college arrangement. I just recognize it as the fact it IS.
If it is ever on the ballot as a referendum question, I will vote in favor of doing away with it.
I have never given any deep thought to whether I would vote in favor of doing away with the two senators per state rule.
The arguments for doing so are compelling, but the arguments for retaining it are compelling as well.
It serves as a constraint on the majority forcing unwanted change on the people in smaller states, and without that constraint, they might actually decide to withdraw from the union.
They SIGNED UP according to the deal that gave them two senators, and they will be exceedingly unhappy about the deal being changed without their consent. It’s not likely people from Wyoming or Vermont will go along with giving up their advantage, lol.
And there’s a flip side, as well. Small states can morph from red to blue, giving the liberal D faction MORE power.
As a matter of fact, I have consistently argued that the inevitable passing of the two older generations, really old folks and the boomers, will result in the country turning sharply to the left, politically, because younger people are more liberal than older folks, on average , by a country mile.
So MAYBE if the two senator rule is maintained , it will actually work to the advantage of the liberal / Democratic party wing, in the not so distant future, depending on which states go from red to blue soonest.
A lot of people are in favor of allowing DC and maybe Puerto Rico to join the union as states. The odds are EXCEEDINGLY high that senators from DC would invariably be D’s and very high that senators from Puerto Rico would be D’s as well, in the opinion of the people in favor of statehood.
It would be interesting indeed to participate in a nuanced discussion of these possibilities, but this forum is not the place for it.
• Oldfarmermac says:
Hi Ron,
I am beginning to think that if there is ANY POSSIBLE WAY for you to misinterpret what I say, you will find it.
THIS is precisely what I SAID.
I didn’t fucking bring race into this argument, but you apparently saw the word minorities, and just fucking jumped to the conclusion you reached without even reading the REST of the sentence.
Now just about everybody in this country, excepting the hard core members of the right, thought Trump didn’t have a snowball’s chance in hell of winning the nomination, and then after he did, just about everybody thought he didn’t have a snowball’s chance in hell of winning the election, but he DID.
SHIT HAPPENS.
You may be unaware of it, but there are PLENTY of very liberal people who talk about the possibility of states such as CALIFORNIA seceding from the union.
The opposite extremes of the political spectrum could conceivably find COMMON GROUND in dissolving the union as we know it.
The rural states might just be happy as hell to tell the liberal states to get lost, we don’t WANT you if we have to accept your culture along with your money.
And the liberal states such as New York and California that have the money might be happy to say thanks, we’re fucking tired of supporting your sorry impoverished asses ANYWAY, and tired of your obstructing us in our efforts to rearrange the country and the world to suit US.
The odds of it actually happening are exceedingly slim, maybe so low as to approach zero, but there is nevertheless a possibility the country could fall apart politically.
And while we went to war over secession in the eighteen sixties, that war was primarily about just one overriding issue, slavery.
You may not realize it , but prior to the twentieth century, just about everybody in the USA looked at the union as an ALLIANCE of states, something along the line of the current day European Union.
If you asked a New Yorker previous to 1900 what he thought about the federal versus state issue, he almost for sure would have come down on the side of the states having just about all the power, and the federal government very little power at all, other than to secure the borders, run the post office, etc. And this was even AFTER the Civil War.
My opinion is that the odds of the USA surviving in its current political form for the easily foreseeable future are anywhere from a hundred to one to a thousand to one, or even higher, in favor.
But the flip side is that there might be a one percent possibility the country falls apart, politically.
• Hickory says:
OFM- we agree about the electoral issues by and large. Regarding the voting trends that you mentioned, I was surprised to find out that the current young generation (millennials) has a higher rate of right leaning voters than the boomer generation did at their age. Also, the country is slowly getting older, favoring the conservative vote.
On the other hand the country is becoming more multi-racial, which strongly favors the democratic party.
I am not confident in either party, since both spend about 95% of their mental bandwidth just fighting the other side, rather than crafting good policy without partisan concern, and both have to work in a faulty system that gives them the incentive to have only a very short time horizon on their goals and operations.
Low expectations. But also very low tolerance for tyranny, and racism and bullying (and ignorance).
• HuntingtonBeach says:
OldMacDonald aka KGB says- “Piss and shit on us enough, and we will rebel, and even vote for the opposition, when you force feed us social change we aren’t all that interested in to begin”
Spoken like a true racist and homophobe who doesn’t realize their hypocrisy.
• Oldfarmermac says:
HB does it again, he pulls a few phrases out of context to make me look like a villain or nincompoop in his 3/23/17 3:23 pm comment.
This is what I actually said.
How about that. What I said is about as far from what HB would like everybody to BELIEVE I said as the east is from the west.
Now as for where my loyalties lie, I AM an advocate and somewhat of a self appointed spokesman for working class people, having been born and bred and raised among them, and having many friends and relatives among them to this day.
I have never denied my origins, I BRAG about them, I am PROUD of them.
But I also have a SUPERB well rounded education, earned by the sweat of my brow, at three well respected universities, as a graduate of one, and an off and on grad student for decades at the other two, plus a couple of years of specialist training I got at community colleges, etc, plus I read serious books even in my old age at LEAST twenty hours a week.
HB , you don’t know shit from apple butter.
A person with real ethics does not defend one scumbag in preference to another. He exposes both for what they are.
Incidentally, just about all the major polls tell us that Sanders is by a mile the most popular politician in the USA today.
The Sanders camp will be taking over, and next time around I will be manning a phone with the kids in that camp, and providing rides, and helping register likely D voters.
This past time, I was too tied up in the house with family responsibilities to do much, other than donate a few bucks and get to a few meetings, and post stuff on the net.
• HuntingtonBeach says:
OldMacDonald aka KGB, Sanders’ failure to unite the party after his loss to HRC left the door wide open for the Russian fake news to divide the party. You’re just a fool who fell for the con and you won’t admit it.
You’re going to need that Obama insurance medical card when you go to the ER with your broken arm. You might also want to ask them to remove your Republican cancer from your soul.
• HuntingtonBeach says:
And yet when it came time to oppose Trump. You did nothing.
• Oldfarmermac says:
Of COURSE I’m a right wing ideologue.
That’s why I support strong environmental laws, renewable energy, single payer Euro style health care, etc etc.
I don’t support one ethical train wreck over another. I do what I can to get rid of both of them.
• Mac, if you are not a right wing ideologue, then I am not talking to you or about you. I am speaking only to, or rather about, right wing ideologues.
But that begs the question, if you are not a right wing ideologue, then why are you bitching so much about my posts, posts that have absolutely nothing to do with you.
• Dave Hillemann (Texan) says:
I don’t give a shit about “approval ratings” or what the outside world thinks of the USA. To do so would be like basing my electoral decisions on peer pressure. Do liberals do that? Maybe they do, considering how insistent they are about their politicians “being on the right side of history.”
• Survivalist says:
The Daily Donald. The stupidest man to ever occupy the WH. No question about it. It’s interesting that you find so much to admire in the man.
• Fred Magyar says:
Maybe you should! The US comprises only 5% of the world’s population. We are faced with global problems which simply can not be addressed at the national level. Humanity needs to cooperate at a global level to survive.
Trump and his administration are trying to go back to a time and place that doesn’t exist anymore.
I doubt a talk by someone like Simon Anholt, a political scientist and adviser to presidents and prime ministers of 54 countries during a career spanning more than two decades, will change your mind but who knows, it might just give you an understanding of why you might want to reassess your thinking.
Or maybe delve a little deeper and listen to a talk by Yuval Noah Harari: Homo Deus: A Brief History of Tomorrow
• Walt Seh says:
Good Evening Mr. Magyar,
When I saw your mentions of “global problems” or a statement that “humanity needs to cooperate at a global level to survive,” I sense that your thought processes may have been unduly corrupted by the globalists, ultimately COMMUNISTS, who have subverted (often by PROXY) many of the United States’ civic and cultural institutions. The United States isn’t the only place where this is happening though. It’s been happening throughout the entire western world, including here in Canada, which got an earlier start to the process in any event.
I’m not going to go into all the details about the communist subversion, because to even come close to understanding the FULL situation, one must undertake several YEARS of study and reflection.
On the other hand, I will guide you toward the book “1984” by Geo. Orwell. In it, he describes an image of a future world beset by out of control government espionage including thought police who seek to exterminate anything which runs counter to the government’s official narratives.
A lot of people write off “1984” as pure fiction, but the TRUTH is, it describes our current plight most correctly. How so? The key is to keep in mind Mr. Orwell could accurately describe life under a government usurped by communism because of first-hand experience with the Spanish Civil War of the 1930s. Once he saw what these forces of pure EVIL could get up to, he probably had an easy time writing a book about what life would be like for the rest of us under oppressive communistic tyranny.
Be well,
• Fred Magyar says:
Why are you assuming I have not spent several years of study and reflection trying to make sense of the world? And why the all caps for ‘YEARS’?
In any case you seem to have very little understanding of the present, let alone the future if you are still focused on communism and capitalism. We are in the 21st century and things have changed a bit. The vast majority of humans are quickly becoming useless. Hint, the very concepts of labor and capital might become meaningless in a world of AI, robotics and bioengineered humans.
Take 50 minutes and watch the talk I posted above, by Yuval Noah Harari: Homo Deus: A Brief History of Tomorrow
Then come back and tell me again how nationalism and 19th century isms are still relevant paradigms in the 21st century.
You suggested I haven’t spent years studying history, I suggest you have no understanding of the present and how things will change in the future.
Then again, I might just be wasting my time arguing with a bot.
• Nick G says:
Scott Adams is impressed by Trump’s ability to hypnotize people.
Sadly, Adams doesn’t seem to realise that sales & marketing skills alone aren’t enough to run a country. It works for a small real estate company, but…
Interestingly, the people around Trump are making the same mistake: they describe themselves as entrepreneurial. Well, any good investor knows that the entrepreneurial skill set is great when a company is new and small, but that it fails badly when a company gets large and complex. And I can’t think of any organization bigger and more complex than the US government.
• Survivalist says:
Make America Great Again™. Trump and his stagnant ideals. Crisis cults serve up illusions of recovered grandeur and empowerment during times of collapse, anxiety and disempowerment. Crisis Cults echo xenophobic ideology and seek to magically recover a pure mythologized past. A past that most certainly, in reality, did not exist anywhere, ever.
• Fred Magyar says:
Well right now we seem to have an incompetent land lubber who is at the helm and he has pointed our bow directly into the wind and our sails are luffing. We need to tack!
• Oldfarmermac says:
Hi Fred,
I totally agree about the lubber at the helm, but I’m afraid the only way we are likely to get rid of him and his kind is for the opposition to adjust THEIR sails so as to win future elections.
We could talk forever about the faults of the R party in general and Trump in particular, but criticizing the people who voted for him will only harden their resolve to vote for him, or someone like him, next election.
It’s one thing to talk abstractions such as the pro’s and con’s of free trade or globalism, or the dangers of runaway climate. These things are real enough, to people who are knowledgeable, but they are NOT the things that motivate them to vote one way or the other, except among the elite few who are better educated.
The average man or woman on the street is going to vote his or her own perceived best interests.
In terms of winning elections, it doesn’t even matter if that man or woman is deluded. He will vote his PERCEPTIONS, and so will she.
In order to get their votes, the D’s have to run candidates on platforms that reassure them they are more important to the party than banksters, corporate executives, folks who have millions in the stock market, etc.
Will the D party adjust its sails to accommodate the realities that led ENOUGH people to vote for him to put him in the White House?
The R party has ITS sails adjusted to suit the R agenda, and doesn’t NEED to change, in order to keep winning, at least in the short term.
The D’s are losing and losing BADLY. They have been losing for a pretty good while now.
Most people who spend time studying politics seem to think the R’s are going to mop up again in the mid terms.
The D’s need a new game plan, and new players, or to at least shake up their roster.
• Fred Magyar says:
OFM, I think we need a couple of real alternative political parties. This Remocrat and Depublican schtick just isn’t cutting it anymore.
Though I have to admit a fair share of schadenfreude watching Paul Ryan having to admit defeat on Obamacare’s repeal. To be clear, I am not a fan of the ACA and believe we should be on a single payer system. I was with Bernie Sanders on that one.
There are still millions of Americans who even now do not have health care. Having ACCESS to health care is just bullshit…
I think we are ripe for some serious political disruption here in the US.
Case in point: In Brazil for example, basic health care was written into the constitution because it was considered a basic human right. If you are poor it is free. If you have money you can still get five star service at private clinics.
I have a good friend who is one of the top hepatic surgeons in Brazil. He is the head of the department of surgery at a state university and heads the university hospital there. He also owns a couple of private clinics as well. I have gone on rounds in the hospital with him and met his staff. They are all top-notch surgeons and also compassionate human beings.
I think the American political class needs to seriously reassess what it means to be human.
• Songster says:
Hi Fred,
I agree on needing additional political parties here. I tire of the continuous tit for tat arguing here and in the country in general. Let the extreme right and extreme left have their own sanctioned parties. Both of their views, generally, are not helpful. I want neither unlimited spending, illegal immigration, un-needed military adventures, ridiculous government regulations, nor denial of science.
As to healthcare, single payer, for me, would seem to be the best alternative. But we would also need to control immigration and military costs, and keep other general budget items in check, as it will be expensive, thus causing something else to be cut, or we continue to debase our currency more than we have done so far.
I am hoping that all of the current uproar in our government will lead to some of these changes.
• Oldfarmermac says:
Hi Fred,
You and I are in the same book, at least to the extent we both believe our two party system needs a major overhaul, at the very least.
Once in a while we might even be on the same page, lol.
No matter how hard I try, I just can’t see things improving politically, barring plain old good luck, such as Trump having to resign, unless the D party adjusts its priorities along the lines advocated by the Sanders camp.
There is little to no real hope the liberal camp can convince the conservative camp to change its world view, its values and morality, and that camp is numerous enough that the D’s must win over some of them, and a bountiful harvest of independent or middle of the road voters as well, to return to power.
If liberals and big D Democrats approach these needed voters diplomatically, and refrain from pushing their hot buttons to the extent possible without compromising core D party values, they can find plenty of common ground.
I will have more to say about how to find this common ground later. There are things I want to do before it gets dark. If I weren’t old, I could have finished the entire day’s planned work by noon. 🙁
There are a number of extremely compelling reasons we should have a single payer health care system, but preferably with the option of opting out, and going to a pay for service doctor or hospital if you want to, because without that option, the people with a lot of money are far more likely to try to sabotage the system.
Now as far as the ACA aka Obamacare is concerned, I consistently said back during the discussion of it and the struggle to pass it, that it would be a short term disaster for the D party, but that it would also be a clear long term winner.
My opinion is that my prediction was basically a good one, because the ACA sure as hell REALLY pissed off a LOT of people who not only lost good insurance policies mostly paid for by their employer, or by the small companies they owned, but were also forced to pay what they considered outrageously high premiums for something they never asked for and didn’t want. On top of that, the D’s had the GALL to tell them that the premium wasn’t, or isn’t, a tax. I understand the TECHNICAL argument that the premiums are not taxes, but you can take it to the bank that the average person on the street considers any money he must spend because of a government mandate to be tax money.
And while most people might disagree that the ACA was deliberately written to reward D voters and punish R voters, there is no doubt at all that R voters who were compelled to buy ACA policies believe the ACA was written with this dirty trick in mind.
The R’s who had to buy policies at high prices tend to be high earners, compared to D’s who were also compelled to buy high priced policies, because a hell of a lot of government employees, union members, health care professionals, and other professionals are D’s who continued to get their policy thru their employer. A much higher percentage of high earning R voters seem to be self employed as small contractors, small business owners, etc.
I know some of that sort personally who are STILL furious that they were compelled to buy policies that put them in the position of subsidizing people they very often consider trifling worthless loafers.
Now all I need do to get flamed is point out that there ARE a LOT of trifling, worthless people in the USA who are quite happy to work as little as possible, and game the system to whatever extent they are able. But it’s still true, and I know some of that sort personally. I should point out that I also know quite a lot of people, at least four or five times as many, who are not well off but who are hardworking, deserving individuals who SHOULD have tax paid health care provided.
The fundamental mistake the D’s made in writing the ACA is that they should have put the bill on the government tab, rather than on the individual shoulders of voters who will NEVER forgive them. I know at least two or three of that sort too. One is a very liberal, very young professional woman, a second cousin, who just recently graduated from a so-called local “public ivy” and landed a hot job making great money. She will never forgive the D’s for the loss of the policy her employer provided before the ACA was passed. She suddenly found out that IN EFFECT she was hit with a mid to high four figure unexpected tax bill.
I believe in markets in general, and in free markets, most of the time, when they actually exist, but over the years the health care industry in this country has captured the regulatory and political authorities that were PUT IN PLACE to regulate the industry so as to better serve and protect the public. Now these same authorities protect the industry from the people to as great or greater an extent than they protect the people from the industry.
A few days back a dozen plus Democrats voted with the Republicans to block a bill proposed by Sanders and another senator or two that would have, if it were enacted, forced the drug companies to negotiate the price of drugs sold to the federal government. That’s the sort of behavior I call Republican Lite.
There’s simply no reason at all that we should have to pay megabucks for health care so physicians can drive super cars and otherwise live the life of Reilly, or that we should have to pay two or three or even sometimes ten or more times the price for the same drug that people in other countries get at such a huge discount because THEIR government tells the drug companies what they will pay for any given drug, and if the drug company doesn’t accept that price……. Well then, they don’t sell that drug in that country. It’s no surprise that they generally accept the offered price, is it?
I agree, health care should be dealt with as a basic human right, although I would like to figure out ways to force people to do the right thing for themselves, or bill them for their care.
As I see it, an adult who has been told repeatedly that some particular bad habit is going to kill him, or make him VERY sick, and cost the country a million bucks to treat him, should have to pay out of his pocket to the limit of his resources for treatment, if he persists in his reckless behavior.
I used to ride motorcycles. In the event of a serious accident, I would have had to pay for my own treatment. I gave up motorcycles many years ago, after one too many near accidents, lol.
But I have friends who ride, and some of them ride recklessly. WHY should I live a low risk life style, and be forced to subsidize their HIGH RISK life style? That’s a HARD question, and convincing answers are scarce.
But overall, single payer health care means we get twice the health care per dollar, and that’s reason enough. What I lose subsidizing bums and willfully high risk individuals, I will get back twice over by forcing doctors, hospitals, and drug companies to really compete for my health insurance dollar, and by eliminating a lot of more or less parasitic middle men who add little or nothing of any value to the end user, the tax payer and patient.
2. wharf rat says:
Snowpack levels have increased significantly from the near-zero levels measured in the Sierra Nevada Mountains in April 2015. As of March 21, 2017, the California Department of Water Resources reported that statewide snowpack was 158% of normal for that date. A more important metric when considering snowpack is the snow water equivalent (SWE)—the total amount of water contained within the snowpack. California’s SWE levels have noticeably increased this year, and as of March 21, the California Department of Water Resources reported that the statewide snow water equivalent was also 158% of average for that date.
• GoneFishing says:
Looking at the rainfall records, the heavy rains come every five to seven years with droughts in between.
3. clueless says:
If a person wanted to escape global warming, it appears that they could move to Fairbanks, Alaska.
• GoneFishing says:
Last winter was just the opposite with much higher temps than normal.
• Fred Magyar says:
Do you ever bother to actually fact check any of the stuff you post? You do of course realize that the further north you go the faster things are warming up and the greater the temperature swings will be with all of the ensuing feedbacks, right?
Over the past 60 years, the average temperature across Alaska has increased by approximately 3°F.[3] This increase is more than twice the warming seen in the rest of the United States. Warming in the winter has increased by an average of 6°F [3] and has led to changes in ecosystems, such as earlier breakup of river ice in the spring. As the climate continues to warm, average annual temperatures in Alaska are projected to increase an additional 2 to 4°F by the middle of this century.[3] Precipitation in Alaska is projected to increase during all seasons by the end of this century. Despite increased precipitation, the state is likely to become drier due to greater evaporation caused by warming temperatures and longer growing seasons.[3]
4. Survivalist says:
Scientists have discovered as many as 7,000 gas-filled ‘bubbles’ expected to explode in Arctic regions of Siberia after an exercise involving field expeditions and satellite surveillance, TASS reported.
• GoneFishing says:
They need to pop all those pimples and set them on fire. That way it is only CO2 not methane. I bet the mythbuster guys would love that opportunity.
• Survivalist says:
We’re likely to see a heavy CO2 release once that tundra duff layer starts getting a regular wildfire season.
Found this article interesting too.
And this on blackening of Greenland ice sheet
• GoneFishing says:
As far as the methane pockets are concerned, lightning will do. However, the gas is much more likely to be released as methane than burned. Also, bogs and many ponds and lakes release methane across the planet. Now that the permafrost is melting, that same biological action will occur across vast areas.
Yes, changes across much of the snow and ice packs (not just Greenland), due to soot and algae, are lowering the albedo of the planet and causing melt in many cases. Soil also can get blown long distances.
• Max Gervis says:
If we can believe the scientists really just recently discovered these things, then how can we say they haven’t existed for hundreds, thousands, or even millions of years, all the while ready to “go off” at a moment’s notice, regardless of man’s activity on the planet? Though I can also see a case where the scientists actually discovered these gas bubbles long ago, but waited until a convenient political moment to tell the public, like how businesses and governments time the delivery of good or bad news.
• GoneFishing says:
It was a lot more fun when people believed in UFOs. These conspiracy theories are just plain boring and lame.
• Oldfarmermac says:
UFO’s ARE literally real. I have seen some myself, and know a few other people who have seen some as well. The definition, the simplest possible one: something seen in the sky that is not an ordinary aircraft.
I believe in Little Green Men as well, although I am ready to bet the farm against a stale donut that there won’t be any coming here from Mars, or any place else, at least not within any time frame meaningful in terms of human life span.
As big as the known universe IS, it seems very likely to me that there are also men (sentient bipeds with two eyes up top, etc.) of many varieties and colors SOMEWHERE. Also some that look more like octopi, or maybe elephants or raccoons, or dragons.
Whatever it is that people see, they do actually see something.
I once parked my truck and hiked up a hill to get a better look at some lights I kept seeing in the sky, and got to a place I could see what was making the lights. Somebody had run their Jeep off a private mountain road, and the headlights were pointed almost straight up. With the air crystal clear, I couldn’t see the beams, but when the light struck some low level clouds passing over, I could see the clouds lit up in spots. They were turning the lights on and off as needed to save the battery while trying to winch it back on the road. I got an unopened fifth of one o one Wild Turkey as a thank you for pulling them out with my truck, lol.
There isn’t any point to this comment, or any moral, and anybody guilty of trying to find one will be sued for slander, other than to point out that there ARE still some things to be seen that are not always easily explained.
I once saw three suns in the sky from my front yard, the usual one in the middle, and one on either side, and of COURSE I didn’t find a camera in time to get a picture, which I was sure would make me famous.
This was pre internet days, and I couldn’t convince a soul I saw what I did, and at the time, I wasn’t able to research it, and eventually forgot about it.
Now of course even back then I knew this had to be a mirage of some sort, but it was a couple of decades later, after the arrival of the internet, that I finally learned something about this particular optical illusion. Not more than maybe one person out of a million will ever actually see it, because the weather has to be EXACTLY right, and it seldom lasts more than a few minutes.
Another time I was out working in the sun with a thunderstorm threatening, and all at once the sun seemed to get to be twice as bright and hot as usual. I looked up and was instantly temporarily blinded by the glare. This perceived extra hot bright sun lasted maybe a minute or two.
I have never heard of any thing quite like this happening to anybody else , but what I think happened is that for that minute or two, the sun happened to be shining directly on me, plus a thundercloud by accident was shaped and located in such a way that it worked like a mirror and reflected the additional light and heat on the spot I was working.
If I can find anybody with money who will pay for an expedition to find the edge of the Earth, I’m ready to sign up. Sounds like a great way to see the world on somebody else’s money to me, lol.
I guarantee that if we actually do find the edge, I’m dumb enough to stand right on it, and take pictures to prove it.
• GoneFishing says:
Yes Old Farmer, and there are sometimes glowing red pillars in the sky also.
Side lighting from clouds is a well known and measured phenomenon.
Even weirder, I have seen trees melt snow.
You and others might be interested in the International Cloud Atlas (which also covers meteors).
and with photos
• R2D2 says:
An important source of radiant heat is the sun, or solar radiation. Tree trunks are dark and so absorb much of the sun’s energy that falls on them. … On the other hand, snow around the base of trees absorbs much of the energy emitted by the tree trunk near the ground
• Oldfarmermac says:
Glowing red I have seen many times, but never as vertical pillars. Horizontal strips, or pancake-like shapes, would be more like it, and a suffused red glow all over the horizon as the sun sets is so commonly seen hardly anybody even notices it.
From finally running across this data on the net, I know now that if there are layers of near transparent or diffuse clouds, composed mostly of minute ice crystals and situated in just the right fashion, light can be refracted and focused in such a way as to produce the illusion of the triple sun.
Are these red pillars ever seen from the southeastern USA to your knowledge?
About that standing right on edge to take pictures, well on second thought, I would be afraid to do it, or even get within a mile or two, because all the water running off would make Niagara Falls look like a drippy faucet, lol.
Anyway, we have a good scientific explanation about how the world sits on a giant tortoise’s back. When asked what the tortoise sits on, some proponent of that theory, after a few minutes’ thought, explained that it’s tortoises all the way down, lol.
• Hightrekker says:
What if the speed of light is the governor that it probably is?
• OldFarmerMac says:
UFO’s ARE literally real.
I believe in Little Green Men as well,
Why am I not surprised?
Absolutely. However most people who believe in UFOs have no concept of the distance between stars or of how space ships propel themselves. In space the only way to gain momentum is to throw something else in the opposite direction. (No, we have not figured out how to violate the basic laws of physics and we never will.) Rockets throw spent rocket fuel in one direction and therefore they move in the opposite direction.
And the speed that you gain depends entirely on the mass of what you throw and the speed that you throw it. If you could throw fuel, or whatever, equal to your own mass, at one tenth the speed of light, then you would move away from that fuel at one tenth the speed of light. But you would both move, in opposite directions, from that point, at one twentieth the speed of light. (Okay, you would have to throw it a little bit at a time, but the principle is still the same.)
At that rate it would take you about 200 years to reach the nearest possibly habitable planet, about ten light years away. Then you would have to do the same thing just to slow down, that is, throw something equal to your mass in the opposite direction. Then you would still need to have enough fuel left to land on a habitable planet, if you were lucky enough to find one there.
Understand that something travelling at one tenth the speed of light would go three quarters of the way around the earth in just one second. Reaching that speed, even if we use tiny nuclear bombs to do the propelling, would be impossible. We can only hope to travel at a tiny fraction of the speed of light. It would take us thousands of years to reach the nearest possibly habitable planet.
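For what it’s worth, both of those figures check out on the back of an envelope. A minimal sketch in Python (rounded constants, relativistic effects ignored, so treat it as indicative only):

```python
# Back-of-envelope check of the two figures above (non-relativistic).
C = 299_792_458.0               # speed of light, m/s
LIGHT_YEAR = 9.4607e15          # metres per light year
YEAR = 3.156e7                  # seconds per year
EARTH_CIRCUMFERENCE = 4.0075e7  # metres

# Equal masses separating at 0.1c relative speed: each moves at 0.05c
# away from the starting point (conservation of momentum).
v_ship = 0.05 * C
trip_years = 10 * LIGHT_YEAR / v_ship / YEAR
print(f"10 light years at 0.05c: about {trip_years:.0f} years")   # ~200

# How far around the Earth does 0.1c carry you in one second?
fraction = 0.1 * C / EARTH_CIRCUMFERENCE
print(f"0.1c covers {fraction:.2f} of Earth's circumference per second")  # ~0.75
```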
Little Green Men would have to do the same thing. That is unless they have figured out how to overthrow the laws of physics. And I seriously doubt that.
The distance between stars, especially stars with earth like planets, is just far too great to ever hope to travel to them.
One more point. If a civilization ever figured out how to travel at speeds approaching the speed of light, and they encountered a dust particle the size of a grain of sand, it would create an explosion that would blow them apart.
• Oldfarmermac says:
Back atcha AGAIN, Ron
I cannot possibly believe you read my comment as indicating that I believe UFO’s are space ships with little green men in them.
What I said is that it is perfectly obvious that people do see things in the sky that are not easily identified, and that these things are obviously not ordinary aircraft.
My point, such as it is or was, is that people are prone to just go around making fun of other people for no good reason, for instance implying that anybody who has SEEN a ufo is an idiot or nut case of one variety or another.
And while I believe that the odds of us or anybody else discovering new physical laws and principles that will make interstellar travel possible are close to zero, you can spend some time hanging out in forums where theoretical physicists discuss such matters, and you will find that some of them do believe that such discoveries ARE POSSIBLE.
But I’m not betting the farm on it, lol.
Here’s another possibility. Somebody somewhere may be building space habitats so advanced that they can live in them indefinitely, recycling every last milligram of waste into renewed resources, and on such a scale that they need not make landfall more than once in a VERY long time, to renew their supply of anything running short.
Interstellar hydrogen is out there, a few molecules in every cubic meter, and with the right tech, a fast moving ship could sweep such hydrogen into a funnel and use it to run its engines, assuming the mastery of fusion power.
It might take a century to accelerate such a ship up to a tenth of light speed, but she would COAST all the way here once accelerated, and only have to fire her engines to SLOW down and make landfall, starting deceleration a century ahead, of course. A voyage of ten thousand light years might be considered routine among some alien species, just as the former migratory life style of some human societies involved people traveling great distances, with some of the old folks dying, and some babies being born, routinely, on the way.
Likewise it may be possible for such a ship to use reactive armor to protect itself from any particle in its path. Such armor could possibly be deployed many, many miles in advance of the ship itself, using remote controlled smaller ships.
I am not saying such ships actually exist, but it pays to avoid being dogmatic about physical reality, given that it’s impossible to know what MIGHT be built, discovered or invented in the future.
You have often said you believe it is impossible to scale up the renewable energy industries to the point they can shoulder the load currently borne by the fossil fuel industries.
I used to believe you were correct.
You may be right about that, but a lot of people smarter than I am believe it’s possible.
I changed my mind after seeing just how fast the technology of renewable energy has progressed over the last decade or so, and coming to appreciate how fast new renewable technology is likely to arrive over the next few years and decades.
The fact that I know more than a little about how well we can live without consuming vast amounts of energy also helped me change my mind.
Barring it burning down, the house I live in will last at least a couple of hundred years, and it might last five hundred years, if it’s kept dry and free of leaks and rot and termites, etc.
And most of the furniture in it will last more or less forever, lol. None of the good stuff has ever been in a box, none of it has a brand name on it, none of it has any materials in it other than good domestic hardwood, brass and steel hardware, some glass, and some wax.
I personally believe the renewable energy transition will happen, if we stay after it, and get the transition far enough along before our one time endowment of non renewable resources is depleted to the point that finishing the transition becomes politically and economically impossible .
• Fred Magyar says:
It is one way but not the only way! A space craft can literally sail on solar winds.
Given enough time, a spacecraft equipped with a solar sail can eventually accelerate to higher speeds than a similarly sized spacecraft propelled by a conventional chemical rocket.
“A sail wins the race in terms of final velocity because it’s the tortoise and the hare,” says Les Johnson, the Technical Advisor for NASA’s Advanced Concepts Office at the Marshall Space Flight Center. A chemical rocket provides tremendous initial thrust, but eventually burns up its fuel. “Since the sail doesn’t use any fuel, we can keep thrusting as long as the sun is shining.”
Photons Rule! 😉
Mark Twain
• Photons Rule
Well no, they do not. Fred, I have known about solar sails for years now. After all, I worked at NASA for the last 17 years of my career. Though I was in computer science myself, I rubbed shoulders daily with rocket scientists. No one ever took solar sailing very seriously. That is because for interstellar space travel it would take sails many thousands of miles in diameter.
From your link:
The continuous thrust provided by sunlight hitting the solar sail will accelerate the probe to an impressive 63,975 mph (28.6 km/s) relative to the sun.
Okay, do the math. If the nearest possible habitable planet was only 10 light years away (and that is being grossly over-optimistic), then it would take that spacecraft only 104,666 years to reach that planet.
No, solar sails are not a viable option when it comes to interstellar space travel.
And even for interplanetary travel, sails have a serious problem. There is no way of slowing them down or even changing course without jettisoning their sails and resorting to rocket fuel. That is just one of the reasons that solar sails have never been taken seriously by NASA.
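That 104,666-year figure is easy to reproduce. A minimal sketch (Python, rounded constants, assuming the sail simply cruises at the quoted velocity the whole way):

```python
# Check the solar-sail travel time quoted above.
MPH_TO_MS = 0.44704      # metres per second in one mile per hour
LIGHT_YEAR = 9.4607e15   # metres per light year
YEAR = 3.156e7           # seconds per year

v = 63_975 * MPH_TO_MS   # the quoted sail velocity, ~28.6 km/s
years = 10 * LIGHT_YEAR / v / YEAR
print(f"{v/1000:.1f} km/s -> {years:,.0f} years to cover 10 light years")
# roughly 105,000 years, in line with the figure quoted above
```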
• Fred Magyar says:
I wasn’t talking about interstellar space. I was just making a comment that rocket propulsion is not the only possible way to move a spacecraft.
Furthermore I think any discussion of interstellar space travel by humans anytime within our or our descendants’ lifetimes is moot. It ain’t gonna happen!
Though I did not work at NASA myself, I used to sell a high end scientific graphics software package back in the day and had many NASA scientists and engineers as customers, even a few rocket scientists. So I do have a pretty good idea how those guys think!
BTW, I know this is the internet and sarcasm often fails even when we employ winky faces, but my italicized “Photons Rule” followed by the quote from Samuel Clemens was intended to be somewhat tongue in cheek and yes it was a little dig at your absolutist comment that jet propulsion is the only way to move in space…
• Oldfarmermac says:
For now, fusion power is still theory, and my guess is that it will still be theory decades from now , even in a large stationary power plant on the ground.
But a fusion engine running on the hydrogen molecules that could be scooped up as the ship travels could exhaust its own burnt fuel, and so you would basically have a rocket engine that COLLECTS or HARVESTS its own fuel, the same as grazing a horse back in the days of traveling into wilderness areas.
Oh my goodness. You know that would not work. The hydrogen you are capturing is at rest compared to the speed of your space ship. Capturing it, and bringing it up to the speed of your spaceship, slows you down. Then expelling it, or rather the fused helium atoms, speeds you right back up again. Nothing gained.
Anyway, fusion just produces heat, that’s all. How do you suppose they would convert this heat to thrust that would propel those helium atoms backward?
You haven’t given this much thought have you Mac?
• notanoilman says:
“You haven’t given this much thought have you Mac?”
Neither have you. Fusion would work about the same as a rocket engine or, if using a Bussard Ram, a jet engine. Your arguments would prevent a jet engine from working too. 😉
• @notanoilman
Notanoilman, you are the one who has not done his homework. You have just not kept up on your science fiction since the Bussard engine was proposed. And the Bussard engine is, and always was, science fiction.
Bussard ramjet
That’s exactly what I said: the drag would be equal to the propulsion. Though the scientists in this article concluded that the drag would exceed the propulsion.
You wrote: Your arguments would prevent a jet engine from working too.
No it would not. There is absolutely no comparison. A jet engine uses fan blades to dig into the air stream, much like a prop driven plane does. The blades actually pull it through the air. The air is then compressed and jet fuel is injected into it. Then the combustion expels both the air and the spent jet fuel, giving it thrust. (Actually the oxygen in the air and the fuel are converted to CO2, CO and water vapor, but the principle is the same.)
• notanoilman says:
I have no desire to get into a long to and fro on this but FWIW.
Yes, I am familiar with the problems of the Bussard Ram. But a ramjet engine has no fan blades to dig into the air. It does not need to rely on the expulsion of fuel residues to create thrust; very hot air would do the trick, thanks to Charles’s law.
Fusion would rely on expelling the plasma at a very high temperature and, from that, a very high velocity. Yep, just helium coming out, but VERY fast, quite an ISP. The NERVA fission engine worked on the same principle, with hydrogen as the working fluid.
But none of this detracts from it being very impractical with our current level of technology.
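To put rough numbers on “helium coming out VERY fast”: here is a back-of-envelope sketch assuming the exhaust leaves at roughly the mean thermal speed of the plasma. The 10^8 K temperature is purely an illustrative assumption, and nozzle losses are ignored, so this is an upper-bound sketch, not an engine design:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_HE = 6.646e-27     # mass of a helium-4 nucleus, kg
G0 = 9.80665         # standard gravity, m/s^2 (specific-impulse convention)

T = 1e8              # assumed plasma temperature, K (illustrative only)
v_exhaust = math.sqrt(3 * K_B * T / M_HE)  # ideal mean thermal speed
isp = v_exhaust / G0

print(f"exhaust ~{v_exhaust/1000:.0f} km/s, Isp ~{isp:,.0f} s")
# roughly 790 km/s and ~80,000 s, versus ~450 s for the best chemical engines
```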
• Songster says:
Sure Max, world-wide, all the climate scientists were just waiting for this moment! OMG… I should have seen it sooner. /sarc
5. wehappyfew says:
Some of the Sea Level Altimetry products have been updated. We are now past the bump from El Nino and had a small La Nina, yet the sea level rise persists.
Acceleration is evident.
• Rick's says:
This El Niño and La Niña stuff is quite interesting. That is because I can recall, back in my USAF days, hearing about all these weather control experiments being done by the US, West Germany, the Soviets, and Red China. But then suddenly, around the time the Berlin Wall fell and the USSR collapsed, it seemed like all those reports stopped, while at the same time the scientists came up with the words El Niño and La Niña to describe a new phenomenon they were witnessing.
Gets me wondering if the 2 are actually connected, in that the results of the attempted weather control are the reason for the phenomenon. Thinking economically, it would’ve made sense back in the cold war for a country to warm up the temperatures of its own country while cooling down ‘the enemy’ on the other side of the world. But because we had both halves of the world trying to do this type of thing at the same time, I can see where they could create a condition involving a pendulum swinging back & forth between warm & cold temperatures. In other words the cold war weather control projects could create the El Niño La Niña system the scientists discovered in the 1990’s.
• Fred Magyar says:
No Rick, it’s the little green men from Mars. They are preparing an invasion and need to make some changes to the planet’s climate… No worries!
• notanoilman says:
The term “El Nino” goes back to 1892 or earlier, a little before the Berlin Wall was even thought about, let alone falling.
• GoneFishing says:
I wonder at what rate, and when, the acceleration of sea level rise will level off. 20 mm per year, 25 mm/year, or will it keep climbing as more inland ice has to melt?
Muddling the answer is the fact that the process-based models of the IPCC show half the sea level rise of empirically based estimates derived from past temperature/sea level relationships. Even the empirical estimates may be low, since temperature generally rose significantly less rapidly in the past than it is rising now.
• Hightrekker says:
SST anomalies off of west coast of Peru/Ecuador (Nino 1+2) are at their warmest levels since October of 2015. Currently +2.6°C
Can you say El Niño?
I know you can!
We are sailing into uncharted waters——
• Andy Fishburn says:
My opinion: now might not be the time or place to bring the IPCC’s epic fail predictions and models into the debate. The same can be said for all the misleading and manipulated left-wing narratives coming from those predictions and models. Don’t get me wrong, climate change is real, but we are overdue by now for another miniature Ice Age, not a Venusification of Earth. Look at it this way: men are liars, women are liars, but the sun always tells the truth…especially the sun spots. I look at the sun each and every day. What I and the experts see is a decline in sun spots in a way not seen since the 1600s. That is one of the sure signs of coming global cooling, not warming as the mainstream scientists wish you to believe!
• Fred Magyar says:
Absolutely! Which is why I, living in the greater Miami area, have been stocking up on salt, snow shovels, and bought a new snow blower…
• George Kaplan says:
Where’s the flat earth parodist when we need him. There’s some great new material being posted here now.
• GoneFishing says:
He fell off the edge. 🙂
• Fred Magyar says:
Yeah, LOL!
If that ain’t comedy gold, I don’t know what is…
• Songster says:
You “look at the sun each and every day”. And just what is your occupation? I mean besides bullshi**ing others, that is.
• George Kaplan says:
Coastal areas don’t necessarily need permanent flooding to become problematic. Higher storm frequency (100 year storms become one year storms by 2100 under RCP8.5 and directly impact 5 million in Europe – lower under other RCPs but still much more frequent) or higher wave energy leading to increased erosion (mostly impacting Southern Hemisphere) can do the job as well:
• GoneFishing says:
This is only a modern problem. Primitive man could just move a little further from the ocean and didn’t have many fixed abodes to lose as we do now.
I don’t know if there is a way for satellites to measure the number of rogue waves forming on the ocean, but with increasing wind and changing patterns, rogue waves should be forming much more often. The shipping insurance companies might be very interested in that data.
• R2D2 says:
General high-order rogue waves and their dynamics in the nonlinear Schrödinger equation. General high-order rogue waves in the nonlinear Schrödinger equation are derived by the bilinear method. These rogue waves are given in terms of determinants whose matrix elements have simple algebraic expressions. It is shown that the general N-th order rogue waves contain N−1 free irreducible complex parameters.
• George Kaplan says:
Quite interesting, highly theoretical, and nothing whatsoever to do with rogue waves in the ocean.
• GoneFishing says:
It’s a simple model concerning one possibility.
• Fred Magyar says:
Here’s some info on rogue waves in the sea.
Massive rogue waves aren’t as rare as previously thought
Findings are critical for safe operations at sea
March 8, 2017
University of Miami Rosenstiel School of Marine & Atmospheric Science
Credit where credit is due:
The birth of rogue waves can be physically explained through the modulation instability of water waves. In mathematical terms, this phenomenon can be described through exact solutions of the nonlinear Schrödinger equation, also referred to as “breathers.”
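For reference, the simplest of those “breathers” is the Peregrine soliton. In the standard dimensionless form of the focusing nonlinear Schrödinger equation (textbook form, not taken from the article above):

```latex
% Focusing nonlinear Schrodinger equation, dimensionless form
i\,\psi_t + \tfrac{1}{2}\,\psi_{xx} + |\psi|^2\,\psi = 0

% Peregrine breather: localized in both x and t
\psi(x,t) = \left[\, 1 - \frac{4\,(1 + 2it)}{1 + 4x^2 + 4t^2} \,\right] e^{it}
```

At x = t = 0 the envelope peaks at three times the background amplitude, which is why this solution is treated as the prototype of a wave that “appears from nowhere and disappears without a trace.”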
• John Norris says:
Thanks for the link, WHF. I used the Aviso data to plot 5 yearly trend values. As you can see, in 2013 we broke out of the box.
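For anyone who wants to reproduce that kind of plot from an altimetry time series: a minimal sketch (Python/pandas), assuming “5 yearly trend values” means a least-squares trend fitted over successive 5-year windows. The two-column file layout and the column names are my assumptions, since Aviso distributes the data in several formats:

```python
import numpy as np
import pandas as pd

# Assumed layout: decimal year, global mean sea level in mm (hypothetical names).
df = pd.read_csv("gmsl.csv", names=["year", "gmsl_mm"])

# Slope of a least-squares line over each successive 5-year window.
for start in np.arange(df["year"].min(), df["year"].max() - 5.0, 1.0):
    win = df[(df["year"] >= start) & (df["year"] < start + 5.0)]
    if len(win) < 3:
        continue  # skip windows with too few points to fit
    slope = np.polyfit(win["year"], win["gmsl_mm"], 1)[0]  # mm per year
    print(f"{start + 2.5:.1f}: {slope:.2f} mm/yr")  # timestamp at window centre
```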
6. George Kaplan says:
I recall Survivalist called this for the 6th, based on Jaxa.
I find NSIDC sometimes confusing, as they have one graph which is a five day average (which effectively adds a delay to the numbers) and one which is daily; I don’t know which one they used to call the maximum.
7. Hightrekker says:
Liberal Lies!
8. Survivalist says:
The guy’s a fucking nutcase. He compares the experience of Vietnam vets with his sex life from the 80’s.
• Fred Magyar says:
The next paragraph is even more informative. Someone needs to start a petition to have him tested!
Maybe he doesn’t suffer from neurosyphilis and is just another run of the mill narcissistic sociopath and pathological liar with multiple personality disorder. In other words, a fucking nutcase… In any case the guy needs a complete medical and psychiatric evaluation!
One thing I think we can say for sure: “A Great and Brave Soldier”, he most definitely, is not!
• GoneFishing says:
Fred, it sounds like you are saying that the inmates are running the sanitarium.
• Lloyd says:
Is Donald Trump truly crazy?: Hepburn
But the evidence is growing that Trump is losing touch with reality, with his childlike actions and aberrant behaviour prompting more and more concerned mental health professionals to speak up.
So worried are these experts about Trump that they are ignoring the so-called Goldwater Rule established in 1973 by the American Psychiatric Association. The rule declares it unethical to diagnose a person without examining them personally. The policy was instituted after more than 1,000 psychiatrists told a magazine that 1964 Republican candidate Barry Goldwater was mentally unfit to be president.
The question of Trump’s mental state is extremely important. That’s because an unstable, erratic president could wreak havoc around the world, spelling trouble not only for Americans, but for Canadians and people in every other country.
• Hightrekker says:
Was born on third base and thought he hit a triple, then got some lucky rolls of the dice.
A symptom, not a cause.
Easily replaced by the millions of sociopaths available.
9. Survivalist says:
Interesting story on Russia Troll Brigades and influencing the news cycle. Maybe some Russian sock puppets here too. I find it odd that a fine blog like this attracts some extremely idiotic comments from people who, if they are legit, should probably be watching Duck Dynasty or reading an Archie comic or something. Does it seem odd to anyone besides me that a man who claims to agree with Trump would come to this blog to read the articles posted by Dennis and Ron? The articles posted here, in my opinion, are not really what the Ted Nugent followers are looking for.
• Dave Hillemann (Texan) says:
Are you one of Caelan MacIntyre’s pseudonyms, by chance? He’s from Canada too, and your argument sounds similar to one he tried to use against me last year.
• Survivalist says:
No I’m not. We just happen to agree.
• Hightrekker says:
How about some optimism?
10. Oldfarmermac says:
Read and heed.
This short excerpt from a new book lays out some agricultural history, little known outside the field except to biologists, which is critically important to understanding what the future holds in terms of food supplies.
11. Doug Leighton says:
“Wed., March 22, ‘17 – We’re in ‘uncharted territory’ due to the unprecedented global heat Earth is experiencing, sea ice reaches record-breaking lows at both poles, and carbon dioxide levels in the atmosphere may reach 410 ppm this year.”
• Doug Leighton says:
Plus, the IEA (International Energy Agency) happily reports a very healthy 1.6 mb/d increase in oil demand last year and something similar (apparently) every year, all the way to the end of time. So, a nice trend there, what with production at 90 mb/d in 2013, now about 97.5 mb/d. Progress on all fronts, eh wot lads?
• GoneFishing says:
Despite what we may think, apparently continuing to use as much fossil fuel as possible is the plan.
Plan B, use more natural gas as oil depletes.
Plan C, use more coal as natural gas and oil depletes.
Plan D, use methane hydrates as everything else depletes.
Plan E, get conservative, run all the extraction operations using renewable energy.
Plan F, last part of plan SHTF
All above plans have a sub-component where efficiency is increased so the masses can still use the stuff to get to work and the rich people can continue to enjoy their lifestyle.
• R2D2 says:
“the rich people can continue to enjoy their lifestyle”
Sorry StormWatcher, I guess you didn’t make the cut
• GoneFishing says:
Wow, you follow me everywhere I go little rolling can. It’s like having an internet doggy.
• R2D2 says:
Maybe you should give your new found Pitbull a treat if he is hungry. Instead of kicking him.
• Doug Leighton says:
Yes, but transport currently accounts for only about 13% of CO2 emissions whereas forestry alone accounts for roughly 17%. Not that I’m knocking efficiency improvements, but with 80 million more people on the earth every year, how much will these efficiency improvements really help?
• R2D2 says:
Extending the life of humanity doesn’t come in one simple little pill. Today even effective birth control has other options. It took centuries for humanity to dig themselves into this mess. It won’t be turned around overnight. First you have to point yourself in the correct direction. Then, second, you have to start walking before you run.
• It won’t be turned around over night.
First you have to point yourself in the correct direction.
Then second you have to start walking before you run.
Wow! I have never heard such profound verbiage in a long while. Such deep knowledge and wisdom you show.
Let me add a few:
Don’t take no wooden nickels.
Pretty is as pretty does.
People who live in glass houses shouldn’t throw stones.
Necessity is the mother of strange bedfellows. 😉
• R2D2 says:
We are all going to die, and as you know, humanity will also go extinct. Maybe it will be 5, 500 or 500,000 years from now. When you get down to it, it’s really very simple. The 350 pound 20 year old who eats 1/2 gallon of ice cream every day has a choice. Myself, I prefer an ice cream cone on date night and an hour bike ride 6 days a week, with plans to live to be 95 years old.
I would rather get my ducks in a row than run around with my hair on fire. If you’re a Doomer and give up, it becomes a self fulfilling prophecy.
Now it’s time for me to go and take my Lipitor.
• Doug Leighton says:
I don’t see how extending the lifetime of humans is going to help us get out of the huge mess we’re in. Anti-aging medicine would certainly be a hit with the pharmaceutical industry though. Imagine a few billion 120 year olds taking their daily anti-aging pill at $100 a pop.
• R2D2 says:
I would put population overshoot as maybe the number one priority. But it has to be matched with reducing the human footprint, or damage to the environment.
Maybe I wasn’t clear. It’s more like you’re going to have to take a thousand different pills, and some of them won’t show results or work.
Life is like a brand new car. Drive it carefully, wash it and maintain it, and it will most likely last a long time. Or you could say fuck it, there is no hope, the car is going to stop running someday anyway, and drive it toward a cliff today.
• Hightrekker says:
I was reading about how countless species are being pushed toward extinction by man’s destruction of forests. … Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us.
— Bill Watterson
• GoneFishing says:
I wonder if it is statistically inevitable that a planet able to sustain complex life forms for long enough will develop an anti-lifeform.
12. Survivalist says:
It’s a little odd that Comey tells the voters about investigations into Clinton prior to the election but not about investigations into Trump.
Could have been because Trump himself wasn’t under investigation but that people around him were. Could have been OpSec too, trying to flush out some Russian agents.
13. Survivalist says:
35 min interview re What Will Cause the Next Holocaust
Hint: ecological panic + politics.
14. Hightrekker says:
Eurogroup Finance President Accuses Southern Europe of “Spending Money on Booze and Women then Begging for Help”
So they just wasted the rest?
15. Rational Analyst says:
I am reading stuff on the Internet and am hearing Mike Pence blather on in one of Trump’s ‘adore me’ campaign tours in WV…
Idiot Pence said (paraphrasing): ‘The era of slow growth is over’…‘jobs are coming back’…‘we will pass the largest tax cut since Ronald Reagan’…blah, blah, blah…‘the war on coal is over’…‘A new era of energy has begun’ (note: how can we have a ‘new era of American Energy’ when idiot Pence is pushing coal? Is that new?)
The clown show (R) party couldn’t even pass a new health care bill to replace the ACA they so hated. I got news for the (R) party: you have had 25 years to craft the ideal bill, ever since health care costs and lack of coverage came on the national radar about the time the Clinton administration came to power. President Obama took the (R) poster child, RomneyCare, and tried to use their idea nationally, and got crapped on because the (R) idea was promulgated by a (D) black president.
So, the point of my screed is this: Expect NO HELP from LEVIATHAN regarding getting our crap together to plan for a rational energy policy going forward.
Happy Daze are here again! Oh Golly, idiot boy Pence actually just said we will “restore the arsenal of democracy”. This is from the administration that has a man crush on Vlad. How the hell did we get here? Too many rubes in this idiocracy.
Sloganeering and jingoism are no substitutes for logic, reason, and science. This is depressing.
• GoneFishing says:
In America the slogan always was “Anyone can become president.” I guess no one thought of this result.
But here is a prescient quote from George Carlin “In America, anyone can become president. That’s the problem.”
• HuntingtonBeach says:
Violence erupts at pro-Trump rally in Huntington Beach
Bill Maher interviews Timothy Snyder on Real Time 3/24/17
Very disturbing, especially for you Fred
• Fred Magyar says:
Very disturbing, especially for you Fred
Not quite sure why I’m being singled out? In any case there are many things I find troubling to be sure.
I will relate two personal anecdotes. Back in the late 1970s I was being trained by SubSea Oil in Italy to become a diver. I decided to visit Hungary on a long holiday weekend. Took a train with a friend and we had to change trains at the Austrian Hungarian border. We still had to get past guard towers, barbed wire and soldiers with machine guns.
A few decades later well into the 21st century I took another trip to visit my family in Germany and we decided to drive to Budapest to visit our Hungarian relatives there. We drove from Munich, through Austria, straight across the Hungarian border through what used to be border and passport control stations, now empty. There was no one there to stop us or check our travel documents because by this time Hungary was part of the EU. Side note : they still kept their own currency, but I digress.
Point is, anyone who argues that in the 21st century an EU with open borders and free trade isn’t better than one with closed borders and isolated xenophobic ultra nationalist tendencies has no clue what they are talking about.
Let’s just say I’m not a big fan of Viktor Orban and his Jobbik party! They are fascists, plain and simple. Not to mention that they are supported by Putin’s apparatchiks. We’ve seen that movie before and it doesn’t end well. I have taken public transport in Budapest from Széll Kálmán tér (Széll Kálmán Square, known between 1951 and 2011 as Moszkva tér, or Moscow Square) …
Here’s an interesting exchange that occurred shortly after the Brexit vote but before Trump was elected.
The Rise of Populism and the Backlash Against the Elites, with Nick Clegg and Jonathan Haidt
Very disturbing, for all Americans and citizens of the free world who still believe in the liberal concept of democracy, not just me! At least that is how I see it. A resurgence of populism and nationalism, while understandable, is the last thing the world needs right now. Nick Clegg and Jonathan Haidt argue we should just give the populists plenty of rope and they will hang themselves because they don’t have a real plan. They are like the dog who has caught the car it was chasing, now what?!
Case in point: Trump and the Republican’s poorly planned repeal of the ACA. A few more major blunders like that one and even the most ignorant Trump supporters will turn against him. Unfortunately at this particular juncture in history with pressing global issues, I’m not sure we have the luxury of time to let the populists figure things out for themselves.
• Doug Leighton says:
Hi Fred,
This business (debate) of borders is extraordinarily complex and one that I have faced many times, having lived and worked in many different countries. Also, one of my daughters lives in Italy (married to an Italian) and I studied in a “foreign” country (Sweden). I like the comment by Frank Furedi below from his essay: “There is little enlightened about being ‘post-borders’ today”. I believe Furedi is/was Hungarian???
• Fred Magyar says:
Furedi is a Hungarian born professor of sociology whose family left Hungary after the 1956 revolution. While not particularly relevant to a discussion about political borders, it might be worth mentioning that he is a climate change denier.
I certainly agree that the debate about borders in Europe and elsewhere is a highly complex one, to say the least. However let’s not forget that the idea of a European Union came to be as result of two extremely bloody and devastating world wars. The Europeans wanted to find a way to make sure that never happened again!
• HuntingtonBeach says:
Fred, because from what I have seen here you have shown more concern about tyranny and fascism than anyone else. I thought you would find it interesting and that it would help this conversation move forward.
• Fred Magyar says:
Oh, ok but I would hope that there are many more people who are concerned about tyranny and fascism around here, than just me.
16. GoneFishing says:
Keynote address by Dr. Joseph Romm, creator of climateprogress.org, at the Annual Wirth Sustainability Luncheon in Denver, Colorado, Sept 9, 2016.
He speaks forcefully about the future of energy in the world.
“Almost anything you think you know about climate change is outdated.”
• Fred Magyar says:
IMHO, this talk should be posted front and center in the so called ‘Petroleum Thread’ Future US Light Tight Oil (LTO) update… While I do read the posts and discussion in that thread, more and more I find the comments there to be diverging from reality to such an extent that I find myself just rolling my eyes when I read them! To paraphrase what Romm says at the end of the talk when asked about fracking: It’s game over for the fossil fuel industry, full stop!
To me that discussion is over as is the discussion that still somehow rages about the validity of climate science!
What scares me is that the majority of the posters on the Petroleum thread are still thinking like Steve Mnuchin. They somehow think that driverless electric vehicles are some distant dream…
TODAY, IN 2017, the president’s top economic advisor said he had no worries about robots putting people out of work. “In terms of artificial intelligence taking over the jobs, I think we’re so far away from that that it’s not even on my radar screen,” Treasury Secretary Steve Mnuchin told an audience in Washington. “I think it’s 50 or 100 more years.”
• GoneFishing says:
But the important prognosticator and actualizer of our time thinks that many jobs will be taken over by robots and computer systems. That person is Elon Musk.
• Doug Leighton says:
Meanwhile, not to be outdone:
China’s life-like ‘robot goddess’, Jia Jia, impressed the public by holding conversations with participants at a conference in Shanghai. The realistic AI humanoid, which was unveiled last April, also made specific facial expressions when asked various questions, including whether or not she had a boyfriend. Her inventor predicted that within a decade or so, artificially intelligent (AI) robots like Jia Jia would begin performing a range of menial tasks in Chinese restaurants, nursing homes, hospitals and households.
• GoneFishing says:
What happens to society when robots replace workers
• Hickory says:
The robots that replace the workers at Foxconn cost a lot more than all the robotic driver replacements we will have on the world’s roadways over the next decade.
While many of the current robot prototypes being displayed in places like China (Jia Jia) are female, the next generation ones will be unisex, and carry heavy weaponry, facial recognition cams, and their own overhead infrared drones, and if you look close at the model tag in the battery compartment, they will be owned by the Trump-Koch Consortium, Inc. They will be policing all the unemployed displaced workers, and stationed at all key food distribution and industrial infrastructure sites, among other tasks. And at all the polling places, and newscasting sites. Don’t be surprised when this consortium makes a move to buy up the grid. Good morning.
• GoneFishing says:
The factory robots actually produce something versus the autopilots in cars which just use up energy and materials to move themselves around.
As far as weapon carrying robots to police humans,
humans will turn them into useless junk very rapidly. Then the humans will have the heavy weapons. Bye bye consortium.
• Oldfarmermac says:
Some years ago, there was a hilarious exchange of comments in some magazine or another about the possibilities involved if it becomes possible to build robots that look enough like humans, and are intelligent enough to pass for humans.
I wish I could remember which one it was, but if anybody else remembers it, they would provide us all with a great belly laugh by linking to it.
Some guy said he would have his own on remote, for house cleaning and sex, etc., and then park it/her in a closet until he got horny again, and never have to put up with any woman’s bullshit again.
The women’s responses were the really good ones. Hopefully somebody who is skilled at finding such stuff will link to it.
• Fred Magyar says:
Blade Runner (1/10) Movie CLIP – She’s a Replicant (1982) HD
• GoneFishing says:
A documentary on how robots and automation can and will replace more than 25% of the workforce in the near future.
• Fred Magyar says:
Actually the video talks about replacing 45% of the labor force. The 25% cited was the unemployment number during the Depression. I have even seen talks where people who are supposedly in the know are talking about 50% or more of all jobs being replaced by AI and automation in the next two decades…
Coca-Cola Wants to Use AI Bots to Create Its Ads
Algorithms can already pick music and write copy
• GoneFishing says:
Which is why I said “more than 25%” and did not specify a date but left it at a nebulous “in the near future”. I don’t take those predictions, such as 45%, as hard numbers, nor as simultaneous across various businesses. Also I guessed that if it reached 25% in a relatively short time period, then employment would fall off a cliff unless severe government and societal action was taken. The degree to which automation can be supported is somewhat dependent (at least initially) upon the ability of the consumer to partake of the goods and services produced. Without jobs, the purchasing power of a large number of people would disappear.
Now if the goods and services produced were disconnected from capital, then automation could quickly run its course and society would move on to a new state of being. Unless governance is severed from business influence, there will be a big fight on our hands. The people will not put up with being pushed out of the system. Look what has happened lately just because some people were unhappy with a smaller share of the pie. Imagine if they thought they would get no pie at all.
• Fred Magyar says:
GF I wasn’t disagreeing or quibbling with you.
25% of “useless humans” is already a very big deal and would have a huge impact on our present industrial civilization… We had all better start thinking about it now!
Excerpted transcript from a podcast interview by Ezra Klein with Yuval Noah Harari, author of Homo Deus: A Brief History of Tomorrow
Ezra Klein
Then why, given the range of uncertainty both about AI development and what an AI would look like, are you so persuaded that human beings will not be a dominant life form in 300 years?
Yuval Harari
It’s not because I overestimate the AI. It’s because most people tend to overestimate human beings. In order to replace most humans, the AI won’t have to do very spectacular things. Most of the things the political and economic system needs from human beings are actually quite simple.
If you go back in time to the hunter-gatherer days, then it’s a different story. It would be extremely difficult to build a hunter-gatherer robot that can compete with a human being. But to create a self-driving car that is better than a human taxi driver? That’s easy. To create an AI doctor that diagnoses cancer better than a human doctor? That’s easy.
What we are talking about in the 21st century is the possibility that most humans will lose their economic and political value. They will become a kind of massive useless class — useless not from the viewpoint of their mother or of their children, useless from the viewpoint of the economic and military and political system. Once this happens, the system also loses the incentive to invest in human beings.
• GoneFishing says:
Will the machines be producing products for themselves? Who will be buying the products when most people have no money?
• GoneFishing says:
After listening to the interview and thinking about this topic, it seems so much like just continuing the trend of the last few centuries. People embrace new technology and push aside any downsides it may have, one thing after the next, never really stopping long enough to think about any of it until it is well entrenched and probably being replaced anyway. AI and more automation! Why not? It’s the way to go, get rid of those pesky people, right?
Just more of the same. All headed to the same end.
I just can’t get my head around the idea of staring into a pair of dead eyes on a machine that looks human. It would be like the walking dead, even worse since it was never alive. Automatons.
Then this idea of men turning themselves into little gods through genetic changes and machine enhancements, it’s the dream of sick minds. Those who are attracted to such adolescent dreams will just be making themselves into nightmares.
If you think Trump is bad, wait until one of those genetic monsters takes control.
Maybe civilization is headed to be the supreme freak show before it fizzles out.
But since they are not human anymore they are fair game. Worst you get is a fine.
17. robert wilson says:
New Yorker on Trump and associates. Art Robinson is mentioned as a possible science advisor. Dr Robinson publishes Access to Energy, which he took over from the late Petr Beckmann. Art is active in Doctors for Disaster Preparedness. He is a leader in home schooling. He was once associated with Linus Pauling but they had a falling out. http://www.newyorker.com/magazine/2017/03/27/the-reclusive-hedge-fund-tycoon-behind-the-trump-presidency
• Fred Magyar says:
Whiskey Tango Foxtrot!
ROTFLMAO!! Arthur Robinson wears a tinfoil hat! And he is a fucking creationist to boot!
From Wikipedia:
In addition to believing that global warming is a hoax, Robinson opposes abortion and supports gun rights,[1] cutting taxes, increasing border security and building new power plants.[1] He argues for balancing the federal budget, defunding earmarks and ending special-interest influence in Washington.[1] He also supports restoring sound money and ending the Federal Reserve System. Robinson is against bailouts to Wall Street banks.[citation needed] He also supports a strong national defense, but with a more restrained foreign policy.[27] Robinson is a signatory to A Scientific Dissent from Darwinism, a petition circulated by the Discovery Institute to promote intelligent design.[28]
• Oldfarmermac says:
Hi Fred,
You remember the comic who said anybody who hates dogs and kids can’t be all bad?
I don’t know anything about this guy, but if he is a creationist, then he either isn’t very bright, or else he’s just maintaining some camouflage. Plenty of people pretend to believe things they know are jokes.
But anybody who is in favor of doing away with earmarked money (unless it is tied to a tax especially enacted to provide that money, such as a fishing license fee being used to pay to support the fish and game department) and doing away with bailing out bankers can’t be ALL BAD, lol.
This comment is partly jest, partly sarcasm, and I know YOU will get it, but I feel compelled to add this last line for others.
Naked apes have a way of automatically, without thinking, opposing anything that an enemy or outsider finds worthy of support, and it doesn’t hurt to be reminded of this shortcoming occasionally.
Anybody who is opposed to or actively working against the renewables industries, etc, is either less than well informed in respect to our environmental problems, or just looking out for himself and his friends at the expense of everybody else, and eventually at the expense of their own kids and grandchildren, although they are generally not AWAKE ENOUGH to understand this fact. This Arthur Robinson appears to fall squarely into that category.
If you approach them properly, you can often get individuals who are in favor of some particular environmentally dangerous or damaging practice to change their minds about it, especially if their own personal paycheck doesn’t depend on it.
We used to use a lot of dangerous and persistent chemicals in the orchard biz (and still do, but the ones we use these days are substantially less harmful, and we use them in lesser quantities) but the worst of these older pesticides have been banned for some time now.
I often talk with local guys who are still in the biz, and some of the older ones have a mindset opposing any regulation of the way they farm, the same way some people just automatically oppose or support certain political issues or policies, without ever giving any thought to the opposing arguments.
It’s a total waste of time, the worst thing you can do, to approach this sort of guy and tell him he is stupid, ignorant, selfish, etc, and that he is not doing what is in his own best interests, although all these arguments may well be true.
What you must do, if you want to have a meaningful conversation with him, is to avoid even mentioning the actual issue you want to talk about, until you have a PLAN in mind, something along the lines of the plan a good soldier makes to win an anticipated battle.
You start out by being friends, or at least by being a neutral, politically, if at all possible. This means if you are a hard core liberal in favor of no questions asked free choice abortion rights, you simply steer totally clear of any discussion of this topic if at all possible, etc.
If you are a hard core Darwinist, you will do well to remember Winston Churchill’s famous remark about at least mentioning the Devil himself favorably in Parliament, supposing the Devil were to come out opposed to Hitler.
If your opposite number believes in Jesus, you can find SOMETHING nice to say about the Christian religion, without compromising your own principles, if you have brains enough to think a little about whether you are trying to win a DEBATE, or whether you are trying to win a convert to your point of view, or at least help the other guy develop some insight into the errors of his own position, WITHOUT pushing his hot buttons. Once you push a hot button or two, of the wrong sort, you might as well go back to the ball game and the beer, and forget about it, until the next time.
Now suppose your target is mad about the government taking away a certain chemical, for instance, parathion.
I have knowledge of parathion, having used it myself, and don’t have any problem putting a wistful look on my face and remembering fondly that it killed damn near anything small and creepy crawly that it touched, for weeks after you used it.
But then I follow up by saying well, at least it’s no big loss, because by the time it was outlawed, you had to use twice as much and it lasted half as long, and some of the bugs just didn’t seem to do well without it anyway, like it was a VITAMIN or something.
And you know what? The other guy will AGREE with you, that he can get along without parathion ok, but then he will start bitching about the price of the alternatives on the market.
And THEN, if you have brains enough to remember that you are trying to win a CONVERT, rather than a DEBATE, you sympathize with him, about how tough it is to make a living these days.
And THEN, while remaining sympathetic, you mention that it’s not really even the two of you who are bearing this extra expense, but rather that it is ultimately paid by the end user of the food he produces, because he and all the other farmers just pass their costs along, just like everybody else in business. The price of trucks n’ tractors, insurance, taxes, every damned dime a farmer pays out is ultimately paid by the ultimate customer, how could it be otherwise???
Along about this point, you SHUT UP about chemicals, and get back to your football and beer, and the attractive woman over at the other end of the bar, and how you wish you were young and good looking again, lol.
You have done ENOUGH for the moment, because the little seed you have planted between his ears has to be allowed time to grow there, but after a while, you will hear him say to another farmer, fuck it, whatever it cost us to stay in business, somebody else is paying it anyway. The last new truck you bought cost four times as much as the last new one your daddy bought, didn’t it?
You don’t make converts to a new way of thinking by way of direct frontal assaults.
Now some time later on, maybe weeks or months down the road, you get into ANOTHER conversation about chemicals with this guy, and the best time to do it is when the talk is about people getting sick and dying of cancer, heart disease, etc, more often than the older folks used to.
And if you work it right, you will be able to point out that you read in the news that people who live in jungles and eat what God (remember that this guy probably thinks well of JESUS, and that you are not trying to win a fucking debate, but rather a CONVERT ) intended them to eat, meaning whatever they could find and catch and cook, as opposed to TWINKIES and Big Macs, don’t have heart disease, and they don’t have diabetes, etc. and that the only real good reason you have heard that they DON’T is that they don’t eat the crap we eat these days, all full of chemicals and ten times as much sugar as we ought to eat, and that they get some exercise.
And then you point out that you would have to go off to college to even be able to read the fucking ingredient list on half the stuff in the supermarket these days, never mind having even the foggiest idea what all them damned chemicals are FOR, or what they might do to you, if you eat enough of them long enough, and fondly remember the biscuits and corn bread Momma used to make, that had maybe five or six ingredients at the most, and you knew what they were and where they came from.
And then you DROP IT AGAIN.
Anybody who has read this far has brains enough to understand that the guy who hates the EPA is gradually learning WHY we NEED the EPA, without pushing his hot buttons, without insulting his intelligence, without criticizing his intellect, or his way of making a living, or his culture or his religion.
A year or two down the road, he will have accepted these new thoughts as if he had arrived at them of his own free will, and in a very real sense, he actually does accept them this way, but only because you originally planted them in his mind in such a way as to BYPASS the mental filters he uses to keep out enemy propaganda, lol.
A right thinking true believing free market god fearing self respecting self supporting old country boy ain’t ABOUT to listen to any environmental bullshit preaching coming from a bunch of them there tree hugging whale loving commie socialists dimmercrats.
But if you just use a little sense, you can talk to him about environmental issues, and social issues, and find PLENTY of common ground.
Eventually you get around to talking about how there ain’t no fish in some stream or river you used to fish in because somebody upstream is dumping mine wastes into the water, or about how much it costs somebody you know for city water because it costs so much to get all the crap out of the water so you can drink it these days.
The ONE THING you never do is put your potential convert on the defensive unnecessarily, because the defensive reaction is to shut you out, and prevent any further consideration of what you have to say, and harden his resolve to continue thinking as he always has, in the past.
Will you ever succeed in convincing this guy to vote for HRC instead of Trump ?
That’s a long shot, but it’s not impossible. OTOH, you have a fair shot at getting him to the point where, while he does not vote D, he simply stays home sometimes.
At election time, a dead enemy soldier, or one missing in action, is about as useful as a live one voting your way.
Elections are won one vote at a time.
It was attributed to W.C. Fields. But Fields actually never said the phrase. It was said about W. C. Fields at a roast.
W. C. Fields
Anyone who hates children and dogs can’t be all bad.
Although very commonly attributed to Fields, this is derived from a statement that was actually first said about him by Leo Rosten during a “roast” at the Masquer’s Club in Hollywood in 1939, as Rosten explains in his book, The Power of Positive Nonsense (1977) “The only thing I can say about W. C. Fields … is this: Any man who hates dogs and babies can’t be all bad.”
I know, this doesn’t matter at all, but I do enjoy a bit of trivia now and then. 😉
18. sunnnv says:
re: talk of warming climate…
Warm weather leads to first recorded natural gas storage injection in February
19. Survivalist says:
It was big news, but it wasn’t good news. Also, surprise, it wasn’t reported on most television “news,” otherwise known as babysitting for adults.
• Geoff Riley says:
Just how do you think TV news should report on the whole climate change issue? From a news director’s perspective, climate change simply doesn’t provide enough new content on a day-to-day basis to come up with “packages” that are not just compelling to the key demographic sought after by most daily news programs (women ages 25-54), but also are saleable to the program’s advertisers and adhere to mandates stating equal weight be given to the multiple perspectives of a controversial matter. At best, you would get daily segments with an anchor intro “climate scientists say the planet continues to warm, though the consequences of the warming are still being debated,” going to a package with some wire service B-roll of polar bears, Al Gore, etc. and some soundbites from scientists representing both sides of the debate.
• Fred Magyar says:
Like this!
Should we debate whether or not owls exist?!
• Survivalist says:
So you’re a news director who can’t figure out how to cover climate change? Life’s tough, it’s tougher when you’re stupid.
• GoneFishing says:
I think it is astounding that global air temperature rose at the rate of 6 degrees C per century and there was little response to this. Global air temperature rise is still being slowed by ice melt and ocean warming. I wonder how reductions in those inhibitors will affect future temperature rises.
Sea level rise reaching a half inch per year doesn’t surprise me since the inertia in the system is just starting to be overcome and increasing sea level rise was quite predictable. When it reaches one inch per year and above maybe global warming will be taken seriously.
• Oldfarmermac says:
Hey guys,
I don’t have the foggiest idea who Riley is, or what his positions are on the issues, but he HAS pointed out one of the tough little inconvenient truths about the way the advertising supported mass media work in this country.
Like it or lump it, the typical citizen on the street, male or female, old or young, liberal or conservative, comes equipped with a brain that’s programmed to run on more immediate time scales than decades.
The repetition of the same basic information day after day almost inevitably results in it being mostly tuned out. We don’t hear the beer ads, the phone ads, the car ads, most of the time, unless something in our immediate environment has focused our attention on beer, or phones, or cars.
This however does not mean these ads don’t work, over the long term. They do.
This is no problem for beer companies, or car companies, they have the money to pay for ads now that barely register on our consciousness, but later on, we are prone to say gimme a bud, rather than asking for something a little more special.
Nobody is willing and able to put the environmental ads up on a saturation basis.
A tv news program producer cannot hope to keep his job running more or less the same thing day after day.
Riley’s right, on this particular point.
20. sunnnv says:
Natural gas leaks are being mapped by (some) google street view cars.
Some added equipment analyzes methane as the cars drive around,
making maps of leaks in different cities.
• GoneFishing says:
Very nice work on the EDF’s and Google’s part. Now the distributors need to act on this information or be forced to act upon it.
21. Oldfarmermac says:
This is a long shot in more ways than one, but maybe ten years ago I read about Ford the car company inventing a catalyst that could be put ON the outer surfaces of the coolant radiator, which actually breaks down some troublesome pollutants in the air as it is drawn thru the radiator. The energy used is not an issue, in this case, as it is being dumped into the air anyway as waste heat.
Does anybody here know anything about this potential technology?
• alimbiquated says:
Electric cars don’t need water cooling, so the future of this idea is limited.
22. Oldfarmermac says:
For Ron,
We ran out of reply slots up above.
I just don’t seem to be able to get across the simplest possible thought to you. I can’t say for sure whether this is due to my inability to properly express my argument, or whether you are simply as big an ideologue YOURSELF as the worst ideologues at the opposite end of the political spectrum, because you obviously refuse to EVEN ACKNOWLEDGE my argument, let alone REFUTE IT.
You simply say that by SKY DADDY you will not quit badmouthing people who thru no fault of their own disagree with you. Well, it’s a free country more or less, for now, and that’s your privilege.
And then you post this quote.
“They are dogmatic ideologues. They don’t listen to arguments…. period.
Bertrand Russell:
Human Society in Ethics and Politics.”
There is a great deal of truth in this quote , but it’s also way way WAY WAY THE FUCK WRONG, almost wrong enough to characterize it as pure fucking partisan bullshit, in some respects.
Bertrand Russell isn’t half as fucking smart as you think he is.
For instance, he says “They never attain an intellectual resistance sufficient to counter the influence of dogmatic precepts, to grow up as free individuals.” about children who grow up in religious environments.
Well, I was such a child, and over the years I have known HUNDREDS of others, as a student, as a teacher, and as a worker and businessman, who pretty much forgot their religious training, and are PERFECTLY fucking able to think for themselves.
Furthermore, religions evolve with and within the societies in which they exist, and the religion of my great grandparents is not the religion of my parents, and the religion of my parents is not the religion of my nieces and nephews and younger cousins.
Now even though I have posted probably THOUSANDS of comments here in this very blog which you founded, to the effect that I support the vast majority of the key policies of the Democratic Party, and have hardly ever posted a comment in favor of any particular Republican Party policy, you want to know why I keep disputing your comments about stupid, ignorant, redneck unwashed, etc etc, etc conservative people.
Well try REALLY REALLY hard to see if you can get your head around this possibility. Try to consider the possibility, no matter HOW remote it might be, in YOUR ideologue estimation, that I am SERIOUS, that I DO support single payer health care, that I DO support the renewables industries, that I DO support strong environmental legislation, ETC ETC ETC.
Now explain to me how it is that I can support these things, and still be a right wing ideologue.
You can’t.
Now try to get your head around THIS observation.
You can be a member of a team, or an impartial observer who is willing to point out the shortcomings of a team, WITHOUT being either a MEMBER of, or a SHILL for, the opposition.
Can you get your head around the possibility that I am actually VERY MUCH in agreement with MOST of the D Party agenda, excepting the parts of it I criticize here and elsewhere, using other internet handles??
Can you get your head around the idea that although I believe HRC is the worst candidate the D party has run in my lifetime, in terms of being a high risk , Republican Lite candidate, that I believed she would win, and said so, quite often, while remarking that she MIGHT LOSE ?
If you were not a total ass kisser, and that in my opinion is the LAST thing you would ever have been, at times when you were working in a management role, you probably believed your team was WRONG, that your team leader or leaders higher up the ladder were making mistakes, and SAID SO.
You can be a dyed in the wool TIDE fan, and scream ROLL TIDE ROLL until you can’t whisper the next day, but that does NOT mean you can’t criticize players and coach, ESPECIALLY IF THE TEAM IS ON A LONG FUCKING LOSING STREAK, without being accused of being a traitor to the team.
But since you ARE an intelligent man, obviously enough, then I must conclude that you simply have an intellectual blind spot that keeps you from understanding what I have been saying. You don’t want to hear anything I say, so you just blank it out, JUST LIKE a backwoods preacher blanks out any criticism of HIS beliefs. Just like Russell says religious people blank out any possible consideration of information contrary to their beliefs. This is true in some respects, and depending on the PARTICULAR TOPIC, or issue, it can be almost universally true, or almost universally false.
It absolutely IS possible to have a rational conversation, an extended dialogue with religious people, about MANY issues and find LOTS and LOTS of common ground that can be exploited politically to the GREAT ADVANTAGE of the Democratic Party, and to the people of this country, and the world.
BUT they won’t talk to people who go around bad mouthing them the way you do. They won’t read this blog. You are ENCOURAGING them to vote R, to vote for Trumpster type politicians.
Your comments are a PRICELESS gift to the R party.
The R’s don’t NEED to actually DO anything for religious people, so long as they have people like you to piss them off to the point that they vote R just to spite your kind of people. The R’s can just take them for granted.
The R’s don’t actually need to do much of anything for working class people so long as the D Party runs mostly on a Republican Lite style platform, because the D party isn’t doing much for them either, with that sort of platform and candidate.
The working class is the BIGGEST single class, by any measure, of the voting age population, and it INCLUDES the various minorities and most of the members of various special interest groups as well, when it comes to personal and civil rights.
The D party has nothing to lose, by shifting emphasis away from its Republican Lite ways, and more towards the interests of the LARGE majority of the people of this country, and doing it in SUCH A WAY that the majority of working class people BELIEVES the D party is putting their best interests first and foremost.
Note I am not disputing the entirely obvious fact that the overall D agenda is far superior to the overall R agenda, but rather that it needs upgrading in such a way as to convince the people that they should be voting for it.
HRC won the popular vote, true, but that’s just ONE example, whereas the R’s have been consistently mopping the floor with the D’s for years and years.
The D’s can’t control the R party team, but they can make the changes necessary to get the D team back to its winning ways.
• Hightrekker says:
The old saying goes, “the people get the government they deserve.” And I think there is a great deal of truth to this. We have become a nation of profoundly ignorant people – ignorant, tending toward stupid, and incredibly selfish, narcissistic. When somebody pops up and promises to make the world the way it was when they were “happy”, well this is what we get.
HRC won the popular vote, but if you take away California, Trump won the popular vote in the other 49 states.
California is so Democratic (not a major office is held by a repug) that it skews the data, and being so large, it dominates.
• HuntingtonBeach says:
I find this comment extremely dismissive and anti-American-democracy. Californians are 13% of US Americans and already have the least representation in Washington per person. Let’s talk about what happens when the reddest states making up 13% of America’s population are eliminated.
Russ Feingold launches new group aimed at ending Electoral College
• Hightrekker says:
I was born in LA. My father was born in CA, and I’m almost 70.
My Grandmother lived in CA in the late 1800’s.
Of course CA gets screwed by the Fed’s, and only gets about 70% of the money it pays out in taxes back.
Plus, CA is the Tech and Agriculture center of the US, and also entertains the rest of the Planet.
It is obviously not like the rest of the US.
That was my point– if CA is taken out of the picture, Trump won the National vote.
Get it?
• HuntingtonBeach says:
Well Captain Obvious, what a useless piece of insecure Trump trivia you have uncovered. What’s your point? That you can spin the numbers, or that the Bible Belt is a welfare state?
Trump lost the popular vote by 2.8 million.
Get it?
Your coal job isn’t coming back, you have been conned
Well hell, I have finally found someone who is smarter than Bertrand Russell was. He died in 1970 at the age of 97. Apparently you were not aware of that little tidbit of knowledge.
Let me quote again the last two sentences of Russell’s quote:
And you reply: but it’s also way way WAY WAY THE FUCK WRONG…
No you do not know HUNDREDS of others who were able to shuck their early religious indoctrination. That is a gross exaggeration and you fucking well know it. Yes, a few do overcome their early religious indoctrination. These people usually have a very high IQ or their indoctrination was just not that strong to begin with… or both. Also there is a genetic factor there as well. Some people are just natural skeptics. These people are a tiny minority but they do exist.
Do you actually believe that Russell meant that no one ever overcame their early religious training? You give yourself as an example. Well congratulations, I am such an example myself. And I can name perhaps a dozen others but certainly not hundreds. Most religious disbelievers were never strongly indoctrinated to begin with. But obviously Russell was speaking of youths in general, and never meant to imply that there were never exceptions. And I must say that I am a little shocked that such a smart man as yourself did not realize that.
I lived in Saudi Arabia for five years. Not one Saudi in one thousand is a disbeliever. Their indoctrination is almost total. A Saudi male is required, in school, to memorize the Koran, usually by the time he is 12. And all Saudis are supposed to pray 5 times a day. Of course even here there are exceptions to the rule. As I said, perhaps one in one thousand is able to overcome such indoctrination.
But I am not surprised that you were able to overcome your early religious training. After all, you are smarter than Nobel Laureate Bertrand Russell. I was indoctrinated early as well and I also overcame that training, though I am not nearly as smart as Russell was. And to repeat, I personally know perhaps a dozen others like myself, who overcame our early indoctrination. Though none of them were nearly as strongly indoctrinated as even the average Muslim. Like I said, it all depends on the strength of your early indoctrination and your cognitive ability to overcome it.
I am going to leave it here. You know I like to keep my posts as short as possible. Otherwise no one would bother to read it.
From Wiki: Bertrand Russell
Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found very unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill’s “Autobiography”, he abandoned the “First Cause” argument and became an atheist.
• Doug Leighton says:
“Bertrand Russell isn’t half as fucking smart as you think he is.”
A selected bibliography of Russell’s books in English, sorted by year of first publication:
1896. German Social Democracy. London: Longmans, Green.
1897. An Essay on the Foundations of Geometry.[168] Cambridge: Cambridge University Press.
1903. The Principles of Mathematics.[169] Cambridge University Press.
1903. A Free Man’s Worship, and Other Essays.[170]
1905. “On Denoting”, Mind, Vol. 14. ISSN 0026-4423. Basil Blackwell.
1910. Philosophical Essays. London: Longmans, Green.
1910–1913. Principia Mathematica[171] (with Alfred North Whitehead). 3 vols. Cambridge: Cambridge University Press.
1912. The Problems of Philosophy.[172] London: Williams and Norgate.
1914. Our Knowledge of the External World as a Field for Scientific Method in Philosophy.[173] Chicago and London: Open Court Publishing.
1916. Principles of Social Reconstruction.[174] London, George Allen and Unwin.
1916. The Policy of the Entente, 1904–1914 : a reply to Professor Gilbert Murray. Manchester: The National Labour Press
1917. Political Ideals.[175] New York: The Century Co.
1918. Proposed Roads to Freedom: Socialism, Anarchism, and Syndicalism.[176] London: George Allen & Unwin.
1919. Introduction to Mathematical Philosophy.[177][178] London: George Allen & Unwin. (ISBN 0-415-09604-9 for Routledge paperback)[179]
1920. The Practice and Theory of Bolshevism.[180] London: George Allen & Unwin.
1921. The Analysis of Mind.[181] London: George Allen & Unwin.
1922. The Problem of China.[182] London: George Allen & Unwin.
1927. Why I Am Not a Christian.[183] London: Watts.
1928. Sceptical Essays. London: George Allen & Unwin.
1931. The Scientific Outlook,[184] London: George Allen & Unwin.
1932. Education and the Social Order,[185] London: George Allen & Unwin.
1935. In Praise of Idleness and Other Essays.[186] London: George Allen & Unwin.
1935. Religion and Science. London: Thornton Butterworth.
1945. A History of Western Philosophy and Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day[187] New York: Simon and Schuster.
1949. Authority and the Individual.[188] London: George Allen & Unwin.
1950. Unpopular Essays.[189] London: George Allen & Unwin.
1954. Nightmares of Eminent Persons and Other Stories.[190] London: George Allen & Unwin.
1956. Portraits from Memory and Other Essays.[191] London: George Allen & Unwin.
1959. Common Sense and Nuclear Warfare.[192] London: George Allen & Unwin.
1959. My Philosophical Development.[193] London: George Allen & Unwin.
1963. Essays in Skepticism. New York: Philosophical Library.
1963. Unarmed Victory. London: George Allen & Unwin.
1967. Russell’s Peace Appeals, edited by Tsutomu Makino and Kazuteru Hitaka. Japan: Eichosha’s New Current Books.
1951–1969. The Autobiography of Bertrand Russell,[194] 3 vols. London: George Allen & Unwin.
I’d say Russell’s command of English, his understanding of Math and Logic, was rather impressive. And, that his Nobel Prize was well deserved. BTW his IQ was about 180. Oh yeah: “Principia Mathematica”, the landmark work in formal logic written by Alfred Whitehead & Bertrand Russell, served as a major impetus for research in the foundations of mathematics throughout the twentieth century.
23. Survivalist says:
24. Hickory says:
On a non-partisan note,
I went to an electric bike expo (about 20 manufacturers displaying their bikes) this weekend.
Got to ride about 20 different bikes.
If you’ve never ridden an electric bike, try to get a chance. It’s hard not to smile when you feel like you are 3 times stronger than you ever were (and handsome too).
These bikes can enable an average person to easily commute, haul cargo loads up to perhaps 100 lbs, and have an excellent time at recreation or running errands. To have this kind of vehicle available in the future will surely make some communities much more viable and sustainable than they would otherwise be.
There has been quite a lot of innovation over the past ten years. Currently you can buy a wide range of bikes, or kit components. A very robust kit today will cost about $1,000: $400 motor, $500 battery, etc. This will power you up a 5000 ft mountain, and is compatible with most bikes.
And there are ones optimized for folding up and taking on the train, ones for hauling cargo, ones for the roughest terrain, ones for sleek commute. Every niche is being explored.
Good bike trails are golden, be safe out there.
As an example of what manufacturers are offering, here is Bosch
• notanoilman says:
Thanks for the report. That Bosch system seems to be aimed at manufacturers rather than after-market. It looks like the bike needs to be designed around it.
25. Dennis Coyne says:
Hi all,
Ron Patterson’s wife’s health has taken a turn for the worse. She was in the hospital for 6 days with congestive heart failure and came home on Wednesday and is on hospice care. Ron expects she has from days to weeks left and does not think he will be posting much.
• GoneFishing says:
That is a rough time to go through. I hope they have support of family and friends and take advantage of that support.
• Suyog says:
Sorry to hear that. I hope he gets all the support he needs.
• Fred Magyar says:
My heart felt sympathies!
• HuntingtonBeach says:
Your news brings sadness to my heart. Ron, you’re a great loving human being. I know inside that hard shell is a big teddy bear. Cherish these moments and keep reminding the two of you about the wonderful times you have had together. I know all your friends here wish the two of you the best and are going to be here for you when the time comes.
Best wishes, you’re in my thoughts
• notanoilman says:
My sympathies to you and your family.
• Oldfarmermac says:
Hi Ron,
I’m sorry I am late expressing my sympathy for your family problems. I haven’t had access for a couple of days due to a phone line being out.
I pray (figuratively) that your wife’s last few days will be comfortable and peaceful. My family does everything possible to keep members at home, surrounded by familiar scenes and familiar faces, which I personally believe is the most important single thing that can be done for somebody when the end is near, and you have my utmost respect for doing the same.
I will not mention politics for the next few days.
After that I will have something more to say.
• wharf rat says:
All the best, Ron.
• Alhall says:
I will say a prayer tonight for you, Mr. Patterson. Thanks for all the information you provide here. Bless.
26. Survivalist says:
Ron, my thoughts are with you in your time of grieving. I hope all the cherished memories bring you some happiness during this difficult time. My deepest condolences.
27. GoneFishing says:
What are Americans concerned about?
Gallup most important problem poll.
Although economic problems are important, the overall concern for them has dropped lately.
However, dissatisfaction with government/leadership is on the rise, doubling since November.
Immigration/illegal aliens has taken a sudden rise, almost as if people are being led to believe it’s a bigger problem than before.
Way down on the list is environmental/pollution problems at 3 percent. With little mainstream coverage that is understandable.
• Oldfarmermac says:
The Achilles Heel of our free press system is that it depends mostly on advertising revenue, and to get the ad money, publishers have to chase after the hot topics.
It seems likely that the percentage of space on tv, in major papers, on web sites, etc, devoted to environmental issues will stay about where it is, until some things start happening on a regular basis that will make headlines about the environment attract more readers than headlines about athletic contests and show biz people.
• Survivalist says:
One thing that I’ve learnt from the Internet is that there are people willing to write much better for free than those who expect to get paid for it. POB is a great example. George Mobus has a great column. Many more too. I haven’t bought a newspaper or watched TV in years.
28. GoneFishing says:
Warnings on the US climate change plans.
“They are giving up that leadership position and I suspect that it will be taken up by other competitive countries,” said Stocker, adding that China was well-placed to do so.
That view was echoed by Myles R. Allen, a climate scientist at the University of Oxford. “If China saw the U.S. as being short-sighted (…) they might even welcome this as a chance to take over climate leadership,” he said.
Germany’s environment minister, Barbara Hendricks, said promoting renewable energy and energy efficiency is already creating large numbers of jobs around the world. “Whoever tries to change into reverse gear is only going to harm themselves when it comes to international competitiveness,” she said.
• Oldfarmermac says:
I believe Hendricks the German environment minister is dead center in the bullseye when she says that in the future the competitive position of countries that pursue renewable energy policies NOW will be far superior to those of countries that don’t.
In some respects, it doesn’t really matter if you spend a lot more on something new, now , that may be less practical and less economical than the business as usual alternative, TODAY.
An electric car still costs a good bit more than an otherwise comparable ICE car, especially when you take into account the opportunity cost and the time value of the money you save by buying the conventional car. If you pay thirty thousand for an electric, and twenty for a comparable conventional car, and invest the ten thousand difference, at the end of five years you might with some luck have twenty thousand to buy ANOTHER new conventional car. ETC.
But if you can’t buy gasoline in 2022 except for a rationed gallon or two once a week, because there’s a war on, your existing five year old electric will be worth three times the price of a new conventional car.
It’s not what renewables save you today. It’s what they might save us a few years down the road that really counts.
I’m still looking for quality data about how much the growth of renewable energy, the growth of the electric car industry, etc, is depressing the sale of coal, natural gas, and oil, compared to what the total sales volume would otherwise be.
It’s obvious that the sale of coal in the USA for electrical generation has already taken a serious hit. The sale of gas as generating fuel is up, but up how much LESS than if there were no wind and solar juice being fed into the grid and produced and used behind the meter?
Electric cars aren’t yet popular enough to displace enough oil to matter, but ten or fifteen years down the road ?????
And anytime the sale of a commodity is reduced because the sales of a substitute are increasing, the PRICE per unit sold falls too, meaning the producers take a double hit, lower volume AND lower price.
My guess is that the USA, all of us collectively, will save enough on the purchase of coal and natural gas as generating fuel to repay what we have previously spent on subsidizing the wind and solar industries within the next few years, and continue to earn a substantial “profit” on this investment from there on out.
I can’t prove it, not being a skilled researcher and numbers cruncher, but otoh I haven’t seen any proof that I’m wrong about it.
29. GoneFishing says:
Warming could reach up to 10C.
Our results show that the amount of carbon that drove the PETM warming was about the same amount as the current ‘easily accessible’ fossil fuel reserves of about 4,000 billion tons. But the warming that would result from adding such large amounts of carbon to the climate system would be much greater today than during the PETM and could reach up to 10 degrees. This is partly due to the current atmosphere containing much less CO2 – approximately 400 ppm (parts per million) – compared to before the PETM, where the concentration was about 1,000 ppm and partly because we emit carbon into the atmosphere at a much faster rate than during the PETM. If we then also take into account the fact that climate sensitivity increases with the temperature, it means that it is all the more urgent to limit global warming as soon as possible by reducing the man-made emissions of greenhouse gases,” explains Professor Gary Shaffer, who conducted the study in collaboration with researchers from Purdue University, USA, the University of Chile and the Technical University of Denmark.
Read more at: https://phys.org/news/2016-06-future-global-warmer.html#jCp
30. GoneFishing says:
Warming could reach up to 10C.
• Oldfarmermac says:
Does anybody have a map or chart that predicts how much hotter it will be in the southeastern USA in the event the average temperature goes up ten C?
Our hottest days might be so hot we would have to just about give up on the techniques we use and the crops we grow today.
• GoneFishing says:
It won’t matter, Oldfarmermac. Six C essentially ends the environment as we know it. 10C makes it a new world.
Here is five degrees.
Five Degrees
The planet as we know it becomes unrecognisable:
· No ice sheets remain
· No rain forests left
· Rising sea levels have caused mass inundations far inland, totally altering the geography of the planet
· Humans will herd into shrinking habitable areas
· Drought
· Floods
· Inland temperatures 10° or more higher than now
• GoneFishing says:
To give you an idea, with less than 1C the temps around here expanded their range by about 25 F total. More cold than hot due to jet stream changes shifting Arctic air southward.
With even more heating the Jet streams could commonly cross the equator causing unpredictable weather and temperatures. Agriculture could become a total crapshoot until things stabilized.
• Oldfarmermac says:
Hi GF,
I have to agree with you, just five or six degrees means the end of life as we know it.
But, and this is a very important but, some places will change a lot more, and a lot more for the worse, than others.
It will probably get so hot and dry now in a lot of places in the tropical latitudes that large areas will necessarily be more or less abandoned, but at my latitude, well above sea level, and within three hundred miles of the coast, maybe something along the lines of current day tropical or semitropical agricultural techniques will work.
I don’t expect to be around long enough to personally see the average temperature rise more than maybe a tenth, at the outside.
But somebody else will need this place to live and work in times to come, and there are some things I might be able to do in order to make it a better place in a much hotter world.
I could for instance plant sapling trees as future shade for the house and buildings- trees that are adapted to much hotter weather than the ones here now.
Anybody planning on new construction work could plan it so as to minimize solar gain, etc.
• GoneFishing says:
Your altitude/latitude should resemble south Florida weather with greater temp variation, but drier, since the area to the west of you (central continental) will get much drier and hotter. Problem is, until climate stabilizes somewhat, weather will be chaotic, meaning you might still get cold weather on occasion relieved by heat waves. You are not in a bad place since you don’t depend on the Gulf Stream for heat. Those regions will change even more dramatically as the GS slows or stops.
Forget apples.
• notanoilman says:
Saplings -> Palm trees
• Dennis Coyne says:
Hi Gone fishing,
There are not 4000 Gt of easily accessed fossil fuels, only economists believe that.
Those guys solve problems by assuming the solution. The likely level of easily accessible fossil fuels might be 1500 Gt of Carbon (this includes the fossil fuels already used).
Still a problem, just somewhat smaller in magnitude.
• GoneFishing says:
Hi Dennis, who said anything about 4000 Gt of fossil fuels?
Nature is providing albedo changes, CO2, and plenty of methane to continue man’s initial push.
31. Oldfarmermac says:
I won’t comment on these links for now, but it’s extremely important that everybody be aware of this sort of thing.
And this one might be even more important.
32. Oldfarmermac says:
For anybody, especially anybody fluent in German, since the Germans apparently have a word for just about anything imaginable.
I’m looking for a word or phrase that means the same thing as “everything else held equal” but that can be used in describing situations or making comparisons that are more involved or detailed than usual.
For instance in discussing the effect of wind and solar power on the quantity of coal and gas sold, and the effect on the PRICE of coal and gas due to declining sales, you need to point out that even though wind and solar power are obviously displacing coal and gas as generating fuel, the quantity of each one sold, and the price of that quantity, might still be going up for other reasons, which we group together as “noise” in describing them.
There should be a way of expressing this sort of thing gracefully using a few commonly understood short hand phrases.
33. Oldfarmermac says:
This link is a couple of years old now, but it has some great if dated info, and a very nice interactive chart that gives the amount of electricity produced annually in each state by wind,solar,hydro, nuclear, coal, gas etc.
34. Eulenspiegel says:
What about this:
The inventor of the original lithium ion battery has a new toy –
The lithium (or cheap sodium) glass battery, with about 3 times the density of today’s batteries and the option to be cheaper, since it is composed of only cheap and abundant raw materials.
If this happens to be true, this can change a lot – up to medium-distance aviation will be possible to run on electricity, and rednecks with a solar roof will get energy independent.
Looks like a game changer.
35. Doug Leighton says:
“Increasingly we are understanding that the Antarctic ice cap is not some enduring monolithic block but a much more slippery ephemeral beast – and the implications of that realization for the future of Antarctic ice sheets in a very rapidly warming world have not escaped us.”
• Doug Leighton says:
• Doug Leighton says:
Now, the good news,
“Iceland has seen a dramatic increase in the followers of its indigenous pagan movement in recent years, making Odin worshipers the country’s fastest-growing religion…the total of Icelanders who revere Odin, Thor and the Goddess Freyja has leapt 50% since 2014 to 3,583, with more than twice as many male as female faithful……..” Exponential growth! Cool.
“Mr Hilmarsson says the country’s first pagan temple in 1,000 years will be used to mark weddings, naming ceremonies, and funerals, and should accommodate 250 people at a time. Built on land donated by Reykjavik Council but funded by donations, it should be ready early in 2018. And there can be little doubt that Asatru’s media-savvy High Chieftain will ensure that its dedication attracts plenty of coverage in Iceland and abroad.”
• GoneFishing says:
Time to buy land above 20 feet msl.
• George Kaplan says:
A new daily CO2 record for Keeling curve today – 409.56 ppm. It was a big jump from earlier in the week so might drop again over the next couple of days.
• GoneFishing says:
It normally peaks in April/May and falls until sometime in August due to plant growth. Last year it had only one reading below 400 ppm. This year that should not happen.
36. Dennis Coyne says:
A new post is up on World Oil Production
also there is a new Open Thread Non-Petroleum
Comments are closed. |
Data model, dictionaries, and desiderata for biomolecular simulation data indexing and sharing
Julien C Thibault1, Daniel R Roe2, Julio C Facelli1* and Thomas E Cheatham2
Author Affiliations
1 Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
2 Department of Medicinal Chemistry, University of Utah, Salt Lake City, UT, USA
Journal of Cheminformatics 2014, 6:4 doi:10.1186/1758-2946-6-4
Received: 30 September 2013
Accepted: 15 January 2014
Published: 30 January 2014
© 2014 Thibault et al.; licensee Chemistry Central Ltd.
Few environments have been developed or deployed to widely share biomolecular simulation data or to enable collaborative networks to facilitate data exploration and reuse. As the amount and complexity of data generated by these simulations are dramatically increasing and the methods are being more widely applied, the need for new tools to manage and share this data has become obvious. In this paper we present the results of a process aimed at assessing the needs of the community for data representation standards to guide the implementation of future repositories for biomolecular simulations.
We introduce a list of common data elements, inspired by previous work, and updated according to feedback from the community collected through a survey and personal interviews. These data elements integrate the concepts for multiple types of computational methods, including quantum chemistry and molecular dynamics. The identified core data elements were organized into a logical model to guide the design of new databases and application programming interfaces. Finally a set of dictionaries was implemented to be used via SQL queries or locally via a Java API built upon the Apache Lucene text-search engine.
The model and its associated dictionaries provide a simple yet rich representation of the concepts related to biomolecular simulations, which should guide future developments of repositories and more complex terminologies and ontologies. The model remains extensible through the decomposition of virtual experiments into tasks and parameter sets, and via the use of extended attributes. The benefits of a common logical model for biomolecular simulations were illustrated through various use cases, including data storage, indexing, and presentation. All the models and dictionaries introduced in this paper are available for download online.
Biomolecular simulations; Molecular dynamics; Computational chemistry; Data model; Repository; XML; UML
Thanks to a dramatic increase in computational power, the field of biomolecular simulation has been able to generate more and more data. While the use of quantum mechanics (QM) is still limited to the modelling of small biomolecules [1] composed of less than a couple hundred atoms, atomistic or coarser-grain molecular representations have allowed researchers to simulate large biomolecular systems (i.e. with hundreds of thousands of atoms) on time scales that are biologically significant (e.g. milliseconds for protein folding) [2]. Classical molecular dynamics (MD) and hybrid approaches such as quantum mechanics/molecular mechanics (QM/MM) are some of the most popular methods to simulate biomolecular systems. With the explosion of data created by these simulations — generating terabytes of atomistic trajectories — it is increasingly difficult for researchers to manage their data. Moreover, results of these simulations are now becoming of interest to bench scientists to aid in the interpretation of increasingly complex experiments, and to other simulators for assessing force fields and developing coarse-grain models. Opening these large data sources to the community, or at least within collaborative networks, will facilitate the comparison of results to detect and correct issues with the methods, identify biologically relevant patterns or anomalies, and provide insight for new experiments. While the Protein Data Bank [3] is very useful as a central repository for structural data, the number of repositories for biomolecular simulations is still very limited. To the best of our knowledge the only databases that currently provide access to MD data for the community are Dynameomics [4,5] and MoDEL (Molecular Dynamics Extended Library [6]), which were populated with about 11,000 and 17,000 MD trajectories of proteins, respectively. One of the problems with such repositories is that the published data was generated in a specialized environment to study a given biological process (e.g. protein folding), resulting in fairly homogeneous data compared to the range of methods and software available to the community. These repositories are somewhat tied to these environments, and it is uncertain how one would publish data generated outside them or how external systems would index or interface with these repositories. As more repositories are created, the need for a common representation of the data becomes crucial to achieve semantic interoperability and enable the development of federated querying tools and scientific gateways. Note that other efforts to build repositories and scientific gateways, such as the BioSimGrid project [7] and work by Terstyanszky et al. [8], have been undertaken, but so far none has been widely adopted outside its original deploying institution or organization.
In the computational quantum chemistry community, more progress has been achieved towards the development of repositories using standard data representations to enable collaborative networks. One of the main on-going efforts is led by the Quixote project [9], which aims to create a federated infrastructure for quantum chemistry calculations where data is represented with CML CompChem (Chemical Markup Language – Computational chemistry [10]) and integrated into the semantic web through RDF (Resource Description Framework). The Chemical Markup Language [11] (CML) and its computational component CML-CompChem aim to provide a standard representation of computational chemistry data. While the core CML XML specifies the requirements to represent molecular system topologies and properties, CML-CompChem supplements CML to allow the representation of computational chemistry data, including input parameters and output data (calculations). So far these extensions have mainly focused on representing quantum computational chemistry experiments as XML files. These files can be created by converting input and/or output files generated by a particular software package through file parsers such as the ones supported by the Blue Obelisk group [12] (e.g. Chemistry Development Kit, Open Babel). While CML-CompChem has great potential for QM calculations [13], its usefulness for MD and biomolecular simulations in general might be limited. For example, trajectories of atomic positions typically need to be compressed or binary encoded for data movement, storage purposes, and/or accuracy. Embedding this information into a verbose XML file such as CML would not be the optimal solution, at least not for the description and formatting of the raw output. Another obstacle to the conversion of MD experiments to a single-file representation is the common definition of many separate input files (e.g. system topology, method parameters, force field) necessary to prepare an MD simulation and define the different iteration cycles (e.g. minimization, equilibration, production MD). In quantum chemistry, the targeted molecules and calculation parameters are typically defined in a single input file (e.g. “.com” file for Gaussian [14] and “.nw” file for NWChem [15]), which makes this conversion much simpler. The output files generated by quantum chemistry software packages usually already contain the final results the user is interested in, while in MD the raw output – i.e. multiple files containing the trajectories of atomic positions, energies and other output information – has to be further processed through various analysis tasks to create meaningful information. These post-processing steps involve the creation of new input and output files, making the conversion of an experiment to a single XML file even more difficult.
Perhaps one of the main barriers to building repositories for biomolecular simulations is the lack of standard models to represent these simulations. To the authors’ knowledge no published study has assessed the needs of the community regarding biomolecular simulation repository data models. Therefore it is unclear which pieces of information are considered essential by researchers and how they should be organized in a computable manner, so that users can:
– Index their data and build structured queries to find simulations or calculations of interest, not only via the annotations, but also with access to the raw data (files).
– Summarize, present, and visualize simulation data either through a web portal or more static documents (e.g. PDF document, XML file).
These models should be designed to include not only the description of the various independent computational tasks performed but also a high-level description of the overall simulated experiment. Each experiment can be related to multiple concepts that help in understanding what was simulated, how, and in which context. These concepts can be grouped into the following categories:
– Authorship: information about the author, grants and publications related to the experiment
– Methods: computational method description (e.g. model building, equilibration procedure, production runs, enhanced sampling methodology) and associated inputs / parameters
– Molecular system: description of the simulated molecules from a structural, chemical, and biological point of view
– Computational platform: description of the software used to run the computational tasks, the host machine (computational environment), and execution configuration
– Analysis: derived data that can be used for quality assessment of the simulations
– Files: information about the raw simulation input and output files, such as format, size, location, and hosting file system
In this study we describe our efforts to formalize the needs of the community regarding the elements necessary to index simulation data. This work was initiated in part to support the iBIOMES (Integrated BIOMolEcular Simulations) project [16], an effort to create a searchable repository for biomolecular simulations where the raw data (input and output files) is made available, so that researchers can rerun the simulations or calculations, or reuse the output to perform their own analysis. In the initial prototype, a set of software-specific file parsers was developed to automatically extract common data elements (metadata) and publish the raw data (i.e. the input and output files) to a distributed file system using iRODS (integrated Rule-Oriented Data System [17]). The published files and collections of files (experiments) are indexed based on the extracted data elements, which are stored as attribute-value-unit triplets in a relational database. In this paper we introduce a list of common data elements and a data model that will help iBIOMES and future biomolecular simulation data repository developments move towards semantic interoperability.
Motivation for a common data representation: examples
The development of a common framework for data representation provides users with a large amount of flexibility to develop new tools for managing the data while maintaining interoperability with external resources. In this section we present three different examples that demonstrate the need for a standard representation of biomolecular simulation data, whether for indexing or for presentation to the user. All three examples have been implemented, to some extent, as prototypes. The first example is based on our experience with iBIOMES [16], where simulation-specific metadata is associated at the file or directory level through a specialized file system (iRODS [17]). The second example shows how one would use a model-based approach to build a repository where simulation parameters and provenance metadata are stored in a relational database. Finally, the last example illustrates how a model-based API (Application Programming Interface) can be used to automatically generate XML and HTML summaries for the simulations being published.
Example 1: building a repository based on file annotations
One of the simplest ways to index simulations is to tag the associated files and directories with user annotations summarizing their content. These tags can simply be stored in a database or indexed via dedicated systems such as Hadoop [18,19] or Apache Lucene [20]. This approach is well suited for fast searches based on keywords or attribute-value pairs. In the iBIOMES system [16] these tags are managed by the iRODS framework [17], which enables the assignment of attribute-value-unit triplets to each file and directory in a distributed file system. This approach is very flexible since it allows the use of tags that represent common concepts such as computational methods and biological features, as well as user- or lab-specific attributes. In iBIOMES, a catalogue of common attributes was defined for users to annotate their data. The definition of such attributes is important as they can be tied to actionable processes, such as analyses, visualizations, and ultimately more complex workflows. It is then possible to build a user interface that presents the data and performs certain actions based on the existence of certain attributes or their associated values. For example, if the format of a file is PDB (File format = “PDB”), the user interface could enable 3D rendering of the associated molecules through Jmol [21]. A data dictionary that offers possible values for a particular attribute is important as well. Each term should be well defined to leave no ambiguity for the user. A dictionary of force fields, for example, could list all the common force fields with a textual description, a type (e.g. classical, polarizable, coarse-grained), and the associated citations for each entry. A catalogue of common data elements, associated with a data dictionary, also gives users a controlled set of terms to pick from when annotating data and building queries. The catalogue used in iBIOMES was defined internally by our lab and is probably not yet sufficiently exhaustive for the community at large. However, creating a catalogue of common data elements (CDE) supported by the community is a first step towards the standardization of biomolecular simulation data description. Defining a subset as recommended (i.e. the core data elements) would go a step further and set a criterion to assess the quality of the data publication process. Finally, linking these CDEs to existing terminologies or ontologies would bring semantic meaning to the annotations, enabling data discovery and query via external systems.
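As a concrete illustration, the short sketch below mimics this attribute-value-unit tagging in plain Java; the paths, attribute names, and values are hypothetical, and a real deployment would delegate storage and indexing to iRODS rather than an in-memory map.

```java
import java.util.*;

// A minimal sketch of attribute-value-unit (AVU) file annotation and
// querying; attribute names and paths are illustrative assumptions.
public class AvuAnnotationDemo {

    // One metadata triplet attached to a file or directory.
    record Avu(String attribute, String value, String unit) {}

    public static void main(String[] args) {
        // Tag a hypothetical experiment directory with CDE-style triplets.
        Map<String, List<Avu>> index = new HashMap<>();
        index.put("/repo/expt42/prod", List.of(
                new Avu("FileFormat", "PDB", ""),
                new Avu("ForceField", "AMBER FF99SB", ""),
                new Avu("SimulatedTime", "100", "ns")));

        // A simple attribute-value query: find paths tagged with FF99SB.
        index.forEach((path, avus) -> {
            boolean match = avus.stream().anyMatch(a ->
                    a.attribute().equals("ForceField")
                    && a.value().contains("FF99SB"));
            if (match) System.out.println(path);
        });
    }
}
```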
Example 2: building a repository based on a relational database
While a CDE catalogue is important, it lacks the representation of relationships between elements unless it is linked to a well-structured taxonomy. For example, if a user is interested in simulations of nucleic acids, a hierarchical representation of biomolecules could be used to infer that the user is actually looking for any simulation of DNA or RNA. The aim of a logical data model is to give a representation of the domain that captures the business needs and constraints while remaining independent from any implementation concern [22]. Such a model can provide the foundations for the design of a database and can be used to automatically generate API skeletons using modern modelling tools (e.g. Enterprise Architect, ArgoUML, Visual Paradigm). Since it is a domain-specific representation of the data, it can also serve as a starting point to develop a terminology or ontology specific to this domain. In this second example we demonstrate how a data model could be used to prototype a repository for biomolecular simulations where simulation parameters and provenance metadata are organized and stored in a relational database. We created a UML (Unified Modeling Language) model including logical and physical entities to build a relational database that could eventually be wrapped as a Grid service. The Grid [23] represents a great infrastructure for collaboration because of its underlying authentication scheme and data discovery services, and because it enables semantic and syntactic integration. For this example we decided to mock up a data grid service using the caGrid [24] framework. caGrid was supported by the National Cancer Institute (NCI) and aimed to create a collaborative network for researchers to share cancer data, including experimental and computational data. The caCORE (cancer Common Ontologic Representation Environment) tools that were developed in this context facilitate the creation of the grid interfaces by automatically generating the necessary Java code from a UML model. These tools are now maintained by the National Cancer Informatics Program (NCIP). For this example we mapped the logical model to a data model using the caAdapter graphical tool. The final UML model and database creation scripts for MySQL are available for download. More details about the UML model are provided in the section introducing the logical data model. The caCORE SDK (Software Development Kit) was then used to generate the Hibernate interfaces to the database along with a web interface that can be used to create simple queries or browse the published data. A screenshot of the generated interface is given in Figure 1 (listing of various published computational tasks). To actually build and deploy the data service onto a Grid, one would have to use the Introduce module. Semantic integration is also possible via the Semantic Integration Workbench (SIW), which enables tagging of the domain model with concepts from standard terminologies (e.g. ChEBI, Gene Ontology).
Figure 1. Screenshot of the web interface generated via the caGrid tools. The screenshot presents a listing of the computational tasks that were published into the caGrid test system. The user request was automatically translated into a SQL query via Hibernate to return the rows from the tables mapped to the class ExperimentTask and its child classes MinimizationTask (minimizations), MDTask (MD runs), and QMTask (QM calculations). For each row, a set of get methods (e.g. getSoftware) links to the associated objects for more details (e.g. software name and version).
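To make the model-to-database mapping more concrete, the sketch below shows how two classes from Figure 1 might be expressed as JPA/Hibernate entities. The table and column names are illustrative assumptions, not the published UML model or the code generated by the caCORE SDK.

```java
import javax.persistence.*;

// A minimal sketch of an object-relational mapping for two classes of
// the logical model; names and columns are illustrative only.
@Entity
@Table(name = "EXPERIMENT_TASK")
public class ExperimentTask {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // High-level method name shared by all task types (e.g. "MD").
    @Column(name = "METHOD_NAME")
    private String methodName;

    // Each task references the software package used to run it.
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "SOFTWARE_ID")
    private Software software;

    public Software getSoftware() { return software; }
}

@Entity
@Table(name = "SOFTWARE")
class Software {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "NAME")
    private String name;      // e.g. "AMBER"

    @Column(name = "VERSION")
    private String version;   // e.g. "12"
}
```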
Example 3: representing experiments using XML
While a database provides a single endpoint to query data, other types of data descriptors become necessary when moving data between file systems, or simply to provide a lightweight description of the data. XML has been widely adopted by the scientific community to represent structured data because of its flexibility and support by web technologies. In the field of computational chemistry, CML-CompChem [10] aims to provide a detailed representation of computations but currently lacks support in the molecular dynamics community. BioSimML (Biomolecular Simulation Markup Language [25]) was developed specifically for biomolecular modelling and supports QM/MM simulation representations, but its current status is uncertain. The Unified Molecular Modeling (UMM) XML schema [26] is currently being developed by ScalaLife (Scalable Software for Life Sciences) and will attempt to provide a detailed description of MD runs, so that these files can be used as a standard input to run within various MD engines. So far these XML-based formats have focused on giving a low-level representation of the simulation runs so that data can be converted between legacy formats. In this example we generate an XML-based representation of the experiment as a whole (multiple tasks), with a limited granularity for the description of each task. For this purpose we developed a Java API based on our logical model to generate XML representations of experiments (Figure 2). Format-specific file parsers developed for the iBIOMES project [16] read in the input and output files associated with an experiment to create an internal representation of the experiment and associated computational tasks. In the Java code, classes are annotated with Java Architecture for XML Binding (JAXB) annotations to map the logical model to an XML schema. The JAXB API can then be used to automatically output XML documents based on the internal Java representation of the experiment, or to read in an XML file to build the Java objects. The same process could be implemented in various languages, using CodeSynthesis XSD in C++ or PyXB in Python for example.
Figure 2. Generating an XML representation of experiments using a Java API. The Java API is used to parse the input files and create an internal representation of the virtual experiment as a set of computational tasks. JAXB is then used to generate an XML representation of this internal model, while XSLT is used to perform a last transformation into a user-friendly HTML page.
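The following minimal sketch illustrates the JAXB mapping pattern just described, assuming the JAXB API is available on the classpath; the Experiment class and element names shown here are simplified stand-ins for the actual iBIOMES model classes.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.*;
import java.util.List;

// A minimal sketch of JAXB-based XML generation; class and element
// names are illustrative, not the published API.
@XmlRootElement(name = "experiment")
@XmlAccessorType(XmlAccessType.FIELD)
public class Experiment {

    @XmlAttribute
    private String name = "RNA MD study";

    @XmlElement(name = "task")
    private List<String> tasks = List.of("minimization", "production MD");

    public static void main(String[] args) throws Exception {
        // JAXB derives the XML document from the annotated Java object.
        Marshaller m = JAXBContext.newInstance(Experiment.class)
                                  .createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        m.marshal(new Experiment(), System.out);
    }
}
```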
The XML output does not aim to be sufficient to recreate input or output files in legacy formats, but it provides enough information for users to rapidly understand the computational methods and structures represented by the associated raw data. This type of XML document can be used to give a detailed summary of experiments when exchanging data, for example by compressing it together with the raw data. These documents can be transformed through XSLT (eXtensible Stylesheet Language Transformations) to be rendered as HTML pages and to build repository web interfaces. A sample XML output along with an HTML-based tree view generated through XSLT are presented in Figure 3. For this example a set of AMBER-specific [27] file parsers was used to parse a directory containing all the input and output files associated with an MD study of RNA. Common data elements related to the molecular system topology were extracted from the AMBER parameter/topology file, while task (minimization and MD runs), parameter set (e.g. implicit solvent, number of iterations), and computational platform information were extracted from the AMBER MD output files.
Figure 3. XML and HTML-based representations of an experiment. Auto-generated XML sample (left) and corresponding HTML tree view (right) representing the different tasks run for an MD study of RNA using the AMBER software package.
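The XSLT step itself needs no custom machinery; a minimal sketch using the standard javax.xml.transform API is shown below, where the stylesheet and file names are hypothetical.

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;

// A minimal sketch of the XML-to-HTML transformation step, assuming an
// experiment.xml produced by the JAXB layer and a hypothetical
// tree-view stylesheet.
public class XsltDemo {
    public static void main(String[] args) throws TransformerException {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("experiment-treeview.xsl"));
        // Render the experiment summary as a user-friendly HTML page.
        t.transform(new StreamSource("experiment.xml"),
                    new StreamResult("experiment.html"));
    }
}
```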
These three prototypes serve as examples demonstrating the need for a catalogue of CDEs and for the representation of relationships between concepts through a data model. The catalogue of CDEs, associated with a data dictionary, provides the basis for a controlled vocabulary that can be used to annotate experiment data (e.g. files and directories) and build queries. The data model provides extra information as it links concepts together and allows more complex and structured queries, through a relational database for example. The second example showed how modern software engineering tools can use data models to generate database schemas and APIs for repository developments. Finally, the last example showed that XML representations can be easily generated if the API follows a model-based approach.
In this paper we introduce a list of CDEs built upon community feedback, and a logical model that ties dictionaries and common data elements together. Common data elements for simulation data indexing and presentation were identified through a survey, while recommendations are made for trajectory and analysis data description. The common data elements were organized through a logical data model, which was refined to include dictionaries and minimize data redundancy. Finally the design and implementation for a subset of these dictionaries is introduced.
Identification of core data elements
A survey was distributed to the community to assess the list of data elements that was defined in iBIOMES [16]. This initial list of common data elements was based on the BioSimGrid [7] data model and supplemented with new elements to reflect the needs of our lab and various collaborators at the University of Utah, and to add descriptions of quantum chemistry calculations. The main goal of the survey was to identify which elements were missing and which ones were less important according to the community. A list of 47 data elements describing simulation runs and the associated files was presented to experts. These data elements were grouped into seven categories for organizational purposes: authorship (user information and referenced citations related to a particular run), platform (hardware/software), molecular system (molecules being studied, independently from the model chosen), molecules (information about the molecules composing the system), methods (applicable to any method, including QM and MD), molecular dynamics, and quantum mechanics. The experts were asked to score the data elements based on how important they were for describing their own data and/or for indexing community data and building search queries. Scoring was based on a Likert scale (1 = “Not important at all”, 2 = “Not very important”, 3 = “Not sure”, 4 = “Important”, 5 = “Very important”, and “N/A” for non-applicable). In each group, the experts were also allowed to propose missing data elements and/or comment on the listed elements.
The survey was made available online (see extract in Additional file 1) in March 2012 for about a month and promoted through the Computational Chemistry List (CCL) and the AMBER developers’ mailing list. The CCL is a fairly well-known forum for general discussions related to computational chemistry, perhaps with an emphasis on QM-related methods. The AMBER developers group represents a variety of theoretical disciplines (MD, QM, QM/MM), with developments targeting various types of systems (e.g. proteins, nucleic acids, lipids, carbohydrates, small compounds) and discussions on how to best use the software, methods, and force fields. Individual emails were also sent to different research groups at the University of Utah that specialize in computational chemistry.
Additional file 1. Online survey extract. This picture shows the section of the online survey assessing the computational platform-related data elements.
Trajectory and analysis data
The survey did not include any analysis- or file-related data elements. The Dublin Core metadata standard can be used as a good reference to describe files at a high level (e.g. author, format). Analysis data, on the other hand, is very complex to describe because of its direct relation to the raw data from which it derives (e.g. use of multiple input files representing experimental and computed data) and the existence of numerous analysis methods that can be problem-specific (e.g. protein vs. RNA, QM vs. MD). In most cases it will not make sense to use analysis data to index an experiment either. For example, looking for MD trajectories with a particular RMSD (root mean square deviation) value would be irrelevant without providing more context about the system and the method used to calculate the value. Although analysis data is a key factor in assessing the quality of a simulation, its use for data indexing and retrieval is not trivial and it was therefore not included in the survey. A generic framework for the description of trajectory and derived data is nevertheless provided in the Results section.
Logical model
The logical model presented here was derived from a conceptual model that organized all the identified common data elements into a defined domain. The conceptual model was reduced into a logical model under the assumptions that the raw input and output files are made available (in a repository similar to iBIOMES or MoDEL) and that the model would be used to index the data rather than to provide a complete view of the results (e.g. calculation output, structures defined in each MD trajectory frame). Although analysis data and quality criteria are crucial to provide an objective perspective on experiment results, no associated concept was included in the current model. The granularity of the model was limited to the level of detail sufficient to make it computable. For example, the description of the theory behind modelling methods is not part of the model. Since the end goal is to share the results of the simulations or calculations with the community, we limited our model to the popular methods used for the study of biomolecules or smaller ligands.
Use of dictionaries
One of the main features of this logical model is the integration of dictionaries to avoid data redundancy. For example a dictionary containing definitions of force fields (e.g. name, type, citations) can be referenced by molecular dynamics tasks, instead of creating individual force field definition entries every time the force field is used. The integration of dictionaries into the model should not enforce mappings to standard definitions but rather enable links between specific values and standard definitions only if they exist. If no mapping exists the user should still be able to publish the data. This is achieved through the storage of “specific names” outside the dictionaries with an optional reference to the term definition, where the standard version of the name (not necessarily different) is defined. For example if the basis set “LANL2DZ” is used in a QM calculation, but no corresponding entry exists in the basis set dictionary, the name of the basis set will still be stored in the database when publishing the data to allow queries on the calculation.
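The sketch below illustrates this “specific name with optional dictionary reference” pattern, using the LANL2DZ case from the text; the dictionary entries are illustrative, and a real repository would back the lookup with a database table rather than an in-memory map.

```java
import java.util.*;

// A minimal sketch of dictionary-backed annotation: the specific name
// is always stored, a dictionary reference only if a match exists.
public class BasisSetLookupDemo {

    record BasisSetDefinition(String standardName, String type) {}

    public static void main(String[] args) {
        // Prepopulated dictionary of standard basis set definitions.
        Map<String, BasisSetDefinition> dictionary = Map.of(
                "6-31G*", new BasisSetDefinition("6-31G*", "atomic"),
                "cc-pVDZ", new BasisSetDefinition("cc-pVDZ", "atomic"));

        // Specific name parsed from the QM input file.
        String specificName = "LANL2DZ";
        Optional<BasisSetDefinition> ref =
                Optional.ofNullable(dictionary.get(specificName));

        // The calculation remains queryable even without a dictionary hit.
        System.out.println("stored name: " + specificName
                + ", dictionary reference: "
                + ref.map(BasisSetDefinition::standardName).orElse("none"));
    }
}
```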
Certain attributes need to be associated with a unit to be understood by a human or a computer. Different software packages might use different units to represent the same attribute. For example, distances in AMBER [27] are measured in Ångströms while GROMACS [28] uses nanometres. When publishing data to a repository, one should either convert the values to units previously agreed upon or make sure that the units are published along with the values. In both cases, mechanisms should be in place to provide a description of the units when pulling data from the repository. For the description of this model we assume that the units are already set in the repository, and they are therefore not included in the description of the model.
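As a small worked illustration of the first option (converting at publication time, here 1 nm = 10 Å), the snippet below normalizes a hypothetical GROMACS cut-off distance to a repository expressed in Ångströms:

```java
// A minimal sketch of unit normalization at publication time, using the
// AMBER/GROMACS distance example from the text; values are hypothetical.
public class UnitDemo {

    static double nmToAngstrom(double nm) { return nm * 10.0; }

    public static void main(String[] args) {
        double gromacsCutoffNm = 1.0;  // GROMACS value, in nanometres
        double repositoryValue = nmToAngstrom(gromacsCutoffNm);
        System.out.println(repositoryValue + " Angstrom"); // 10.0, repository unit
    }
}
```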
While most of the data described in a logical model for biomolecular simulations can be directly parsed from the input and output files, dictionaries containing standard definitions and values for certain data elements need to be prepopulated. In this paper we present the design and implementation of several dictionaries that can be used to facilitate data publication and queries. For example, if a user is interested in QM calculations based on Configuration Interaction (CI) theory, a dictionary of all CI methods will be needed to return all the calculations of interest (e.g. CISD, CISD(T)). Another interesting use of these dictionaries is within the code of the file parsers. Instead of defining standard values within the code, one can use these dictionaries to look up information on the fly, and possibly use it to publish the data into the target repository.
An initial set of dictionaries was populated using the BioSimGrid [7] database dictionaries. They were then refined internally and supplemented with new dictionaries, especially to include QM-related definitions (e.g. basis sets, QM methods).
Identification of core data elements
At the closing of the survey we had collected 39 responses (20 through CCL, 10 through the AMBER developers list, and 9 through emails). The results of the survey are presented in Additional file 2. The respondents listed a few data elements they felt were missing from the proposed list or that needed to be refined (see comments in Additional file 3). For instance, in the authorship category, a data element representing research grants was missing. For the representation of the molecular system, data elements representing important functional groups of the solute molecules should be added, along with, optionally, the apparent pH of the solvent. Adjustments should also be made to distinguish the different species in the system and flag them as part of the solvent or the solute. For the computing environment information, a respondent showed interest in knowing whether the software package is compiled in single, double, or mixed precision, what the memory requirements are for a run, and even what parallelization scheme is used. All these elements are very technical and might interest only a very limited number of users, even in the developers’ community. The notion of hardware architecture was not clearly defined in the survey since it should have included the use of GPUs (see comment in Additional file 3). A better representation of the hardware architecture can be achieved through three different data elements: the CPU architecture (e.g. x86, PowerPC), the GPU or accelerator architecture (e.g. Nvidia GeForce GTX 780, AMD Radeon HD 7970, Intel PHI), and possibly a machine or supercomputer architecture identification (e.g. Cray XK7, IBM Blue Gene/Q, commodity InfiniBand cluster) and name. For the computational methods, data elements were missing for the representation of both MD- and QM-specific parameters. In QM, the following elements were missing: exchange-correlation functionals (for DFT), pseudopotentials and plane wave cut-offs, and whether frozen core calculations are performed or not. Some comments pointed out the fact that the notion of convergence can be very subjective, especially when dealing with MD trajectories where multiple minima (conformations) can be found over time (see comments in Additional file 3). The convergence flag and criteria were assigned as QM-specific data elements to reflect this. For MD, the context of the run (i.e. whether it is a minimization, an equilibration, or a production run) was missing. Representations of restraints and advanced sampling methods (e.g. replica-exchange, umbrella sampling) were also missing. More detailed properties were listed by the respondents. These included the order of expansion for LINCS-based constraints and the order of interpolation for Particle-Mesh Ewald. At this point it is not clear whether such parameters need to be tracked, since users would hardly use them to create queries and we assume that they can be read directly from the raw input files if necessary.
Additional file 2. Results of the survey. This table presents the results of the survey, based on the following Likert scale: 1 = “Not important at all”, 2 = “Not very important”, 3 = “Not sure”, 4 = “Important”, 5 = “Very important”, N/A = “Not applicable”. N is the number of responses for a particular data element. The reported score is the average of the points assigned by respondents using the Likert scale.
Additional file 3. Summary of survey comments for each data element category. This table summarizes the comments of the respondents for each category of data elements. The last column lists only the comments that were either proposing new data elements or changes to the original ones, and that were related to the data element category. The number of respondents N is the number of people who provided at least one comment for the associated category.
Based on the results of the survey and the various comments of the community, we propose a set of common data elements for biomolecular simulation data indexing, listed in Additional file 4. The table reorganizes the granularity of the identified elements by making a distinction between data elements (concepts) and attributes (properties). For example, the barostat data element has at least one property: an implementation name (e.g. “Andersen”, “Berendsen”). Depending on the type of barostat, other properties could include a time constant and a chain length (e.g. Nosé-Hoover barostat). We also included “derived” properties that could be inferred from other properties if the right terminology or dictionary is available. For example, the name of a QM method (e.g. MP2, B3LYP) should be enough to infer the level of theory (e.g. Møller-Plesset, DFT), and the name of the force field (e.g. AMBER FF99SB) should be sufficient to infer its type (e.g. classical). This distinction is important as it can help developers choose which properties should actually be stored (e.g. in a database or an XML file) and which ones could be inferred. The set also contains recommended and optional data elements/attributes. An attribute is marked as recommended if its average score (i.e. the sum of Likert scale scores divided by the number of responses for that element) is greater than 4.0 (“Important”); otherwise it is marked as optional. Attributes proposed by the respondents were categorized through an internal review performed by our lab, composed of researchers running molecular dynamics simulations and quantum chemistry calculations on a daily basis. A data element is considered recommended if it has at least one recommended attribute. The current list contains 32 data elements and 72 attributes (including 30 recommended attributes).
Additional file 4. Final set of common data elements. This file contains several tables (one for each data element category) presenting the identified common data elements. Each data element can be described through multiple attributes. Recommended attributes are marked with an “R” and attributes that can be derived from other attributes are marked with a “D”. Attributes that should be associated to a unit are marked with a “U”.
We recognize that the process by which the data elements were defined and characterized is not perfect. Although the number of respondents was fair (between 37 and 39 depending on the data element), certain data elements had to be added or redefined based on an internal review by some of our lab members, which might have created some bias towards the needs of our lab rather than a general consensus in the community. Despite these limitations, the list of data elements proposed here may be considered the first attempt to summarize the needs of the computational chemistry community to enable biomolecular simulation data indexing and queries. This list should be a good starting point to create a list of standard metadata to tag files using simple attribute-value pairs or attribute-value-unit triplets, as is the case for iBIOMES via the iRODS metadata catalogue [17]. Although this list is fairly exhaustive, it is not complete, and we hope that publishing it will allow the community to provide more feedback and build on it; the data model is intended to be extensible. The list is available on the iBIOMES Wiki; field experts who want to contribute to the list can request an account on the wiki.
Trajectory files
In most MD software packages the computed trajectories of atomic coordinates are stored in large files (~MB-TB), each containing one or multiple time frames (e.g. PDB, AMBER NetCDF, DCD). This is the raw data that repositories would actually store and index for retrieval. So far we have focused on the description of the computational tasks that were used to generate this data, i.e. the provenance metadata. This metadata can be used to find a given experiment and all associated trajectory files. On the other hand, new attributes need to be assigned at the trajectory file level to describe their content and ultimately enable automatic data extraction and processing by external tools (e.g. VMD [29], CPPTRAJ [30], MDAnalysis [31]). Such attributes include the number of time frames, the time between frames, the number of atoms in the system and/or a reference to the associated topology file, the presence or absence of box coordinates, velocity information, and so on. It is important to note that the use of self-descriptive formats such as NetCDF would allow trajectory files to carry not only the description of the dataset but also the provenance metadata, for example using the CDEs previously defined. Perhaps one of the most important attributes for giving context within a full experiment is the index of a trajectory file within the set of all trajectory files representing a given task or series of tasks. Although self-descriptive formats could easily keep track of this information, it is non-trivial to generate such an index as tasks can be run independently, outside of a managed workflow such as MDWeb [32], which would be able to assign these indexes at file creation time. The order of trajectory files is therefore commonly inferred from their names (e.g. “1.traj, 2.traj, 3.traj”). This approach usually works well, although some errors might occur when trying to automate this ordering process. For example, “10.traj” would be ranked before “2.traj” if a straight string comparison is performed (vs. “02.traj”). Strict naming conventions for trajectory data (raw, averaged, and filtered on space or time) should help circumvent these problems.
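The sketch below contrasts plain string ordering with a numeric-aware comparator for such file names; the file names are the hypothetical ones from the example above.

```java
import java.util.*;

// A minimal sketch of numeric-aware ordering for trajectory file names,
// avoiding the "10.traj before 2.traj" problem described above.
public class TrajectoryOrderDemo {
    public static void main(String[] args) {
        List<String> files =
                new ArrayList<>(List.of("10.traj", "2.traj", "1.traj"));

        // Plain string comparison: [1.traj, 10.traj, 2.traj] (wrong order)
        files.sort(Comparator.naturalOrder());
        System.out.println(files);

        // Compare on the leading integer index: [1.traj, 2.traj, 10.traj]
        files.sort(Comparator.comparingInt(
                f -> Integer.parseInt(f.replaceAll("\\D", ""))));
        System.out.println(files);
    }
}
```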
Analysis data
Although some analysis tasks are common to most biomolecular systems for a particular method (e.g. RMSD calculation of each frame in the trajectory against a reference structure), the number of analysis calculations one can perform is virtually infinite. There is currently no standard to describe the output of the analysis. Some formats might enable the description of the values (e.g. a simple CSV or tab-delimited file with labelled columns and/or rows), but more structured files are required to describe the actual analysis process that generated the set of values contained in the file. Formats such as NetCDF are adapted to store this kind of description but are not commonly used to store biomolecular simulation analysis data. Instead, comma- or tab-delimited file formats are usually preferred for their simplicity, readability, and support by popular plotting tools (e.g. MS Excel, OpenOffice, XmGrace). Assuming that the dataset is physically stored in such a file or in a relational database, a minimal set of attributes should be defined to facilitate reproduction of the analysis, as well as to enable reading and loading into visualization tools with minimal user input. We believe that the strategy used in the NetCDF framework to break down data into variables with associated dimensions is a simple and logical one, and so we follow a similar strategy here (see the sketch after the following list):
– Data dimensions: Defines dimension sizes for defined data sets (i.e. variables). Any number of dimensions (including zero if data is scalar) can be defined.
– Data variables: The actual data. Report type (e.g. integer, float), labels, and units for all the values contained in a given set. One or more dimensions can be associated with a given variable based on its overall dimensionality. Zero dimensions correspond to a single value (e.g. average RMSD value), one dimension is an array (e.g. RMSD time series), two dimensions are a matrix (e.g. coordinate covariance), etc.
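A minimal sketch of such a dimension/variable descriptor follows, with illustrative RMSD examples; the field names are assumptions, not a published schema.

```java
import java.util.List;

// A minimal sketch of a NetCDF-style descriptor for analysis output;
// names and values are illustrative only.
public class AnalysisDatasetDemo {

    record Dimension(String name, int size) {}
    record Variable(String name, String type, String unit,
                    List<Dimension> dims) {}

    public static void main(String[] args) {
        // One-dimensional RMSD time series: 5000 frames, values in Angstrom.
        Dimension frame = new Dimension("frame", 5000);
        Variable rmsd =
                new Variable("RMSD", "float", "Angstrom", List.of(frame));

        // A zero-dimensional (scalar) value, e.g. the average RMSD.
        Variable avg =
                new Variable("RMSD_avg", "float", "Angstrom", List.of());

        System.out.println(rmsd + "\n" + avg);
    }
}
```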
Another set of attributes needs to be defined to represent the provenance metadata, i.e. how the analysis data was derived from the raw trajectories. Although different analysis tasks will require different input data types and parameters, a list of common attributes can be defined to provide a high-level description of the analysis task:
– Name (e.g. “RMSD”) and description (“Root mean square deviation calculation”) of analysis method (see entries defined in our MD analysis method dictionary)
– Path to the input file describing the task (if applicable).
– Name and version of the program used, along with the actual command executed.
– Execution timestamp
– Reference system, if any (self, experimental, or other simulated structure)
While these attributes might not be sufficient to automatically replicate the results they should provide enough information for users other than the publisher to understand how the analysis data was generated and how the analysis task can be replicated.
A further set of attributes can be defined to provide additional details on the scope of the analysis and describe in detail the data from which the current data has been derived:
– File dependencies
– Filter on time
– Filter on space (e.g. heavy atoms only, specific residue)
These would facilitate maximum reproducibility as well as enable detailed searches on very specific types of analysis. The ‘File dependencies’ attribute may include information like the trajectory used in a given calculation, which could also be used to check if the current analysis is up-to-date (e.g. if the trajectory file is newer than the analysis data, the analysis can be flagged as needing to be updated). The ‘Filter on time’ attribute might describe a specific time window or subset of frames used in the analysis. Since these attributes are perhaps not as straightforward for analysis programs to report as the other attributes, they could be considered optional and/or set by the user after the data is published. The ‘Filter on space’ attribute could be particularly useful, since it would allow one for example to search for all analyses of a particular system done using only protein backbone atoms or only heavy atoms, etc. However, this would require translation of each individual analysis program’s atom selection syntax to some common representation, which is no small task and would increase the size of the metadata dramatically for certain atom selections. In many cases it is likely that the atoms used in the analysis could be inferred from the command used, so this attribute could also be considered optional. Two examples of how these attributes might be applied to common analysis data are given in Additional file 5.
Additional file 5. Analysis dataset description examples. This document presents two examples of how the proposed data elements might be applied to common analysis data.
Logical model
In this model the central concept is the virtual experiment, a set of dependent computational tasks represented by several input and output files. The goal of this model is to help create a common description of these virtual experiments (stored in a database or distributed file system, for example) for indexing and retrieval. The overall organization of virtual experiments is illustrated in Figure 4. For the rest of this paper, virtual experiments will simply be denoted as experiments. The organization of an experiment as a list of processes and tasks was inspired by the CML-CompChem [10] schema. In CML-CompChem the job concept represents a computer simulation task and can be included in a series of consecutive sub-tasks designated as a job list. The concepts of experiment, process group, process, and task are introduced here to handle the representation of tasks that might be run in parallel or sequentially, and that might target the same or different systems. An experiment process group is defined as a set of computational processes targeting the same molecular system, where a process is defined as a set of similar tasks (e.g. minimization tasks, MD tasks, QM tasks). In MD, the minimization-heating-production steps can be considered a single process group with three different process instances. If multiple copies of the system are simulated, each copy will be considered a separate process group. In QM, a process would represent a set of sequential calculations on a compound. If various parts of the overall system are studied separately (e.g. ligand vs. receptor), each subsystem should be assigned to a different process group.
Figure 4. Illustration of the data model used to represent virtual experiments. Each experiment is a set of tasks, grouped into processes (e.g. minimization, equilibration, production MD) and process groups applied to the same molecular system (e.g. B-DNA oligomer).
Within the scope of an experiment, multiple tasks and groups of tasks will be created sequentially or in parallel, and based on intermediate results. To keep track of this workflow, dependence relationships (dependencies) can be created between tasks, between processes, and between process groups.
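A minimal sketch of this experiment/process-group/process/task hierarchy is given below; the class names follow the logical model, while the fields and example values are illustrative.

```java
import java.util.List;

// A minimal sketch of the experiment hierarchy described above;
// fields and example values are illustrative only.
public class ExperimentModelDemo {

    record Task(String type) {}                        // e.g. "MDTask"
    record Process(String name, List<Task> tasks) {}   // set of similar tasks
    record ProcessGroup(String system, List<Process> processes) {}
    record Experiment(String name, List<ProcessGroup> groups) {}

    public static void main(String[] args) {
        // Minimization-heating-production on one copy of a B-DNA system:
        // one process group with three process instances.
        ProcessGroup copy = new ProcessGroup("B-DNA oligomer", List.of(
                new Process("minimization",
                        List.of(new Task("MinimizationTask"))),
                new Process("heating", List.of(new Task("MDTask"))),
                new Process("production", List.of(new Task("MDTask")))));

        Experiment expt =
                new Experiment("B-DNA MD study", List.of(copy));
        System.out.println(expt);
    }
}
```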
In the following sections we present the overall organization of the model through an object-oriented approach where the concepts (e.g. experiments, tasks, parameter sets, and molecular systems) are represented by classes with attributes. The description is supported by several class diagrams using the UML notation. For example, inheritance is indicated by a solid arrow with an unfilled head going from the child to the parent class. Along with standard UML notations, we defined the following colour scheme to guide the reader:
– Blue: classes giving a high-level description of the experiments and tasks
– Yellow/orange: method/parameter description
– Green: classes describing the molecular system independently from the computational methods
– Pink: classes related to authorship and publication (e.g. citations, grants)
– Grey: description of the hardware or software used to run the tasks
Finally, classes representing candidates for dictionary entries are marked with wider borders.
Experiments, processes, and tasks
Figure 5 presents the concepts that can be used to describe the context of an experiment. Each experiment can be given a role, i.e. the general rationale behind the experiment. Examples of experiment roles include simulation (dynamics), geometry optimization, and docking. These roles should not be associated with any computational method in particular. Each experiment can be linked to a particular author (including institution and contact information) to allow collaborations between researchers with common interests. Publications related to a particular experiment (citations) or that use the results of the experiment can be referenced. Grant information is important as well, since it allows researchers to keep track of what their funding actually supports.
Figure 5. Concepts used to describe the context of the experiments.
Experiment sets (Figure 6) are collections of independent experiments that are logically associated together, because of a similar context (e.g. study of the same system using different methods), for presentation purposes, or to ease retrieval by users (e.g. all the experiments created by a certain working group). An experiment can be assigned to multiple experiment sets.
An experiment task corresponds to a unique computational task defined in an input file. Figure 6 presents the main concepts associated with experiment tasks. These include the definition of the actual calculation (e.g. frequency calculation and/or geometry optimization in QM, whether the dynamics of the system are simulated), the description of the simulated conditions (reference pressure and temperature), and the definition of the method (e.g. QM, MD, minimization) and input parameters (e.g. basis set, force field). More details about the different types of tasks and simulation parameters are given in the computational methods section. Each task is executed within a computing environment, i.e. the set of hardware and software components used to run the simulation software package. These components include the operating system, the processor architecture, and the machine/domain name. Information about the task execution within the computing environment, including execution time, start and end timestamps, and termination status, can be tracked as well. The software information includes a name (e.g. “AMBER”) and version (“12”). In certain cases a more specific name for the executable is available. This can provide extra information about the compilation step and/or the features available. In Gaussian [14], for example, this information can be found in the output files: “Gaussian 09” would give a generic version of the software package while “EM64L-G09RevC.01” would give the actual revision number (“C.01”) and the target architecture of the executable (e.g. Intel EM64). For AMBER, the executable name would be either “SANDER” (Simulated Annealing with NMR-Derived Energy Restraints) or “PMEMD” (Particle-Mesh Ewald Molecular Dynamics), which are two alternatives to run MD tasks within the software package.
Figure 6. Description of experiments, processes, and tasks.
Computational methods
The most common methods for biomolecules include QM, MD, and hybrid QM/MM. In this model we focus on these methods, but we allow the addition of other methods by associating each task with one or multiple parameter sets that can be combined to create new hybrid approaches. This decomposition was applied to MD, minimizations (e.g. steepest descent, conjugate gradient), QM, and QM/MM methods, as illustrated in Figure 7.
Figure 7. Organization of computational methods into tasks and parameter sets.
Method-specific tasks and parameter sets
Common attributes of any computational method are represented at the ExperimentTask level. These include a name (e.g. “Molecular dynamics”), a description (e.g. “new unknown method”), the type of boundary conditions (periodic or not), and the type of solvent (in vacuo, implicit, or explicit). Method-specific tasks (MinimizationTask, MDTask, QMTask, QMMMTask) are created to capture the parameters that would not be shared between all methods. Simulation parameters include any parameter related to the method or task that would be set before a simulation is run. These parameters are aggregated into sets that can be reused between methods. For example, the MD-specific task (MDTask) references MDParameterSet, which includes the definitions of the barostat, thermostat, and force fields. The QM/MM-specific task (QMMMTask) references the same parameter set, since these definitions are necessary to describe the computational method used to treat the MM region. It also references a QM-specific parameter set to describe the QM method and a QM/MM-specific parameter set to describe the treatment of the QM/MM boundary. A new task type could be created for multi-level quantum calculations. In this case the task would reference multiple QM parameter sets and a new type of parameter set that would define at least the algorithm or implementation used to integrate the different levels (e.g. ONIOM [33]).
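The sketch below illustrates this composition pattern for a QM/MM task: hybrid methods reuse existing parameter sets rather than redefining them. The class names follow the model, while the fields and values are illustrative.

```java
// A minimal sketch of the task/parameter-set composition pattern;
// field names and values are illustrative only.
public class MethodCompositionDemo {

    static class MDParameterSet { String forceField; }
    static class QMParameterSet { String method; String basisSet; }
    static class QMMMParameterSet { String boundaryTreatment; }

    // A QM/MM task references the MD set (MM region), a QM set
    // (QM region), and a QM/MM set (boundary treatment).
    static class QMMMTask {
        MDParameterSet mm = new MDParameterSet();
        QMParameterSet qm = new QMParameterSet();
        QMMMParameterSet boundary = new QMMMParameterSet();
    }

    public static void main(String[] args) {
        QMMMTask task = new QMMMTask();
        task.mm.forceField = "AMBER FF99SB";
        task.qm.method = "B3LYP";
        task.qm.basisSet = "6-31G*";
        task.boundary.boundaryTreatment = "link atoms";
        System.out.println("QM/MM task assembled from reusable parameter sets");
    }
}
```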
In molecular dynamics, the behaviour of the simulated system is governed by a force field: a parameterized mathematical function describing the potential energy of the system, together with the parameters of that function. Dynamics are propagated using Newton’s equations of motion, with the atomic forces determined from the first derivatives of the potential energy function. Different parameters will be used for different types of atoms (or groups of atoms in the case of coarse-grain dynamics). A given force field parameter set is usually adapted to particular types of residues in molecules (e.g. nucleobases in nucleic acids vs. amino acids in proteins). For a single molecular dynamics task, multiple force fields and parameter sets can be used simultaneously. When simulating an explicit water-based solvent, for example, the specific force field parameter set used to represent the water molecules (e.g. TIP3P, TIP4P, SPC/E [34]) will typically be different from the set used to parameterize the atoms of the solute or the ions. The ForceField class presented in Figure 8 represents instances of force fields referenced by a particular run, while ForceFieldDefinition represents an entry from the dictionary listing known force fields. Force field types include classical, polarizable, and reactive force fields.
Figure 8. Description of MD tasks and parameter sets.
Molecular dynamics methods can be classified into more specific classes of methods. For example, in stochastic dynamics (Brownian or Langevin dynamics), extra parameters can be added to represent friction and noise [35]. In coarse-grain dynamics the force field is applied to groups of atoms rather than individual atoms. The differentiation between atomistic and coarse-grain dynamics is then achieved solely based on the type of force field used. In this model, Langevin dynamics and coarse-grain dynamics are not represented by different types of tasks as they share the same parameter set as classic molecular dynamics. The collision frequency attribute used specifically by stochastic dynamics was added to the MD parameter set, while a flag specifying whether the force field is atomistic or coarse-grain is set in the force field dictionary.
Each parameter set can be associated with a barostat and a thermostat to define how pressure and temperature are constrained in the simulated system (Figure 8). The ensemble type (microcanonical, canonical, isothermal–isobaric, or generalized) can be defined directly in the parameter set. The model also includes the concepts of constraints and restraints. Both have a target (i.e. the list of atoms they apply to), which can be described by an atom mask or a textual description (e.g. ‘:WAT’, ‘water’). The type of constraint is defined by the algorithm used (e.g. SHAKE, LINCS) while the type of restraint is characterized by the property being restrained (e.g. bond, angle).
Enhanced sampling methods are gaining interest in the MD community as larger systems and longer time scales can be simulated faster than with classic approaches [36]. These methods usually involve the creation of multiple ensembles or replicas that can be run in parallel (e.g. temperature replica-exchange, umbrella sampling). A dictionary of such methods was created to list popular enhanced sampling methods. At their core, runs based on these methods can still be represented with multiple molecular dynamics tasks. Depending on the method, the implementation, and the definition of the input files, the set of MD tasks corresponding to a given enhanced sampling run can be grouped into processes where each process represents either a separate ensemble/replica or a group of tasks run in parallel. For a replica-exchange MD (REMD) run using 4 replicas, one could either group the 4 MD tasks into a single process representing the whole REMD run or into 4 separate processes with a single task each.
In quantum chemistry, the two main elements that define the theory and approximations made for a particular run are the level of theory (or QM method) and the basis set (Figure 9). Basis sets provide sets of wave functions to create molecular orbitals and can be categorized into plane wave basis sets and atomic basis sets. They are defined in a dictionary (BasisSetDefinition). Different levels of theory are available to find a discrete set of approximate solutions to the Schrödinger equation within the selected basis set. Popular methods include Hartree-Fock and post-Hartree-Fock methods (e.g. Configuration Interaction, Møller-Plesset, Coupled-Cluster), multi-reference methods, Density Functional Theory (DFT), and Quantum Monte Carlo [37]. The classification of QM methods is not trivial because of the range of features dependent on the level of theory. For example, DFT method names typically correspond to the name of the exchange-correlation functional, while semi-empirical method names provide a reference to the empirical approximations of the method. For this model we defined the concepts of QM method, class, and family. At the highest level, the family defines the method as “ab initio”, “semi-empirical”, or “empirical”. The class defines the level of theory for ab initio methods (e.g. Hartree-Fock, Møller-Plesset, Configuration Interaction, DFT, multi-reference), or the type of semi-empirical method (pi-electron restricted or all valence electron restricted). Note that one method can be part of multiple classes (e.g. multi-reference configuration interaction, hybrid methods). At the lowest level, the method name (e.g. MP2, B3LYP, AM1) corresponds to a specific method, as it would be called by a particular software package. Approximations of pure ab initio quantum methods can be used to reduce the computational cost of the simulations. Typical approximations include the use of frozen cores to exclude inner shells from the correlation calculations and pseudopotentials (effective core potentials) to remove the need for basis functions for the core electrons. The use of such approximations is noted at the QM parameter set level.
Figure 9. Description of QM tasks and parameters.
Molecular dynamics methods can be “improved” by injecting quantum characteristics into the models (semi-classical methods). In ab initio molecular dynamics, the forces for the system are calculated using full electronic structure calculations, avoiding the need to develop parameters a priori. In hybrid QM/MM, the simulation domain is divided into an MM space where the MD force field applies, and a QM space where molecular orbitals are described. Different methods exist to treat the boundaries between the two spaces. The decomposition of runs into tasks and parameter sets makes the integration of such methods possible and fairly straightforward. For example, one could create a new type of task for ab initio molecular dynamics that would have at least two parameter sets: the QM parameter set defined earlier and a new parameter set specific to ab initio molecular dynamics that would define the time steps (number, length) and the type of method (e.g. Car-Parrinello MD, Born-Oppenheimer MD).
Molecular system
In this model a distinction is made between biomolecules (e.g. RNA, protein) and “small molecules” (Figure 10). Here we define a small molecule as a chemical or small organic compound that could potentially be used as a ligand. Small molecules are defined at the level of a single molecule, while biomolecules are described by chains of residues. Typically, QM calculations will target small molecules while MD simulations will target larger biomolecules and ligand-receptor complexes. Properties such as molecular weight and formula are worth tracking for small compounds, but their importance is less obvious when dealing with larger molecules.
Figure 10. Decomposition of the molecular system into molecules with structural and biological features.
Three dictionaries are necessary to provide definitions for standard residues, atomic elements (as defined in the periodic table), and element families (e.g. “Alkaline”, “Metals”). Note that here we minimize the amount of structural data by keeping track of occurrences of residues (ResidueOccurrence) and atom types (AtomOccurrence) in a particular molecule, rather than storing individual instances. For example, in the case of water, there will be a single entry for the hydrogen atom with a count set to 2, and another entry for the oxygen atom with a count set to 1. The same approach is used to keep track of the various molecules in the system. For example, an explicit solvent using water would be represented by the definition of the water molecule and the count of these molecules in the system. To enable searches for specific ligands, a simple text representation of the compound is necessary. Molecule identifiers such as SMILES (Simplified Molecular-Input Line-Entry System [38]) or InChI (International Chemical Identifier [39]) strings can be associated with small molecules to enable direct molecule matching as well as similarity and substructure searches. The residue sequence is also available to search biomolecules based on an ordered list of residues. The residue sequence can be represented by two different strings: the original chain, or specific chain, as referenced in the input file defining the molecular topology, and a normalized chain. The specific chain can potentially give more information about the individual residues within the context of the software that was used, and reference non-standard residues defined by the user. The normalized chain, on the other hand, uses a normalized nomenclature for the residues: one-letter codes representing either amino acids or nucleobases. The normalized chain can be used to query the related molecule without prior knowledge about the software used, and enables advanced matching queries (e.g. BLAST [40]).
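A minimal sketch of this occurrence-based bookkeeping is shown below; the counts and names are illustrative only.

```java
import java.util.Map;

// A minimal sketch of occurrence-based topology indexing: counts of
// atoms and molecules are stored instead of individual instances.
public class OccurrenceDemo {
    public static void main(String[] args) {
        // Water: one entry per element with a count, not three atom rows.
        Map<String, Integer> waterAtomOccurrences = Map.of("H", 2, "O", 1);

        // Explicit water solvent: one molecule definition plus a count.
        Map<String, Integer> systemMoleculeCounts =
                Map.of("WAT", 12000, "B-DNA oligomer", 1);

        System.out.println(waterAtomOccurrences);
        System.out.println(systemMoleculeCounts);
    }
}
```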
Both residue and atom occurrences can be given a specific symbol, which represents a software-specific name, usually referencing a computational model for the entity. In MD the specific symbol would be the force field atom type while in QM this would be used to specify which basis set should be applied.
The description of the biomolecules should include at least a generic type such as DNA, RNA, or protein to classify the simulated molecules at a high level. Other biological information such as species (e.g. Mus musculus, Homo sapiens) and molecule role can be added as well. As defined by the Chemical Entities of Biological Interest (ChEBI [41]), each molecule can have one or multiple roles (application, chemical role, and/or biological role). This data element is very important as it would allow researchers to query molecules based on their function rather than their structure. On the other hand, this type of information is not included in the raw simulation files, which means that it would have to be entered manually by the owner of the data. To avoid this, one can imagine populating this information automatically by referencing external databanks that already store these attributes (e.g. the Protein Data Bank [3]). This is reflected in this model by the reference structure concept, which keeps track of the database and the structure entry ID. If the topology of a simulated system is actually derived from a reference structure, an extra field can be used to describe the protocol used to prepare the reference structure so that it can serve as an input to the simulations. Possible steps include choosing a specific model number if several are available in a single PDB entry (or choosing among multiple candidate PDB entries), adding missing residues from disordered regions, or specifying homology or other putative models.
Files and file system
So far the description of the model has focused on the data elements related to the experiment itself, to explain why the different tasks were run and what they represent. Another important aspect of this model is the inclusion of references to the files (input and output) that contain the actual data being described. This is illustrated in Figure 11. Each experiment can be associated with one or several file collections stored on local or remote file systems (e.g. NFS, Amazon S3, iRODS server). For each of these collections no assumption should be made about the location or the implementation of the file system; it is therefore necessary to keep track of the type of file server and host information to find a route to the host and access the files using the right protocol and/or API. The individual files should be associated with the tasks they represent, and a distinction should be made between input files (parameters and methods) and output files (e.g. logs, trajectories). The topology files should be associated with the molecular system instead. Note that in certain cases, especially for QM calculations, the topology and input parameters might be contained in the same file. Each file reference should at least contain a unique identifier (UID) within its host file system and a format specification.
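The file-reference portion of the model can be sketched as follows (illustrative Python, not the physical schema; the field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class FileReference:
    uid: str      # unique identifier within the host file system
    format: str   # key into the chemical-file-format dictionary
    role: str     # "input" (parameters, methods) or "output" (logs, trajectories)

@dataclass
class FileCollection:
    server_type: str   # e.g. "NFS", "S3", "iRODS"
    host: str          # enough routing information to reach the files
    files: list[FileReference] = field(default_factory=list)
```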
Figure 11. References to the file system and hosted files containing the raw data.
Extended attributes
It is obvious that no single data model will be able to capture the needs of every lab running biomolecular simulations. The intent of this logical model is to provide a simple yet fairly exhaustive description of the concepts involved. To allow the addition of new properties, to provide more details about the experiment, or to keep track of user- or lab-defined attributes, the notion of extended attributes can be introduced into the model. Each extended attribute would be an attribute-value-unit triplet referenced by a given class to extend its own attributes, as defined in the logical model. For example, one user might want to keep track of the order of interpolation and the direct space tolerance for PME-based simulations. These parameters are currently not represented in the model, which only keeps track of the name of the electrostatics model (“PME”). To add these two parameters, one could add two extended attributes to the MD parameter set class (Figure 8) called “PME interpolation order” and “PME tolerance”.
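The PME example from the text, written out as a minimal sketch (illustrative only):

```python
from dataclasses import dataclass

@dataclass
class ExtendedAttribute:
    attribute: str
    value: str
    unit: str = ""
    concept_id: str = ""   # optional link into an existing terminology

# Lab-specific PME details attached to an MD parameter set
pme_extras = [
    ExtendedAttribute("PME interpolation order", "4"),
    ExtendedAttribute("PME tolerance", "1e-5"),
]
```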
From an object-oriented perspective, all the classes introduced in the logical model could inherit from a single superclass that would reference extended attributes, where each extended attribute would be an attribute-value-unit triplet with a possible link to a concept identifier defining the attribute in an existing terminology. From a database perspective, an extra table would be needed to store all the extended attributes. Such a table would need the necessary columns to represent the attribute-value-unit triplet, a possible concept identifier, and the name of the table each attribute extends. Although this is an easy way to gather all the extended attributes in a single table, this approach is not rigorous from a relational standpoint. To allow SQL queries that do not involve injection of table names, each table would have to be associated with an extra table storing its extended attributes.
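A minimal sketch of the per-table variant (illustrative SQLite run from Python, not the actual schema generated by the model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE md_parameter_set (
    id INTEGER PRIMARY KEY,
    electrostatics_model TEXT
);
-- one extension table per extended class, so queries never inject table names
CREATE TABLE md_parameter_set_ext (
    parent_id INTEGER REFERENCES md_parameter_set(id),
    attribute TEXT, value TEXT, unit TEXT, concept_id TEXT
);
""")
conn.execute("INSERT INTO md_parameter_set VALUES (1, 'PME')")
conn.execute(
    "INSERT INTO md_parameter_set_ext VALUES (1, 'PME interpolation order', '4', '', '')")
print(conn.execute(
    "SELECT attribute, value FROM md_parameter_set_ext WHERE parent_id = 1").fetchall())
```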
The logical model presented here defines a domain that should be sufficient to index biomolecular simulation data at the experiment level. In total over 60 classes were defined to represent the common data elements identified through the survey, along with new elements and dictionaries that should avoid data redundancy and facilitate queries using standard values. From a developer’s perspective this model provides some guidelines for the creation of a physical data model that would be more dependent on a particular technology, whether it is for the implementation of a database or an API. At a more abstract level the concepts introduced in this logical model provide a good starting point for the creation of a new terminology or ontology specific to biomolecular simulations.
The current list of dictionaries includes: force field parameter set names and types (e.g. classical, polarizable), enhanced sampling methods, MD analysis functions, barostats, thermostats, ensemble types, constraint algorithms, electrostatics models, basis sets and their types, calculation types (e.g. optimization, frequency, NMR), residues, atomic elements (periodic table) and their families, functional groups, software packages, and chemical file formats. The list also includes a dictionary of computational methods (e.g. Langevin dynamics, MP2, B3LYP) with their class (e.g. MD, Perturbation Theory, DFT) and family (e.g. ab initio, semi-empirical, empirical). All these dictionaries are available online for browsing and lookups. Examples of dictionary entries are also provided in Additional file 6 (force fields) and Additional file 7 (computational methods).
Additional file 6. Table representing the force field dictionary. This table lists common parameter sets available for popular MD software packages. Each entry in the table is described through an ID (ID), a name (TERM), a description (DESCRIPTION), a possible list of citations (CITATION), a force field type ID (TYPE_ID), and whether the force field is coarse grain or not (IS_COARSE_GRAIN).
Format: CSV. Size: 13 KB.
Additional file 7. Table representing the dictionary of computational methods. This dictionary lists “specific” methods which can be referenced within an input file for a computational task. Each entry in the table is described through an ID (ID), a name (TERM), a description (DESCRIPTION), and a possible list of citations (CITATION).
Format: CSV. Size: 11 KB.
All our dictionaries follow the same implementation method. The raw data is defined in CSV files and can be loaded into a database for remote queries and/or indexed using Apache Lucene [20] for local access via Java APIs (Figure 12). Apache Lucene is a text search engine written in Java that uses high-performance indexing to enable exact and partial string matching. Each CSV file contains a list of entries for a given dictionary with at least three columns representing: the identifiers, the terms (e.g. “QM/MM”), and the term descriptions (e.g. “Hybrid computational method mixing quantum chemistry and molecular mechanics”). More columns can be defined depending on the type of dictionary, either to represent extra attributes or to link to other dictionaries (foreign keys). For example the CSV file listing the QM method classes would have an extra column with the IDs of the associated QM method families. A set of SQL scripts was written to automatically create the database schema necessary to store the dictionaries and to load the CSV data into the tables. These scripts become very useful if one wants to integrate these dictionaries into a repository. Another script was written to automatically build the Lucene indexes. The script calls a Java API which parses the CSV files and uses the Lucene API to build the indexes. These indexes can then be used locally by external codes via the Lucene API, avoiding the need for static definitions of these dictionaries within the code or the creation of dependencies with remote resources such as a database. They should also help future developments of chemical file parsers and text processing tools for chemical information extraction from the literature (i.e. natural language processing). The Lucene-based dictionaries can be directly queried through a simple command-line interface. Additional file 8 demonstrates how one would look up a term using this program. This design is fairly simple and enables updates of the dictionary entries directly through the CSV files. One limitation is the lack of synonyms for the terms defined. To create richer lists it will be necessary to add an extra CSV file for each dictionary that would contain the list of all the synonyms and the ID of the associated terms. Successful implementations of terminologies in other domains, such as the UMLS (Unified Medical Language System [42]), should be used to guide the organization of the raw data and facilitate the integration of existing terminologies representing particular aspects of the biomolecular simulations (e.g. chemical data, biomolecules, citations).
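A minimal Python stand-in for the lookup workflow (the real implementation uses Java and Apache Lucene; this sketch only mimics the prefix matching and the three mandatory CSV columns, and the second entry shown is a made-up example):

```python
import csv, io

# A tiny in-memory dictionary with the mandatory columns (ID, TERM, DESCRIPTION)
raw = io.StringIO(
    "ID,TERM,DESCRIPTION\n"
    "1,QM/MM,Hybrid computational method mixing quantum chemistry and molecular mechanics\n"
    "2,MD,Classical molecular dynamics\n"
)
entries = list(csv.DictReader(raw))

def lookup(prefix, n=2):
    # crude stand-in for the Lucene prefix query behind the '-n' CLI option
    hits = [e for e in entries if e["TERM"].upper().startswith(prefix.upper())]
    return hits[:n]

print(lookup("QM"))
```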
Figure 12. Building process for the dictionaries. Each dictionary can be either indexed via Apache Lucene for use via a Java API or loaded into a database to enable remote SQL queries.
Additional file 8. Lucene-based dictionary usage and lookup example. This document demonstrates the use of the command-line interface to lookup terms in the Lucene-based dictionary. In this example the user searches terms that start with “AMBER FF”. The ‘-n 2’ option specifies that no more than 2 matches should be returned.
Format: DOCX. Size: 14 KB.
Maintenance and community support
Until this point the development of the dictionaries has been restricted to an internal effort by our lab. To support the work of the community at large, these dictionaries have to be extended and adjusted based on user feedback. For this purpose the dictionaries are now available on our project Wiki, which enables discussions and edits by identified users. This will serve as a single endpoint to draft new versions of the dictionaries. The source code for the dictionaries, including the CSV files, SQL scripts, and Java API, is available from GitHub. Updates to the CSV files hosted there should occur according to the status of the dictionaries in the Wiki. With time we might find that a dedicated database with a custom user interface becomes necessary for a defined group of editors to update existing terms, add new entries, add new dictionaries, and keep track of changes (logs). In any case, the number of editors should be limited to a small group of experts, actively participating and working together [43,44].
In this paper we introduced a set of common data elements and a logical data model for biomolecular simulations. The model was built upon community needs, identified through a survey and refined internally. Elements described by the model cover the concepts of authorship, molecular system, computational methods and platforms. Although the model presented here might not be complete, it integrates the methods that are the most significant for simulations of biomolecular systems: molecular dynamics, quantum chemistry and QM/MM. We introduced a new representation of the method landscape through method-specific parameter sets, which should allow the integration of more computational methods in the future. The addition of extended attributes to the model should enable customization by labs to fit their specific needs or represent properties that are currently not described by the model. The use cases presented here showed how the model can be used in real applications, to partially automate the creation of database schemas and generate XML descriptions. Multiple dictionaries, populated through reviews of online resources and literature, were implemented to supplement the model and provide developers with new tools to facilitate text extraction from chemical files and population of repositories. Although the current version of the dictionaries is fairly exhaustive, they will become a powerful tool only if they are updated by the community. A missing piece in this model is a catalogue of available force field parameter sets and atom types that could be used to generate force field description files and serve as an input for popular MD software packages. The EMSL Basis Set Exchange [45] already offers something similar for basis sets, and provides a SOAP-based web service to access the data computationally.
While it is important to allow the whole community to provide input on the CDEs and dictionaries, eventually a consensus needs to be reached by a group of experts representing the main stakeholders: simulation engine developers, data repository architects, and users. The creation of a consortium including users, developers and informaticians from the QM and the MD communities could help formalize this process if such an entity leads:
– Active polling, for example via annual surveys assessing the need for changes or additions in the CDEs, dictionaries, or the data model. Information about the respondents, such as software usage, preferred computational methods (e.g. all-atom or coarse-grain MD, DFT) and target systems (e.g. chemical compounds, biomolecules), would help develop recommendations better tailored to specialized communities.
– Monitoring of community discussions, which might take place on a dedicated online forum or a wiki such as the one introduced here.
– Recurring creation and distribution of releases for the CDEs, dictionaries, and data model. The CDEs in particular should include at least 2 levels of importance (recommended or optional) to provide some criteria about the completeness of the data descriptors. A third level characterizing certain CDEs as mandatory might provide a standard for developers and data publishers to populate repositories.
Our current focus is on indexing data at the experiment level so that the associated collection of input and output files can be retrieved. While the CDEs can be used to tag individual files it is not clear yet how much metadata is necessary to enable automatic data extraction (e.g. extract properties for a single frame from a time series) and processing, and if such metadata can be extracted directly from the files without user input. The popularization of self-explanatory formats (e.g. NetCDF, CML) to store calculation results or MD trajectories would certainly help. The ongoing work within the ScalaLife programme should help the community move in this direction, while the data model presented here will provide a good framework to organize, describe, and index computational experiments comprising multiple tasks. By publishing this model and the list of CDEs we hope to encourage developments of new repositories for biomolecular simulations, whether they are part of an integrated computational environment (e.g. MDWeb) or not (e.g. iBIOMES). Both approaches should be addressed. On one hand, computational environments can easily keep track of the tasks performed during an experiment since the input parameters and topologies are directly specified within the environment. On the other hand, we still need to think about the developer community that works on new simulation engines, new force fields and new computational methods. They will still need to customize their simulation runs within more flexible environments where they can manually edit input files or compile new codes, and use local or allocated high-performance computing resources. Independent data repositories where data can be deposited through a publication process are probably more viable to overcome these requirements. Finally it is not clear who will be given access to these large computational environments or who will have the computational, storage, and human resources to deploy, sustain, and make such complex systems available to the community.
The goal of the proposed data model is to lay the foundations for a standard to represent biomolecular simulations, from the experiment level to the task level. For this purpose we wanted to integrate MD, QM, and QM/MM methods, all of which play a particular role in the field. Although classical MD is arguably the most popular approach for biomolecular simulations, we believe that QM/MM approaches and ab initio MD, for example, will gain more and more interest as computational power increases, and they should not be left out of a future standard. On the other hand we recognize that our model might not be as granular as others. The UMM XML schema [26], for example, will be one of the first attempts to describe MD simulation input with enough granularity that software-specific input files can be generated without information loss. Such an effort is highly valuable for the MD community, and our data model will certainly evolve to integrate such models. Our short-term goal is to engage current repository and data model developers, such as the ScalaLife and Mosaic groups for MD and the Blue Obelisk group for QM and cheminformatics, so that we can learn more about each other’s experience and try to align our efforts towards an integrated data model that would fit the needs of the whole biomolecular simulation community.
The framework presented here introduces a data model and a list of dictionaries built upon community feedback and selected experts’ experience. The list of core data elements, the models, and the dictionaries are available on our wiki.
As more implementation efforts are undertaken, the community will be able to assess the present data model more accurately and provide valuable feedback to make it evolve, and eventually support collaborative research. The list of desiderata for data model developments, for both conceptual and physical representations, should provide some guidance for the long task ahead.
This paper uses semi-structured interview methods to establish the community needs and preferences regarding biomolecular simulation data indexing and presentation. The common data elements were identified using an approach similar to [46], while the data model was built using standard modelling techniques to derive logical and physical models. Interested readers can find details of these techniques in [22].
MD: Molecular dynamics; MM: Molecular mechanics; QM: Quantum mechanics; CDE: Common data elements; PME: Particle-Mesh Ewald.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
JCT designed the data model and implemented the various examples and dictionaries. DRR worked on the description of trajectory and analysis data. TEC3 and JCF participated in the design of the data model and helped to draft the manuscript. All authors read and approved the final manuscript.
Computational support was provided by the Center for High Performance Computing (CHPC) at the University of Utah, the Blue Waters sustained-petascale computing project (NSF OCI 07–25070 and PRAC OCI-1036208), and the NSF Extreme Science and Engineering Discovery Environment (XSEDE, OCI-1053575) and allocation MCA01S027P. Research funding came from the NSF CHE-1266307 (TEC3). Thanks to the CHPC staff for hardware and software support that allowed the implementation of the prototypes.
1. Šponer J, Šponer JE, Mládek A, Banáš P, Jurečka P, Otyepka M: How to understand quantum chemical computations on DNA and RNA systems? A practical guide for non-specialists. Methods 2013, 64(1):3-11.
2. Dror RO, Dirks RM, Grossman JP, Xu H, Shaw DE: Biomolecular simulation: a computational microscope for molecular biology. Annu Rev Biophys 2012, 41:429-452.
3. Bernstein FC, Koetzle TF, Williams GJB, Meyer EF, Brice MD, Rodgers JR, Kennard O, Shimanouchi T, Tasumi M: The protein data bank. Eur J Biochem 1977, 80(2):319-324.
4. Simms AM, Toofanny RD, Kehl C, Benson NC, Daggett V: Dynameomics: design of a computational lab workflow and scientific data repository for protein simulations. Protein Eng Des Sel 2008, 21(6):369-377.
5. Toofanny RD, Simms AM, Beck DA, Daggett V: Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection. BMC Bioinformatics 2011, 12:334.
6. Meyer T, D’Abramo M, Hospital A, Rueda M, Ferrer-Costa C, Perez A, Carrillo O, Camps J, Fenollosa C, Repchevsky D, et al.: MoDEL (molecular dynamics extended library): a database of atomistic molecular dynamics trajectories. Structure 2010, 18(11):1399-1409.
7. Ng MH, Johnston S, Wu B, Murdock SE, Tai K, Fangohr H, Cox SJ, Essex JW, Sansom MSP, Jeffreys P: BioSimGrid: grid-enabled biomolecular simulation data storage and analysis. Future Gen Comput Syst 2006, 22(6):657-664.
8. Terstyanszky G, Kiss T, Kukla T, Lichtenberger Z, Winter S, Greenwell P, McEldowney S, Heindl H: Application repository and science gateway for running molecular docking and dynamics simulations. Stud Health Technol Inform 2012, 175:152-161.
9. Adams S, de Castro P, Echenique P, Estrada J, Hanwell MD, Murray-Rust P, Sherwood P, Thomas J, Townsend J: The quixote project: collaborative and open quantum chemistry data management in the internet age. J Cheminform 2011, 3:38.
10. Phadungsukanan W, Kraft M, Townsend JA, Murray-Rust P: The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem. J Cheminform 2012, 4(1):15.
11. Murray-Rust P, Rzepa HS: Chemical markup, XML, and the World Wide Web. 4. CML schema. J Chem Inf Comput Sci 2003, 43(3):757-772.
12. Guha R, Howard MT, Hutchison GR, Murray-Rust P, Rzepa H, Steinbeck C, Wegner J, Willighagen EL: The Blue Obelisk: interoperability in chemical informatics. J Chem Inf Comput Sci 2006, 46(3):991-998.
13. de Jong WA, Walker AM, Hanwell MD: From data to analysis: linking NWChem and Avogadro with the syntax and semantics of Chemical Markup Language. J Cheminform 2013, 5(1):25.
14. Frisch MJ, Trucks GW, Schlegel HB, Scuseria GE, Robb MA, Cheeseman JR, Scalmani G, Barone V, Mennucci B, Petersson GA, et al.: Gaussian 09, Revision C.01. Wallingford, CT: Gaussian, Inc; 2009.
15. Valiev M, Bylaska EJ, Govind N, Kowalski K, Straatsma TP, Van Dam HJJ, Wang D, Nieplocha J, Apra E, Windus TL: NWChem: a comprehensive and scalable open-source solution for large scale molecular simulations. Comput Phys Commun 2010, 181(9):1477-1489.
16. Thibault JC, Facelli JC, Cheatham TE III: iBIOMES: managing and sharing biomolecular simulation data in a distributed environment. J Chem Inf Model 2013, 53(3):726-736.
17. Rajasekar A, Moore R, Hou CY, Lee CA, Marciano R, de Torcy A, Wan M, Schroeder W, Chen SY, Gilbert L: iRODS Primer: integrated rule-oriented data system. Synth Lect Inform Concepts Retrieval Serv 2010, 2(1):1-143.
18. Abouzied A, Bajda-Pawlikowski K, Huang J, Abadi DJ, Silberschatz A: HadoopDB in action: building real world applications. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data. Indianapolis, IN, USA: ACM; 2010:1111-1114.
19. Thusoo A, Sarma JS, Jain N, Shao Z, Chakka P, Zhang N, Antony S, Liu H, Murthy R: Hive: a petabyte scale data warehouse using Hadoop. In Data Engineering (ICDE), 2010 IEEE 26th International Conference on. Long Beach, CA, USA: IEEE; 2010:996-1005.
20. Apache Lucene. Accessed January 2014.
21. Herráez A: Biomolecules in the computer: Jmol to the rescue. Biochem Mol Biol Educ 2006, 34(4):255-261.
22. Tillmann G: A Practical Guide to Logical Data Modeling. New York: McGraw-Hill; 1993.
23. Foster I, Kesselman C: The Grid 2: Blueprint for a New Computing Infrastructure. 2nd edition. San Francisco, CA: Morgan Kaufmann; 2003.
24. Saltz J, Oster S, Hastings S, Langella S, Kurc T, Sanchez W, Kher M, Manisundaram A, Shanbhag K, Covitz P: caGrid: design and implementation of the core architecture of the cancer biomedical informatics grid. Bioinformatics 2006, 22(15):1910-1916.
25. Sun Y, McKeever S: Converting biomolecular modelling data based on an XML representation. J Integr Bioinform 2008, 5(2).
26. Goni R, Apostolov R, Lundborg M, Bernau C, Jamitzky F, Laure E, Lindahl E, Andrio P, Becerra Y, Orozco M, et al.: ScalaLife white paper: standards for data handling. ScalaLife, Scalable Software Services for Life Science 2013. Accessed January 2014.
27. Case DA, Cheatham TE 3rd, Darden T, Gohlke H, Luo R, Merz KM Jr, Onufriev A, Simmerling C, Wang B, Woods RJ: The Amber biomolecular simulation programs. J Comput Chem 2005, 26(16):1668-1688.
28. Hess B, Kutzner C, van der Spoel D, Lindahl E: GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. J Chem Theory Comput 2008, 4(3):435-447.
29. Humphrey W, Dalke A, Schulten K: VMD: visual molecular dynamics. J Mol Graph 1996, 14(1):33-38.
30. Roe DR, Cheatham TE III: PTRAJ and CPPTRAJ: software for processing and analysis of molecular dynamics trajectory data. J Chem Theory Comput 2013, 9(7):3084-3095.
31. Michaud-Agrawal N, Denning EJ, Woolf TB, Beckstein O: MDAnalysis: a toolkit for the analysis of molecular dynamics simulations. J Comput Chem 2011, 32(10):2319-2327.
32. Hospital A, Andrio P, Fenollosa C, Cicin-Sain D, Orozco M, Lluis Gelpi J: MDWeb and MDMoby: an integrated Web-based platform for molecular dynamics simulations. Bioinformatics 2012, 28(9):1278-1279.
33. Svensson M, Humbel S, Froese RD, Matsubara T, Sieber S, Morokuma K: ONIOM: a multilayered integrated MO + MM method for geometry optimizations and single point energy predictions. A test for Diels-Alder reactions and Pt(P(t-Bu)3)2 + H2 oxidative addition. J Phys Chem 1996, 100(50):19357-19363.
34. Jorgensen WL, Tirado-Rives J: Potential energy functions for atomic-level simulations of water and organic and biomolecular systems. Proc Natl Acad Sci USA 2005, 102(19):6665-6670.
35. Nadler W, Brunger AT, Schulten K, Karplus M: Molecular and stochastic dynamics of proteins. Proc Natl Acad Sci USA 1987, 84(22):7933-7937.
36. Schlick T: Molecular dynamics-based approaches for enhanced sampling of long-time, large-scale conformational changes in biomolecules. F1000 Biol Rep 2009, 1:51.
37. Cramer CJ: Essentials of Computational Chemistry: Theories and Models. 2nd edition. Chichester, West Sussex, England; Hoboken, NJ: Wiley; 2004.
38. Weininger D: SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J Chem Inf Comput Sci 1988, 28(1):31-36.
39. McNaught A: The IUPAC International Chemical Identifier: InChI – a new standard for molecular informatics. Chem Int 2006, 28(6):12-14.
40. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ: Basic local alignment search tool. J Mol Biol 1990, 215(3):403-410.
41. Degtyarenko K, De Matos P, Ennis M, Hastings J, Zbinden M, McNaught A, Alcántara R, Darsow M, Guedj M, Ashburner M: ChEBI: a database and ontology for chemical entities of biological interest. Nucleic Acids Res 2008, 36(suppl 1):D344.
42. Bodenreider O: The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res 2004, 32(Database issue):D267.
43. Hardiker N, Kim TY, Bartz CC, Coenen A, Jansen K: Collaborative development and maintenance of health terminologies. In AMIA Annu Symp Proc 2013. Washington DC: American Medical Informatics Association; 2013:572-577.
44. Noy NF, Tudorache T: Collaborative ontology development on the (semantic) web. In AAAI Spring Symposium: Symbiotic Relationships between Semantic Web and Knowledge Engineering. Stanford University, CA: AAAI Press; 2008:63-68.
45. Schuchardt KL, Didier BT, Elsethagen T, Sun L, Gurumoorthi V, Chase J, Li J, Windus TL: Basis set exchange: a community database for computational sciences. J Chem Inf Model 2007, 47(3):1045-1052.
46. Kawamoto K, Del Fiol G, Strasberg HR, Hulse N, Curtis C, Cimino JJ, Rocha BH, Maviglia S, Fry E, Scherpbier HJ, et al.: Multi-national, multi-institutional analysis of clinical decision support data needs to inform development of the HL7 virtual medical record standard. In AMIA Annu Symp Proc 2010. Washington DC: American Medical Informatics Association; 2010:377-381.
Chandrasekhar limit
The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The limit was first indicated in papers published by Wilhelm Anderson and E. C. Stoner, and was named after Subrahmanyan Chandrasekhar, the Indian-American astrophysicist who independently discovered and improved upon the accuracy of the calculation in 1930, at the age of 19. This limit was initially ignored by the community of scientists because such a limit would logically require the existence of black holes, which were considered a scientific impossibility at the time. White dwarfs resist gravitational collapse primarily through electron degeneracy pressure. (By comparison, main sequence stars resist collapse through thermal pressure.) The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Consequently, white dwarfs with masses greater than the limit would be subject to further gravitational collapse, evolving into a different type of stellar remnant, such as a neutron star or black hole. (However, white dwarfs generally avoid this fate by exploding before they undergo collapse.) Those with masses under the limit remain stable as white dwarfs.[1]
The currently accepted value of the limit is about $1.39\,M_\odot$ ($2.765 \times 10^{30}$ kg).[2][3][4]
Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons will increase upon compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure.
Radius–mass relations for a model white dwarf. The green curve uses the general pressure law for an ideal Fermi gas, while the blue curve is for a non-relativistic ideal Fermi gas. The black line marks the ultrarelativistic limit.
In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form $P = K_1 \rho^{5/3}$, where $P$ is the pressure, $\rho$ is the mass density, and $K_1$ is a constant. Solving the hydrostatic equation then leads to a model white dwarf which is a polytrope of index 3/2 and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.[5]
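The scaling behind that statement is a standard polytrope result (not derived in this article): for polytropic index $n$ the mass-radius relation is

$$R \;\propto\; M^{\frac{1-n}{3-n}}, \qquad n = \tfrac{3}{2} \;\Rightarrow\; R \propto M^{-1/3},$$

and the $n = 3$ case discussed next is precisely the degenerate one in which the exponent blows up and the mass decouples from the radius.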
As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form $P = K_2 \rho^{4/3}$. This will yield a polytrope of index 3, which will have a total mass, $M_{\rm limit}$ say, depending only on $K_2$.[6]
For a fully relativistic treatment, the equation of state used will interpolate between the equations $P = K_1 \rho^{5/3}$ for small $\rho$ and $P = K_2 \rho^{4/3}$ for large $\rho$. When this is done, the model radius still decreases with mass, but becomes zero at $M_{\rm limit}$. This is the Chandrasekhar limit.[7] The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. $\mu_e$ has been set equal to 2. Radius is measured in standard solar radii[8] or kilometers, and mass in standard solar masses.
Calculated values for the limit will vary depending on the nuclear composition of the mass.[9] Chandrasekhar [10, eq. (36)], [7, eq. (58)], [11, eq. (43)] gives the following expression, based on the equation of state for an ideal Fermi gas:
$$M_{\rm limit} = \frac{\omega_3^0 \sqrt{3\pi}}{2}\left( \frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_H)^2},$$

where $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $G$ is the gravitational constant, $\mu_e$ is the average molecular weight per electron, $m_H$ is the mass of the hydrogen atom, and $\omega_3^0 \approx 2.018$ is a constant connected with the solution to the Lane–Emden equation. As $\sqrt{\hbar c/G}$ is the Planck mass $M_{\rm Pl}$, the limit is of the order of $M_{\rm Pl}^3/m_H^2$.
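A quick numeric check of the formula (an illustrative sketch, not from the article, using standard physical constants):

```python
from math import pi, sqrt

hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_H  = 1.6735575e-27     # hydrogen atom mass, kg
mu_e = 2.0               # average molecular weight per electron
w3_0 = 2.018236          # Lane-Emden constant for the n = 3 polytrope

M_limit = (w3_0 * sqrt(3 * pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
M_sun = 1.98892e30
print(f"{M_limit:.3e} kg = {M_limit / M_sun:.3f} solar masses")
# prints roughly 2.85e+30 kg = 1.43 solar masses
```

The ideal Fermi gas formula gives roughly $1.43\,M_\odot$ for $\mu_e = 2$; the corrections discussed below account for the difference from the accepted value of about $1.39\,M_\odot$.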
A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature.[9] Lieb and Yau[12] have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
In 1926, the British physicist Ralph H. Fowler observed that the relationship among the density, energy and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei which obeyed Fermi–Dirac statistics.[13] This Fermi gas model was then used by the British physicist E. C. Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming them to be homogeneous spheres.[14] Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately $1.37 \times 10^{30}$ kg.[15] In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately $2.19 \times 10^{30}$ kg (for $\mu_e = 2.5$).[16] Stoner went on to derive the pressure–density equation of state, which he published in 1932.[17] These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter.[18] Frenkel's work, however, was ignored by the astronomical and astrophysical community.[19]
A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas.[20] In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state,[5] and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above.[6][7][10][21] Chandrasekhar reviews this work in his Nobel Prize lecture.[11] This value was also computed in 1932 by the Soviet physicist Lev Davidovich Landau,[22] who, however, did not apply it to white dwarfs.
Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Stanley Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied:
The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. … I think there should be a law of Nature to prevent a star from behaving in this absurd way!
Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law $P = K_1 \rho^{5/3}$ universally applicable, even for large $\rho$.[24] Although Bohr, Fowler, Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar.[25, pp. 110–111] Through the rest of his life, Eddington held to his position in his writings,[26][27][28][29][30] including his work on his fundamental theory.[31] The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar.[25] In Miller's view:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten.
—p. 150, [25]
The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process will be exhausted, and the core will collapse, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.[32]
If a main-sequence star is not too massive (less than approximately 8 solar masses), it will eventually shed enough mass to form a white dwarf having mass below the Chandrasekhar limit, which will consist of the former core of the star. For more massive stars, electron degeneracy pressure will not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities will destroy the star completely.)[33][34][35][36] During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos.[32], pp. 1046–1047. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy which is on the order of 1046 joules (100 foes). Most of this energy is carried away by the emitted neutrinos.[37] This process is believed to be responsible for supernovae of types Ib, Ic, and II.[32]
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova.[38, §5.1.2]
A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, $M_V$ is approximately −19.3, with a standard deviation of no more than 0.3.[38, (1)] A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy.
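The conversion behind the factor-of-two claim is the standard magnitude–luminosity relation (added here for clarity): a one-sigma spread of ±0.3 mag spans $\Delta m = 0.6$, and

$$\frac{L_1}{L_2} \;=\; 10^{0.4\,\Delta m} \;=\; 10^{0.24} \;\approx\; 1.7 \;<\; 2.$$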
Super-Chandrasekhar mass supernovae
Main article: Champagne Supernova
In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf which grew to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova" by University of Oklahoma astronomer David R. Branch, may have been spinning so fast that centrifugal force allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles.[39][40][41]
Since the observation of the Champagne Supernova in 2003, more very bright type Ia supernovae have been observed that are thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if and SN 2009dc.[42] The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses.[42] One way to potentially explain the problem of the Champagne Supernova was to consider it the result of an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large-asphericity theory unlikely.[42]
Tolman–Oppenheimer–Volkoff limit
After a supernova explosion, a neutron star may be left behind. Like white dwarfs these objects are extremely compact and are supported by degeneracy pressure, but a neutron star is so massive and compressed that electrons and protons have combined to form neutrons, and the star is thus supported by neutron degeneracy pressure instead of electron degeneracy pressure. The limit of neutron degeneracy pressure, analogous to the Chandrasekhar limit, is known as the Tolman–Oppenheimer–Volkoff limit.
1. Sean Carroll, Ph.D., Cal Tech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2, page 44. Accessed Oct. 7, 2013. "...Chandrasekhar limit: The maximum mass of a white dwarf star, about 1.4 times the mass of the Sun. Above this mass, the gravitational pull becomes too great, and the star must collapse to a neutron star or black hole..."
2. Hawking, S. W.; Israel, W., eds. (1989). Three hundred years of gravitation (1st pbk. ed., with corrections). Cambridge: Cambridge University Press. ISBN 0-521-37976-8.
3. p. 55, How A Supernova Explodes, Hans A. Bethe and Gerald Brown, pp. 51–62 in Formation And Evolution of Black Holes in the Galaxy: Selected Papers with Commentary, Hans Albrecht Bethe, Gerald Edward Brown, and Chang-Hwan Lee, River Edge, New Jersey: World Scientific; 2003. ISBN 981-238-250-X.
4. Mazzali, P. A.; Röpke, F. K.; Benetti, S.; Hillebrandt, W. (2007). "A Common Explosion Mechanism for Type Ia Supernovae". Science 315 (5813): 825–828. arXiv:astro-ph/0702351. Bibcode:2007Sci...315..825M. doi:10.1126/science.1136259. PMID 17289993.
5. The Density of White Dwarf Stars, S. Chandrasekhar, Philosophical Magazine (7th series) 11 (1931), pp. 592–596.
6. The Maximum Mass of Ideal White Dwarfs, S. Chandrasekhar, Astrophysical Journal 74 (1931), pp. 81–82.
7. The Highly Collapsed Configurations of a Stellar Mass (second paper), S. Chandrasekhar, Monthly Notices of the Royal Astronomical Society 95 (1935), pp. 207–225.
8. Standards for Astronomical Catalogues, Version 2.0, section 3.2.2, web page, accessed 12-I-2007.
9. The Neutron Star and Black Hole Initial Mass Function, F. X. Timmes, S. E. Woosley, and Thomas A. Weaver, Astrophysical Journal 457 (February 1, 1996), pp. 834–843.
10. The Highly Collapsed Configurations of a Stellar Mass, S. Chandrasekhar, Monthly Notices of the Royal Astronomical Society 91 (1931), pp. 456–466.
11. On Stars, Their Evolution and Their Stability, Nobel Prize lecture, Subrahmanyan Chandrasekhar, December 8, 1983.
12. A rigorous examination of the Chandrasekhar theory of stellar collapse, Elliott H. Lieb and Horng-Tzer Yau, Astrophysical Journal 323 (1987), pp. 140–144.
13. On Dense Matter, R. H. Fowler, Monthly Notices of the Royal Astronomical Society 87 (1926), pp. 114–122.
14. The Limiting Density of White Dwarf Stars, Edmund C. Stoner, Philosophical Magazine (7th series) 7 (1929), pp. 63–70.
15. Über die Grenzdichte der Materie und der Energie, Wilhelm Anderson, Zeitschrift für Physik 56, #11–12 (November 1929), pp. 851–856. DOI 10.1007/BF01340146.
16. The Equilibrium of Dense Stars, Edmund C. Stoner, Philosophical Magazine (7th series) 9 (1930), pp. 944–963.
17. The minimum pressure of a degenerate electron gas, E. C. Stoner, Monthly Notices of the Royal Astronomical Society 92 (May 1932), pp. 651–661.
18. Anwendung der Pauli-Fermischen Elektronengastheorie auf das Problem der Kohäsionskräfte, J. Frenkel, Zeitschrift für Physik 50, #3–4 (March 1928), pp. 234–248. DOI 10.1007/BF01328867.
19. The article by Ya I Frenkel' on "binding forces" and the theory of white dwarfs, D. G. Yakovlev, Physics Uspekhi 37, #6 (1994), pp. 609–612.
20. Chandrasekhar's biographical memoir at the National Academy of Sciences, web page, accessed 12-I-2007.
21. Stellar Configurations with degenerate Cores, S. Chandrasekhar, The Observatory 57 (1934), pp. 373–377.
22. On the Theory of Stars, in Collected Papers of L. D. Landau, ed. and with an introduction by D. ter Haar, New York: Gordon and Breach, 1965; originally published in Phys. Z. Sowjet. 1 (1932), 285.
23. Meeting of the Royal Astronomical Society, Friday, 1935 January 11, The Observatory 58 (February 1935), pp. 33–41.
24. On "Relativistic Degeneracy", Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 95 (1935), pp. 194–206.
25. Empire of the Stars: Obsession, Friendship, and Betrayal in the Quest for Black Holes, Arthur I. Miller, Boston, New York: Houghton Mifflin, 2005, ISBN 0-618-34151-X; reviewed at The Guardian: The battle of black holes.
26. The International Astronomical Union meeting in Paris, 1935, The Observatory 58 (September 1935), pp. 257–265, at p. 259.
27. Note on "Relativistic Degeneracy", Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 96 (November 1935), pp. 20–21.
28. The Pressure of a Degenerate Electron Gas and Related Problems, Arthur Eddington, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 152 (November 1, 1935), pp. 253–272.
29. Relativity Theory of Protons and Electrons, Sir Arthur Eddington, Cambridge: Cambridge University Press, 1936, chapter 13.
30. The physics of white dwarf matter, Sir A. S. Eddington, Monthly Notices of the Royal Astronomical Society 100 (June 1940), pp. 582–594.
31. Fundamental Theory, Sir A. S. Eddington, Cambridge: Cambridge University Press, 1946, §43–45.
32. The evolution and explosion of massive stars, S. E. Woosley, A. Heger, and T. A. Weaver, Reviews of Modern Physics 74, #4 (October 2002), pp. 1015–1071.
33. White dwarfs in open clusters. VIII. NGC 2516: a test for the mass-radius and initial-final mass relations, D. Koester and D. Reimers, Astronomy and Astrophysics 313 (1996), pp. 810–814.
34. An Empirical Initial-Final Mass Relation from Hot, Massive White Dwarfs in NGC 2168 (M35), Kurtis A. Williams, M. Bolte, and Detlev Koester, Astrophysical Journal 615, #1 (2004), pp. L49–L52; also arXiv astro-ph/0409447.
35. How Massive Single Stars End Their Life, A. Heger, C. L. Fryer, S. E. Woosley, N. Langer, and D. H. Hartmann, Astrophysical Journal 591, #1 (2003), pp. 288–300.
36. Strange quark matter in stars: a general overview, Jürgen Schaffner-Bielich, Journal of Physics G: Nuclear and Particle Physics 31, #6 (2005), pp. S651–S657; also arXiv astro-ph/0412215.
37. The Physics of Neutron Stars, J. M. Lattimer and M. Prakash, Science 304, #5670 (2004), pp. 536–542; also arXiv astro-ph/0405262.
38. Type Ia Supernova Explosion Models, Wolfgang Hillebrandt and Jens C. Niemeyer, Annual Review of Astronomy and Astrophysics 38 (2000), pp. 191–230.
39. The weirdest Type Ia supernova yet, LBL press release, web page, accessed 13-I-2007.
40. Champagne Supernova Challenges Ideas about How Supernovae Work, web page, accessed 13-I-2007.
41. The type Ia supernova SNLS-03D3bb from a super-Chandrasekhar-mass white dwarf star, D. Andrew Howell et al., Nature 443 (September 21, 2006), pp. 308–311; also arXiv:astro-ph/0609616.
42. Hachisu, Izumi; Kato, M.; et al. (2012). "A single degenerate progenitor model for type Ia supernovae highly exceeding the Chandrasekhar mass limit". The Astrophysical Journal 744 (1): Article ID 69. arXiv:1106.3510. Bibcode:2012ApJ...744...69H. doi:10.1088/0004-637X/744/1/69.
If $a$ and $b$ are positive integers, and you make the definition $$ a \cdot b = \underbrace{a + \cdots + a}_{b \text{ times} }$$ then it's a slightly surprising fact that $a \cdot b$ is actually equal to $b \cdot a$.
Indeed, this fails in general when $a,b$ are ordinals. – Terry Tao Dec 15 '13 at 4:51
It's even more surprising if you start with the inductive definitions of plus and times. The proof that $ab=ba$ comes as Proposition 72 in the first development of this theory, by Grassmann in 1861. – John Stillwell Jan 13 '14 at 9:12
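To make the comments concrete, here is a sketch of the Grassmann-style double induction (my paraphrase; it is not spelled out above). With $a \cdot 0 = 0$ and $a \cdot (b+1) = a \cdot b + a$, one first proves $0 \cdot b = 0$ and $(a+1) \cdot b = a \cdot b + b$ by induction on $b$; commutativity then follows by another induction on $b$:

$$ a \cdot (b+1) \;=\; a \cdot b + a \;=\; b \cdot a + a \;=\; (b+1) \cdot a, $$

using the inductive hypothesis in the middle step and the second lemma in the last.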
A nice example from classical mechanics is this: there is a hidden $SO(4)$ symmetry in the elliptical orbits of a particle in an inverse square potential, ie. the Kepler problem.
The system has an obvious $SO(3)$ symmetry because the inverse square law is invariant under rotations. But there's no a priori clue that an $SO(4)$ symmetry exists in this system.
You can read about it here: http://math.ucr.edu/home/baez/classical/runge_pro.pdf
This carries over to the quantum mechanical case when you solve the Schrödinger equation for an inverse square potential.
You can read about that here: http://hep.uchicago.edu/~rosner/p342/projs/weinberg.pdf
The result is that the hidden $SO(4)$ symmetry explains the "coincidence" that many hydrogen atom states have the same energy.
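Concretely, the extra conserved quantity behind the hidden symmetry is the Laplace–Runge–Lenz vector (standard material, summarized here from the linked notes):

$$ \mathbf{A} \;=\; \mathbf{p} \times \mathbf{L} \;-\; mk\,\hat{\mathbf{r}} \qquad \text{for } V(r) = -\frac{k}{r}. $$

Its components, together with those of $\mathbf{L}$, close under Poisson brackets; after rescaling $\mathbf{A}$ by $1/\sqrt{-2mH}$ on bound orbits ($H < 0$), the six conserved quantities generate the Lie algebra $\mathfrak{so}(4)$.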
1. I think that if you put yourself back in the position of someone discovering this for the first time, the equality (under suitable hypotheses) $${\partial^2f\over\partial x\partial y}={\partial^2 f\over\partial y\partial x}\quad (1)$$ should count.
2. Here's a surprising application of that suprising equality. Suppose you're a profit-maximizing competitive firm, hiring both labor ($L$) (at a wage rate of $W$) and capital ($K$) (at a rental rate of $R$). Then an increase in $W$ will, in general, lead you to reduce your output and so employ less capital, but at the same time lead you to substitute capital for labor and so employ more capital. On balance, the derivative $dK/dW$ could be either positive or negative. Likewise for the derivative $dL/dR$. It does not seem to me to be at all intuitively obvious that these derivatives even have the same sign, much less that they are equal. But if one takes $f$ in (1) to be profit as a function of $x$ (labor) and $y$ (capital) then one discovers that in fact
$${dK\over dW}={dL\over dR}$$
(Of course this looks more symmetric if you write $X_1$ and $X_2$ for labor and capital, and $P_1$ and $P_2$ for the wage rate and the rental rate.)
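One standard way to make the economics example precise (my addition, phrased via the indirect profit function rather than the notation of the answer): by Hotelling's lemma, the maximized profit $\pi(W,R)$ satisfies $L = -\partial\pi/\partial W$ and $K = -\partial\pi/\partial R$, so

$$ \frac{dK}{dW} \;=\; -\frac{\partial^2 \pi}{\partial W\,\partial R} \;=\; -\frac{\partial^2 \pi}{\partial R\,\partial W} \;=\; \frac{dL}{dR}. $$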
Higher homotopy groups $\pi_n(X)$ are abelian. This is quite surprising if you see the definition for the first time, having probably encountered the classical fundamental group before, which is not abelian in general.
In fact, when they were introduced, the higher homotopy groups were meant to generalize the fundamental group, in contrast to the abelian homology groups; but once it was recognized that they are abelian too, they no longer seemed such an interesting generalization.
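The standard reason, for completeness (the Eckmann–Hilton argument, not spelled out in the answer): for $n \ge 2$, concatenation in two different coordinate directions gives two unital operations $\circ$ and $\ast$ on $\pi_n(X)$ satisfying the interchange law $(a \circ b) \ast (c \circ d) = (a \ast c) \circ (b \ast d)$, and then, writing $e$ for the common unit,

$$ a \ast b = (a \circ e) \ast (e \circ b) = (a \ast e) \circ (e \ast b) = a \circ b, \qquad b \circ a = (e \ast b) \circ (a \ast e) = (e \circ a) \ast (b \circ e) = a \ast b. $$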
Rolling one surface on another without slipping binds the velocity of the rolling surface and its angular velocity, giving a rank 2 subbundle in the tangent bundle of the 5-dimensional space of tangential positionings of the 2 surfaces in space. This subbundle, when you roll one sphere on another, has an 8 dimensional symmetry group, unless one sphere has exactly one third the radius of the other sphere, in which case the subbundle is preserved by a 14 dimensional group of diffeomorphisms of the 5-dimensional manifold: the split real form of the simple Lie group $G_2$.
This subbundle is my favorite example of a non-integrable distribution (if the surfaces are "generic", at least) - you can physically see that rolling a sphere in an "infinitesimal square" on a plane makes the sphere rotate. – Peter Samuelson Dec 14 '13 at 15:29
Consider the Desargues configuration. It consists of (1) two triangles, say $ABC$ and $A'B'C'$ such that the lines $AA'$, $BB'$, and $CC'$ all meet at a point $P$, and (2) the three points of intersection of corresponding sides $X=(BC)\cap(B'C')$, $Y=(AC)\cap(A'C')$, and $Z=(AB)\cap(A'B')$. Desargues's theorem says that then $X$, $Y$, and $Z$ are collinear. The Desargues configuration consists of the 10 points mentioned above ($A,B,C,A',B',C',P,X,Y,Z$) and the 10 lines mentioned (the three sides of both triangles, the three lines through $P$, and the line $XYZ$). The surprising (to me) symmetry is an action of the cyclic group of order 5. In fact, the graph whose vertices are the 10 points of the Desargues configuration and whose edges join any two points that are not together on any of the configuration's 10 lines is the Petersen graph, which is usually drawn in a way that makes the cyclic 5-fold symmetry visible.
Have used Desargues for easily a hundred times in my schooldays and never realized this. I actually wasn't aware that the Petersen graph had any deeper meaning than that of a counterexample to some conjectures of days gone by. Nice!! – darij grinberg Dec 14 '13 at 21:01
Hermite's reciprocity: as representations of $GL_2$, we have $$ S^k(S^l\mathbb{C}^2)\simeq S^l(S^k\mathbb{C}^2). $$
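A quick sanity check (mine, not from the answer): at the level of dimensions the two sides already agree, since $\dim S^l\mathbb{C}^2 = l+1$ and

$$ \dim S^k(S^l\mathbb{C}^2) = \binom{(l+1)+k-1}{k} = \binom{k+l}{k} = \binom{k+l}{l} = \dim S^l(S^k\mathbb{C}^2). $$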
The joint distribution of IID normal random variables is spherically symmetric.
Although invariance under permutations of the coordinates is obvious for any IID variables, spherical symmetry is rare. In fact, this characterizes the normal distribution.
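To see it directly (a standard computation, added for completeness): the joint density of $n$ i.i.d. $N(0,\sigma^2)$ variables factors as

$$ \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x_i^2/2\sigma^2} \;=\; (2\pi\sigma^2)^{-n/2}\, e^{-\lVert x\rVert^2/2\sigma^2}, $$

which depends on $x$ only through its norm, hence is invariant under every rotation, not just coordinate permutations.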
In fact, the "correct" definition of Littlewood-Richardson coefficients shows a surprising $S_3$-symmetry among all the indices $\lambda,\mu,\nu$. See http://arxiv.org/abs/0704.0817.
A further example related to symmetric functions is the symmetry between the area and bounce statistics of Dyck paths. See for instance Chapter 3 of http://www.math.upenn.edu/~jhaglund/books/qtcat.pdf. No combinatorial proof of symmetry is known.
There are many enumeration problems with "hidden symmetry." For instance, what is the probability that 1 and 2 are in the same cycle of a (uniform) random permutation of $1,2,\dots,n$? More interesting, suppose that I shuffle an ordinary deck of 26 red cards and 26 black cards. I turn the cards face up one at a time. At any point before the last card is dealt, you can guess that the next card is red. What strategy maximizes the probability of guessing correctly? The surprising answer is that all strategies have a probability of 1/2 of success! There is a very elegant way to see this.
@StevenLandsburg: imagine the dealer turns over the bottom card of the deck when you guess, instead of the top one. Clearly this situation is symmetric to the one described above, but also clearly every strategy gives 50/50 odds as the outcome is determined before the game even starts. – Sam Hopkins Dec 14 '13 at 1:00
Can you fix the first link to point to the abstract rather than directly to the PDF? Thank you! – Harry Altman Dec 14 '13 at 18:11
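A quick Monte Carlo check of the card-guessing claim (an illustrative sketch of mine; the strategy names are made up):

```python
import random

def play(strategy, reds=26, blacks=26):
    # deal a shuffled deck; strategy(r, b) returns True to call "next card is red",
    # where r, b are the red/black counts still in the deck
    deck = ["R"] * reds + ["B"] * blacks
    random.shuffle(deck)
    r, b = reds, blacks
    for pos, card in enumerate(deck):
        if pos == len(deck) - 1 or strategy(r, b):
            return card == "R"       # forced to call on the last card at the latest
        r, b = (r - 1, b) if card == "R" else (r, b - 1)

strategies = {
    "call immediately":        lambda r, b: True,
    "call when reds ahead":    lambda r, b: r > b,
    "wait for big red excess": lambda r, b: r >= b + 5,
}
for name, s in strategies.items():
    wins = sum(play(s) for _ in range(200_000))
    print(f"{name:25s} {wins / 200_000:.3f}")
```

All three empirical frequencies come out near 0.500, consistent with the symmetry argument in the comment above.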
From school days... Take positive reals $x, y, z, w$. The following statement is actually symmetric in $x, y, z, w$:
"there exists an equilateral triangle of side length $w$, and a point whose distances from the three vertices are $x, y, z$"
A quick proof: Let $ABC$ be equilateral and $P$ arbitrary. Construct $BPQ$ equilateral. Let $AB=AC=BC=w$, $AP=x$, $BP=y$ and $CP=z$. Then $BP=PQ=BQ=y$ by construction, $CP=z$ and $CB=w$ obviously, so it remains to check that $CQ=x$. Now note that triangle $CBQ$ is the $60^\circ$ rotation of $ABP$ around $B$.
The pedestrian definition of the rank of a matrix, namely the maximum number of linearly independent columns, turns out to equal the maximum number of linearly independent rows.
The combinatorial definition of the Schur functions is $$ s_\lambda(x) = \sum_{T \in SSYT(\lambda)} x^{cont(T)} $$ where $SSYT(\lambda)$ is the set of semi-standard Young tableaux of shape $\lambda$ and $x^{cont(T)}$ is the product over all $i$ of $x_i^{\# i\text{'s in }T}$. This is not manifestly a symmetric function. The Bender-Knuth involution proves that $s_\lambda(x)$ is invariant after swapping $x_i$ with $x_{i+1}$, and thus $s_\lambda(x)$ is, indeed, symmetric.
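For a concrete check, here is a small brute-force sketch (my own, assuming sympy is available) that enumerates the SSYT of shape $(2,1)$ in three variables and verifies the symmetry under swapping $x_1$ and $x_2$, which is exactly what the Bender-Knuth involution proves bijectively:

```python
from itertools import product
import sympy as sp

def ssyt(shape, n):
    """Semi-standard Young tableaux of a partition shape with entries
    in 1..n: rows weakly increase, columns strictly increase."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for fill in product(range(1, n + 1), repeat=len(cells)):
        t = dict(zip(cells, fill))
        rows_ok = all(t[r, c] <= t[r, c + 1] for (r, c) in cells if (r, c + 1) in t)
        cols_ok = all(t[r, c] < t[r + 1, c] for (r, c) in cells if (r + 1, c) in t)
        if rows_ok and cols_ok:
            yield fill

x = sp.symbols('x1:4')                       # (x1, x2, x3)
s = sum(sp.Mul(*(x[i - 1] for i in fill)) for fill in ssyt((2, 1), 3))
swapped = s.subs({x[0]: x[1], x[1]: x[0]}, simultaneous=True)
assert sp.expand(s - swapped) == 0           # not obvious from the sum itself
print(sp.expand(s))                          # s_{(2,1)}(x1, x2, x3)
```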
And more startlingly (or at least far less obviously), the Stanley symmetric functions and their generalizations. – darij grinberg Jan 22 '14 at 17:43
The outer automorphism of $S_6$.
This is a rather specialized example, but dear to my heart.
Consider the set of "Richardson subvarieties" of the flag manifold $GL_n/B$, intersections of Schubert and opposite Schubert varieties. The only part of the Weyl group that preserves this set is $\{1,w_0\}$ where the $w_0$ exchanges Schubert and opposite Schubert varieties.
Now project these varieties to a $k$-Grassmannian, obtaining "positroid varieties". This includes the Richardson varieties in the Grassmannian, and many new varieties.
Now the part of the Weyl group that preserves this collection is the dihedral group $D_n$! The symmetry has gotten bigger by a factor of $n$.
I always found $\mathrm{Tor}_R\left(M,N\right) \cong \mathrm{Tor}_R\left(N,M\right)$ for a commutative ring $R$ and two $R$-modules $M$ and $N$ to be mysterious. Then again I have no idea about homology and thus wouldn't be surprised if this is a triviality from an appropriate viewpoint.
Volker Strehl's generalized cyclotomic identity (Corollary 6 in Volker Strehl, Cycle counting for isomorphism types of endofunctions) states that $\prod\limits_{k\geq 1} \left(\dfrac{1}{1-az^k}\right)^{M_k\left(b\right)} = \prod\limits_{k\geq 1}\left(\dfrac{1}{1-bz^k}\right)^{M_k\left(a\right)}$ in the formal power series ring $\mathbb Q\left[\left[z,a,b\right]\right]$, where $M_k\left(t\right)$ denotes the $k$-th necklace polynomial $\dfrac{1}{k}\sum\limits_{d\mid k} \mu\left(d\right) t^{k/d}$. I recall this being not particularly difficult, but quite useful.
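Taking logarithms of both products reduces the identity to the symmetry in $a$ and $b$ of $\sum_{d\mid n}\frac{d}{n}M_d(b)\,a^{n/d}$ for each $n$, which a machine can confirm directly. A quick sympy sketch (mine, not Strehl's):

```python
import sympy as sp
from sympy import divisors
from sympy.ntheory import mobius

a, b = sp.symbols('a b')

def necklace(k, t):
    # k-th necklace polynomial M_k(t) = (1/k) sum_{d | k} mu(d) t^(k/d)
    return sp.Rational(1, k) * sum(mobius(d) * t**(k // d) for d in divisors(k))

# Taking log of both products, the coefficient of z^n on the left-hand
# side is sum_{d | n} (d/n) M_d(b) a^(n/d); Strehl's identity is
# equivalent to this polynomial being symmetric in a and b for all n.
for n in range(1, 13):
    lhs = sum(sp.Rational(d, n) * necklace(d, b) * a**(n // d) for d in divisors(n))
    rhs = lhs.subs({a: b, b: a}, simultaneous=True)
    assert sp.expand(lhs - rhs) == 0
print("log-coefficient symmetry verified for n <= 12")
```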
Every nontrivial commutativity of some family of operators probably qualifies as an unexpected symmetry. Here are three examples:
1. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $Y_i \in \mathbb Z\left[S_n\right]$ by $Y_i = \left(1,i\right) + \left(2,i\right) + ... + \left(i-1,i\right)$ (a sum of $i-1$ transpositions). Then, $Y_i Y_j = Y_j Y_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is a simple exercise, and the $Y_i$ are called the Young-Jucys-Murphy elements. (A brute-force verification of this example appears after this list.)
2. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{0,1,...,n\right\}$, define an element $\mathrm{Sch}_i \in \mathbb Z\left[S_n\right]$ as the sum of all permutations $\sigma \in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$. (Note that $\mathrm{Sch}_0 = \mathrm{Sch}_1$ when $n\geq 1$.) Then, $\mathrm{Sch}_i \mathrm{Sch}_j = \mathrm{Sch}_j \mathrm{Sch}_i$ for all $i$ and $j$ in $ \left\{0,1,...,n\right\}$. In fact, $\mathrm{Sch}_i \mathrm{Sch}_j = \sum\limits_{k=0}^{\min\left\{n,i+j-n\right\}} \dbinom{n-j}{i-k} \dbinom{n-i}{j-k} \left(n+k-i-j\right)! \mathrm{Sch}_k$, which makes the symmetry maybe not that surprising (no similar equalities hold in cases 1 and 3!). See Manfred Schocker, Idempotents for derangement numbers, Discrete Mathematics, vol. 269 (2003), pp. 239-248 for a proof.
3. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $\mathrm{RSW}_i \in \mathbb Z\left[S_n\right]$ as
$\sum\limits_{1\leq u_1 < u_2 < ... < u_i\leq n} \sum\limits_{\substack{\sigma\in S_n, \\ \sigma\left(u_1\right) < \sigma\left(u_2\right) < ... < \sigma\left(u_i\right)}} \sigma$.
Then, $\mathrm{RSW}_i \mathrm{RSW}_j = \mathrm{RSW}_j \mathrm{RSW}_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is Theorem 1.1 in Victor Reiner, Franco Saliola, Volkmar Welker, Spectra of Symmetrized Shuffling Operators, arXiv:1102.2460v2, and a nice proof remains to be found.
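Returning to example 1, as promised: commutativity of the Young-Jucys-Murphy elements can be checked by brute force in the group ring. A small sketch of mine for $\mathbb Z[S_4]$, with permutations stored as tuples of images:

```python
from collections import defaultdict

n = 4

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def transposition(i, j):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def mult(x, y):
    """Product in the group ring Z[S_n]; elements are dicts mapping
    a permutation (a tuple of images) to its integer coefficient."""
    out = defaultdict(int)
    for p, cp in x.items():
        for q, cq in y.items():
            out[compose(p, q)] += cp * cq
    return dict(out)

def Y(i):
    # Y_i = (0,i) + (1,i) + ... + (i-1,i), written with 0-based indices
    return {transposition(j, i): 1 for j in range(i)}

assert all(mult(Y(i), Y(j)) == mult(Y(j), Y(i))
           for i in range(n) for j in range(n))
print("all Young-Jucys-Murphy elements commute in Z[S_4]")
```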
The Tor symmetry is basically just that $M \otimes N \cong N \otimes M$, and you take the derived functors of both sides. Generalizing, any and all nice properties of (co)homology groups would seem to be mysterious symmetries if you consider the definition to be messing around with projective or injective modules, and not something more intrinsic like derived functors. – Ryan Reich Dec 15 '13 at 5:14
A couple of very disparate answers that spring to mind (fortunately, this is community wiki, and actual experts should feel very free to improve my exposition of either):
The negative gradient flow for the Chern-Simons functional on a 3-manifold $M$ naturally satisfies a four-dimensional symmetry. Namely, if one has a principal $G$-bundle on $M$ and some connection $A$ on this $G$-bundle (which I'll carelessly think of as a $\mathfrak{g}$-valued $1$-form on $M$), the Chern-Simons functional $CS(A) = \int_M \Big( dA + \frac{2}{3} A \wedge A \Big) \wedge A$ is a perfectly well-defined function on the space of connections, and one can attempt to perform the negative gradient flow with respect to a natural metric on this space of connections (this being a very natural thing to do from the point of view of Morse theory, for example). If you want, you can interpret the solution to this flow as a connection on the bundle pulled back to $M \times \mathbb{R}$, and while this connection clearly transforms nicely under $Diff(M)$, there's no particular reason to think it's a well-behaved object under the diffeomorphism group of the four-manifold $M \times \mathbb{R}$. However, this negative gradient flow equation turns out to be exactly the anti-self dual equation $F^+ = 0$, where the curvature $F = dA + A \wedge A$ and its self-dual part is $F^+ = \frac{1}{2}(F + *F)$. This equation manifestly respects the symmetries of the entire four-manifold, and this point of view is a very effective one for proving even basic things, like gauge invariance, of the Chern-Simons functional. Witten is very fond of making this point and my understanding is that this insight allowed him to extend his QFT description of the Jones polynomial to a QFT description of its categorification, Khovanov homology.
And now for something completely different: associativity of the quantum cup product. A familiar object to many people is the cohomology ring $H^*(X)$ of a space $X$, which is associative, (graded) commutative, and just generally great. If $X$ is a symplectic manifold, there's an interesting way to deform the multiplication on this ring using counts of $J$-holomorphic curves passing through various cycles. In effect, one picks a compatible almost-complex structure on the symplectic manifold, and then if one writes $\alpha * \beta = \sum_{\gamma} c_{\alpha \beta \gamma} \gamma$, where we think of $\alpha, \beta, \gamma$ as cycles in $X$ (using Poincare duality), the coefficient $c_{\alpha \beta \gamma}$ is a generating function in some formal variables, the coefficients of which are counts of holomorphic curves of fixed genus and homology class intersecting our three cycles $\alpha, \beta, \gamma$. Using this deformed multiplication gives the quantum cohomology ring $QH^*(X)$. Now, some properties of this ring, like graded commutativity, are fairly easy to see from the definition, but associativity is really quite tricky! (I realise this isn't exactly what you asked in your question as it's not just a symmetry of some coefficient, but you can phrase associativity as a symmetry of something or other -- if you want to be technical, a four-point Gromov-Witten invariant -- so I think it qualifies.) The associativity is somehow not so bad to see in the algebro-geometric case (or perhaps this is just my bias as an algebraic geometer), but in symplectic geometry you really need some nontrivial analytic estimates at some point in the proof. And you get a lot out of it! Associativity of this quantum cohomology ring encapsulates a wealth of information on enumerative geometry counts associated to $X$; indeed, it was basically this idea that allowed Kontsevich to find his recursion for the number of degree $d$ rational curves through $3d - 1$ general points in $\mathbb{P}^2$.
Finally, I kind of want to mention strange duality, even though that now really isn't an answer to the question, as you have to modify one side or the other; I'll just copy a very quick summary from the abstract of https://arxiv.org/abs/math/0602018: "For X a compact Riemann surface of positive genus, the strange duality conjecture predicts that the space of sections of a certain theta bundle on moduli of bundles of rank r and level k is naturally dual to a similar space of sections of rank k and level r." The paper itself is a great place to learn more about it if you're interested!
Morley's trisector theorem allows you to build a triangle which is maximally symmetric out of one which has no symmetry at all.
Let $G$ be a finite group with order $n$. For each $d$ dividing $n$, the number of subgroups of $G$ of order $d$ equals the number of subgroups of order $n/d$ if $G$ is abelian. More broadly, the lattice of subgroups of a finite abelian group looks the same if you flip it around by 180 degrees.
This is not at all obvious at the level at which the statement can first be understood, essentially because there is no natural way to construct subgroups of index $d$ from subgroups of order $d$ in a general finite abelian group with order divisible by $d$. It is not clear at a beginning level how the commutativity of the group leads to such conclusions.
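As a worked example (mine, for concreteness): in $G=\mathbb Z/p^2\times\mathbb Z/p$, every subgroup of order $p$ lies in the $p$-torsion subgroup, a copy of $\mathbb Z/p\times\mathbb Z/p$, which contains exactly $p+1$ subgroups of order $p$. On the other side, the subgroups of index $p$ are precisely the kernels of the $p^2-1$ nonzero homomorphisms $G\to\mathbb Z/p$, and since each kernel arises from exactly $p-1$ such homomorphisms, there are $(p^2-1)/(p-1)=p+1$ of them as well. The counts match, yet no canonical bijection between the two families presents itself.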
Here is an example from potential theory where symmetry is a not-so-obvious property: the Green function of a bounded open subset $\Omega \subset \mathbb{C}$. More precisely, having specified a point $a \in \Omega$, one defines the classical Green function for $\Omega$ with pole at $a$, written $G_\Omega(\cdot\,; a)$, as a function on $\mathbb{C}$ with the following properties: (i) $G_\Omega(\cdot; a)$ is harmonic in $\Omega \setminus \{a\}$; (ii) $z \mapsto G_\Omega(z;a) + \log |z-a|$ extends to a harmonic function on $\Omega$; (iii) for each $w \in \partial \Omega$, $\lim_{z \to w} G_\Omega(z;a)=0$.
The symmetry property says that $G_\Omega(z;w)=G_\Omega(w;z)$ for any $z,w \in \Omega$ such that $z \ne w$. Note that the functions on either side of the equation are different: one has a pole at $w$ and the other at $z$. It is not very hard to prove the symmetry property, but it is not obvious either.
The existence of such a function is related to the solution of a Dirichlet problem for the Laplace equation in $\Omega$. Analogous functions can be considered for domains in $\mathbb{R}^n, \ n>2$ or in $\mathbb{C}^n, n > 1$, and they also enjoy the symmetry property.
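For the unit disc $\mathbb D$ the Green function has a closed form that makes the symmetry visible by inspection:
$$ G_{\mathbb D}(z;a) \;=\; \log\left|\frac{1-\bar a z}{z-a}\right|, $$
and since $1-\bar a z$ and $1-\bar z a$ are complex conjugates of one another, they have the same modulus, so swapping $z$ and $a$ leaves the right-hand side unchanged.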
In number theory, Terry Tao already mentioned Quadratic Reciprocity in his first comment, but there's also the reciprocity formula $$ s(b,c) + s(c,b) = \frac1{12}\left( \frac{b}{c} + \frac1{bc} + \frac{c}{b} \right) - \frac14 $$ for Dedekind sums, symmetrized further in Rademacher's formula $$ D(a,b;c) + D(b,c;a) + D(c,a;b) = \frac1{12} \frac{a^2+b^2+c^2}{abc} - \frac14. $$ [Here $D(a,b;c) = \sum_{n\,\bmod\,c} ((an/c)) ((bn/c))$, where $((\cdot))$ is the sawtooth function taking $x$ to $0$ if $x \in {\bf Z}$ and to $x - \lfloor x \rfloor - 1/2$ otherwise; and the Dedekind sum is the special case $s(b,c) = D(1,b;c)$.]
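Since a Dedekind sum is a finite sum of rationals, reciprocity can be checked exactly with rational arithmetic. A small sketch of mine using Python's fractions module:

```python
from math import gcd
from fractions import Fraction

def saw(x):
    """Sawtooth ((x)): 0 if x is an integer, else x - floor(x) - 1/2."""
    x = Fraction(x)
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(b, c):
    """s(b, c) = sum over n mod c of ((n/c)) ((b n / c))."""
    return sum(saw(Fraction(n, c)) * saw(Fraction(b * n, c)) for n in range(c))

for b, c in [(3, 7), (5, 12), (7, 11)]:
    assert gcd(b, c) == 1
    lhs = dedekind_sum(b, c) + dedekind_sum(c, b)
    rhs = (Fraction(1, 12) * (Fraction(b, c) + Fraction(1, b * c) + Fraction(c, b))
           - Fraction(1, 4))
    assert lhs == rhs
print("Dedekind reciprocity checked exactly")
```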
But I don't understand what is so special about this, at least in terms of symmetry: for about any function $s(\cdot,\cdot)$, including the Legendre symbol, $s(b,c)+s(c,b)$ or $s(b,c)s(c,b)$ is symmetric in $b$ and $c$. Where is the surprise? – Wolfgang Dec 18 '13 at 18:01
@Wolfgang asks a fair question. To add to Matt Young's answer, we can define $s'(b,c) = s(b,c) + \frac{1}{8} - \frac{b}{12c} - \frac{1}{24bc}$, and then the reciprocity formula says that $s'(b,c)$ is antisymmetric: $s'(b,c) = -s'(c,b)$. – Noam D. Elkies Dec 18 '13 at 20:25
@NoamD.Elkies Granted. That reminds me of the relation between $\zeta(1-s)$ and $\zeta(s)$, cast as $\Xi(1-s)=\Xi(s)$ with appropriate $\Xi$. – Wolfgang Dec 19 '13 at 7:56
Consider a functional inequality, like the Hardy-Littlewood-Sobolev inequality $$\left|\iint_{{\mathbb R}^N\times{\mathbb R}^N}\frac{\overline{f(x)}g(y)}{|x-y|^\lambda}\,dx\,dy\right|\leq C\|f\|_r\|g\|_s.$$ Even if you put the sharp constant $C$ in this inequality, for most functions the inequality is strict. Now look for maximizers, i.e., functions for which the LHS is equal to the RHS: they are highly symmetric functions, actually spherically symmetric and very smooth. This is a general phenomenon, connected with the monotonicity of $L^p$ and Sobolev norms under symmetrization procedures.
Characters of affine Kac-Moody Lie algebras and of the Virasoro Lie algebra are modular forms. These modular symmetries are hardly evident from the definitions.
Maxwell's equations were originally formulated within Newtonian physics. Special relativity later revealed that these equations have a surprising symmetry under Lorentz transformations: they remain true in a moving reference frame. The transformation of the values is such that (loosely speaking) what looks like pure electric charge in one reference frame can be electric current and charge in another reference frame; and what looks like a pure electric field in one reference frame can be a mixture of magnetic and electric fields in another.
See https://en.wikipedia.org/wiki/Covariant_formulation_of_classical_electromagnetism for a precise formulation.
Betti numbers: for a closed orientable $n$-manifold $M^n$, the symmetry $\dim(H^k(M^n))=\dim(H^{n-k}(M^n))$ does not immediately follow from the definition.
@DanielLitt, I know, I just don't want to deal with torsion, and for the purpose of this question Betti numbers' symmetry is sufficient. – Michael Dec 13 '13 at 21:23
My point is that the symmetry does not come from the Betti numbers, but from the space $M$; I don't think this is an example of what the question asks for. – Daniel Litt Dec 13 '13 at 23:40
There is a philosophy that the functional equation of a zeta function should be a consequence of Poincare duality on some exotic space. For zeta functions of varieties over finite fields, this was made rigorous in the 1960s, but over number fields it's still just a philosophy. So we have two non-obvious symmetries that are the same, but not obviously the same. In other words, we have a non-obvious symmetry between non-obvious symmetries. – JBorger Jan 12 '14 at 19:01
Two (unrelated) examples from combinatorics:
The first is Proposition 7.19.9 of volume 2 of Stanley's "Enumerative Combinatorics." Define a descent of a (skew) Standard Young Tableau $T$ of shape $\lambda/\mu$ to be an index $i$ such that $i+1$ is in a lower row than $i$. Let $D(T)$ denote the set of descents of $T$. Then for any $|\lambda/\mu|=n$ and for any $1 \leq i \leq n-1$, the number of SYTs $T$ of shape $\lambda/\mu$ such that $i \in D(T)$ is independent of $i$.
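This is easy to test by brute force on a small shape; the following sketch of mine enumerates the SYT of shape $(3,2)$ and counts, for each $i$, how many tableaux have $i$ as a descent:

```python
from itertools import permutations

def syt(shape):
    """Standard Young tableaux of a partition shape, by brute force:
    place 1..n so that rows and columns strictly increase."""
    cells = [(r, c) for r, k in enumerate(shape) for c in range(k)]
    for perm in permutations(range(1, len(cells) + 1)):
        t = dict(zip(cells, perm))
        if all(t[r, c] < t[r, c + 1] for (r, c) in cells if (r, c + 1) in t) and \
           all(t[r, c] < t[r + 1, c] for (r, c) in cells if (r + 1, c) in t):
            yield t

def descent_set(t):
    row_of = {v: r for (r, c), v in t.items()}
    return {i for i in range(1, len(t)) if row_of[i + 1] > row_of[i]}

shape = (3, 2)
tableaux = list(syt(shape))
counts = [sum(i in descent_set(t) for t in tableaux) for i in range(1, 5)]
print(counts)  # [2, 2, 2, 2]: each i in 1..4 is a descent equally often
```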
The second follows from a bijection of De Médicis and Viennot (1994, Adv. Appl. Math.) Let $\mathcal{M}_n$ denote the set of perfect matchings of $[2n]$, i.e. the set of partitions of $[2n] := \{1,2,\ldots,2n\}$ into pairs. Let $M \in \mathcal{M}_n$. For $p = \{a,b\}, q = \{c,d\} \in M$ with $a<b$, $c<d$, and $a<c$, we say that $p$ and $q$ cross if $a < c < b< d$ and we say they nest if $a<c<d<b$. Finally, we say they are aligned if they neither cross nor nest, i.e., $a<b<c<d$. Define:
$\mathrm{ne}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ nest}\}|;$
$\mathrm{cr}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ cross}\}|;$
$\mathrm{al}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ are aligned}\}|.$
Then $\sum_{M \in \mathcal{M}_n}x^{\mathrm{ne}(M)}y^{\mathrm{cr}(M)}=\sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{ne}(M)}$. However, crossings and alignments (or nestings and alignments) are not equidistributed: $\sum_{M \in \mathcal{M}_n}x^{\mathrm{al}(M)}y^{\mathrm{cr}(M)} \neq \sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{al}(M)}$.
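The symmetric joint distribution of nestings and crossings, and the failure of symmetry for alignments, can both be confirmed by brute force for small $n$; a sketch of mine:

```python
from itertools import combinations
from collections import Counter

def matchings(elems):
    """All perfect matchings of a sorted list of distinct elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, other in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, other)] + m

def stats(m):
    ne = cr = 0
    for (a, b), (c, d) in combinations(m, 2):
        if a > c:                       # normalize so that a < c
            (a, b), (c, d) = (c, d), (a, b)
        if a < c < b < d:
            cr += 1                     # crossing
        elif a < c < d < b:
            ne += 1                     # nesting
    return ne, cr

n = 4
dist = Counter(stats(m) for m in matchings(list(range(1, 2 * n + 1))))
# Equidistribution: swapping the two statistics leaves the counts fixed.
assert all(dist[(i, j)] == dist[(j, i)] for (i, j) in dist)
print("ne/cr joint distribution is symmetric for n =", n)
```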
The Jacobson radical of a ring $R$ is defined to be the intersection of all maximal left ideals in $R$. It turns out that the Jacobson radical is the intersection of all maximal right ideals in $R$ as well, so the Jacobson radical does not depend on whether one considers left or right ideals. In particular, the Jacobson radical of a ring is a two-sided ideal. In fact, there are several characterizations of the Jacobson radical that do not appear to be symmetric with respect to "leftness" and "rightness" including the following.
1. The intersection of all maximal left ideals.
2. $\bigcap\{\mathrm{Ann}(M) \mid M \text{ is a simple left } R\text{-module}\}$
3. $\{x\in R \mid 1-rx \text{ has a left inverse for each } r\in R\}$
4. $\{x\in R \mid 1-rx \text{ has a two-sided inverse for each } r\in R\}$
Let $r_4(n)$ be the number of $4$-tuples $a,b,c,d\in \bf Z$ satisfying $a^2+b^2+c^2+d^2=n$. Then $\sum_{n\geq 0}r_4(n)e^{2\pi i\, nz}dz$ is a holomorphic differential form on the upper half-plane that is invariant by a subgroup of finite index in ${\rm SL}_2(\bf Z)$ (acting by $\frac{az+b}{cz+d}$).
The same is true if you replace $r_4(n)$ by $a_n(E)$ where:
-- $E$ is an elliptic curve defined over $\bf Q$,
-- if $p$ is a prime number, $a_p(E)=p+1-N_p(E)$ and $N_p(E)$ is the number of points of $E$ in ${\bf Z}/p{\bf Z}$,
-- $a_n(E)$, for $n\in\bf N$, is defined by $\sum_n a_n(E)n^{-s}=\prod_p(1-a_p(E)p^{-s}+p^{1-2s})^{-1}$ (the product has to be taken over the prime numbers $p$ such that $E$ remains an elliptic curve modulo $p$ which excludes finitely many of them).
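Returning to the first example: one concrete payoff of the hidden modular invariance of the theta series is Jacobi's four-square formula, $r_4(n) = 8\sum_{d\mid n,\,4\nmid d} d$, which is classically deduced from that invariance. A quick numerical check (my own sketch):

```python
from itertools import product
from sympy import divisors

def r4(n):
    """Count integer 4-tuples (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n."""
    m = int(n ** 0.5) + 1
    return sum(1 for t in product(range(-m, m + 1), repeat=4)
               if sum(x * x for x in t) == n)

# Jacobi: r4(n) = 8 * (sum of divisors of n not divisible by 4)
for n in range(1, 25):
    assert r4(n) == 8 * sum(d for d in divisors(n) if d % 4 != 0)
print("Jacobi's four-square formula verified for n < 25")
```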
I would like to add an example coming from the area of additive theory known as Freiman's structure theory. If I am not (too) blind, this has not been mentioned yet, and hopefully it qualifies as an appropriate answer.
Assume that $\mathbb{A} = (A, +)$ is a (possibly non-commutative) semigroup, and let $X$ be a non-empty subset of $A$. Given an integer $n \ge 1$, we write $nX$ for $\{x_1+\cdots + x_n: x_1, \ldots, x_n \in X\}$. In principle, we have $1 \le |nX| \le |X|^n$, and for all $k \in \mathbb{N}^+$ and $i \in \{1, \ldots, k\}$ we can actually find a pair $(\mathbb{A}, X)$ such that $|X| = k$ and $|nX| = i$, with the result that, in general, not much can be concluded about the "structure" of $X$. However, if $|nX|$ is sufficiently small with respect to $|X|$ and $\mathbb{A}$ has suitable properties, then "surprising" things start happening, and for instance we have the following:
Theorem. If $\mathbb{A}$ is a linearly orderable semigroup (i.e., there exists a total order $\preceq$ on $A$ such that $x + z \prec y + z$ and $z + x \prec z + y$ for all $x,y,z \in A$ with $x \prec y$) and $|2X| \le 3|X|-3$, then the smallest subsemigroup of $\mathbb{A}$ containing $X$ is abelian.
This implies at once an analogous result by Freiman and coauthors which is valid for linearly ordered groups; see Theorem 1.2 in [F] (a preprint can be found here). I don't know of any similar result for larger values of $n$.
[F] G. Freiman, M. Herzog, P. Longobardi, and M. Maj, Small doubling in ordered groups, to appear in J. Austr. Math. Soc.
In the definition of "Latin square" there is complete symmetry between the roles of "row", "column" and "symbol", so that any of the 6 permutations of those roles produces another Latin square.
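The six "conjugates" of a Latin square are easy to compute: view the square as a set of triples (row, column, symbol) and permute the three roles. A small sketch of mine verifying that each conjugate is again a Latin square:

```python
from itertools import permutations

def is_latin(sq):
    n = len(sq)
    rows_ok = all(sorted(row) == list(range(n)) for row in sq)
    cols_ok = all(sorted(col) == list(range(n)) for col in zip(*sq))
    return rows_ok and cols_ok

def conjugate(sq, role_perm):
    """Rewrite the square as triples (row, col, symbol), permute the
    three roles according to role_perm, and reassemble."""
    n = len(sq)
    new = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            t = (r, c, sq[r][c])
            new[t[role_perm[0]]][t[role_perm[1]]] = t[role_perm[2]]
    return new

sq = [[0, 1, 2],
      [1, 2, 0],
      [2, 0, 1]]
assert all(is_latin(conjugate(sq, p)) for p in permutations(range(3)))
print("all 6 conjugates are Latin squares")
```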
Tuesday, November 28, 2006
Chemistry (from Greek χημεία khemeia[1] meaning "alchemy") is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals. Chemistry deals with the composition and statistical properties of such structures, as well as their transformations and interactions to become materials encountered in everyday life. Chemistry also deals with understanding the properties and interactions of individual atoms with the purpose of applying that knowledge at the macroscopic level. According to modern chemistry, the physical properties of materials are generally determined by their structure at the atomic scale, which is itself defined by interatomic forces.
Chemistry is often called the "central science" because it connects other sciences, such as physics, material science, nanotechnology, biology, pharmacy, medicine, bioinformatics, and geology.[2] These connections are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. For example, physical chemistry involves applying the principles of physics to materials at the atomic and molecular level.
Chemistry pertains to the interactions of matter. These interactions may be between two material substances or between matter and energy, especially in conjunction with the First Law of Thermodynamics. Traditional chemistry involves interactions between substances in chemical reactions, where one or more substances become one or more other substances. Sometimes these reactions are driven by energetic (enthalpic) considerations, such as when two highly energetic substances such as elemental hydrogen and oxygen react to form the less energetic substance water. Chemical reactions may be facilitated by a catalyst, which is generally another chemical substance present within the reaction media but unconsumed (such as sulfuric acid catalyzing the electrolysis of water) or a non-material phenomenon (such as electromagnetic radiation in photochemical reactions). Traditional chemistry also deals with the analysis of chemicals both in and apart from a reaction, as in spectroscopy.
All ordinary matter consists of atoms or the subatomic components that make up atoms: protons, electrons and neutrons. Atoms may be combined to produce more complex forms of matter such as ions, molecules or crystals. The structure of the world we commonly experience and the properties of the matter we commonly interact with are determined by the properties of chemical substances and their interactions. Steel is harder than iron because its atoms are bound together in a more rigid crystalline lattice. Wood burns or undergoes rapid oxidation because it can react spontaneously with oxygen in a chemical reaction above a certain temperature.
Substances tend to be classified in terms of their energy or phase as well as their chemical compositions. The three phases of matter at low energy are solid, liquid and gas. Solids have fixed structures at room temperature which can resist gravity and other weak forces attempting to rearrange them, due to their tight bonds. Liquids have limited bonds, with no fixed structure, and flow with gravity. Gases have no bonds and act as free particles. Another way to view the three phases is by volume and shape: roughly speaking, solids have fixed volume and shape, liquids have fixed volume but no fixed shape, and gases have neither fixed volume nor fixed shape.
Water (H2O) is a liquid at room temperature because its molecules are bound by intermolecular forces called hydrogen bonds. Hydrogen sulfide (H2S), on the other hand, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions. The hydrogen bonds in water have enough energy to keep the water molecules from separating from each other but not from sliding around, making it a liquid at temperatures between 0 °C and 100 °C at sea level. Lowering the temperature or energy further allows a tighter organization to form, creating a solid and releasing energy. Increasing the energy (see heat of fusion) will melt the ice, although the temperature will not change until all the ice is melted. Increasing the temperature of the water will eventually cause boiling (see heat of vaporization) when there is enough energy to overcome the polar attractions between individual water molecules (100 °C at 1 atmosphere of pressure), allowing the H2O molecules to disperse enough to be a gas. Note that in each case energy is required to overcome the intermolecular attractions and thus allow the molecules to move away from each other.
Scientists who study chemistry are known as chemists. Most chemists specialize in one or more sub-disciplines. The chemistry taught at the high school or early college level is often called "general chemistry" and is intended to be an introduction to a wide variety of fundamental concepts and to give the student the tools to continue on to more advanced subjects. Many concepts presented at this level are often incomplete and technically inaccurate, yet they are of extraordinary utility. Chemists regularly use these simple, elegant tools and explanations in their work because they have been proven to accurately model a very wide array of chemical reactivity, are generally sufficient, and more precise solutions may be prohibitively difficult to obtain.
The science of chemistry is historically a recent development but has its roots in alchemy which has been practiced for millennia throughout the world. The word chemistry is directly derived from the word alchemy; however, the etymology of alchemy is unclear (see alchemy).
History of chemistry
The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. Alchemy was practiced by many cultures throughout history and often contained a mixture of philosophy, mysticism, and protoscience (see Alchemy).
Alchemists discovered many chemical processes that led to the development of modern chemistry. As history progressed, the more notable alchemists (esp. Geber and Paracelsus) evolved alchemy away from philosophy and mysticism and developed more systematic and scientific approaches. The first alchemist considered to apply the scientific method to alchemy and to distinguish chemistry from alchemy was Robert Boyle (1627–1691); however, chemistry as we know it today was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table of the chemical elements by Dmitri Mendeleyev.
The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery over the past 100 years. In the early part of the 20th century the subatomic nature of the atom was revealed, and the science of quantum mechanics began to explain the physical nature of the chemical bond. By the mid 20th century chemistry had developed to the point of being able to understand and predict aspects of biology, spawning the field of biochemistry.
• Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
• Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, as in medicinal chemistry or neurochemistry. Biochemistry is also associated with molecular biology and genetics.
• Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton.
• Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, and spectroscopy. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry.
• Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Essentially from reductionism theoretical chemistry is just physics, just like fundamental biology is just chemistry and physics.
• Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern Transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field.
Other fields include Astrochemistry, Atmospheric chemistry, Chemical Engineering, Chemo-informatics, Electrochemistry, Environmental chemistry, Flow chemistry, Geochemistry, Green chemistry, History of chemistry, Materials science, Medicinal chemistry, Molecular Biology, Molecular genetics, Nanotechnology, Organometallic chemistry, Petrochemistry, Pharmacology, Photochemistry, Phytochemistry, Polymer chemistry, Solid-state chemistry, Sonochemistry, Supramolecular chemistry, Surface chemistry, and Thermochemistry.
Fundamental concepts
An atom is a collection of matter consisting of a positively charged core (the atomic nucleus) which contains protons and neutrons, and which maintains a number of electrons to balance the positive charge in the nucleus.
The most convenient presentation of the chemical elements is in the periodic table of the chemical elements, which groups elements by atomic number. Thanks to its ingenious arrangement, the groups (columns) and periods (rows) of elements in the table either share several chemical properties or follow a certain trend in characteristics such as atomic radius, electronegativity and electron affinity. Lists of the elements by name, by symbol, and by atomic number are also available. In addition, several isotopes of an element may exist.
An ion is a charged species: an atom or a molecule that has lost or gained one or more electrons. Positively charged cations (e.g. the sodium cation Na+) and negatively charged anions (e.g. chloride Cl−) can form neutral salts (e.g. sodium chloride NaCl). Examples of polyatomic ions that do not split up during acid-base reactions are hydroxide (OH−) and phosphate (PO4^3−).
A compound is a substance with a fixed ratio of chemical elements which determines the composition, and a particular organization which determines chemical properties. For example, water is a compound containing hydrogen and oxygen in the ratio of two to one, with the oxygen between the hydrogens, and an angle of 104.5° between them. Compounds are formed and interconverted by chemical reactions.
A molecule is the smallest indivisible portion of a pure compound or element that retains a set of unique chemical properties.
A chemical substance can be an element, compound or a mixture of compounds, elements or compounds and elements. Most of the matter we encounter in our daily life are one or another kind of mixtures, e.g. air, alloys, biomass etc.
Chemical bond
A chemical bond is the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, Valence Bond Theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to predict molecular structure and composition. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory fails and alternative approaches, primarily based on principles of quantum chemistry such as molecular orbital theory, are necessary. See the diagram on electronic orbitals.
States of matter
Chemical reactions
A Chemical reaction is a process that results in the interconversion of chemical substances. Such reactions can result in molecules attaching to each other to form larger molecules, molecules breaking apart to form two or more smaller molecules, or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. For example, substances that react with oxygen to produce other substances are said to undergo oxidation; similarly a group of substances called acids or alkalis can react with one another to neutralize each other's effect, a phenomenon known as neutralization. Substances can also be dissociated or synthesized from other substances by various different chemical processes.
Quantum chemistry
Quantum chemistry mathematically describes the fundamental behavior of matter at the molecular scale. It is, in principle, possible to describe all chemical systems using this theory. In practice, only the simplest chemical systems may realistically be investigated in purely quantum mechanical terms, and approximations must be made for most practical purposes (e.g., Hartree-Fock, post Hartree-Fock or Density functional theory, see computational chemistry for more details). Hence a detailed understanding of quantum mechanics is not necessary for most chemistry, as the important implications of the theory (principally the orbital approximation) can be understood and applied in simpler terms.
In quantum mechanics (with several applications in computational chemistry and quantum chemistry), the Hamiltonian, the operator corresponding to the total energy of a particle, can be expressed as the sum of two operators, one corresponding to kinetic energy and the other to potential energy. The Hamiltonian in the Schrödinger wave equation used in quantum chemistry does not contain terms for the spin of the electron.
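As a concrete worked example (standard quantum mechanics, not specific to this article): for a single particle of mass m moving in a potential V, the Hamiltonian is H = T + V = −(ħ²/2m)∇² + V(r), the sum of the kinetic-energy operator T and the potential-energy operator V, and the time-independent Schrödinger equation reads Hψ = Eψ.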
Solutions of the Schrödinger equation for the hydrogen atom give the form of the wave functions for atomic orbitals, and the relative energies of, say, the 1s, 2s, 2p and 3s orbitals. The orbital approximation can be used to understand the other atoms, e.g. helium, lithium and carbon.
Chemical Laws
Dalton's law of multiple proportions says that when elements combine to form compounds, they do so in proportions that are small whole numbers (e.g. 1:2 O:H in water), although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers and are frequently represented as a fraction. Such compounds are known as non-stoichiometric compounds.
Interpersonal chemistry
In the fields of sociology, behavioral psychology, and evolutionary psychology, with specific reference to intimate relationships or romantic relationships, interpersonal chemistry is a reaction between two people or the spontaneous reaction of two people to each other, especially a mutual sense of attraction or understanding.[4] In a colloquial sense, it is often intuited that people can have either good chemistry or bad chemistry together. Other related terms are team chemistry, a phrase often used in sports, and business chemistry, as between two companies.[5] Recent developments in neurochemistry have begun to shed light on the nature of the "chemistry of love", in terms of measurable changes in neurotransmitters such as oxytocin, serotonin, and dopamine.
The word chemistry comes from the earlier study of alchemy, which is basically the quest to make gold from earthen starting materials. As to the origin of the word "alchemy", the question is a debatable one; it certainly has Greek origins, and some, following E. A. Wallis Budge, have also asserted Egyptian origins. Alchemy, generally, derives from the Old French alkemie and the Arabic al-kimia: "the art of transformation." The Arabs borrowed the word "kimia" from the Greeks when they conquered Alexandria in the year 642 AD.
Saturday, November 25, 2006
INTRODUCTION: The fungal functional type is comprised of sessile heterotrophs with cell walls. Rather than ingesting food as animals do, fungal organisms absorb food across the cell wall. The assemblage of organisms termed fungi are classified into two general categories. First are the true fungi (Kingdom Fungi), which evolved from motile, aquatic protozoa that are also ancestors to the animal kingdom. True fungi first evolved as the chytrids (Phylum Chytridiomycota), which produce an enlarged globular cell from which numerous filaments grow into the food source. Chytrids produce motile spores and gametes, and the vegetative cells are coenocytic (many nuclei float around in one big cell). Chytrids gave rise to the Zygomycetes (Phylum Zygomycota), which produce no motile cells but form coenocytic hyphae. From the Zygomycetes, the advanced fungi arose and in time formed the Phylum Dikaryomycota. Organisms in the Dikaryomycota produce hyphae comprised of individual nucleated cells separated by walls (septate hyphae), with each cell having two haploid nuclei in what is called an N + N configuration. The two main groups of dikaryotic fungi are the ascomycetes (sac-forming fungi) and the basidiomycetes (club-forming fungi). The majority of fungi affecting humans are ascomycetes and basidiomycetes.
The second category of fungal organisms is the pseudofungi, made up of various unrelated protista groups. The pseudofungi were formerly classified into the catch-all kingdom Protista but have recently been reclassified into more-specific kingdoms that reflect genetic relationships. Important pseudofungi are the Oomycetes (egg fungi and water molds), and the slime molds. Oomycetes are closely related to the stramenopilous algae - the brown algae, golden-brown algae and diatoms of the Kingdom Stramenopila. The close relationship between the oomycetes and the brown algae is evident in that both have cellulose walls, and they share the same type of flagella. Oomycetes descend from algae that lost their chloroplasts, and hence have adopted a heterotrophic life form. Oomycetes also produce filamentous hyphae to better absorb nutrients from the food source. Because they have no relationship with the chytrids, it is clear that the oomycetes and chytridiomycetes independently evolved the mycelial life form.
Slime molds evolved from various ancient protozoa and have little genetic affinity to other fungi or algae groups. They are animal-like in that they ingest food early in the life-cycle, but are fungal-like in that they produce walled sporangia and spores.
In today’s lab, you will have a chance to examine the diversity of both false and true fungi. The lab will begin with the false fungi and then follow an evolutionary sequence through the true fungi. Many specimens illustrate important reproductive phases in the life-cycles of these organisms. You should examine these carefully because reproductive features are important to understanding and distinguishing the various groups of fungi.
Although not required, we urge you to draw and label the specimens you examine. Our experience has been that drawings with appropriate labels are the best way to learn the features emphasized in lab and lecture. Drawings are also an excellent study tool to refresh your memory just prior to the exam.
A) Slime molds (Kingdoms Myxomycota and Dictyosteliomycota): Slime molds are largely saprophytic and are typically found on decaying wood in moist forests. During the vegetative phase of the life cycle (see figure 16-6 on page 353 of your text), they begin life as independent amoebae, ingesting microscopic bits of organic debris. The free-living amoebae eventually swarm together to produce a multicellular blob called a plasmodium. After a while, the plasmodium forms cellulose walls around the nuclei and produces sporangia (or fructifications). The sporangia release large numbers of air-borne spores which germinate in the presence of water to form free-living amoebae, thus completing the life cycle. Slime molds defy simple categorization. They are animal-like in that they ingest food during the amoeboid and plasmodial phases. They are plant-like in their formation of cell walls and sporangia during reproduction. Because of the cell walls formed during the reproductive phase they are considered fungal in nature. There are two major types of slime molds:
1) the Myxomycota are the acellular, or true plasmodial, slime molds: the plasmodium stage of this group is made up of large blobs of coenocytic protoplasm with many nuclei inside.
2) the Dictyosteliomycota are the cellular slime molds, where the plasmodium is made of individual cells separated by membranes (but not cell walls).
Observations on Display: Fruiting bodies of slime molds are readily found in Ontario forests in the autumn. Some will be on display for you to examine, along with a diagram of the life cycle.
B) Oomycetes: the water molds, or egg fungi (Kingdom Stramenopila)
Oomycetes are characterized by the formation of large egg-bearing cells, termed oogonia, on the tips of specialized hyphae (see figure 17-4 on page 374 of your text). Large, non-motile eggs form inside the oogonia and are fertilized by male-like hyphae termed antheridia. The antheridia grow into the egg and deposit the male gametes, which then fuse with the egg to form a zygote, termed the oospore. The oospore undergoes mitosis and forms a sporangium. The spores that are produced disperse to infect leaves, seedlings, fish and dead organisms.
Oomycetes are important saprophytes in aquatic habitats. In terrestrial habitats, they are generally parasitic. Important diseases caused by water molds are downy mildews (Peronospora), potato late blight (Phytophthora infestans), and damping-off disease (Pythium spp.). We will examine three species: Achlya, a saprophytic water mold; Albugo candida, the white rust of mustard plants; and Phytophthora infestans.
Examine the following cultures with the dissecting microscope:
1. Achlya whole mounts: Achlya is a water mold that grows on organic debris in lakes and rivers. It forms floating mycelial mats arising from the food material, and in these mats sexual reproduction occurs. On your bench are prepared slides for you to examine with the light microscope. Focus on the blue-green material in the center of the slide. This is a hyphal mat with oogonia.
Observe the large conspicuous oogonia within the mycelium. Inside the oogonia you can see zygotes (oospores) that will later divide to form sporangia. Examine the oogonia closely to see if additional hyphae are attached to them. These would be antheridia, which present the sperm nucleus to the egg cells within the oogonia. Note the properties of the vegetative hyphae. Do you see cross walls, or are the cells continuous within a filament?
2. Phytophthora infestans: This organism causes potato late blight, one of the worst crop diseases in the history of humanity. In the 1840s, Phytophthora infestans was introduced to Europe from Peru. It rapidly spread across the continent, destroying much of the potato crop. In Ireland, peasants were particularly dependent on the potato for survival, and the crop losses of 1845-1848 killed roughly a million Irish and forced the emigration of millions more to North America. In continental Europe, the loss of the potato crop led to widespread economic failure and social revolt. Many of the radical worker movements that would later influence world history, such as Marxism-Leninism, arose in the wake of the food crisis caused by the Phytophthora outbreak.
Examine the Phytophthora slide prepared fresh from a culture stained with cotton blue. Identify the round oogonia in the slide and examine the hyphae for cross walls between the cells. Are there any oospores within the Oogonia? Can you see any antheridial hyphae attached to the oogonia? If you do, show other students in your group.
3. Albugo candida (White Rust of Mustards): Albugo infects plants of the mustard family, forming white pustules on the leaves. These pustules are comprised of many asexual conidiospores bursting through the plant's epidermis. Inside the plant, fungal hyphae form oogonia and antheridia, which will mate and form oospores. The oospores develop sporangia which disperse genetically-distinct spores. The two prepared slides show the extent of the infection by the parasitic white rust fungus.
Examine the prepared cross section showing Albugo conidiospores bursting through the epidermis of a mustard fruit. Note the strings of conidiospores forming under the epidermis. The bulge formed by the mass of conidia produces the rust pustule. Upon rupturing, the conidiospores are released on the wind to start a series of new infections.
Examine the prepared slide of the Albugo sexual organs (oogonia and antheridia). The oogonia are evident as dense, red-staining circles among the cells of the leaf tissue.
Notes and Drawings:
We have assembled a range of specimens from the Kingdom Fungi, and they are arranged for you to examine beginning with the most primitive (the Chytrids) and then progressing through the Zygomycetes to the Ascomycetes and Basidiomycetes.
Chytrids are mainly aquatic or parasitic. They are important decomposers of pollen, dead insects and seeds that fall into ponds and rivers. Others are parasites of algae, higher fungi, mosquitoes, rotifers, and water molds. The general body plan is to form a large, central coenocytic globule that either directly invades the body of the host, or produces diminutive hyphae termed rhizomycelium, which invade the surface cells of the host. The rhizomycelium grows into the food source and absorbs nutrients across the chitinous cell wall.
Chytrids are difficult to maintain in culture and have to be baited from natural sources. We have purchased slides showing a common parasitic chytrid, Synchytrium, invading a plant host, and have prepared a fresh culture of chytrids on snake skin. Chytrids are important decomposers of animal epithelial tissue in aquatic habitats. To capture them, we have placed strips of snake skin in pond water. Chytrid spores swim to the snake skin, and then germinate on the skin, forming a simple globular body with rhizomycelium.
Synchytrium: Examine the prepared slide of Synchytrium that has infected either leaves or potato tubers. Note the simple globular body (the sorus) of the chytrid embedded in the tissue of the plant. Some of the globules may have matured into sporangia, and you may see spores inside.
Chytrids on snakeskin: In the dissecting scope, you can see the rhizomycelia extending from the snake skin, along with many protozoa. Note the pinhead-like cells within the mycelia. These are either the central globules from which multiple filaments of rhizomycelium arise, or they are sporangia.
In the light microscope, examine the globular cells and the hyphae growing away from them. The globular cell and rhizomycelium form the basic body plan of the saprophytic chytrids.
You will also see many tiny creatures swimming about the mycelium. Some of these may be zoospores released from sporangia. Examine the mycelium for any evidence of sporangia. Sporangia will be apparent as they will have only one hypha attached to them.
The Zygomycetes are important saprophytes, including species that are major decomposers of dung and food. Members of this group have a zygotic life cycle (see figure 15-11 on page 316 of your text). The gametes are non-motile, and are borne on the tips of specialized, fertile hyphae termed gametangia. The gametangia contain many haploid nuclei. In zygomycetes, the sexual act consists of two fertile hyphae growing towards each other. As they approach each other, the ends of the hyphae form gametangia. The gametangia come in contact, after which the end walls disintegrate, releasing the haploid gamete nuclei into one common space. Pairs of haploid nuclei then fuse, creating many diploid nuclei.
The many-nucleate cell formed from the remnants of the two gametangia is termed a zygosporangium. Zygosporangia typically develop a thick wall that protects the diploid nuclei from harsh conditions, forming a many-nucleate (coenocytic) resting cell. Zygosporangia germinate when the diploid nuclei undergo meiosis to produce many new haploid nuclei. The haploid nuclei are walled off into distinct spores, which are released from a dispersal sporangium that grows out of the zygosporangium.
The most commonly encountered zygomycetes are the bread molds, which are important saprophytes that grow on carbohydrate-rich foods, including bread. The mycelium formed on the surface of the bread is a cottony mass that is initially white but soon darkens as the mycelium forms asexual sporangia. Large numbers of mitospores are released, allowing the fungus to quickly spread.
Examine the following, first under the dissecting scope and then with the light microscope:
1. Zygorrhynchus moelleri: This mold growing on an agar plate shows the major stages of a typical Zygomycete life cycle. First examine the culture under the dissecting microscope, and then take a very small scraping of the agar with a dissecting needle or knife. Place this on a microscope slide, stain with the cotton blue stain on your bench, and examine at medium power with both phase contrast and normal visible light.
With the dissecting scope, examine the cottony matrix of the mycelium, and the sporangia rising above it, forming dark spheres on elongated stalks. These are mostly mitosporangia used in asexual reproduction. You may be able to see zygosporangia mixed in amongst the mycelium. They will be dark, barrel-shaped granules on the surface of the agar.
With the light microscope, observe the slide of agar, note the clear tubular nature of the hyphae and the absence of cross walls. Next observe any mitosporangia you may have opened while pressing down the cover slip.
Finally, observe the zygosporangia. These are dark, barrel-shaped structures with rough walls. Note the hyphae attached to the zygosporangia. These are the stalks of the gametangia, and are termed suspensors.
2. Rhizopus stolonifer: This is the common bread mold, a regular feature of most pantries. We have provided you with a Petri plate of Rhizopus to examine. Note the following with the dissecting scope under high power:
Examine the mycelium and sporangia. You may be able to see elongated, horizontal hyphae connecting the sporangial stalks. Rhizopus spreads by these elongated hyphae, termed stolons (after the strawberry runners of the same name). Where stolons settle onto a food source, they produce anchoring hyphae that penetrate the food. Sporangia form above this contact point. This habit allows for rapid spread of Rhizopus over a loaf of bread. Typically, Zygomycetes reproduce asexually by mitospores when conditions are good, allowing for rapid spread over a new food source. They switch to sexual reproduction when the food is exhausted and conditions deteriorate.
3. Ungulate dung: We may display some moose or cow feces which may show a range of dung zygomycetes, possibly including the hat-throwing fungus Pilobolus. If anything of interest appears to be present, examine it with the dissecting scope and note the nature of the sporangia.
The Dikaryomycota were formerly classified as the phyla Ascomycota and Basidiomycota (for example, see chapter 15 in your text), but recent advances in systematic understanding have led to the merging of these two groups into a single phylum of higher fungi, the Dikaryomycota, with the ascomycetes and basidiomycetes being separated into subphyla termed Ascomycotina and Basidiomycotina. We will focus on these two groups.
The common feature of these groups is the formation of dikaryotic hyphae. The dikaryon arises when the protoplasts of haploid hyphae fuse (they undergo plasmogamy). The nuclei do not initially fuse, and the resulting mycelium is made up of cells that are dikaryotic, or in the N + N state. Fusion of the two nuclei occurs in the fruiting body of the fungus, forming a diploid cell that immediately undergoes meiosis and mitosis to produce four to eight spores. The spores are released from sporangia formed in the fruiting body. In the ascomycetes, eight spores are released from sacs termed asci (singular ascus, from the Latin word for sac). In the basidiomycetes, the spores are formed on the end of a club-like sporangium termed a basidium (from the Latin word for club). The fruiting bodies are termed the ascocarp (ascoma in your text) and the basidiocarp (basidioma in your text), respectively. The mushroom cap is a basidiocarp.
We have a number of specimens for you to examine today from each subphylum.
1. Subphylum Ascomycotina
a. Unicellular forms: the yeasts: These are single-celled fungi that typically live within the food medium. Most are saprophytic, although some can become parasitic. The yeast fungus Candida albicans is an important pathogen in humans, causing diaper rash, vaginal and urethral tract infections, and the potentially deadly sexually-transmitted disease candidiasis. The common yeast Saccharomyces cerevisiae is the yeast of baking, brewing and enology (wine making). This yeast is preferred in fermentations as it grows rapidly, produces pleasant as opposed to noxious or toxic waste products, and is tolerant of high (>10%) concentrations of ethanol.
Saccharomyces cerevisiae: The common brewer's yeast is growing on agar. Take a very small portion off the culture and smear it into a drop of water on a microscope slide. Add a cover slip, and examine at low and then high power with the compound scope.
Look for budding cells amongst the large numbers of indistinct single yeast cells. These are apparent from the blob-like cellular extensions, termed buds that arise from mature yeast cells. Rather than simply dividing in two as most algae and plant cells do, yeasts divide by extruding protoplasm into a bud. This extrusion is then encapsulated in a wall and split off to form a new independent cell.
Occasionally, you may see some yeast forming asci: yeasts live in both a haploid and a diploid state. When conditions are harsh, two haploid yeast nuclei merge to form a diploid zygote, which then undergoes meiosis to produce a four-celled sac, the ascus. The ascus splits open to release the four cells, which then bud to start a new population of yeast cells. You may be able to see four-celled asci floating among the many cells in your slide. If so, show your classmates.
b. Filamentous Ascomycetes: Multicellular ascomycetes produce hyphae and mycelium, and form ascocarps. Three types of ascocarps are produced by these fungi: cleistothecia (enclosed spheres), perithecia (vase-shaped) and apothecia (cup-shaped). You should examine examples of each.
b.1 Cleistothecial species:
i. Powdery mildew (Uncinula spp.): Powdery mildews are common pathogenic fungi that infect leaves, forming a powdery mycelium on the surface. Powdery mildews reproduce asexually by forming chains of spores (conidiospores) on special hyphae termed conidiophores. During sexual reproduction, they form a simple enclosed ascocarp, the cleistothecium. Cleistothecia are completely enclosed, with no opening for the developing spores to escape. When mature, the ascocarp wall ruptures, allowing the enclosed asci with their ascospores to spill out and disperse. Often, cleistothecia have barbs and hooks, which can help disperse the entire ascocarp by clinging onto the fur of passing animals.
Examine the following:
Dissecting scope: Scan across the leaf infected with Uncinula to note the powdery mycelium, with chains of conidia rising above it. Periodically, you will see a pepper grain-like object with multiple elongated hooks attached. This is the cleistothecia.
Light microscope: Scrape some cleistothecia onto a microscope slide and cover gently with a cover slip. Examine at low power. Next, press on the cover slip to rupture the ascocarps and release the spores inside.
ii. Powdery mildew on leaves: We also have specimens of unknown powdery mildews collected on leaves from around Toronto. Examine these under the dissecting scope for cleistothecia and conidia.
b.2 Perithecial species: The perithecium is a vase-shaped ascocarp with a narrow, open neck. Inside are multiple asci with spores. When mature, the asci protrude from the neck of the perithecia and forcibly eject the spores into the air. Sordaria is a dung saprophyte that is closely related to Neurospora, the fungus that has become one of the leading model organisms in genetic research.
Dissecting scope: Sordaria is growing on agar plates, and the perithecia can be seen as dark pepper-like grains mixed in a mass of conidia-forming hyphae. Examine the perithecia closely and note the pear-like shape of the ascocarp. Are any asci protruding from the perithecia?
Light microscope: Scoop some perithecia onto a slide, cover and examine at low to medium power. Gently push on the cover slip to squash open the perithecia. Note any football-shaped spores and asci that emerge.
b.3. Apothecial species: The apothecium is an ascocarp where the asci are directly exposed to the air in a cup, dome or invaginated surface. The fungi are commonly called the saucer, or cup fungi, and they include many beautiful, brightly colored forest species. The delectable morel is an apothecial ascocarp. To demonstrate the apothecium structure and form, we have a set of prepared slides and live specimens from a number of species.
i. Bisporella citrina (yellow fairy cups – live specimens): These are wood-decomposing fungi that form small, bright yellow, cup-shaped apothecia. Examine the stick with the fairy cups closely. You may take the stick to your bench to examine it with a dissecting scope. The asci are formed on the inner surface of the cup. Return the stick to the display area when finished, as we have only a few specimens.
ii. Peziza (prepared slides): Peziza is a cup fungus that grows on wood and is similar in form to Bisporella. A life cycle of Peziza is shown in Fig. 15-14 on page 318 of your textbook.
Examine the prepared cross sections of the Peziza ascocarp under medium power with your light microscope. You will note the sac-like structure of the ascus with 8 haploid spores inside. These arose from meiosis and one subsequent round of mitosis. Note the zone of fertile tissue where the asci form. Below this are fattened vegetative cells that form the support structure of the ascocarp.
b.4 Ascomycetes of special note
i. Claviceps purpurea (Ergot of Rye)
We have display specimens of rye shoots infected with ergot, caused by the perithecia-forming ascomycete Claviceps purpurea. Claviceps is an example of an endoparasite, a fungus that grows within the stems and leaves of grasses. The fungus retards growth, but does not kill the host plant. In many instances, toxins produced by the fungus deter herbivory, and so the grass host can actually show superior performance relative to a non-infected plant that is eaten. In the case of ergot, the toxin produced is lysergic acid amide, from which the hallucinogenic drug lysergic acid diethylamide (LSD) was derived.
Examine the infected rye and note the grain heads with enlarged, dark-colored protrusions extending out from the grass stalk. These are sclerotia, which occur where the fungus has completely infected a developing grain and replaced the grain with a tight mat of interwoven mycelium. As the growing season ends, the sclerotia fall to the ground and overwinter. In the spring, they produce perithecia and in turn, large numbers of spores that infect the new rye crop.
Sclerotia break free and mix with the rye grain at harvest. People eating rye contaminated with ergot sclerotia experience severe poisoning, called ergotism. Symptoms include wild hallucinations coupled with extreme burning sensations in the extremities. Constriction of the blood vessels is common, leading to gangrene and the loss of limbs. The pain is severe, and a typical victim would scream in agony while madly hallucinating. Before modern science explained the cause, people interpreted the symptoms as an attack of demons, and in regions affected by ergot outbreaks, the citizens often turned to extreme religious practices to exorcise the devil. Throughout history, witch hunts, new religious movements, and episodes of mass hysteria have been attributed to ergot outbreaks.
Today, ergot poisoning is rare, and rye grain is routinely screened to filter out the larger sclerotia. Sclerotia are now intentionally grown as a source of drugs to control internal bleeding, migraine headaches, and to alter mental states in psychiatric patients.
ii. Peach Brown Rot (Monilinia fructicola): Many ascomycetes are severe pathogens of fruit crops. One of the worst is Peach Brown rot, which stunts trees and destroys mature peaches, apricots, cherries and related fruit. Infected trees form cankers on the twigs and leaves. Conidia erupting from the cankers are dispersed to infect other trees by asexual means. Fruits are infected as they near maturity.
After infection, lesions develop on the fruit, which prematurely rots, falls to the ground and dries to a mummified carcass. Peach mummies are completely infected with the mycelium, and in this form the fungus overwinters. In the spring, the fungus in the mummy forms apothecia, from which spores are released in huge numbers to infect new trees.
Examine the Monilinia cultures on agar with the dissecting microscope and note the lemon-shaped conidiospores arising from the mycelium. You can examine these more closely by taking a small piece of agar and preparing it on a slide for examination with the light microscope.
iii. Penicillium: Many ascomycetes are saprophytes that infect food and building materials in the home. Some are also sources of important drugs, while others produce powerful carcinogens. Penicillium is one of the most common molds in the household pantry, where it infects bread, fruits and milk products. Penicillium species are also important in making strong-flavored cheeses such as Roquefort, Gorgonzola, Camembert, Brie and Danish Blue. The blue-green color is actually the reproductive conidia of the Penicillium mold. Penicillium is also the source of penicillin, the antibiotic that prevents wall synthesis in gram-positive bacteria.
Examine the culture with the dissecting scope and note the green-blue broom-like conidiospore masses rising above the mycelium. These masses give Penicillium molds their characteristic color. Take a sample and prepare a microscope slide of it. Examine the conidiophores with conidia under the compound microscope. Note the broom-like structure of the spore-bearing mass.
We also have some blue cheese on display. Examine the Penicillium colony through the dissecting scope and try to identify the conidiophores.
iv. Aspergillus: Aspergillus species are common black-colored molds in the household environment. They are frequently found on bread, drywall, and grains. Many species produce aflatoxins, which are powerful carcinogens of the liver found in stored grains, peanuts and cereals, including corn flakes. It is unwise to eat foods contaminated with wild Aspergillus species as they likely contain aflatoxins. (For example, never eat wild peanuts or musty old grain.) Beneficial Aspergillus spp. are used to produce soy sauce and miso (fermented soy paste), and to ferment rice in an early step of sake production.
Examine the culture on the agar plate with a dissecting scope and note the fan-shaped masses of conidia arising above the mycelium. Next, examine a piece of the mycelium to see the bulbous conidiophores. The dark masses of conidia give this fungus its particular color and shape.
Take a small chunk of infected agar and prepare a slide of the spore-bearing structures for the light microscope. Examine the swollen tops of the conidiophores and the attached fan-shaped arrays of conidia.
2. The subphylum Basidiomycotina
The most familiar fungi are the basidiomycetes. The fruiting bodies of the basidiomycetes (the basidiocarps) are the recognizable features of mushrooms, toadstools, coral fungi, shelf fungi and tooth fungi. In each, the main body lives underground or in wood as a dispersed mycelium. Although all basidiomycetes reproduce by forming spores on club-shaped basidia, there are actually two main groups: the homobasidiomycetes and the heterobasidiomycetes. The homobasidiomycetes produce one type of spore, the basidiospore. The heterobasidiomycetes produce two types of spores during the sexual life cycle. We will focus on the homobasidiomycete life cycle as exemplified by the common food mushroom, Agaricus campestris. The heterobasidiomycetes are the pathogenic rusts and smuts.
a. Basidiomycete yeast (Rhodotorula rubra): Some basidiomycetes have also evolved the unicellular life form. A common basidiomycete yeast is the red yeast, Rhodotorula rubra, a contaminant of bathroom curtains, tile and grout. The pink scum in filthy bathtubs and showers is caused by Rhodotorula. (You may remember the battle between the Cat in the Hat and the pink bathroom scum.)
Examine the red yeast culture on display. If time permits, you may prepare a microscope slide of the cells from the agar culture. Examine them for budding and basidia, which are distinguished by an elongated shape and horn-like points on one end of the cell.
b. Heterobasidiomycetes: The heterobasidiomycetes include the rust diseases of grasses, and smut diseases of maize. Other members of this group are wood decomposers such as the jelly fungi. We may have a jelly fungus in the wild mushroom display.
Examine the specimens of grasses infected with wheat rust (Puccinia graminis). Note the rust-colored pustules forming on the blades of the grass. These are where asexual spores are formed to allow for continued infection of healthy plants during the summer. Black pustules appear in the late-summer. These are where teliospores are formed. Teliospores are overwintering spores that form basidiospores in the spring.
If available, examine any corn smut (Ustilago maydis) that may be on display. Corn smuts attack developing corn kernels and produce large, grey-colored smutballs that are filled with dark spores. Immature smutballs are served as a delicacy in Latin American cuisines.
Rusts and smuts are virulent parasites of grain crops, with the potential to wipe out the production of an entire region in any given year. The primary means of preventing infestation is to breed crop cultivars that are resistant to rust infections. The rusts eventually evolve new ways to infect the cultivar, so government agencies are continuously breeding resistance into varieties to stay ahead of the rust capacity to re-evolve virulence. Should breeding efforts fall behind (for example, via cost-cutting measures by governments and agribusiness), major rust outbreaks could result, ruining grain crops and causing food prices to sky-rocket.
c. Homobasidiomycetes: the Mushrooms
We have fresh specimens of the common store-bought mushroom for examination, along with cultures of the inky cap mushroom, and a range of wild mushrooms from southern Ontario. To aid in examining fine detail, we have prepared slides showing cross sections of mushroom caps for you to examine under the light microscope. A detailed diagram of the life-cycle of the mushroom is presented in Figure 15-19 of your textbook (page 321).
c.1. The common food mushroom Agaricus bisporus:
Examine a) the mycelium of the spawn blocks on display, b) the young button mushrooms and c) mature-spore producing mushrooms from the collection of fresh mushrooms provided.
i. Mycelial stage: Sample mycelia from the mushroom spawn that is available. Stain with cotton blue and view with the light microscope under both normal light and phase contrast. Find some isolated hyphae and examine them under high power. Note the septate nature of the hyphae. This is one of the diagnostic features of the basidiomycetes.
Each cell contains two haploid nuclei in the N + N configuration. A key feature of Basidiomycetes is the presence of clamp connections, which form after cell division in order to keep the N + N dikaryotic configuration intact (see figure 15-21 in your text). Clamp connections may be visible along the end walls of the hyphal cells, forming bulges or loops around the septate wall.
ii. The mushroom button stage: The basidiocarp forms from tightly woven mycelia. Initially, the basidiocarp forms a button, or egg stage. Cut open a button and examine a) the immature stalk (or stipe), b) the young, white to pink gills, and c) the developing cap, which extends down over the stipe. With a razor blade, cut a thin slice of a gill, and look at the slice with the light microscope. Stain the gill with Melzer's blue. You may be able to see developing basidia with miniature spores.
iii. The mature mushroom: Note the features of the basidiocarp structure. The main parts are a) a well-developed stalk, termed the stipe; b) the cap, termed the pileus; and c) the gills, where the spore-bearing tissues occur. The gills have turned chocolate brown as millions of spores mature. Cut a thin slice of a mature gill and examine it for spores and horned basidia. Stain with the Melzer's stain placed on your bench. You should be able to isolate one or two good basidia from the mass of tissue.
iv. Prepared slide of the mushroom cap: Examine under the light microscope the cross sections prepared of an Agaricus pileus. The cross sections of gills clearly demonstrate the club-shaped basidia arising from the zone of fertile tissue (the hymenium). Examine the basidia under high power and note how the spores are attached to the horn-like basidial tips (the sterigmata). The spores appear as party balloons taped to a club.
v. Spore prints: As the spores mature, they are released and drift into the air below the cap. If the cap is placed over paper, the spores cannot disperse; instead they settle onto the paper to form a print of the mushroom gills. Spore prints have been prepared for you to examine. Spore prints are often used to identify particular mushrooms, and they are an easy way to verify the color of the spores.
c.2. The Inky-cap mushroom, Coprinus cinereus: We have displays of the Inky-cap mushroom for you to examine. The cultures show how the mushroom arises from the mycelium. Prepared slides are also available if you wish to examine the gill structure and the basidia of Coprinus.
The cap self-digests upon maturity, forming a purple, ink-colored mass of goo.
c.3. The oyster mushroom Pleurotus ostreatus: Oyster mushrooms are delightful edible mushrooms that grow on logs and decaying stumps. They produce white spores on short gills, and exhibit a large, fan-shaped pileus with a short stipe. They are now commonly cultivated and are available in many supermarkets.
Examine the oyster mushrooms for morphological structure, then thin slice a gill and examine for the basidia and white spores under the microscope.
c.4. The chanterelle (Cantharellus cibarius): Chanterelles are a wonderful delicacy, prized for their gentle, buttery flavor. Unlike the gilled mushrooms, chanterelles form their basidia on gently folded tissue underneath a wavy pileus. Assuming we have these available, take a small piece of the pileus of a chanterelle and examine it for basidia and spores. What color are the spores?
c.5. The wild mushroom display: We have collected a variety of wild mushrooms for you to examine for variation in form. Examine each carefully, paying attention to the pileus, the stipe (if present) and where the spore-bearing tissues are located. In many of the samples, gills are not present. Instead, the spores are produced on elongated tubes that form pores on the underside of the pileus, on teeth-like protrusions that hang below the pileus, on coral-like prongs, or on rumpled folds of tissue that resemble elbow skin. These traits distinguish the major families of mushroom-forming species. You may take some of the specimens and examine them under the dissecting scope in order to better see the pores, teeth or prongs of the spore-bearing tissue.
Lichens are structures formed by a close symbiotic relationship between an alga and a fungus. Both green algae and blue-green algae (cyanobacteria) can serve as the photosynthetic symbiont, while the fungus is typically an ascomycete. Because the visible sexual stage of the lichen is that of the fungal partner (the mycobiont), the lichen is typically named after the fungus.
In the symbiosis, the algae provide carbohydrates from photosynthesis while the fungus shelters the algae and gathers water and nutrients. Lichens can completely desiccate with no harm to the organisms inside. Upon wetting, they rapidly rehydrate and resume activity. This ability allows them to live on extremely harsh surfaces, such as the branches and trunks of trees, the sides of rocks, and bare ground in deserts. In the boreal zone, lichens are important ground covers on bare soil, fallen branches and the surfaces of rocks. They are also common as epiphytes on trees. In general, they grow extremely slowly, reflecting the harsh conditions of the habitats in which they live.
Lichens come in three general categories based on morphology. Examine the specimens displayed in the lab room.
A. Foliose lichens: Lichens that exhibit a leafy shape are termed the foliose lichens. These are common in wetter habitats, for example, forest interiors in eastern Canada. Foliose lichens are important epiphytes growing on branches of standing trees.
B. Fruticose lichens: These lichens are shrubby in appearance, with many narrow, highly branched stem-like structures. Fruticose lichens are common in the boreal forest, forming dense ground covers in spruce forests. When dry, they are extremely flammable and can be used as a fire starter. They also help wildfires spread and thus contribute to some of the severe forest fires that occur every summer in Canada. Fruticose lichens are common on the sides of trees, and often hang from the branches in dense growths termed Witch's Hair or Old Man's Beard.
C. Crustose lichens: Crustose lichens occur in the most extreme terrestrial environments where life is possible. They grow on the sides of rocks, on buildings and on bare soil, and are common in arid and polar deserts, including the dry valleys of Antarctica. Crustose lichens form brilliant yellow, orange, red and yellow-green colors on the rock, and are some of the most beautiful features in what are otherwise barren landscapes.
Study Guide: You should be familiar with the major categories of fungi and the names of the common species of yeast, store mushrooms, and major disease organisms displayed in the lab. You need to know the terms presented in bold font, and should recognize an organism well enough to classify it to phylum or where relevant, to subphylum. Know and understand the distinguishing characteristics of the major phyla presented in lab, as well as that of the ascomycetes and basidiomycetes.
Hornworts, liverworts, and mosses - commonly referred to as bryophytes - are considered a pivotal group in our understanding of the origin of land plants because they are believed to be among the earliest diverging lineages of land plants. Mosses, liverworts and hornworts are found throughout the world in a variety of habitats, from the harsh environs of Antarctica to the lush conditions of the tropical rainforests. Bryophytes are unique among land plants in that the dominant, free-living generation is the haploid gametophyte, which alternates with a reduced, generally dependent, diploid sporophyte. Bryophytes are small, herbaceous plants that grow closely packed together in mats or cushions on rocks and soil, or as epiphytes on the trunks and leaves of forest trees. Bryophytes are remarkably diverse for their small size and are well adapted to moist habitats, flourishing particularly in humid forests like the fog forests of the Pacific Northwest or the montane rain forests of the southern hemisphere.
Significance of bryophytes
Bryophytes play a significant role in nutrient cycling, provide seed-beds for the larger plants of the community, and form microhabitats for insects and an entire array of microorganisms. Bryophytes are also very effective rainfall interceptors, and the overwhelming abundance of epiphytic liverworts in "cloud" or "mossy" forest zones is considered an important factor in mitigating the damaging effects of heavy rains, adding to hill stability and helping to prevent soil erosion. The chemical compounds of some liverworts are also particularly interesting because they have important biological activities, for example activity against certain cancer cell lines, as well as anti-bacterial, anti-microbial, anti-fungal, and muscle-relaxing properties.
Over the last decade, advances in DNA sequencing technology and analytical approaches to phylogenetic reconstruction, together with ultrastructural, morphological and anatomical data, have enabled unprecedented progress in our understanding of plant evolution. A growing consensus suggests that the bryophytes possibly represent three separate evolutionary lineages, which are today recognized as mosses (phylum Bryophyta), liverworts (phylum Marchantiophyta) and hornworts (phylum Anthocerotophyta).
• Mosses (Bryophyta)
The greatest species diversity in bryophytes is found in the mosses, with estimates of the number of species ranging from 10,000 to 15,000. Higher-level classification of the mosses remains unresolved with considerable difference of opinion on the names of the major groups. However, generally four major groups or classes are recognised. These include: Sphagnopsida (peat or Sphagnum mosses), Andreaeopsida (rock or lantern mosses), Polytrichopsida (nematodontous mosses), and the Bryopsida (arthrodontous mosses). The Sphagnum mosses are one of the most ecologically and economically important groups of bryophytes. The class Bryopsida accounts for the largest and most diverse groups within the mosses with over 100 families.
• Liverworts (Marchantiophyta)
The estimated number of liverwort species ranges from 6000 to 8000. Traditionally, liverworts have been subdivided into two major groups or classes based, partially, on growth form. The class Marchantiopsida includes the well-known genera Marchantia, Monoclea, Lunularia, and Riccia, and has a complex thalloid organisation. The class Jungermanniopsida represents an estimated 85% of liverwort species and shows an enormous amount of morphological, anatomical and ecological diversity; plants with leafy shoot systems are the most common growth form in this class, e.g., Frullania, Jubulopsis, Cololejeunea, and Radula.
• Hornworts (Anthocerotophyta)
Hornworts get their name from their long, horn-shaped sporophytes and are the smallest group of bryophytes with only approximately 100 species. Hornworts resemble some liverworts in having simple, unspecialized thalloid gametophytes, but they differ in many other characters. Hornworts differ from all other land plants in having only one large, algal-like chloroplast in each thallus cell.
The Bryophyta or mosses, unlike the liverworts, are present in most terrestrial habitats (even deserts) and may sometimes be the dominant plant life.
As with the liverworts, the plant that we commonly see is the gametophyte. It shows the beginnings of differentiation into stem and leaves - but no roots. Mosses may have rhizoids, and these may be multicellular, but they do little more than hold the plant down.
The stem shows some internal differentiation into hydroids and leptoids, which are like the xylem and phloem of higher plants but very simply organized, with no connection to the leaves or branching stems.
The leaves are mostly one cell thick; sometimes the midrib is several cells thick but this does not contain conducting tissue so it is not equivalent to the vein of a leaf.
Male and female gametophytes look identical except when they produce reproductive structures.
The male plant produces clusters of antheridia which contain thousands of ciliate sperm.
The female produces archegonia, each containing a single egg.
Fertilization is dependent on water - sperm are splashed or swim to the archegonia. The zygote grows into the diploid sporophyte, which remains attached to the female gametophyte. The sporophyte is a leafless stalk, the seta, with a foot at one end that draws nutrients from the gametophyte. At the other end is a capsule in which meiosis occurs to form spores.
The archegonium grows around the developing sporophyte for a while but becomes separated from the gametophyte and is carried up to form a cap or calyptra over the sporangium. Curiously, the sporangia of some mosses have stomata much like those on the leaves of vascular plants.
Immature moss capsules with calyptra
The calyptra is lost when the sporangium is mature, as is the operculum, or lid, on the end of the capsule.
Underneath the operculum there are often peristome teeth, which open under dry conditions and control spore release. A spore germinates to produce a filamentous protonema, which sooner or later produces buds that grow into new gametophytes.
Ecology of mosses
Mosses require abundant water for growth and reproduction. They can tolerate dry spells by drying out or, in the case of mosses like Sphagnum, by holding huge amounts of water in dead cells in the leaves.
They look pretty lowly and insignificant, but mosses have become dominant in particular habitats, and Sphagnum itself is said to occupy 1% of the earth's surface (half the area of the USA). Because of its ability to soak up blood and its relative freedom from bacterial contamination, Sphagnum was used in wound dressings. The moss itself is used in some horticultural media, and it is an important source of peat.
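As a rough sanity check on that parenthetical, take the Earth's surface as roughly $5.1 \times 10^{8}$ km² and the USA as roughly $9.8 \times 10^{6}$ km² (round figures assumed here, not given in the original):

$$0.01 \times 5.1 \times 10^{8}\ \mathrm{km^2} \approx 5.1 \times 10^{6}\ \mathrm{km^2} \approx \tfrac{1}{2} \times 9.8 \times 10^{6}\ \mathrm{km^2},$$

so the two forms of the claim are consistent with each other.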
Polytrichum commune, one of the larger mosses, with mature sporophytes
If you have tried to grow a lawn in a shady location, you have probably been troubled by mosses as weeds. Like many lower organisms, they are very sensitive to copper salts and can be controlled in this way. On the other hand, mosses are green and better adapted to shade than most grasses, so maybe we should accept them in this situation.
Natural Perspective
The Plant Kingdom: Mosses and Allies
Mosses and their allies are small green plants that are simultaneously overlooked and deeply appreciated by the typical nature lover. On the one hand, very few people pay attention to individual moss plants and species. On the other hand, it is the mosses that imbue our forests with that wonderful lush "rainforest" quality which soothes the soul and softens the contours of the earth.
These wonderfully soft carpets of green are, in fact, Nature's second line of attack in its war against rocks. After lichens have created a foothold in rocks the mosses move in, ultimately becoming a layer of topsoil for higher plants to take root. The mosses also hold loose dirt in place, thus preventing landslides.
Ecologically and structurally, mosses are closer to lichens than they are to other members of the plant kingdom. Both mosses and lichens depend upon external moisture to transport nutrients. Because of this they prefer damp places and have evolved special methods of dealing with long dry periods. Higher plants, on the other hand, have specialized organs for transporting fluid, allowing them to adapt to a wider variety of habitats.
Bryophytes used to be classified as three classes of a single phylum, Bryophyta. Modern texts, however, now assign each class to its own phylum: Mosses (Bryophyta), Liverworts (Hepatophyta), and Hornworts (Anthocerotophyta). This reflects the current taxonomic wisdom that the liverworts and hornworts are more primitive and only distantly related to mosses and other plants.
Mosses (Phylum: Bryophyta )
All plants reproduce through alternating generations. Nowhere is this more apparent than in the mosses. The first generation, the gametophyte, forms the green leafy structure we ordinarily associate with moss. It produces a sperm and an egg (the gametes) which unite, when conditions are right, to grow into the next generation: the sporophyte, or spore-bearing structure.
The moss sporophyte is typically a capsule growing on the end of a stalk called the seta. The sporophyte contains no chlorophyll of its own: it grows parasitically on its gametophyte mother. As the sporophyte dries out, the capsule releases spores which, if they germinate, will grow into a new generation of gametophytes.
Mosses, the most common, diverse and advanced bryophytes, are categorized into three classes: Peat Mosses (Sphagnopsida), Granite Mosses (Andreaeopsida), and "True" Mosses (Bryopsida or Musci).
Shown: Class: Bryopsida; Order: Hypnales; Family: Brachytheciaceae; Homalothecium nuttallii (probably)
Leafy Liverworts (Phylum: Hepatophyta; Class: Jungermanniidae)
While people typically know what a moss is, few have even heard of liverworts and hornworts.
These primitive plants function much like mosses and grow in the same places, often intertwined with each other. The liverworts take on one of two general forms, comprising the two classes of liverworts: the Jungermanniidae are leafy, like moss; the Marchantiopsida are leaf-like (thalloid), similar to foliose lichens.
The leafy liverworts look very much like mosses and, in fact, are difficult to tell apart when only gametophytes are present. The "leaves," however, are simpler than those of mosses and don't have a midrib (costa). The stalk of the sporophyte is translucent to white; its capsule is typically black and egg-shaped. When it matures, the capsule splits open into four equal quarters, releasing the spores to the air.
The liverwort sporophyte shrivels up and disappears shortly after releasing its spores. Because of this, one hardly ever sees liverwort sporophytes out of season. Moss sporophytes, on the other hand, may persist much longer.
Shown: Class: Jungermanniidae; Order: Jungermanniales; Family: Scapaniaceae; Scapania spp. (probably)
Leaf-like Liverworts (Phylum: Hepatophyta; Class: Marchantiopsida)
The leaf-like (thalloid) liverworts are, on the whole, more substantial and easier to find than their leafy counterparts. The gametophyte is flat, green and more-or-less strap-shaped. The body may, however, branch out several times to round out the form.
When the gametophyte has been fertilized and is ready to produce its sporophyte generation, it may grow a tall, green, umbrella-shaped structure called the carpocephalum. The sporophyte grows on the underside of this structure, often completely hidden from view.
During the dry season, leaf-like liverworts may shrivel up and completely disappear from view until the rains arrive again.
Thalloid liverworts are much easier to identify than their leafy counterparts due to the wider variety of gametophyte shapes.
Shown: Class: Marchantiopsida; Order: Marchantiales; Family: Aytoniaceae; Asterella californica
Hornworts (Phylum: Anthocerotophyta)
Hornworts are very similar to liverworts but differ in the shape of the sporophyte generation. Instead of generating spores in a capsule atop a stalk, the hornwort generates spores inside a green horn-like stalk. When the spores mature the stalk splits, releasing the spores.
Under the microscope, hornwort cells look quite distinct as well: they have a single, large chloroplast in each cell. Other plants typically have many small chloroplasts per cell. This structure imparts a particular quality of color and translucency to the body (thallus) of the plant.
Hornworts are all grouped into a single class, Anthocerotae, containing a single order, Anthocerotales.
Shown: Class: Anthocerotae; Order: Anthocerotales; Family: Anthocerotaceae; Phaeoceros spp.
Suggestions for the Use of Keys
1. Select appropriate keys for the materials to be identified. The keys may be in a flora, manual, guide, handbook, monograph, or revision (see Chapter 30). If the locality of an unknown plant is known, select a flora, guide, or manual treating the plants of that geographic area (see Guides to Floras in Chapter 30). If the family or genus is recognized, one may choose to use a monograph or revision. If the locality is unknown, select a general work. If the materials to be identified were cultivated, select one of the manuals treating such plants, since most floras do not include cultivated plants unless naturalized.
2. Read the introductory comments on format details, abbreviations, etc., before using the key.
3. Read both leads of a couplet before making a choice. Even though the first lead may seem to describe the unknown material, the second lead may be even more appropriate.
4. Use a glossary to check the meaning of terms you do not understand.
5. Measure several similar structures when measurements are used in the key, e.g., measure several leaves, not a single leaf. Do not base your decisions on a single observation. It is often desirable to examine several specimens.
6. Try both choices when dichotomies are not clear or when information is insufficient, and make a decision as to which of the two answers best fits the descriptions.
7. Verify your results by reading a description and by comparing the specimen with an illustration or an authentically named herbarium specimen.
Suggestions for Construction of Keys
1. Identify all groups to be included in a key.
2. Prepare a description of each taxon (see Chapter 24 for details for description and descriptive format).
3. Select "key characters" with contrasting character states. Use macroscopic, morphological characters and constant character states when possible. Avoid characteristics that can only be seen in the field or on specially prepared specimens, i.e., use those characteristics that are generally available to the user.
4. Prepare a Comparison Chart (see Figure 25-3).
5. Construct strictly dichotomous keys.
6. Use parallel construction and comparative terminology in each lead of a couplet.
7. Use at least two characters per lead when possible.
8. Follow key format (indented or bracketed see Figures 25-1 and 25-2).
9. Start both leads of a couplet with the same word if at all possible, and successive leads with different words.
10. Mention the name of the plant part before the descriptive phrases, e.g., flowers blue, not blue flowers; leaves alternate, not alternate leaves.
11. Place those groups with numerous variable character states in a key several times when necessary.
12. Construct separate keys for dioecious plants, for flowering or fruiting materials and for vegetative materials when pertinent.
• Shrub or woody vine.
o Woody vine; petals 7 or more 3. Decumaria
o Shrub; petals 4 or 5.
Leaves alternate or on short spur branches.
Leaves pinnately veined; ovary superior; fruit a capsule 1. Itea
Leaves palmately veined; ovary inferior; fruit a berry 2. Ribes
Leaves opposite.
Petals usually 4; stamens 20-40; fruit longitudinally dehiscent, not ribbed 4. Philadelphus
Petals usually 5; stamens 8-10; fruit poricidally dehiscent, 10- to 15-ribbed 5. Hydrangea
• Herbs.
o Staminodia present; petals more than 10 mm long 6. Parnassia
o Staminodia absent; petals less than 10 mm long.
Leaves ternately decompound 7. Astilbe
Leaves simple.
Flowers solitary in leaf axils, or in short, leafy cymes.
Sepals 4; carpels 2 8. Chrysosplenium
Sepals 5; carpels 3 9. Lepuropetalon
Flowers in racemes or panicles.
Petals pinnatifid or fringed; stem leaves opposite 10. Mitella
Petals not pinnatifid or fringed; stem leaves alternate or absent.
Ovary 1-celled.
Inflorescence paniculate; stamens 5 11. Heuchera
Inflorescence racemose; stamens 10 12. Tiarella
Ovary 2-celled.
Stamens 5; leaves palmately lobed 13. Boykinia
Stamens 10; leaves not palmately lobed 14. Saxifraga
Figure 25-1. Example of an indented key. (Modified from Radford, A. E., H. E. Ahles, and C. R. Bell. 1968. Manual of the Vascular Flora of the Carolinas. University of North Carolina Press, Chapel Hill, North Carolina. Used with permission.)
• 1. Shrub or woody vine 2.
• 1. Herbs 6.
o 2. Woody vine; petals 7 or more Decumaria.
o 2. Shrub; petals 4 or 5 3.
• 3. Leaves alternate or on short spur branches 4.
• 3. Leaves opposite 5.
o 4. Leaves pinnately veined; ovary superior; fruit a capsule Itea.
o 4. Leaves palmately veined; ovary inferior; fruit a berry Ribes.
• 5. Petals usually 4; stamens 20-40; fruit longitudinally dehiscent, not ribbed Philadelphus.
• 5. Petals usually 5; stamens 8-10; fruit poricidally dehiscent, 10- to 15-ribbed Hydrangea.
o 6. Staminodia present; petals more than 10 mm long Parnassia.
o 6. Staminodia absent; petals less than 10 mm long 7.
• 7. Leaves ternately decompound Astilbe.
• 7. Leaves simple 8.
o 8. Flowers solitary in leaf axils, or in short, leafy cymes 9.
o 8. Flowers in racemes or panicles 10.
• 9. Sepals 4; carpels 2 Chrysosplenium.
• 9. Sepals 5; carpels 3 Lepuropetalon.
o 10. Petals pinnatifid or fringed; stem leaves opposite Mitella.
o 10. Petals not pinnatifid or fringed; stem leaves alternate or absent 11.
• 11. Ovary 1-celled 12.
• 11. Ovary 2-celled 13.
o 12. Inflorescence paniculate; stamens 5 Heuchera.
o 12. Inflorescence racemose; stamens 10 Tiarella.
• 13. Stamens 5; leaves palmately lobed Boykinia.
• 13. Stamens 10; leaves not palmately lobed Saxifraga.
Figure 25-2. Example of a bracketed key. (Modified from Radford, A. E., H. E. Ahles, and C. R. Bell. 1968. Manual of the Vascular Flora of the Carolinas. University of North Carolina Press, Chapel Hill, North Carolina. Used with permission.)
1. Identification of an unknown. Select an unknown specimen and identify it by keying in an appropriate manual, flora, or monograph. Verify your results by reading a description, by comparing with an illustration or by checking with your instructor.
2. Preparation of a comparison chart. Select 5 or more specimens from the group provided by your instructor. Identify each by keying. Verify your results. Prepare a description of each similar to those in a flora or manual. Be sure characters and character states are in the same order. Select contrasting character states and prepare a comparison chart (see Figure 25-3).
3. Construction of keys. Construct a dichotomous key to these specimens using the information in the comparison chart. (A short programmatic sketch of a bracketed key follows Figure 25-3.)
                  Decumaria    Itea       Ribes            Parnassia         Heuchera          Saxifraga
Habit             Woody vine   Shrub      Shrub            Herb              Herb              Herb
Leaf arrangement  Opposite     Alternate  Alternate or on  Basal (rosulate)  Basal (rosulate)  Basal
                                          spur branches
Petal Number      7-10         5          5                5                 5                 5
Locule Number     7-10         2          1                1                 1                 2
Stamen Number     7+           5          5                5 (staminodia 5)  5                 10
Fruit Type        Capsule      Capsule    Berry            Capsule           Capsule           Capsule
Figure 25-3. A comparison chart used in the construction of keys (for six of the genera in Figures 25-1 and 25-2).
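Exercise 3 above asks for a strictly dichotomous key (rule 5 of the construction suggestions). For those who find it helpful to think of a bracketed key as a small decision tree, here is a minimal, hypothetical Python sketch, not part of the original exercises: each numbered couplet holds exactly two leads, and each lead ends either in a genus name or in the number of the next couplet. The couplet wording is copied from the herbaceous half of Figure 25-2; the names KEY and identify are our own inventions.

# A bracketed dichotomous key stored as a dictionary: couplet number ->
# [lead 1, lead 2]. Each lead is a (description, outcome) pair, where the
# outcome is either a genus name (a string) or the next couplet number.
KEY = {
    6: [("Staminodia present; petals more than 10 mm long", "Parnassia"),
        ("Staminodia absent; petals less than 10 mm long", 7)],
    7: [("Leaves ternately decompound", "Astilbe"),
        ("Leaves simple", 8)],
    8: [("Flowers solitary in leaf axils, or in short, leafy cymes", 9),
        ("Flowers in racemes or panicles", 10)],
    9: [("Sepals 4; carpels 2", "Chrysosplenium"),
        ("Sepals 5; carpels 3", "Lepuropetalon")],
    10: [("Petals pinnatifid or fringed; stem leaves opposite", "Mitella"),
         ("Petals not pinnatifid or fringed; stem leaves alternate or absent", 11)],
    11: [("Ovary 1-celled", 12),
         ("Ovary 2-celled", 13)],
    12: [("Inflorescence paniculate; stamens 5", "Heuchera"),
         ("Inflorescence racemose; stamens 10", "Tiarella")],
    13: [("Stamens 5; leaves palmately lobed", "Boykinia"),
         ("Stamens 10; leaves not palmately lobed", "Saxifraga")],
}

def identify(couplet=6):
    """Walk the key from the given couplet, asking the user to choose a lead."""
    while True:
        lead1, lead2 = KEY[couplet]
        print("1.", lead1[0])
        print("2.", lead2[0])
        outcome = (lead1 if input("Choose 1 or 2: ").strip() == "1" else lead2)[1]
        if isinstance(outcome, str):   # reached a genus name: identification done
            return outcome
        couplet = outcome              # otherwise, move on to the next couplet

if __name__ == "__main__":
    print("Identified as:", identify())

Answering the prompts with the same choices you would make against the printed key leads to the same genus; the strictly dichotomous structure (exactly two leads per couplet) is what makes this simple loop sufficient.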
In summer, in almost every ditch in Holland with reasonably clean water, we will find slimy masses of filamentous algae floating as scum on the surface. It looks rather distasteful, but such a ditch is not polluted, only eutrophic (rich in nutrients). In spring these filamentous algae grow under water, but when there is enough sunlight and the temperatures are not too low, they produce a lot of oxygen, which sticks in little bubbles between the tangles of the algae. These come to the surface and become visible as slimy green masses. In these tangles we will find mainly three types of filamentous algae: Spirogyra, Mougeotia and Zygnema. In this article we will mainly write about Spirogyra.
From a distance these slimy tangles look perhaps a bit dirty, but under the microscope the filaments are very beautiful and, moreover, they have a spectacular way of reproducing. Spirogyra owes its name to a chloroplast (the green part of the cell) that is wound into a spiral, a unique property of this genus which makes it easy to recognise. In the Netherlands more than 60 species of Spirogyra have been found so far; in the whole world, more than 400.
For the determination of a species it is necessary to look for reproducing specimens with spores. But a precise determination is not necessary for learning a lot of interesting facts about Spirogyra. It is easy to see that there are many species; in a clean, eutrophic ditch with hard water in Holland we can easily find 20 different species. If we look at a filament of Spirogyra with the microscope, the first thing that attracts attention is the chloroplast, a narrow, banded spiral with serrated edges. The small round bodies in the chloroplast are pyrenoids, centres for the production of starch. In the middle of the cell we see the transparent nucleus, with fine strands linking it to the peripheral protoplasm. The filaments contain cells of different sizes, and it is easy to find a new cell, just formed after a division.
The really interesting part comes when Spirogyra reproduces sexually. When two filaments are close together, the process starts. Cell outgrowths form connections between the filaments and a sort of ladder is formed. The contents of the cells in one filament pass through the connection tubes to the cells in the other filament. A zygospore is formed with a thick cell wall, round or oval and with a brownish colour. This conjugation process takes place especially between mid-May and mid-June. The spores are liberated, sink to the bottom and germinate the next spring to form a new filament. It is very worthwhile to look in a sample of algae for the different stages of this conjugation process. It is always a nice surprise to find conjugating filaments. Apart from this ladder-like conjugation, Spirogyra can also exhibit another form of conjugation: two neighbouring cells in the same filament can connect via a tube.
There are several other genera of related filamentous algae: Zygnema and Mougeotia, with star-like and plate-like chloroplasts respectively. These genera live in general in more acid, soft fresh water. The conjugation figures look different from those in Spirogyra, for instance X-like. Dune pools are a rich biotope for Spirogyra. In ditches the number of species declines when the water becomes very eutrophic. Other filamentous algae then replace it, like Cladophora, Vaucheria and Enteromorpha. In the end we will find only duckweed. Then the ditch receives no light below the surface, with disastrous consequences for the growth of plants and the production of oxygen.
The Filamentous Algae.
This gallery includes only the filamentous green algae. The group is a heterogeneous one in which the members, although superficially similar, show a wide diversity in their life cycle and modes of reproduction. Spirogyra, Oedogonium and Cladophora are amongst the varieties most frequently encountered.
All blue-green algae are now classified amongst the Bacteria, and will be found in the Cyanobacteria gallery.
Spirogyra is a filamentous green alga which is common in freshwater habitats. It has the appearance of very fine, bright dark-green filaments moving gently with the currents in the water, and is slimy to the touch when attempts are made to collect it. The slime serves to deter creatures which would otherwise attach themselves to underwater plants, so Spirogyra under the microscope is usually spotless.
A field of Spirogyra filaments. Their appearance is not quite typical in that the nuclei are unusually prominent, and the characteristic spiral chloroplasts are so fine and tightly wound that close examination is needed to confirm the identification. In any case the possession of spiral chloroplasts is sufficient to positively identify Spirogyra to genus.
Darkfield, x120.
The central portion of a cell of Spirogyra showing the nucleus and giving an insight into the way the spiral chloroplast makes contact with the wall of the cell. The filament in the background provides another view.
Brightfield. x1000.
Central portion of a Spirogyra cell showing nucleus and chloroplasts.
Brightfield, x1000.
This filament of Spirogyra is about to break into two filaments. The wall of each cell (centre of picture) has developed an inward indentation at the junction between the cells. Increase in pressure in each cell will cause the indentation to pop out, forcing separation of the filaments, and leaving them with highly convex ends.
Brightfield. x1000.
Two filaments of Spirogyra, the lower one clearly showing the nucleus. This picture also gives a good insight into the way the chloroplasts line the wall of the cell.
Brightfield. x1000.
Conjugation in Spirogyra.
In common with other members of its phylum (Gamophyta), Spirogyra lacks a motile variant at all stages of its life history; i.e., no motile gametes (ova or sperm), no zoospores, etc. Sexual reproduction is by a process called conjugation -- another of the famously remarkable sights available to the microscopist.
Although it is not possible to distinguish them visually, certain filaments in a loose parallel bundle of Spirogyra assume the female, and others the male, role in the process which follows. The cells of adjacent filaments develop bumps which grow towards one another and eventually fuse to form a continuous tube between the cells. Meanwhile, the contents of each cell have detached themselves from their respective cell walls and formed a round ball. Over a relatively short space of time (minutes), the green spheres from the male filament squeeze their way down the connecting tubes to fuse with the similarly contracted female cells in the other filament. The result of this sexual union is the formation of a zygospore with a tough, resistant outer covering within the chambers of the female filament. After a dormant period, these zygotes undergo meiosis and germinate, resulting in new filaments of Spirogyra.
Once seen never forgotten.
The central pair of cells are joined by a conjugation tube which has yet to fuse into a continuous passage. The cell contents are at a similarly early stage of detaching themselves from the cell wall to form a ball.
By contrast, the two cells to the right contain newly formed zygospores as a result of consummated conjugation.
Male and female cells now occupy the same space, and are pictured before fusion to form a zygospore has taken place. The filament designated female is the one in which the zygospores have formed.
Two mature zygospores of Spirogyra from another part of the specimen which provided the above pictures. In this form, Spirogyra can survive winter or other adverse conditions and germinate in the spring to form new filaments. The hardened outer spore wall can be seen reflecting the light from the darkfield condenser.
Darkfield. x400.
A zygospore of Spirogyra against a background of decaying plant remains and other algal forms.
Darkfield, x400.
It is hard to say what is happening here. It looks like the stage in conjugation of Spirogyra in which the contraction of the cell contents into a ball is not quite complete, and the spiral nature of the chloroplast is still discernible.
Darkfield, x1000.
Cladophora and Microspora.
The filamentous alga Cladophora is a common inhabitant of freshwater locations. It is called blanket weed in some places -- not an inappropriate name when, in late summer, dense floating rafts of Cladophora can be found both at the pond's edge and in the open water, buoyed up by the oxygen generated by its own photosynthesis.
Unlike Spirogyra, Cladophora is capable of branching, and seems to produce little or no mucilaginous secretion. This, and the fact that salts tend to crystallize on the filaments of older specimens, gives it a rougher, grittier feel than other filamentous algae. It is also more readily colonized by epiphytic diatoms and other algae, and provides a protected foraging environment for smaller pond creatures such as protozoa, worms, small crustaceans and insect larvae.
Its springiness also makes it more difficult to prepare the thin, flat specimens required by the microscope.
Branching in this filament of Cladophora has begun with an outgrowth of the cell at the upper end near the cell wall junction. As the branch grows, differential growth of the main cell wall causes the branch to grow forwards rather than at right angles to the original cell.
An interesting feature of the picture is the distribution of plastids in the two cells shown. Since the plastids are the energy converters of the cell, large numbers have migrated into the growing branch, where the energy requirement is greatest.
The cell on the right shows a distribution of plastids normal to a resting cell.
Darkfield, x300.
Picture shows Cladophora at a branching point. The filaments are encrusted with diatoms (Gomphonema) and crystals of calcium carbonate which give the plant its rough gritty feel.
Darkfield, x400.
Microspora is common in ponds, especially in the winter months. It can be recognized by its reticulated chloroplast which covers the inside wall of the cell including the cell walls between one cell and the next.
Darkfield, x600.
Pteridium aquilinum
Bracken Fern
Bracken Fern
Photo © by Earl J.S. Rook
Flora, fauna, earth, and sky...
The natural history of the northwoods
Name: • Pteridium, from the Greek pteris (πτερίς), "fern"
• aquilinum, from the Latin, "eagle like"
• Bracken, an old English word for all large ferns, eventually applied to this species in particular.
• Other common names include: Brake, Brake Fern, Eagle Fern, Female Fern, Fiddlehead, Hog Brake, Pasture Brake, Western Brackenfern, Grande fougère, Fougère d'aigle (Qué), Warabi (Jpn), Örnbräken, Bräken, Slokörnbräken, Taigaörnbräken, Vanlig Örnbräken (Swe), Einstape (Nor), Ørnebregne (Dan), Sananjalka (Fin), Adlerfarn (Ger), Kilpjalg, Kotkajalg, Põldsõnajalg, Seatinarohi, Sõnajalg (Estonia)
Taxonomy: • Kingdom Plantae, the Plants
o Division Polypodiophyta, the True Ferns
Class Filicopsida
Order Polypodiales
Family Dennstaedtiaceae
Genus Pteridium
• Taxonomic Serial Number: 17224
• Also known as Pteris aquilina, Asplenium aquilinum, Allosorus aquilinus, Ornithopteris aquilina, Filix aquilina, Filix-foemina aquilina, Pteris latiuscula
• Considered a single, worldwide species, although some disagree
Description: • A large, deciduous, rhizomatous fern
• Fronds 1'-3', with a leaf stalk up to 3' but usually shorter than the leaf blade. Blades of the frond are divided into pinnae, the bottom pair sometimes large enough to suggest a three-part leaf. Pinnae are divided into pinnules. On fertile fronds the spores are borne in sori beneath the outer margins of the pinnules. Fronds are killed by frost each winter and new fronds grow in spring. Dead fronds form a mat of highly flammable litter that insulates the below-ground rhizomes from frost when there is no snow cover. This litter also delays the rise in soil temperature and the emergence of frost-sensitive fronds in the spring.
• Rhizomes are the main carbohydrate and water storage organs (87% water). Rhizomes can be up to 1" diameter and branching is alternate. The rhizome system has two components. The long shoots form the main axis or stem of the plant. They elongate rapidly, have few lateral buds, do not produce fronds, and store carbohydrates. Short shoots, or leaf-bearing lateral branches, may be closer to the soil surface. They arise from the long shoots, are slow growing, and produce annual fronds and many dormant frond buds. Transition shoots start from both short and long shoots and may develop into either.
• Roots thin, black, brittle, extending from the rhizome to over 20 inches into the soil.
• Brackenfern is a large, coarse, perennial fern that has almost horizontal leaves and can grow 1½ to 6½ feet tall (sometimes up to 10 feet). Unlike our more typical broadleaf perennials, this primitive perennial lacks true stems. Each leaf arises directly from a rhizome (horizontal underground stem), and is supported on a rigid leaf stalk. In addition, brackenfern does not produce flowers or seeds. Instead, it reproduces by spores and creeping rhizomes. This species often forms large colonies.
• Root system - The black, scaly, creeping rhizomes (horizontal underground stems) are ½ inch thick, and can grow as much as 20 feet long and 10 feet deep. Stout, black, wide-spreading roots grow sparsely along the rhizomes.
• Seedlings & Shoots - The curled leaves (fiddleheads) emerging from rhizomes in the spring are covered with silvery gray hair.
• Stems - The leaf stalk (not a true stem) is tall (about the same length as the leaf), smooth, rigid and grooved in front. It is green when young, but turns dark brown later in the season.
• Leaves - The leaf stalk supports a broad (3 feet long, 3 feet wide), triangular, dark green, leathery and coarse-textured leaf that often bends nearly horizontal. The leaf is divided into 3 parts, 1 terminal and 2 opposite. Each of the leaf parts is triangular and composed of numerous oblong, pointed leaflets, which are in turn composed of narrow, blunt-tipped subleaflets.
• Fruits & Seeds - A continuous line of spore cases (spore-producing structures) is formed along the underside edge of leaflets, but the spore cases are partially or completely covered by inrolled leaf margins and are difficult to see. Spore cases produce minute, brown spores.
• Biology: Spores of brackenfern are produced August through September. Brackenfern is one of the earliest ferns to appear in spring or after a fire. It sometimes forms large colonies of nearly solid stands. In the fall, it is one of the first plants to be killed by frost, resulting in large patches of crisp, brown foliage.
• Brackenfern is resistant to many herbicides and is tolerant of various forms of mechanical control. However, effective control has been obtained by repeated removal of aboveground growth, which eventually exhausts the food reserves in the rhizomes.
Identification: • Distinguished from other large North Country ferns by the large three part leaf atop a tall stalk.
• Field Marks
o broad triangular leaf held almost parallel to the ground
o smooth, grooved, rigid stalk about as long as the leaf
o narrowed tip to leaflets
Distribution: • Global; throughout the world with the exception of hot and cold deserts
Habitat: • Fossil evidence suggests that bracken fern has had at least 55 million years to evolve and perfect anti-disease and anti-herbivore chemicals. It produces bitter-tasting sesquiterpenes and tannins, phytosterols that are closely related to the insect moulting hormone, and cyanogenic glycosides that yield hydrogen cyanide (HCN) when crushed. It generates simple phenolic acids that reduce grazing, may act as fungicides, and are implicated in bracken fern's allelopathic activity. Severe disease outbreaks are very rare in bracken fern.
• Grows on a variety of soils with the exception of heavily waterlogged sites. Efficient stomatal control allows it to succeed on sites that would be too dry for most ferns, and its distribution does not normally seem limited by moisture. Grows best on deep, well-drained soils with good water-holding capacity, and may dominate other vegetation on such sites.
• Rhizomes are particularly effective at mobilizing phosphorus from inorganic sources into an available form for plant use. Bracken fern contributes to potassium cycling on sites and is associated with high levels of potassium.
• In northern climates bracken fern is frequently found on uplands and side slopes, since it is susceptible to spring frost damage. Fronds growing in the open or without litter cover are often killed as crosiers by spring frost damage, since the soil warms earlier and growth begins sooner. The result is that fronds appear earlier in shaded habitats.
• A shade-intolerant pioneer and successional species that is nevertheless sufficiently shade tolerant to survive in light spots in old-growth forests.
• Light, windborne spores allow colonization of newly vacant areas.
• Despite production of bitter-tasting compounds, chemicals that interfere with insect growth, and toxic chemicals, bracken fern hosts a relatively large number and variety of herbivorous insects.
• Competition: Invades cultivated fields and disturbed areas, effectively competing for soil moisture and nutrients. Rhizomes grow under the roots of herbs and tree or shrub seedlings, and when the fronds emerge, they shade the smaller plants. In the winter dead fronds may bury other plants and press them to the ground. On some sites shading may protect tree seedlings and increase survival.
• Allelopathy: Bracken fern's production and release of allelopathic chemicals is an important factor in its ability to dominate other vegetation. Farther north, allelopathic chemicals are not released from the green fronds but are readily leached from standing dead fronds. Herbs may be inhibited for a full growing season after bracken fern is removed, apparently because active plant toxins remain in the soil.
Fire: • A fire-adapted species throughout the world. Not merely well adapted to fire, it promotes fire by producing a highly flammable layer of dried fronds every fall. Repeated fires favor Bracken.
• Primary fire adaptation is deeply buried rhizomes which sprout vigorously following fires before most competing vegetation is established. Windborne spores may disperse over long distances.
• Fire removes competition and creates the alkaline soil conditions suitable for its establishment from spores.
• Fuel loading in areas dominated by bracken fern can be quite high.
Associates: • Shrubs: Bunchberry (Cornus canadensis), Twinflower (Linnaea borealis)
• Herbs: Wild Sarsaparilla (Aralia nudicaulis), Large Leaf Aster (Aster macrophyllus), Blue Bead Lily (Clintonia borealis), Gold Thread (Coptis trifolia), Bedstraws (Galium ssp.), Oak Fern (Gymnocarpium dryopteris), Canada Mayflower (Maianthemum canadense), Bishop's Cap (Mitella nuda), One Flowered Pyrola (Moneses uniflora), One Sided Pyrola (Pyrola secunda), Rose Twisted Stalk (Streptopus rosea), Starflower (Trientalis borealis), Kidney Leaf Violet (Viola renifolia), Violets (Viola spp.)
• Mammals: Palatability is usually nil to poor
History: • Considered so valuable during the Middle Ages it was used to pay rents.
• Used as roofing thatch and as fuel when a quick hot fire was desired.
• The ash was used as a source of potash in the soap and glass industry until 1860 and for making soap and bleach. The rhizomes were used in tanning leathers and to dye wool yellow.
• Bracken is still used for winter livestock bedding in parts of Wales since it is more absorbent, warmer, and easier to handle than straw.
• Also used as a green mulch and compost
Uses: • Most commonly used today as a food for humans. The newly emerging croziers or fiddleheads are picked in spring and may be consumed fresh or preserved by salting, pickling, or sun drying. Both fronds and rhizomes have been used in brewing beer, and rhizome starch has been used as a substitute for arrowroot. Bread can be made out of dried and powdered rhizomes alone or with other flour. American Indians cooked the rhizomes, then peeled and ate them or pounded the starchy fiber into flour. In Japan starch from the rhizomes is used to make confections. Bracken fern is grown commercially for use as a food and herbal remedy in Canada, the United States, Siberia, China, Japan, and Brazil and is often listed as an edible wild plant. Powdered rhizome has been considered particularly effective against parasitic worms. American Indians ate raw rhizomes as a remedy for bronchitis.
• Bracken fern has been found to be mutagenic and carcinogenic in rats and mice, usually causing stomach or intestinal cancer. It is implicated in some leukemias, bladder cancer, and cancer of the esophagus and stomach in humans. All parts of the plant, including the spores, are carcinogenic, and face masks are recommended for people working in dense bracken. The toxins in bracken fern pass into cow's milk. The growing tips of the fronds are more carcinogenic than the stalks. If young fronds are boiled under alkaline conditions, they will be safer to eat and less bitter.
• Bracken fern is a potential source of insecticides and it has potential as a biofuel. Bracken fern increases soil fertility by bringing larger amounts of phosphate, nitrogen, and potassium into circulation through litter leaching and stem flow; its rhizomes also mobilize mineral phosphate. Bracken fern fronds are particularly sensitive to acid rain which also reduces gamete fertilization. Both effects signal the amount of pollutants in rain water making bracken fern a useful indicator.
• Fronds may release hydrogen cyanide (HCN) when they are damaged (cyanogenesis), particularly the younger fronds. Herbivores, including sheep, selectively graze young fronds that are acyanogenic (without HCN). Lignin, tannin, and silicate levels tend to increase through the growing season, making the plants less palatable. Cyanide (HCN) levels fall during the season, as do the levels of a thiaminase, which prevents utilization of B vitamins.
• Toxicity: Known to be poisonous to livestock throughout the US, Canada, and Europe. Simple stomach animals like horses, pigs, and rats develop a thiamine deficiency within a month. Acute bracken poisoning affects the bone marrow of both cattle and sheep, causing anemia and hemorrhaging which is often fatal. Blindness and tumors of the jaws, rumen, intestine, and liver are found in sheep feeding on bracken fern.
• Toxicity: All parts of brackenfern, including rootstocks, fresh or dry leaves, fiddleheads and spores, contain toxic compounds, and are poisonous to livestock and humans. Consumption of brackenfern causes vitamin B1 deficiency in horses, and toxins can pass into the milk of cattle. Young leaves of brackenfern have been used as a human food source, especially in Japan, and may be linked to increased incidence of stomach cancer. Humans working outdoors near abundant stands of the plant may be at risk from cancer-causing compounds in the spores.
• Facts and Folklore:
It was once thought that, if the spores of the brackenfern were gathered on St. John's Eve, it would make the possessor invisible.
In the 17th century, live brackenfern was set on fire in hopes of producing rain.
Brackenfern fiddleheads have been used as a food source; however, their consumption has been linked to various types of cancer in humans.
Reproduction: • Reproduces by spores and vegetatively by rhizomes
• Most regeneration is vegetative. Many have searched for young plants growing from spores, but few have found them. However, spores do germinate and grow readily in culture.
• Young plants produce spores by the end of the second growing season in cultivation but normally do not produce spores until the third or fourth growing season. A single, fertile frond can produce 300,000,000 spores annually. Spore production varies from year to year depending on plant age, frond development, weather, and light exposure. Production decreases with increasing shade. The wind-borne spores are extremely small. Dry spores are very resistant to extreme physical conditions, although the germination of bracken fern spores declines from 95-96% to around 30-35% after 3 years storage. The spores germinate without any dormancy requirement. Under favorable conditions, young plants could be found 6 to 7 weeks after the spores are shed. Under normal conditions the spores may not germinate until the spring after they are shed.
• Sufficient moisture and shelter from wind are important factors in fern spore germination. Bracken fern spore germination appears to require soil sterilized by fire. On unsterilized soils spores may germinate, but the new plants are quickly overwhelmed by other growth. Temperatures between 59º and 86º F are generally best for germination, although bracken fern is capable of germination at 33º-36ºF.
• A pH range of 5.5 to 7.5 is optimal for germination. Germination is indifferent to light quality; it is one of the few ferns that can germinate in the dark. Despite limitations on spore germination, genotype analysis in the Northeast indicates that many stands of bracken fern represent multiple establishment of individuals from spores.
• When spores germinate, they produce bisexual, gamete-bearing plants about ¼" in diameter and one cell thick. These tiny plants have no vascular system and require very moist conditions to survive. The young spore-bearing plant which develops from the fertilized egg is initially dependent on the gametophyte until it develops its first leaf and roots. The first fronds are simple and lobed. They develop into thin, delicate fronds divided into lobed pinnae. They do not look like adult plants and are frequently not recognized as bracken fern. Cultivated plants begin to resemble adult ferns after 18 weeks. The rhizomes begin to develop after there are a number (up to 10) of fronds and a well-developed root system, or by the fifteenth week of growth under optimal conditions. In the first year rhizomes may grow to 86 inches long. By the end of a second year the rhizome system may exceed 6' in diameter.
• Aggressive rhizome system gives it the ability to reproduce vegetatively and reduces dependence on water for reproduction. The rhizomatous clones can be up to 400' in diameter and hundreds of years old; some clones alive today may be over 1,000 years old.
• Rhizomes have a high proportion of dormant buds. When disturbed or broken off, all portions of the rhizome may sprout, and plants growing from small rhizome fragments revert temporarily to a juvenile morphology.
• Shaded plants produce fewer spores than plants in full sun.
• Bracken fern is a survivor. The fronds are generally killed by fire, but some rhizomes survive. The rhizomes are sensitive to elevated temperatures, but during fires the rhizome system is insulated by mineral soil. Depth of the main rhizome system is normally between 3½" and 12"; short rhizomes may be within 1½" of the surface, and some rhizomes may be as deep as 40".
• Well known postfire colonizer in eastern pine and oak forests. Fire benefits bracken by removing competition while it sprouts profusely from surviving rhizomes. New sprouts are more vigorous following fire, and bracken fern becomes more fertile, producing far more spores than it does in the shade.
• Spores germinate well on alkaline soils, allowing them to establish in the basic conditions created by fire.
Propagation: • Division most successful method
Cultivation: • Hardy to USDA Zone 3 (average minimum annual temperature -40ºF)
• Characteristically found on soils with medium to very rich nutrients.
• Cultivated and shaded plants produce fewer, thinner but larger fronds than open-grown plants
Population Genetics and Evolution
In 1908, G. H. Hardy and W. Weinberg independently suggested a scheme whereby evolution could be viewed as changes in the frequency of alleles in a population of organisms. In this scheme, if A and a are alleles for a particular gene locus and each diploid individual carries two alleles at that locus, then p can be designated as the frequency of the A allele and q as the frequency of the a allele. For example, in a population of 100 individuals (each with two alleles at the locus) in which 40% of the alleles are A, p would be 0.40. The rest of the alleles (60%) would be a, and q would equal 0.60. Note that p + q = 1. These are referred to as allele frequencies. The frequency of the possible diploid combinations of these alleles (AA, Aa, aa) is expressed as p2 + 2pq + q2 = 1.0. Hardy and Weinberg also argued that if five conditions are met, the population's allele and genotype frequencies will remain constant from generation to generation. These conditions are as follows:
• The breeding population is large. ( Reduces the problem of genetic drift.)
• Mating is random. ( Individual show no preference for a particular mating type.)
• There is no mutation of the alleles.
• No differential migration occurs. ( No immigration or emigration.)
• There is no selection. ( All genotypes have an equal chance of surviving and reproducing.)
The Hardy-Weinberg equation describes an existing situation. Of what value is such a rule? It provides a yardstick by which changes in allelic frequencies can be measured. If a population's allelic frequencies change, it is undergoing evolution.
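To make the yardstick concrete, below is a minimal Python sketch (not part of the original lab) using the example frequencies above; it shows that one round of random mating under the five conditions leaves the allele frequencies unchanged.

# Hardy-Weinberg as a yardstick: allele frequencies stay constant
p, q = 0.40, 0.60                  # frequencies of A and a from the example
AA, Aa, aa = p**2, 2*p*q, q**2     # genotype frequencies: p2, 2pq, q2
p_next = AA + 0.5*Aa               # A alleles contributed to the next gene pool
q_next = aa + 0.5*Aa               # a alleles contributed to the next gene pool
print(AA, Aa, aa)                  # expect 0.16 0.48 0.36 (up to float rounding)
print(p_next, q_next)              # expect 0.4 0.6; unchanged, so no evolution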
Estimating Allele Frequencies for a Specific Trait within a Sample Population:
Using the class as a sample population, the allele frequency of a gene controlling the ability to taste the chemical PTC (phenylthiocarbamide) can be estimated. A bitter taste reaction is evidence of the presence of a dominant allele in either a homozygous (AA) or heterozygous (Aa) condition. The inability to taste PTC is dependent on the presence of the two recessive alleles (aa). Instead of the PTC paper, the trait for tongue rolling may be substituted. To estimate the frequency of the PTC-tasting allele in the population, one must find p. To find p, one must first determine q (the frequency of the nontasting allele).
1. Using the PTC taste test paper, tear off a short strip and press it to your tongue tip. PTC tasters will sense a bitter taste.
2. A decimal number representing the frequency of tasters (p2+2pq) should be calculated by dividing the number of tasters in the class by the total number of students in the class. A decimal number representing the frequency of nontasters (q2) can be obtained by dividing the number of nontasters by the total number of students. You should then record these numbers in Table 8.1.
3. Use the Hardy-Weinberg equation to determine the frequencies (p and q ) of the two alleles. The frequency q can be calculated by taking the square root of q2. Once q has been determined, p can be determined because 1-q=p. Record these values in Table 8.1 for the class and also calculate and record values of p and q for the North American population.
Table 8.1 Phenotypic Proportions of Tasters and Nontasters and Frequencies of the Determining Alleles

                            Phenotypes                           Allele Frequency Based on the H-W Equation
                            Tasters (p2+2pq)   Nontasters (q2)   p           q
Class Population            #=       %=        #=       %=       _______     _______
North American Population            0.55               0.45     _______     _______
Topics for Discussion:
1. What is the percentage of heterozygous tasters (2pq) in your class? ______________________.
2. What percentage of the North American population is heterozygous for the taster allele? _____________
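For checking the numbers behind these two questions, here is a short Python sketch (not part of the original lab; the class tallies are hypothetical placeholders, to be replaced with your own counts from Table 8.1).

import math

tasters, nontasters = 17, 8          # hypothetical class tallies
total = tasters + nontasters
q = math.sqrt(nontasters / total)    # the nontaster frequency is q2
p = 1 - q
print(f"p = {p:.2f}, q = {q:.2f}, heterozygous tasters 2pq = {2*p*q:.2f}")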
Case Studies:
Case 1 ( Test of an Ideal Hardy-Weinberg Community)
The entire class will represent a breeding population, so find a large open space for its simulation. In order to ensure random mating, choose another student at random. In this simulation, we will assume that gender and genotype are irrelevant to mate selection.
The class will simulate a population of randomly mating heterozygous individuals with an initial gene frequency of 0.5 for the dominant allele A and the recessive allele a and genotype frequencies of 0.25 AA, 0.50 Aa, and 0.25 aa. Record this on the Data page at the end of the lab. Each member of the class will receive four cards. Two cards will have A and two cards will have a. The four cards represent the products of meiosis. Each "parent" will contribute a haploid set of chromosomes to the next generation.
1. Turn the four cards over so the letters are not showing, shuffle them, and take the card on top to contribute to the production of the first offspring. Your partner should do the same. Put the cards together. The two cards represent the alleles of the first offspring. One of you should record the genotype of this offspring in the Case 1 section at the end of the lab. Each student pair must produce two offspring, so all four cards must be reshuffled and the process repeated to produce a second offspring.
2. The other partner should then record the genotype of the second offspring in the Case 1 section at the end of the lab. Using the genotypes produced from the matings, you and your partner will mate again using the genotypes of the two offspring. That is, student 1 assumes the genotype of the first offspring, and student 2 assumes the genotype of the second offspring.
3. Each student should obtain, if necessary, new cards representing the alleles in their gametes after the process of meiosis. For example, student 1 becomes the genotype Aa and obtains cards A, A, a, a; student 2 becomes aa and obtains cards a, a, a, a. Each participant should randomly seek out another person with whom to mate in order to produce offspring of the next generation. You should follow the same mating procedure as for the first generation, being sure to record your new genotype after each generation in the Case 1 section. Class data should be collected after each generation for five generations. At the end of each generation, remember to record the genotype that you have assumed. Your teacher will collect class data after each generation by asking you to raise your hand to report your genotype.
Allele frequency: The allele frequencies, p and q, should be calculated for the population after five generations of simulated random mating.
Number of A alleles present at the fifth generation
Number of offspring with genotype AA _____________ X 2= _______________ A alleles
Number of offspring with genotype Aa _____________ X 1= ________________A alleles
Total = ____________ A alleles
p = (Total number of A alleles) / (Total number of alleles in the population) = ____________
In this case, the total number of alleles in the population is equal to the number of students in the class X 2.
Number of a alleles present at the fifth generation
Number of offspring with genotype aa _____________ X 2= _______________ a alleles
Number of offspring with genotype Aa _____________ X 1= ________________ a alleles
Total = ____________ a alleles
q = (Total number of a alleles) / (Total number of alleles in the population) = ____________
1. What does the Hardy-Weinberg equation predict for the new p and q?
2. Do the results you obtained in this simulation agree? __________ If not, why not?
3. What major assumption(s) were not strictly followed in this simulation?
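The Case 1 procedure can also be run in software; the Python sketch below (an illustration, not part of the lab protocol) mimics the card-shuffling procedure and makes it easy to see genetic drift in a class-sized population.

import random

N = 30                                   # hypothetical class size (even number)
pop = [['A', 'a'] for _ in range(N)]     # everyone starts heterozygous, p = q = 0.5
for generation in range(5):
    random.shuffle(pop)                  # random mating: pair off neighbors
    next_pop = []
    for i in range(0, N - 1, 2):
        p1, p2 = pop[i], pop[i + 1]
        for _ in range(2):               # each pair produces two offspring
            next_pop.append([random.choice(p1), random.choice(p2)])
    pop = next_pop
alleles = [a for g in pop for a in g]
print("p after 5 generations =", alleles.count('A') / len(alleles))
# Rerunning gives different answers: in a small population p drifts away from 0.5,
# which is why class results rarely match the Hardy-Weinberg prediction exactly.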
Case 2 ( Selection )
In this case you will modify the simulation to make it more realistic. In the natural environment, not all genotypes have the same rate of survival; that is, the environment might favor some genotypes while selecting against others. An example is the human condition sickle-cell anemia, caused by a mutation in one allele; homozygous recessive individuals do not survive to reproduce. For this simulation you will assume that the homozygous recessive individuals never survive, while heterozygous and homozygous dominant individuals always survive.
The procedure is similar to that for Case 1. Start again with your initial genotype, and produce your "offspring" as in Case 1. This time, however, there is one important difference: every time your offspring is aa, it does not reproduce. Since we want to maintain a constant population size, the same two parents must try again until they produce two surviving offspring. You may need to get new allele cards from the pool.
Proceed through five generations, selecting against the homozygous offspring 100% of the time. Then add up the genotype frequencies that exist in the population and calculate the new p and q frequencies in the same way as it was done in Case 1.
Number of A alleles present at the fifth generation
Number of offspring with genotype AA _____________ X 2= _______________ A alleles
Number of offspring with genotype Aa _____________ X 1= ________________ A alleles
Total = ____________ A alleles
p = (Total number of A alleles) / (Total number of alleles in the population) = ____________
Number of a alleles present at the fifth generation
Number of offspring with genotype Aa _____________ X 1= ________________ a alleles
Total = ____________ a alleles
q = (Total number of a alleles) / (Total number of alleles in the population) = ____________
1. How do the new frequencies of p and q compare to the initial frequencies in Case 1?
2. How has the allelic frequency of the population changed?
3. Predict what would happen to the frequencies of p and q if you simulated another 5 generations.
4. In a large population, would it be possible to completely eliminate a deleterious recessive allele? Explain.
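The selection scenario is also easy to simulate; this Python sketch (again an illustration, not part of the lab protocol) applies 100% selection against aa and shows q falling each generation while the a allele persists in heterozygotes.

import random

N = 30
pop = [['A', 'a'] for _ in range(N)]         # start heterozygous, q = 0.5

def surviving_offspring(p1, p2):
    child = [random.choice(p1), random.choice(p2)]
    while child == ['a', 'a']:               # aa offspring die; parents try again
        child = [random.choice(p1), random.choice(p2)]
    return child

for generation in range(5):
    random.shuffle(pop)
    pop = [surviving_offspring(pop[i], pop[i + 1])
           for i in range(0, N - 1, 2) for _ in range(2)]
    alleles = [a for g in pop for a in g]
    print(generation + 1, "q =", alleles.count('a') / len(alleles))
# q declines but approaches zero only slowly: the a allele "hides" in Aa
# carriers, which bears on question 4 above.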
Hardy-Weinberg Problems
1. In Drosophila, the allele for normal length wings is dominant over the allele for vestigial wings. In a population of 1,000 individuals, 360 show the recessive phenotype. How many individuals would you expect to be homozygous dominant and heterozygous for this trait?
2. The allele for the ability to roll one's tongue is dominant over the allele for the lack of this ability. In a population of 500 individuals, 25% show the recessive phenotype. How many individuals would you expect to be homozygous dominant and heterozygous for this trait?
3. The allele for the hair pattern called "widow's peak" is dominant over the allele for no "widow's peak." In a population of 1,000 individuals, 510 show the dominant phenotype. How many individuals would you expect of each of the possible three genotypes for this trait?
4. In a certain population, the dominant phenotype of a certain trait occurs 91 % of the time. What is the frequency of the dominant allele?
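A quick cross-check for problems of this type: given the count of recessive phenotypes, recover q, then p, then the expected genotype counts. The Python sketch below (not part of the problem set) works Problem 1.

import math

def hardy_weinberg(n_total, n_recessive):
    q = math.sqrt(n_recessive / n_total)
    p = 1 - q
    return {'AA': p*p*n_total, 'Aa': 2*p*q*n_total, 'aa': q*q*n_total}

print(hardy_weinberg(1000, 360))
# about 160 homozygous dominant, 480 heterozygous, 360 homozygous recessive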
Data Page:
Case 1 ( Hardy-Weinberg Equilibrium )
Initial Class Frequencies:
AA ________ Aa________ aa_________
My initial genotype :_______________
F1 Genotype ______
F2 Genotype ______
F3 Genotype ______
F4 Genotype ______
F5 Genotype ______
Final Class Frequencies:
AA ________ Aa________ aa_________
p _________ q __________
Case 2 ( Selection )
Initial Class Frequencies:
AA ________ Aa________ aa_________
My initial genotype :_______________
F1 Genotype ______
F2 Genotype ______
F3 Genotype ______
F4 Genotype ______
F5 Genotype ______
Final Class Frequencies:
AA ________ Aa________ aa_________
p _________ q __________
Biology 198
Hardy-Weinberg practice questions
The Hardy-Weinberg formulas allow scientists to determine whether evolution has occurred. Any changes in the gene frequencies in the population over time can be detected. The law essentially states that if no evolution is occurring, then an equilibrium of allele frequencies will remain in effect in each succeeding generation of sexually reproducing individuals. In order for equilibrium to remain in effect (i.e. that no evolution is occurring) then the following five conditions must be met:
1. No mutations must occur so that new alleles do not enter the population.
2. No gene flow can occur (i.e. no migration of individuals into, or out of, the population).
3. Random mating must occur (i.e. individuals must pair by chance).
4. The population must be large so that no genetic drift (random chance) can cause the allele frequencies to change.
5. No selection can occur so that certain alleles are not selected for, or against.
Obviously, the Hardy-Weinberg equilibrium cannot exist in real life. Some or all of these forces act on living populations at various times, and evolution at some level occurs in all living organisms. The Hardy-Weinberg formulas allow us to detect allele frequencies that change from generation to generation, thus allowing a simplified method of determining that evolution is occurring. There are two formulas that must be memorized:
p2 + 2pq + q2 = 1 and p + q = 1
p = frequency of the dominant allele in the population
q = frequency of the recessive allele in the population
p2 = percentage of homozygous dominant individuals
q2 = percentage of homozygous recessive individuals
2pq = percentage of heterozygous individuals
Individuals that have an aptitude for math find that working with the above formulas is ridiculously easy. However, for individuals who are unfamiliar with algebra, it takes some practice working problems before you get the hang of it. Below I have provided a series of practice problems that you may wish to try out. Note that I have rounded off some of the numbers in some problems to the second decimal place:
Remember the basic formulas:
p = frequency of the dominant allele in the population
q = frequency of the recessive allele in the population
p2 = percentage of homozygous dominant individuals
q2 = percentage of homozygous recessive individuals
2pq = percentage of heterozygous individuals
1. PROBLEM #1.
You have sampled a population in which you know that the percentage of the homozygous recessive genotype (aa) is 36%. Using that 36%, calculate the following:
A. The frequency of the "aa" genotype. Answer: 36%, as given in the problem itself.
B. The frequency of the "a" allele. Answer: The frequency of aa is 36%, which means that q2 = 0.36, by definition. If q2 = 0.36, then q = 0.6, again by definition. Since q equals the frequency of the a allele, then the frequency is 60%.
C. The frequency of the "A" allele. Answer: Since q = 0.6, and p + q = 1, then p = 0.4; the frequency of A is by definition equal to p, so the answer is 40%.
D. The frequencies of the genotypes "AA" and "Aa." Answer: The frequency of AA is equal to p2, and the frequency of Aa is equal to 2pq. So, using the information above, the frequency of AA is 16% (i.e. p2 is 0.4 x 0.4 = 0.16) and Aa is 48% (2pq = 2 x 0.4 x 0.6 = 0.48).
E. The frequencies of the two possible phenotypes if "A" is completely dominant over "a." Answers: Because "A" is totally dominant over "a", the dominant phenotype will show if either the homozygous "AA" or heterozygous "Aa" genotypes occur. The recessive phenotype is controlled by the homozygous aa genotype. Therefore, the frequency of the dominant phenotype equals the sum of the frequencies of AA and Aa, and the recessive phenotype is simply the frequency of aa. Therefore, the dominant frequency is 64% and, in the first part of this question above, you have already shown that the recessive frequency is 36%.
2. PROBLEM #2.
Sickle-cell anemia is an interesting genetic disease. Normal homozygous individuals (SS) have normal blood cells that are easily infected with the malarial parasite. Thus, many of these individuals become very ill from the parasite and many die. Individuals homozygous for the sickle-cell trait (ss) have red blood cells that readily collapse when deoxygenated. Although malaria cannot grow in these red blood cells, individuals often die because of the genetic defect. However, individuals with the heterozygous condition (Ss) have some sickling of red blood cells, but generally not enough to cause mortality. In addition, malaria cannot survive well within these "partially defective" red blood cells. Thus, heterozygotes tend to survive better than either of the homozygous conditions. If 9% of an African population is born with a severe form of sickle-cell anemia (ss), what percentage of the population will be more resistant to malaria because they are heterozygous (Ss) for the sickle-cell gene? Answer: 9% = 0.09 = ss = q2. To find q, simply take the square root of 0.09 to get 0.3. Since p = 1 - 0.3, then p must equal 0.7. 2pq = 2 (0.7 x 0.3) = 0.42 = 42% of the population are heterozygotes (carriers).
3. PROBLEM #3.
There are 100 students in a class. Ninety-six did well in the course whereas four blew it totally and received a grade of F. Sorry. In the highly unlikely event that these traits are genetic rather than environmental, if these traits involve dominant and recessive alleles, and if the four (4%) represent the frequency of the homozygous recessive condition, please calculate the following:
A. The frequency of the recessive allele. Answer: Since we believe that the homozygous recessive for this gene (q2) represents 4% (i.e. = 0.04), the square root (q) is 0.2 (20%).
B. The frequency of the dominant allele. Answer: Since q = 0.2, and p + q = 1, then p = 0.8 (80%).
C. The frequency of heterozygous individuals. Answer: The frequency of heterozygous individuals is equal to 2pq. In this case, 2pq equals 0.32, which means that the frequency of individuals heterozygous for this gene is equal to 32% (i.e. 2 (0.8)(0.2) = 0.32).
4. PROBLEM #4.
Within a population of butterflies, the color brown (B) is dominant over the color white (b). And, 40% of all butterflies are white. Given this simple information, which is something that is very likely to be on an exam, calculate the following:
A. The percentage of butterflies in the population that are heterozygous.
B. The frequency of homozygous dominant individuals.
Answers: The first thing you'll need to do is obtain p and q. So, since white is recessive (i.e. bb), and 40% of the butterflies are white, then bb = q2 = 0.4. To determine q, which is the frequency of the recessive allele in the population, simply take the square root of q2 which works out to be 0.632 (i.e. 0.632 x 0.632 = 0.4). So, q = 0.63. Since p + q = 1, then p must be 1 - 0.63 = 0.37. Now then, to answer our questions. First, what is the percentage of butterflies in the population that are heterozygous? Well, that would be 2pq so the answer is 2 (0.37) (0.63) = 0.47. Second, what is the frequency of homozygous dominant individuals? That would be p2 or (0.37)2 = 0.14.
5. PROBLEM #5.
A rather large population of Biology instructors have 396 red-sided individuals and 557 tan-sided individuals. Assume that red is totally recessive. Please calculate the following:
A. The allele frequencies of each allele. Answer: Well, before you start, note that the allelic frequencies are p and q, and be sure to note that we don't have nice round numbers and the total number of individuals counted is 396 + 557 = 953. So, the recessive individuals are all red (q2) and 396/953 = 0.416. Therefore, q (the square root of q2) is 0.645. Since p + q = 1, then p must equal 1 - 0.645 = 0.355.
B. The expected genotype frequencies. Answer: Well, AA = p2 = (0.355)2 = 0.126; Aa = 2(p)(q) = 2(0.355)(0.645) = 0.458; and finally aa = q2 = (0.645)2 = 0.416 (you already knew this from part A above).
C. The number of heterozygous individuals that you would predict to be in this population. Answer: That would be 0.458 x 953 = about 436.
D. The expected phenotype frequencies. Answer: Well, the "A" phenotype = 0.126 + 0.458 = 0.584 and the "a" phenotype = 0.416 (you already knew this from part A above).
E. Conditions happen to be really good this year for breeding and next year there are 1,245 young "potential" Biology instructors. Assuming that all of the Hardy-Weinberg conditions are met, how many of these would you expect to be red-sided and how many tan-sided? Answer: Simply put, The "A" phenotype = 0.584 x 1,245 = 727 tan-sided and the "a" phenotype = 0.416 x 1,245 = 518 red-sided ( or 1,245 - 727 = 518).
6. PROBLEM #6.
A very large population of randomly-mating laboratory mice contains 35% white mice. White coloring is caused by the double recessive genotype, "aa". Calculate allelic and genotypic frequencies for this population. Answer: 35% are white mice, which = 0.35 and represents the frequency of the aa genotype (or q2). The square root of 0.35 is 0.59, which equals q. Since p = 1 - q then 1 - 0.59 = 0.41. Now that we know the frequency of each allele, we can calculate the frequency of the remaining genotypes in the population (AA and Aa individuals). AA = p2 = 0.41 x 0.41 = 0.17; Aa = 2pq = 2 (0.59) (0.41) = 0.48; and as before aa = q2 = 0.59 x 0.59 = 0.35. If you add up all these genotype frequencies, they should equal 1.
7. PROBLEM #7.
After graduation, you and 19 of your closest friends (let's say 10 males and 10 females) charter a plane to go on a round-the-world tour. Unfortunately, you all crash land (safely) on a deserted island. No one finds you and you start a new population totally isolated from the rest of the world. Two of your friends carry (i.e. are heterozygous for) the recessive cystic fibrosis allele (c). Assuming that the frequency of this allele does not change as the population grows, what will be the incidence of cystic fibrosis on your island? Answer: There are 40 total alleles in the 20 people, of which 2 alleles are for cystic fibrosis. So, 2/40 = 0.05 (5%) of the alleles are for cystic fibrosis. That represents q. Thus, cc or q2 = (0.05)2 = 0.0025, or 0.25% of the F1 population will be born with cystic fibrosis.
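To double-check the island arithmetic (note the founder allele frequency here is q, the recessive allele), a tiny Python sketch, not part of the problem set:

founders = 20
c_alleles = 2                      # one c allele in each of two heterozygous carriers
q = c_alleles / (2 * founders)     # 2 of 40 alleles, so q = 0.05
print(q**2)                        # 0.0025, i.e. 0.25% incidence of cystic fibrosis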
8. PROBLEM #8.
You sample 1,000 individuals from a large population for the MN blood group, which can easily be measured since co-dominance is involved (i.e., you can detect the heterozygotes). They are typed accordingly:
Blood Type    Genotype    Number    Frequency
M             MM          490       0.49
MN            MN          420       0.42
N             NN           90       0.09
Using the data provided above, calculate the following:
A. The frequency of each allele in the population. Answer: Since MM = p2, MN = 2pq, and NN = q2, then p (the frequency of the M allele) must be the square root of 0.49, which is 0.7. Since q = 1 - p, then q must equal 0.3.
B. Supposing the matings are random, the frequencies of the matings. Answer: This is a little harder to figure out. Try setting up a "Punnett square" type arrangement using the 3 genotypes and multiplying the numbers in a manner something like this:
             MM (0.49)    MN (0.42)    NN (0.09)
MM (0.49)    0.2401*      0.2058       0.0441
MN (0.42)    0.2058       0.1764*      0.0378
NN (0.09)    0.0441       0.0378       0.0081*
C. Note that three of the six possible crosses are unique (*), but that the other three occur twice (i.e. the probabilities of matings occurring between these genotypes are TWICE those of the other three "unique" combinations). Thus, three of the possibilities must be doubled.
D. MM x MM = 0.49 x 0.49 = 0.2401
MM x MN = 0.49 x 0.42 = 0.2058 x 2 = 0.4116
MM x NN = 0.49 x 0.09 = 0.0441 x 2 = 0.0882
MN x MN = 0.42 x 0.42 = 0.1764
MN x NN = 0.42 x 0.09 = 0.0378 x 2 = 0.0756
NN x NN = 0.09 x 0.09 = 0.0081
E. The probability of each genotype resulting from each potential cross. Answer: You may wish to do a simple Punnett's square monohybrid cross and, if you do, you'll come out with the following result:
MM x MM = 1.0 MM
MM x MN = 0.5 MM 0.5 MN
MM x NN = 1.0 MN
MN x MN = 0.25 MM 0.5 MN 0.25 NN
MN x NN = 0.5 MN 0.5 NN
NN x NN = 1.0 NN
9. PROBLEM #9.
Cystic fibrosis is a recessive condition that affects about 1 in 2,500 babies in the Caucasian population of the United States. Please calculate the following.
A. The frequency of the recessive allele in the population. Answer: We know from the above that q2 is 1/2,500 or 0.0004. Therefore, q is the square root, or 0.02. That is the answer to our first question: the frequency of the cystic fibrosis (recessive) allele in the population is 0.02 (or 2%).
B. The frequency of the dominant allele in the population. Answer: The frequency of the dominant (normal) allele in the population (p) is simply 1 - 0.02 = 0.98 (or 98%).
C. The percentage of heterozygous individuals (carriers) in the population. Answer: Since 2pq equals the frequency of heterozygotes or carriers, then the equation will be as follows: 2pq = (2)(0.98)(0.02) = 0.0392, or about 0.04; roughly 1 in 25 are carriers.
10. PROBLEM #10.
In a given population, only the "A" and "B" alleles are present in the ABO system; there are no individuals with type "O" blood or with O alleles in this particular population. If 200 people have type A blood, 75 have type AB blood, and 25 have type B blood, what are the allelic frequencies of this population (i.e., what are p and q)? Answer: To calculate the allele frequencies for A and B, we need to remember that the individuals with type A blood are homozygous AA, individuals with type AB blood are heterozygous AB, and individuals with type B blood are homozygous BB. The frequency of A equals the following: 2 x (number of AA) + (number of AB), divided by 2 x (total number of individuals). Thus 2 x (200) + (75), divided by 2 x (200 + 75 + 25). This is 475/600 = 0.792 = p. Since q is simply 1 - p, then q = 1 - 0.792 = 0.208.
11. PROBLEM #11.
The ability to taste PTC is due to a single dominant allele "T". You sampled 215 individuals in biology, and determined that 150 could detect the bitter taste of PTC and 65 could not. Calculate all of the potential frequencies. Answer: First, let's go after the recessives (tt) or q2. That is easy since q2 = 65/215 = 0.302. Taking the square root of q2, you get 0.55, which is q. To get p, simply subtract q from 1 so that 1 - 0.55 = 0.45 = p. Now then, you want to find out what TT, Tt, and tt represent. You already know that q2 = 0.302, which is tt. TT = p2 = 0.45 x 0.45 = 0.2025. Tt is 2pq = 2 x 0.45 x 0.55 = 0.495. To check your own work, add 0.302, 0.2025, and 0.495 and these should equal 1.0 or very close to it. This type of problem may be on the exam.
12. PROBLEM #12. (You will not have this type of problem on the exam)
What allelic frequency will generate twice as many recessive homozygotes as heterozygotes? Answer: We need to solve the equation q2 (aa) = 2 x the frequency of Aa, that is, q2 = 2(2pq), or q2 = 4 x p x q. We only want q, so let's eliminate p. Since p = 1 - q, we can substitute 1 - q for p and get q2 = 4(1 - q)q. Multiplying out the right side gives q2 = 4q - 4q2. Dividing both sides through by q gives q = 4 - 4q. Adding 4q to both sides gives 5q = 4, so q = 4/5, in other words q = 0.8. I cannot imagine you getting this type of problem in this general biology course, although if you take algebra, good luck |
9a7d69f5f513f53c |
In my notes, I have the time-independent Schrödinger equation for a free particle $$\frac{\partial^2 \psi}{\partial x^2}+\frac{p^2}{\hbar^2}\psi=0\tag1$$
The solution to this is given, in my notes, as $$\psi(x)=C e^{\frac{ipx}{\hbar}}\tag2$$
Now, since (1) is a second order homogeneous equation with constant coefficients, given the coefficients we have, we get a pair of complex roots:$$r_{1,2}=\pm \frac{ip}{\hbar}\tag3$$
Thus, the most general solution looks something like:$$\psi(x)=c_1 \cos \left(\frac{px}{\hbar}\right)+c_2 \sin \left(\frac{px}{\hbar}\right)\tag4$$
However, instead of writing the solution as a cosine plus a sine, the professor seems to have taken a special case of the general solution (with $c_1=1$ and $c_2=i$) and converted the resulting $$\psi(x)=\cos \left(\frac{px}{\hbar}\right)+ i\sin \left(\frac{px}{\hbar}\right)\tag5$$ into exponential form, using $$e^{i\theta}=\cos \theta + i\sin \theta \tag6$$ to get (2).
The main question I have concerning this is: shouldn't we be going after real solutions, and ignoring the complex ones for this particular situation? According to my understanding $\Psi(x,t)$ is complex but $\psi(x)$ should be real. Thanks in advance.
The wavefunction needn't and shouldn't be real. – Mew Feb 8 '13 at 10:38
There are cases where you can get away with a real wavefunction, but the complex case is more general and fundamental. The free particle Hamiltonian $\hat{H}$ commutes with reflection $x\rightarrow -x$,$p\rightarrow -p$, so states with momenta $\pm p$ are both solutions. In equation (2) they have chosen the solution which is an eigenvalue of the momentum operator $\hat{p}$ with a plus sign $+$. The other sign is also a solution, representing a wave going in the opposite direction. Your real solution contains both left moving and right moving waves. – Michael Brown Feb 8 '13 at 11:05
If you look at the particle current $\vec{j}\propto \psi^\star \nabla \psi - \psi \nabla \psi^\star$ you'll see that real wavefunctions correspond to states where there is no net current, so you can only really expect them to turn up when you have bound states. If there is nothing to reflect a particle back the way it came then it is free to move off to infinity and the current can't vanish, so the wavefunction can't be real. – Michael Brown Feb 8 '13 at 11:10
Related: The book of Griffiths, Intro to QM, Problem 2.1b, p.24; and this Phys.SE post. – Qmechanic Feb 8 '13 at 15:54
1 Answer
There is no need for the solution $\psi(x)$ to be real. What must be real is the probability density that is "carried" by $\psi(x)$. In some loose and imprecise intuitive way, you may think about a TV image carried by electromagnetic waves. The signal that travels is not itself the image, but it carries it, and you can recover the image by decoding the signal properly.
Somewhat similarly, the complex wave function that is found by solving Schrödinger equation carries the information of "where the particle is likely to be", but in an indirect manner. The information on the probability density $P(x)$ of finding the particle is recovered from $\psi(x)$ simply by multiplying it times its complex conjugate:
$$\psi(x)^*\psi(x) = P(x)$$
that gives a real function as a result. Note that it is a density: what you compute eventually is the probability of finding the particle between $x=a$ and $x=b$ as $\int_{a}^{b} P(x) dx$
As you know, when you multiply a complex number(/function) times its complex conjugate, the information on the phase is lost:
$$\rho e^{i \theta}\rho e^{-i \theta}=\rho^{2}$$
For that reason, in some places one can (not quite correctly) read that the phase has no physical meaning (see footnote), and then you may wonder "if I eventually get real numbers, why did not they invent a theory that directly handles real functions?".
The answer is that, among other reasons, complex wave functions make life interesting because, since the Schrödinger equation is linear, the superposition principle holds for its solutions. Wave functions add, and it is in that addition where the relative phases play the most important role.
The archetypical case happens in the double slit experiment. If $\psi_{1}$ and $\psi_{2}$ are the wave functions that represent the particle coming from the hole number $1$ and $2$ respectively, the final wave function is $$\psi_{1}+\psi_{2}$$ and thus the probability density of finding the particle after it has crossed the screen with two holes is found from $$P_{1+2}= (\psi_{1}+\psi_{2})^{*}(\psi_{1}+\psi_{2}) $$
That is, you must first add the wave functions representing the individual holes to obtain the combined complex wave function, and then compute the probability density. In that addition, the phase information carried by $\psi_{1}$ and $\psi_{2}$ plays the most important role, since it gives rise to interference patterns.
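A tiny numerical illustration of that point (not from the original answer; numpy-based, with arbitrary unit amplitudes): adding amplitudes first and squaring afterwards produces the interference cross term that simply adding the probabilities would miss.

import numpy as np

delta = np.linspace(0, 2*np.pi, 5)       # relative phase between the two holes
psi1 = 1.0 + 0j                          # unit amplitude from hole 1
psi2 = np.exp(1j * delta)                # unit amplitude from hole 2, phase-shifted
P_interference = np.abs(psi1 + psi2)**2  # add amplitudes, then square
P_no_phase = np.abs(psi1)**2 + np.abs(psi2)**2
print(P_interference)                    # [4. 2. 0. 2. 4.]: fringes
print(P_no_phase)                        # [2. 2. 2. 2. 2.]: phase information lost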
Comment: Feynman is quoted as having said "One of the miseries of life is that everybody names things a little bit wrong, and so it makes everything a little harder to understand in the world than it would be if it were named differently." It is quite similar here. Every book says that the phase of the wave function has no physical meaning. That is not 100% correct, as you see.
|
6462cf2399f163b0 | MaplePrimes Announcement
Reporting from Amsterdam, it's a pleasure to share day one of the 2014 Maple T.A. User Summit. Being our first Maple T.A. User Summit, we wanted to ensure attendees were not only given the opportunity to sit in on keynote presentations from various university or college professors, high school teachers and Maplesoft staff, but also to engage in active discussions with each other on how they have implemented Maple T.A. at their institution.
Featured Posts
Last week the Physics package was presented in a talk at the Perimeter Institute for Theoretical Physics and in a combined Applied Mathematics and Physics Seminar at the University of Waterloo. The presentation at the Perimeter Institute was recorded, and it was a nice opportunity to surprise people with the recent advances in the package. The presentation follows below with its sections closed; at the end there are links to a PDF with the sections open and to the related worksheet, used to run the computations in real time during the presentation.
Generally speaking, physicists still find that computing with paper and pencil is in most cases simpler than computing on a computer algebra worksheet. On the other hand, recent developments in the Maple system implemented most of the mathematical objects and mathematics used in theoretical physics computations, and brought the notation used in the computer dramatically closer to the one used with paper and pencil, reducing the learning gap and computer-syntax distraction to a strict minimum. Accordingly, this talk presents the Physics project at Maplesoft and illustrates the resulting Physics package tackling problems in classical and quantum mechanics, general relativity and field theory. In addition to the 10 a.m. lecture, there will be a hands-on workshop at 1 p.m. in the Alice Room.
... Why computers?
We can concentrate more on the ideas instead of on the algebraic manipulations
We can extend results with ease
We can explore the mathematics surrounding a problem
We can share results in a reproducible way
Representation issues that were preventing the use of computer algebra in Physics
Notation and related mathematical methods that were missing:
coordinate free representations for vectors and vectorial differential operators,
covariant tensors distinguished from contravariant tensors,
functional differentiation, relativity differential operators and sum rule for tensor contracted (repeated) indices
Bras, Kets, projectors and all related to Dirac's notation in Quantum Mechanics
Inert representations of operations, mathematical functions, and related typesetting were missing:
inert versus active representations for mathematical operations
ability to move from inert to active representations of computations and vice versa as necessary
hand-like style for entering computations and textbook-like notation for displaying results
Key elements of the computational domain of theoretical physics were missing:
ability to handle products and derivatives involving commutative, anticommutative and noncommutative variables and functions
ability to perform computations taking into account custom-defined algebra rules of different kinds
(problem related commutator, anticommutator, bracket, etc. rules)
Vector and tensor notation in mechanics, electrodynamics and relativity
Dirac's notation in quantum mechanics
Computer algebra systems were not originally designed to work with this compact notation, with its densely packed mathematical content, active and inert representations of operations, noncommutative and customizable algebraic computational domains, and the related mathematical methods, all typically present in computations in theoretical physics.
This situation has changed. The notation and related mathematical methods are now implemented.
Tackling examples with the Physics package
Classical Mechanics
Inertia tensor for a triatomic molecule
Problem: Determine the inertia tensor of a triatomic molecule that has the form of an isosceles triangle, with two masses m[1] at the ends of the base and mass m[2] at the third vertex. The distance between the two masses m[1] is equal to a, and the height of the triangle is equal to h.
Quantum mechanics
Quantization of the energy of a particle in a magnetic field
Show that the energy of a particle in a constant magnetic field oriented along the z axis can be written as
H = ℏ*ω__c*(a^†*a + 1/2)
where a^† and a are creation and annihilation operators.
The quantum operator components of the angular momentum L satisfy [L[j], L[k]][-] = i*ε[j,k,m]*L[m]
Unitary Operators in Quantum Mechanics
(with Pascal Szriftgiser, from Laboratoire PhLAM, Université Lille 1, France)
A linear operator U is unitary if U^(-1) = U^†, in which case U*U^† = U^†*U = 1. Unitary operators are used to change the basis inside a Hilbert space, which physically means changing the point of view of the considered problem, but not the underlying physics. Examples: translations, rotations and the parity operator.
1) Eigenvalues of an unitary operator and exponential of Hermitian operators
2) Properties of unitary operators
3) Schrödinger equation and unitary transform
4) Translation operators
Classical Field Theory
The field equations for a quantum system of identical particles
Problem: derive the field equation describing the ground state of a quantum system of identical particles (bosons), that is, the Gross-Pitaevskii equation (GPE). This equation is particularly useful to describe Bose-Einstein condensates (BEC).
The field equations for the lambda*Phi^4 model
Maxwell equations departing from the 4-dimensional Action for Electrodynamics
General Relativity
Given the spacetime metric,
g[mu, nu] = (Matrix(4, 4, {(1, 1) = -exp(lambda(r)), (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = -r^2, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = -r^2*sin(theta)^2, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = exp(nu(r))}))
a) Compute the trace of
"Z[alpha]^(beta)=Phi R[alpha]^(beta)+`𝒟`[alpha]`𝒟`[]^(beta) Phi+T[alpha]^(beta)"
where `≡`(Phi, Phi(r)) is some function of the radial coordinate, R[alpha, `~beta`] is the Ricci tensor, `𝒟`[alpha] is the covariant derivative operator and T[alpha, `~beta`] is the stress-energy tensor
T[alpha, beta] = (Matrix(4, 4, {(1, 1) = 8*exp(lambda(r))*Pi, (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = 8*r^2*Pi, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = 8*r^2*sin(theta)^2*Pi, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = 8*exp(nu(r))*Pi*epsilon}))
b) Compute the components of W[alpha]^(beta) ≡ the traceless part of Z[alpha]^(beta) of item a)
c) Compute an exact solution to the nonlinear system of differential equations formed by the components of W[alpha]^(beta) obtained in b)
Background: paper from February/2013, "Withholding Potentials, Absence of Ghosts and Relationship between Minimal Dilatonic Gravity and f(R) Theories", by P. Fiziev.
c) An exact solution for the nonlinear system of differential equations formed by the components of W[alpha]^(beta)
The Physics Project
"Physics" is a software project at Maplesoft that started in 2006. The idea is to develop a computational symbolic/numeric environment specifically for Physics, targeting educational and research needs in equal footing, and resembling as much as possible the flexible style of computations used with paper and pencil. The main reference for the project is the Landau and Lifshitz Course of Theoretical Physics.
A first version of "Physics" with basic functionality appeared in 2007. Since then the package has been growing every year, including now, among other things, a searchable database of solutions to Einstein equations and a new dedicated programming language for Physics.
Since August/2013, weekly updates of the Physics package are distributed on the web, including the new developments related to our plan as well as related to people's feedback.
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
Someone asked about plotting x*y*z=1 and, while it's easy enough to handle it with implicitplot3d, it raised the question of how to get nicely constrained axes in the case that the x- or y-range is much less than the z-range.
Here's what WolframAlpha gives. (Mathematica handles it straight as a plot of the explicit z=1/(x*y), which is interesting, although I'm more interested here in axes scaling than in discontinuous 3D plots.)
Here is the result of a call to implicitplot3d with default scaling=unconstrained. The axes appear like in a cube, each of equal "length".
Here is the same plot, with scaling=constrained. This is not pretty, because the x- and y-ranges are much smaller than the z-range.
How can we control the axes scaling? Resizing the inlined plot window with the mouse just affects the window. The plot itself remains rendered in a cube. Using right-click menus to rescale just makes all axes grow or shrink together.
One unattractive approach is to force a small z-view on a plot over a much larger z-range, using a piecewise expression or a procedure that is undefined outside a specific range.

plots:-implicitplot3d( proc(x,y,z)
  if abs(z)>200 then undefined;
  else x*y*z-1; end if;
end proc,
-1..1, -1..1, -200..200, view=[-1..1,-1..1,-400..400],
style=surfacecontour, grid=[30,30,30]);
Another approach is to scale the x and y variables, scale their ranges, and then force scaled tickmark values. Here is a rough procedure to automate such a thing. The basic idea is for it to accept the same kinds of arguments as implicitplot3d does, with two extra options for scaling the x-axis relative to z, and the y-axis relative to z.
implplot3d:=proc( expr,
                  rng1::name=range, rng2::name=range, rng3::name=range,
                  {scalex::numeric:=1, scaley::numeric:=1} )
local d1, d2, dz, n1, n2, r1, r2, scx, scy;
uses plotfn=plots:-implicitplot3d;
  (n1,n2) := lhs(rng1), lhs(rng2);
  dz := rhs(rhs(rng3))-lhs(rhs(rng3));
  # scale factors that make the x- and y-extents comparable to the z-extent
  (scx,scy) := scalex*dz/(rhs(rhs(rng1))-lhs(rhs(rng1))),
               scaley*dz/(rhs(rhs(rng2))-lhs(rhs(rng2)));
  (r1,r2) := map(`*`,rhs(rng1),scx), map(`*`,rhs(rng2),scy);
  (d1,d2) := rhs(r1)-lhs(r1), rhs(r2)-lhs(r2);
  plotfn( subs([n1=n1/scx, n2=n2/scy], expr),
          n1=r1, n2=r2, rng3, _rest[],
          # reconstructed tail: relabel the scaled tickmarks with the original values
          axis[1]=[tickmarks=[seq(lhs(r1)+i*d1/4 = evalf[3]((lhs(r1)+i*d1/4)/scx), i=0..4)]],
          axis[2]=[tickmarks=[seq(lhs(r2)+i*d2/4 = evalf[3]((lhs(r2)+i*d2/4)/scy), i=0..4)]] );
end proc:
The above could be better. It could also detect user-supplied custom x- or y-tickmarks and then scale those instead of forming new ones.
Here is an example of using it,
implplot3d( x*y*z=1, x=-1..1, y=-1..1, z=-200..200, grid=[30,30,30],
style=surfacecontour, shading=xy, orientation=[-60,60,0],
scalex=1.618, scaley=1.618 );
Here is another example
implplot3d( x*y*z=1, x=-5..13, y=-11..5, z=-200..200, grid=[30,30,30],
style=surfacecontour, orientation=[-50,55,0],
scaley=0.5 );
Ideally I would like to see the GUI handle all this, with say (two or three) additional (scalar) axis scaling properties in a PLOT3D structure. Barring that, one might ask whether a post-processing routine could use plots:-transform (or friend) and also force the tickmarks. For that I believe that picking off the effective x-, y-, and z-ranges is needed. That's not too hard for the result of a single call to the plot3d command. Where it could get difficult is in handling the result of plots:-display when fed a mix of several spacecurves, 3D implicit plots, and surfaces.
Have I overlooked something much easier?
|
b7867c0fed52cbbd | Born rule
The Born rule (also called the Born law, Born's rule, or Born's law) formulated by German physicist Max Born in 1926, is a law of quantum mechanics giving the probability that a measurement on a quantum system will yield a given result. In its simplest form it states that the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point. The Born rule is one of the key principles of quantum mechanics. There have been many attempts to derive the Born rule from the other assumptions of quantum mechanics; see Geometry of Quantum Theory, by V.S. Varadarajan for a rigorous derivation.
The rule
The Born rule states that if an observable corresponding to a Hermitian operator A with discrete spectrum is measured in a system with normalized wave function |ψ⟩ (see bra–ket notation), then
• the measured result will be one of the eigenvalues λ_i of A, and
• the probability of measuring a given eigenvalue λ_i will equal ⟨ψ|P_i|ψ⟩, where P_i is the projection onto the eigenspace of A corresponding to λ_i.
(In the case where the eigenspace of A corresponding to λ_i is one-dimensional and spanned by the normalized eigenvector |λ_i⟩, P_i is equal to |λ_i⟩⟨λ_i|, so the probability ⟨ψ|P_i|ψ⟩ is equal to ⟨ψ|λ_i⟩⟨λ_i|ψ⟩. Since the complex number ⟨λ_i|ψ⟩ is known as the probability amplitude that the state vector |ψ⟩ assigns to the eigenvector |λ_i⟩, it is common to describe the Born rule as telling us that probability is equal to the amplitude-squared (really the amplitude times its own complex conjugate). Equivalently, the probability can be written as |⟨λ_i|ψ⟩|².)
In the case where the spectrum of A is not wholly discrete, the spectral theorem proves the existence of a certain projection-valued measure Q, the spectral measure of A. In this case,
• the probability that the result of the measurement lies in a measurable set M will be given by ⟨ψ|Q(M)|ψ⟩.
If we are given a wave function ψ for a single structureless particle in position space, this reduces to saying that the probability density function p(x, y, z) for a measurement of the position at time t_0 will be given by p(x, y, z) = |ψ(x, y, z, t_0)|².
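For the discrete case, here is a small numerical illustration (not from the article; the observable and state are arbitrary examples) of computing Born-rule probabilities with numpy.

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])               # an example Hermitian observable
psi = np.array([1.0, 1.0j]) / np.sqrt(2)  # a normalized state vector

eigvals, eigvecs = np.linalg.eigh(A)      # eigh is for Hermitian matrices
amplitudes = eigvecs.conj().T @ psi       # the amplitudes each eigenvector receives
probs = np.abs(amplitudes)**2             # Born rule: amplitude times its conjugate
print(eigvals, probs, probs.sum())        # the probabilities sum to 1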
The Born rule was formulated by Born in a 1926 paper.[1] In this paper, Born solved the Schrödinger equation for a scattering problem and, inspired by Einstein's work on the photoelectric effect,[2] concluded, in a footnote, that the Born rule gives the only possible interpretation of the solution. In 1954, together with Walther Bothe, Born was awarded the Nobel Prize in Physics for this and other work.[2] John von Neumann discussed the application of spectral theory to Born's rule in his 1932 book.[3]
Within the Quantum Bayesianism interpretation of quantum theory, the Born rule is seen as an extension of the standard Law of Total Probability, which takes into account the Hilbert space dimension of the physical system involved.[4] Within the so-called Hidden-Measurements Interpretation of quantum mechanics, the Born rule can be derived by averaging over all possible measurement-interactions that can take place between the quantum entity and the measuring system.[5][6] It has been claimed that Pilot Wave Theory can also statistically derive Born's law.[7] While it has been claimed that Born's law can be derived from the Many Worlds Interpretation, the existing proofs have been criticized as circular.[8]
References
1. ^ Born, Max (1926). "I.2". In Wheeler, J. A.; Zurek, W. H. Zur Quantenmechanik der Stoßvorgänge [On the quantum mechanics of collisions]. Princeton University Press (published 1983). pp. 863–867. Bibcode:1926ZPhy...37..863B. doi:10.1007/BF01397477. ISBN 0-691-08316-9.
2. ^ a b Born, Max (11 December 1954). "The statistical interpretation of quantum mechanics" (PDF). Retrieved 30 December 2016. Again an idea of Einstein's gave me the lead. He had tried to make the duality of particles - light quanta or photons - and waves comprehensible by interpreting the square of the optical wave amplitudes as probability density for the occurrence of photons. This concept could at once be carried over to the psi-function: |psi|2 ought to represent the probability density for electrons (or other particles).
3. ^ Neumann (von), John (1932). Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics]. Translated by Beyer, Robert T. Princeton University Press (published 1996). ISBN 0691028931.
4. ^ Fuchs, C. A. (2010). QBism: the Perimeter of Quantum Bayesianism.
5. ^ Aerts, D. (1986). A possible explanation for the probabilities of quantum mechanics, Journal of Mathematical Physics, 27, pp. 202-210.
6. ^ Aerts, D. and Sassoli de Bianchi, M. (2014). The extended Bloch representation of quantum mechanics and the hidden-measurement solution to the measurement problem. Annals of Physics 351, Pages 975–1025 (Open Access).
7. ^
Nanoelectronic Modeling Lecture 11: Open 1D Systems - The Transfer Matrix Method
By Gerhard Klimeck, Dragica Vasileska, Samarth Agarwal, Parijat Sengupta
Published on
The 1D time-independent Schrödinger equation can be easily solved analytically in segments of constant potential energy through the matching of the wavefunction and its derivative at every interface at which there is a potential change. The previous lectures showed the process for a single step potential change and for a single potential barrier, which consists of two interfaces. The process can be generalized to an arbitrary number of interfaces with the transfer matrix approach, which enables the cascading of interface matrices through simple matrix multiplication.
The transfer matrix approach is analytically exact, and "arbitrary" heterostructures can apparently be handled through the discretization of potential changes, which makes the approach appear quite appealing. However, it is inherently unstable for realistically extended devices that exhibit electrostatic band bending or include a large number of basis sets.
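The cascading idea can be sketched in a few lines of code. The snippet below is a minimal illustration, not part of the lecture materials: it assumes plane-wave solutions in each constant-potential segment, a free-electron mass, and illustrative barrier parameters, and it cascades one 2×2 matrix per interface to obtain the transmission probability.

```python
import numpy as np

HBAR2_2M = 3.81  # approx. hbar^2 / (2 m_e) in eV·Å^2 for a free electron

def D(k, x):
    """Matching matrix giving [psi(x); psi'(x)] from plane-wave coefficients (A, B)."""
    e = np.exp(1j * k * x)
    return np.array([[e, 1 / e], [1j * k * e, -1j * k / e]])

def transmission(E, potentials, widths):
    """Transmission through piecewise-constant potentials, one 2x2 matrix per interface.

    potentials: constant potential (eV) in each region, outer leads included.
    widths:     thickness (Å) of each interior region (len = len(potentials) - 2).
    """
    k = np.sqrt((E - np.asarray(potentials, dtype=complex)) / HBAR2_2M)  # imaginary inside barriers
    xs = np.concatenate(([0.0], np.cumsum(widths)))                      # interface positions
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):                                           # cascade via multiplication
        M = np.linalg.solve(D(k[j + 1], x), D(k[j], x)) @ M
    t = np.linalg.det(M) / M[1, 1]                                       # transmitted amplitude
    return (k[-1] / k[0]).real * abs(t) ** 2

# Illustrative single barrier: 0.3 eV high, 20 Å wide, electron incident at 0.1 eV
print(transmission(0.1, potentials=[0.0, 0.3, 0.0], widths=[20.0]))
```

For a thick or tall barrier the evanescent exponentials inside D(k, x) grow and shrink rapidly, which is exactly the numerical instability the abstract warns about.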
Cite this work
Researchers should cite this work as follows:
• Gerhard Klimeck; Dragica Vasileska; Samarth Agarwal; Parijat Sengupta (2009), "Nanoelectronic Modeling Lecture 11: Open 1D Systems - The Transfer Matrix Method,"
BibTex | EndNote
1. nanoelectronics
2. course lecture
massXpert lets users analyse/predict mass spectrometric data on (bio)polymers. Notable features include ex nihilo polymer chemistry definitions, highly sophisticated editing of polymer sequences and chemical reaction simulations.
MAUD (Material Analysis Using Diffraction) is a general diffraction/reflectivity analysis program mainly based on the Rietveld method. It has a GUI, and a wide range of features including ab initio structure solution, various optimisation algorithms, Le Bail fitting and microstructure analysis.
Maxwell consists of a suite of programs that implement the Maxwellian formalism for calculating the interaction energy between charge distributions as represented by the multipole expansion. It can also be applied to crystal lattices.
MCCCS Towhee
MCCCS Towhee is a Monte Carlo molecular simulation code which can be used for the prediction of fluid phase equilibria using a wide variety of atom-based force fields and several ensembles (Gibbs, Canonical, Isobaric-isothermal and Grand Canonical). It has also been extended for use with solid (or porous) phases.
McMaille is a program for indexing powder diffraction patterns by Monte Carlo and grid search.
MCPRO performs Monte Carlo statistical mechanics simulations of peptides, proteins, and nucleic acids in solution; it was derived from BOSS, but makes extensive use of the concept of residues. Free energy changes can be computed via FEP calculations and have been used extensively for studying protein-ligand binding.
MCTDH (Multi Configuration Time Dependent Hartree) is a general algorithm to solve the time-dependent Schrödinger equation for multidimensional dynamical systems consisting of distinguishable particles. MCTDH can thus determine the quantal motion of the nuclei of a molecular system evolving on one or several coupled electronic potential energy surfaces. MCTDH can also be used to propagate density operators and to determine eigenvalues and eigenstates of a molecular vibrational Hamiltonian.
MDTools is a collection of programs, scripts, and utilities to make various molecular dynamics tasks easier, and to provide basic code and utilities which can be built up into larger toolsets.
Mercury is a program for visualising crystal structures in three dimensions. Its features include: the ability to load hit lists from ConQuest searches, to browse the entire Cambridge Structural Database, or to read in crystal structures in various formats; the ability to rotate and translate the 3D crystal-structure display, and to view down cell axes, reciprocal cell axes, and the normals to least-squares and Miller planes; the ability to measure and display distances, angles and torsion angles by atom picking; and much more.
MeTA Studio is a programmable IDE tailored for the computational chemist. It has support for BeanShell and Jython, and integrates Jmol.
| Name | Affiliation | Day | Title | Abstract | Presentation |
|---|---|---|---|---|---|
| Benedetta Pellacci | Università Napoli Parthenope | Wednesday June 10 | Saturable Schrödinger equations and systems: Existence and related topics | Abstract.pdf | Pellacci_Porto_2015.pdf |
| Christos Sourdis | University of Turin | Friday June 12 | Analysis of an irregular boundary layer behavior for the steady state flow of a Boussinesq fluid | | Sourdis_PORTO.pdf |
| Sergio Henrique Monari Soares | Universidade de São Paulo | Friday June 12 | A sign-changing solution for an asymptotically linear Schrödinger equation | | Porto_2015.pdf |
Speed of light
From Wikiquote
[Image caption: The distance from the Sun to the Earth is shown as 150 million kilometers, an approximate average; sizes to scale. Sunlight takes about 8 minutes 17 seconds to travel the average distance from the surface of the Sun to the Earth.]
The speed of light in vacuum, c, is a universal physical constant, which, according to special relativity, is the maximum speed at which matter or information may travel. It is the speed of all massless particles and changes of the associated fields in a vacuum. Such particle/waves travel at c, regardless of the motion of the source or the inertial reference frame of the observer. The theory of relativity interrelates space and time using c, which also appears in the famous equivalence relation of mass and energy, E = mc².
Quotes are arranged alphabetically by author
A - F[edit]
• One of the problems has to do with the speed of light and the difficulties involved in trying to exceed it. You can't. Nothing travels faster than the speed of light with the possible exception of bad news, which obeys its own special laws.
• Douglas Adams, Mostly Harmless (1992)
• Carl B. Boyer, The Rainbow: From Myth to Mathematics (1959) p. 205
• One of the funniest examples of these kinds of statistics comes from Evolution: Possible or Impossible by James F. Coppedge [who] cites an article by Ulric Jelinek … which claims that the odds are 1 in 10^243 against "two thousand atoms" (the size of one particular protein molecule) ending up in precisely that particular order "by accident." Where did Jelenik get that figure? From Pierre Lecompte du Nouy... who in turn got it from Charles-Eugene Guye, a physicist who died in 1942. Guye had merely calculated the odds of these atoms lining up by accident if "a volume" of atoms the size of the Earth were "shaken at the speed of light." In other words, ignoring all the laws of chemistry, which create preferences for the formation and behavior of molecules, and ignoring that there are millions if not billions of different possible proteins--and of course the result has no bearing on the origin of life, which may have begun from an even simpler protein. This calculation is thus useless for all these reasons, and is typical in that it comes to Coppedge third-hand (and thus to us fourth-hand), and is hugely outdated (it was calculated before 1942, even before the discovery of DNA), and thus fails to account for over half a century of scientific progress.
• Hubble's law predicts that galaxies beyond ...the Hubble distance, recede faster than the speed of light ...this distance is about 14 billion light years. ...galaxies with a redshift of about 1.5—...150 percent longer than the laboratory reference value—are receding at the speed of light. Equivalently, we are receding from those galaxies...
• Tamara Davis, Charles Lineweaver, "Misconceptions About the Big Bang," Scientific American (March, 2005)
• The radiation of the cosmic microwave background... has a red shift of about 1,000. ...the hot plasma of the early universe ...was receding from our location at about 50 times the speed of light.
• A light beam that is farther than the Hubble distance of 14 billion light-years ...cannot keep up with the stretching space.
• When you are next out of doors on a summer night, turn your head towards the zenith. Almost vertically above you will be shining the brightest star of the northern skies—Vega of the Lyre, twenty-six years away at the speed of light, near enough to the point of no return for us short-lived creatures. Past this blue-white beacon, fifty times as brilliant as our sun, we may send our minds and bodies, but never our hearts.
For no man will ever turn homewards beyond Vega, to greet again those he knew and loved on Earth.
• Another topic deserving discussion is Einstein’s modification of Newton’s law of gravitation. In spite of all the excitement it created, Newton’s law of gravitation is not correct! It was modified by Einstein to take into account the theory of relativity. According to Newton, the gravitational effect is instantaneous, that is, if we were to move a mass, we would at once feel a new force because of the new position of that mass; by such means we could send signals at infinite speed. Einstein advanced arguments which suggest that we cannot send signals faster than the speed of light, so the law of gravitation must be wrong. By correcting it to take the delays into account, we have a new law, called Einstein’s law of gravitation. One feature of this new law which is quite easy to understand is this: In the Einstein relativity theory, anything which has energy has mass—mass in the sense that it is attracted gravitationally. Even light, which has an energy, has a “mass.” When a light beam, which has energy in it, comes past the sun there is an attraction on it by the sun. Thus the light does not go straight, but is deflected. During the eclipse of the sun, for example, the stars which are around the sun should appear displaced from where they would be if the sun were not there, and this has been observed.
• Richard Feynman, The Feynman Lectures on Physics, Vol. I, Ch. 7. The Theory of Gravitation
• For over 200 years the equations of motion enunciated by Newton were believed to describe nature correctly, and the first time that an error in these laws was discovered, the way to correct it was also discovered. Both the error and its correction were discovered by Einstein in 1905.
Newton’s Second Law, which we have expressed by the equation $F = \frac{d(mv)}{dt}$, was stated with the tacit assumption that m is a constant, but we now know that this is not true, and that the mass of a body increases with velocity. In Einstein’s corrected formula m has the value $m = \frac{m_0}{\sqrt{1 - v^2/c^2}}$, where the “rest mass” m₀ represents the mass of a body that is not moving and c is the speed of light, which is about 3×10⁵ km⋅sec⁻¹ or about 186,000 mi⋅sec⁻¹.
• Richard Feynman, The Feynman Lectures on Physics, Vol. I, Ch. 15. The Special Theory of Relativity
G - L[edit]
• The strangest explanation was put forth by an Irish physicist, George Francis Fitzgerald. Perhaps, he said, the ether wind puts pressure on a moving object, causing it to shrink a bit in the direction of motion. To determine the length of a moving object, its length at rest must be multiplied by the following simple formula, in which $v^2$ is the velocity of the object multiplied by itself and $c^2$ is the velocity of light multiplied by itself: $\sqrt{1 - v^2/c^2}$. ...The speed of light is an unobtainable limit; when this is reached the formula becomes $\sqrt{1 - c^2/c^2}$, which reduces to 0. ...In other words, if an object could obtain the speed of light, it would have no length at all in the direction of its motion!
• Martin Gardner, Relativity Simply Explained (1962) Ch. 3 The Special Theory of Relativity, Part I
• Martin Gardner, Relativity Simply Explained (1962) Ch. 4 The Special Theory of Relativity, Part II
• Nothing in nature or the cosmos is ever completely still — as I write this, several wild Mallards have returned to the Museum courtyard and are creating a frantic spectacle of water and wings as they dive and attack in their annual spring ritual. Further from home, a supermassive black hole at the center of a galaxy 56 million light years from Earth has recently been observed to be spinning at close to the speed of light.
• Evalyn Gates, "Letter from the director", Explore magazine, Cleveland Museum of Natural History (Spring 2013) p. 4
• The false report that measuring one of the photons immediately affects the other leads to all sorts of unfortunate conclusions. ...the alleged effect ...would violate the requirement of relativity theory that no signal... can travel faster than the speed of light. If it were to do so, it would appear to observers in some states of motion that the signal were traveling backward in time.
• [A]fter a close study of the experimental work of Michael Faraday,... James Clerk Maxwell succeeded in uniting electricity and magnetism in the framework of the electromagnetic field. ...Beyond uniting... all... electric and magnetic phenomena in one mathematical framework, Maxwell's theory showed—quite unexpectedly, that electromagnetic disturbances travel at a fixed and never-changing speed that turns out to equal that of light. From this, Maxwell realized that visible light itself is nothing but a particular kind of electromagnetic wave... Maxwell's theory also showed that all electromagnetic waves—visible light among them—are the epitome of the peripatetic traveler. They never stop. They never slow down. Light always travels at light speed.
• Brian Greene, The Elegant Universe (1999, 2003) Ch. 1 Tied Up with a String.
• The alternative physics is a physics of light. Light is composed of photons, which have no antiparticle. This means that there is no dualism in the world of light. The conventions of relativity say that time slows down as one approaches the speed of light, but if one tries to imagine the point of view of a thing made of light, one must realize that what is never mentioned is that if one moves at the speed of light there is no time whatsoever. There is an experience of time zero. … The only experience of time that one can have is of a subjective time that is created by one's own mental processes, but in relationship to the Newtonian universe there is no time whatsoever. One exists in eternity, one has become eternal. The universe is aging at a staggering rate all around one in this situation, but that is perceived as a fact of this universe—the way we perceive Newtonian physics as a fact of this universe. One has transited into the eternal mode. One is then apart from the moving image; one exists in the completion of eternity.
• Terence McKenna, "New Maps of Hyperspace" (1984) given at the Berkeley Institute for the Study of Consciousness
• But here is my mark, and there is where I'm supposed to look, and believe me, the power and the pleasure and the emotion of this moment is as constant as the speed of light. It will never be diminished, nor will my appreciation.
• Tom Hanks, 67th Academy Award Speech (1995)
• Einstein had drawn attention to nonlocality in 1935 in an effort to show that quantum mechanics must be flawed. ...Einstein proposed a thought experiment—now called the EPR experiment—involving two particles that spring from a common source and fly in opposite directions.
According to the standard model of quantum mechanics, neither particle has a definite position or momentum before it is measured; but by measuring the momentum of one particle, the physicist instantaneously forces the other particle to assume a fixed position... Deriding this effect as "spooky action at a distance," Einstein argued that it violated both common sense and his own theory of special relativity, which prohibits the propagation of effects faster than the speed of light; quantum mechanics must therefore be an incomplete theory. In 1980, however, a group of French physicists carried out a version of the EPR experiment and showed that it did indeed give rise to spooky action. (The reason that the experiment does not violate special relativity is that one cannot exploit nonlocality to transmit information.)
• What Einstein actually said was that nothing can accelerate to the speed of light because its mass would become infinite. Einstein said nothing about entities already traveling at the speed of light or faster.
• Richard Lewontin, "The Last of the Nasties?" New York Review of Books (Feb 29, 1996)
M - R[edit]
• The laws of economics are subject to the laws of physics. The physical processes that govern this planet and the continued life upon it place as stringent an upper limit on economic growth as the speed of light does on our knowledge of the universe.
• Marshall McLuhan, The Education of Mike McManus, TVOntario (28 December 1977).
• It is hard to understand how this infinitely dense singularity can evaporate into nothing. For matter inside the black hole to leak out into the universe requires that it travel faster than the speed of light.
• John Moffat, Reinventing Gravity (2008) Ch. 5, Conventional Black Holes, p. 85
• A much faster speed of light in the infant universe solved the horizon problem and therefore explained the overall smoothness of the temperatures of the CMB radiation, because light now traveled extremely quickly between all parts of the expanding but not inflating universe.
• John Moffat, Reinventing Gravity (2008) Ch. 6, Inflation And Variable Speed Of Light (VSL), p. 100
• Inflation itself proceeds at a speed faster than the measured speed of light.
• Minkowski's idea and the solution of the twin paradox can best be explained by means of an analogy between space and spacetime... Time as a fourth dimension rests vertically on the other three—just as in space the vertical juts out of the two-dimensional plane as a third dimension. Distances through spacetime comprise four dimensions, just as space has three. The more you go in one direction, the less is left for the others. When a rigid body is at rest and does not move in any of the three dimensions, all of its motion takes place on the time axis. It simply grows older. ...The faster he moves away from his frame of reference... and covers more distance in the three dimensions of space, the less of his motion through spacetime as a whole is left over for the dimension of time. ...Whatever goes into space is deducted from time. ...In comparison with the distances light travels, all distances in the dimensions of space, even those involving airplane travel, are so very small that we essentially move only along the time axis, and we age continually. Only if we are able to move away from our frame of reference very quickly, like the traveling twin... would the elapsed time shrink to near zero, as it approached the speed of light. Light itself... covers its entire distance through spacetime only in the three dimensions of space... Nothing remains for the additional dimension... the dimension of time... Because light particles do not move in time, but with time, it can be said that they do not age. For them "now" means the same thing as "forever." They always "live" in the moment. Since for all practical purposes we do not move in the dimensions of space, but are at rest in space, we move only along the time axis. This is precisely the reason we feel the passage of time. Time virtually attaches to us.
• Einstein's famous theory of relativity states that while phenomena appear different to someone close to a black hole, traveling close to the speed of light, or in a falling elevator here on earth, scientists in profoundly different environments will nevertheless always discover the same underlying laws of nature.
• Communication becomes the defining characteristic of homo sapiens; we are the species that speaks. We utter the words that create our world, and have learned to take our words and translate them into the ethereal play of zeros and ones, lay them out, at the speed of light, first on a wire, then a radio wave, and lately, on a beam of light, so that the voice, once constrained by mouth and ear, now straddles the entire planet in thirty millionths of a second, messages pinging back and forth, not unlike the meeting points of a synaptic gap, using photons as neurotransmitters, and each network router the equivalent of a synapic junction, deciding whether to activate or extinguish each message that crosses the continents, connected now in a seamless, endless web of knowledge, more than two billion pages, more than any one of us could ever read or know, the collected and collective intelligence of a species that seems to have made information the central mystery of culture, the project of civilization, and the goal of being.
• The principle of the limiting character of the velocity of light. This statement... is not an arbitrary assumption but a physical law based on experience. In making this statement, physics does not commit the fallacy of regarding absence of knowledge as evidence for knowledge to the contrary. It is not absence of knowledge of faster signals, but positive experience which has taught us that the velocity of light cannot be exceeded. For all physical processes the velocity of light has the property of an infinite velocity. In order to accelerate a body to the velocity of light, an infinite amount of energy would be required, and it is therefore physically impossible for any object to obtain this speed. This result was confirmed by measurements performed on electrons. The kinetic energy of a mass point grows more rapidly than the square of its velocity, and would become infinite for the speed of light.
S - Z[edit]
• Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space (1994) p. 53
• The idea that as I walk in this direction my watch goes slightly slower and I am contracted in the direction of motion and my mass has increased slightly does not correspond to everyday experience. ...the reason that it does not correspond to common sense is that we are not in the habit of traveling close to the speed of light. We may one day be in that habit, and then the Lorentz transformations will be natural, intuitive.
• Carl Sagan, The Varieties of Scientific Experience: A Personal View of the Search for God (2006)
• Some of the effects predicted by the theory [of loop quantum gravity] appear to be in conflict with one of the principles of Einstein's special theory of relativity... that the speed of light is a universal constant. ...Photons of higher energy travel slightly slower than low-energy photons. ...the principle of [general] relativity is preserved but Einstein's special theory of relativity requires modification. ...A photon can have an energy-dependent speed without violating the principle of [general] relativity!
• Lee Smolin, "Loop Quantum Gravity," The New Humanists: Science at the Edge (2003)
• 'New Science' must demolish the Constants, supposedly Absolute,
... for example: Speed of Light must be increased a Billionfold,
in order to achieve Extraterrestrial Communication.
• B. B. Stoller, Biologizing the Universe (1983)
• One thing leads to another, and soon you are searching for answers to basic questions.
Another time during lectures on Classical Logic, we were introduced to an “experimentum crucis”. It was illustrated by the deciding experiment of Fizeau on the speed of light in water as compared to its speed in air. Since wave theory predicts that speed in water is less, and corpuscular theory (with point particles) predicts it would be faster, this is supposed to have selected the wave theory is correct. But then how would one accommodate the photoelectric effect? Then it turns out that if the “corpuscle” of light had a finite size, corpuscular theory also predicts lower speed of light in water. But then one can ask how come photoelectric emission being prompt even in feeble light, how could the energy of a photon spread over π(λ/2)² act as a whole and liberate a single photoelectron! This leads us to question the square of the amplitude being interpreted as the probability of the particle being formed in the immediate vicinity. How do probabilities enter quantum mechanics? Thus the questions (and the quest) go on.
• George Sudarshan, A Glance Back at Five Decades of Scientific Research, published in Particles and Fields: Classical and Quantum, Journal of Physics: Conference Series 87 (2007), IOP Publishing, p. 1-2.
• Nikola Tesla, "On Light And Other High Frequency Phenomena" A lecture delivered before the Franklin Institute, Philadelphia (24 February 1893), and before the National Electric Light Association, St. Louis (1 March 1893), published in The Electrical review (9 June 1893), p. Page 683; also in The Inventions, Researches And Writings of Nikola Tesla (1894)
• Changing electric fields produce magnetic fields, and changing magnetic fields produce electric fields. Thus the fields can animate one another in turn, giving birth to self-reproducing disturbances that travel at the speed of light. Ever since Maxwell, we understand that these disturbances are what light is.
• E = mc² really applies only to isolated bodies at rest. In general, when you have moving bodies, or interacting bodies, energy and mass aren't proportional. E = mc² simply doesn't apply. ...For moving bodies, the correct mass-energy equation is
$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}},$$
where $v$ is the velocity. For a body at rest ($v = 0$), this becomes E = mc². ...we must consider the special case of particles with zero mass... examples include photons, color gluons, and gravitons. If we attempt to put $m = 0$ and $v = c$ in our general mass-energy equation, both the numerator and denominator on the right-hand side vanish, and we get the nonsensical relation E = 0/0. The correct result is that the energy of a photon can take any value. ...The energy E of a photon is proportional to the frequency f of the light it represents. ...they are related by the Planck-Einstein-Schrödinger equation E = hf, where h is Planck's constant.
• Note: when the velocity approaches the speed of light c, the denominator approaches 0, and thus E approaches infinity, unless m = 0.
• The Lightness of Being – Mass, Ether and the Unification of Forces (2008) Ch. 3, p. 19 & Appendix A
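As a quick numeric gloss on the formula quoted above (this sketch is an editorial aside, not from the book; constants are standard values), tabulating the relativistic energy of an electron at a few speeds shows the divergence the note describes:

```python
import math

m = 9.109e-31   # electron rest mass, kg
c = 2.998e8     # speed of light, m/s

for frac in (0.0, 0.5, 0.9, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - frac**2)    # 1 / sqrt(1 - v^2/c^2)
    print(f"v = {frac:5.3f} c  ->  E = {gamma * m * c**2:.3e} J")
```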
Original Research ARTICLE
Front. Phys., 11 August 2014
Quantum features of a charged particle in ionized plasma controlled by a time-dependent magnetic field
Jeong Ryeol Choi1* and Mustapha Maamache2
• 1Department of Radiologic Technology, Daegu Health College, Daegu, Republic of Korea
• 2Laboratoire de Physique Quantique et Systèmes Dynamiques, Département de Physique, Faculté des Sciences, Université Ferhat Abbas Sétif 1, Sétif, Algeria
Quantum characteristics of a charged particle traveling under the influence of an external time-dependent magnetic field in ionized plasma are investigated using the invariant operator method. The Hamiltonian that gives the radial part of the classical equation of motion for the charged particle is dependent on time. The corresponding invariant operator that satisfies the Liouville–von Neumann equation is constructed using fundamental relations. The exact radial wave functions are derived by taking advantage of the eigenstates of the invariant operator. Quantum properties of the system are studied using these wave functions. In particular, the time behavior of the radial component of the quantized energy is addressed in detail.
1. Introduction
On account of the importance of plasma and plasma physics in materials science and nuclear fusion, the dynamical characteristics of plasma have been studied with increasing intensity. Not only does plasma reveal diverse properties as it evolves, but the phenomena taking place in plasma are so complex that its behavior and reactions are very hard to control. In a static magnetic field, charged particles go round in circles when their velocity vector is perpendicular to the magnetic field lines. They instead trace a helix when they also have a velocity component parallel to the lines of the B-field. If the external magnetic field varies in time or in space, the motion of ionized particles becomes far less regular, and both its treatment and its analytical analysis become demanding.
The influence of magnetic fields on the motion of a charged particle involves the essential properties of acceleration and the transport of highly ionized particles. The analysis of classical and quantum behaviors of charged particles is important in connection with a well-known application, the confinement of magnetized plasma. Charged particles are accelerated and decelerated as they cross a magnetic lens in magneto-optical trapping of ionized plasma. Although the charged particles are trapped both in the high-field region and the low-field region, the plasma profile does not follow the naive magnetic field lines. More precisely, the radius of a circling particle in the high-field region is smaller than the one that would be obtained by simply tracking the field line from the low-field radial edge toward the high-field region [1]. Another application of an external magnetic field in ionized plasma is its use in reducing the effect of splash in the pulsed laser deposition technique in plasma surface science [2, 3].
An exact theoretical description of the quantum and classical properties of plasma may play a pivotal role in understanding the physics of plasma. Lewins studied the motion of a charged particle in a time-dependent magnetic field by considering the conservation of the magnetic moment about the circling center [4]. Stimulated by that work, we study in this paper the quantum features of a charged particle moving under a time-dependent magnetic field in plasma. As the magnetic field varies with time, a new electric field is created according to Maxwell's equations. Hence, the motion of a charged particle is more complex in a situation characterized by a varying magnetic field than by a static one.
The exact Hamiltonian for the motion of the charged particle will be constructed considering the time dependence of the magnetic field. The complete quantum solutions of the system will be derived with the help of a quadratic invariant operator, which is a powerful tool for treating quantum systems that have time-dependent parameters. The introduction of the invariant operator is the main idea that enables us to overcome the difficulty in quantizing this somewhat complicated system. According to reports of Lewis and Riesenfeld [5, 6], a Schrödinger solution ψ(r, t) of a system that has time-dependent parameters is given in terms of an eigenstate ϕ(r, t) of the invariant operator. In fact, we can obtain ψ(r, t) by multiplying ϕ(r, t) by an appropriate phase factor. The Schrödinger solution ψ(r, t) plays a major role in investigating the quantum characteristics of the system. The quantized energy of the particle will be evaluated in this work using ψ(r, t), and its time behavior will be analyzed in detail in some situations in which the time dependence of the magnetic field is chosen differently.
2. Hamiltonian Dynamics
Let us consider the non-relativistic motion of a charged particle in ionized plasma controlled by a magnetic field. The magnetic force acting on a particle that has charge q in a static magnetic field is given by F = qv × B, where v = dr/dt is the velocity of the particle. However, if the magnetic field varies with time, it produces a new electric field according to Maxwell's equation
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}.$$
Then the overall force exerted on the charged particle is
$$\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B}).$$
This gives the following Newtonian equation of motion for the particle:
$$m\,\frac{d^{2}\mathbf{r}}{dt^{2}} = q\left(\mathbf{E} + \frac{d\mathbf{r}}{dt} \times \mathbf{B}\right),$$
where m is the mass of the particle. Lewins showed that the radial part of the above equation in cylindrical coordinates (r, θ, z) becomes [see Equation (22) of Lewins [4]]
where r0 = r(0), ω(t) is a time-dependent frequency of the form
$$\omega(t) = \frac{qB(t)}{2m},$$
and K is a constant expressed as $K = \left.\frac{d\theta}{dt}\right|_{t=0} \big/\ \omega(0)$. We can see that the angular momentum of the particle is conserved in variable magnetic fields as well as in the static limit [7]. Several interesting phenomena that take place due to the presence of a magnetic field in an ionized plasma include plume confinement, particle acceleration and deceleration, dissipation of kinetic energy into thermal energy, debris mitigation, and instability of the plasma [8–10].
The difficulty of studying the quantum motion of charged particles in a "time-dependent" magnetic field has been emphasized many times in the literature [4, 11–13] because of the production of the electric field. We therefore need to deal with a time-dependent Hamiltonian system (TDHS), which is not easy to handle. There are several mathematical techniques available for the rigorous quantum treatment of TDHSs, such as the invariant operator method [5, 6], the reduction method [14], the propagator method [15], and the canonical transformation method [16]. Among them, we will use the invariant operator method, as mentioned in the introduction.
The Hamiltonian that yields the equation of motion given in Equation (4) can be written as
where p̂ = −iℏ∂/∂r and ĤHO is the Hamiltonian of a harmonic oscillator with the time-dependent frequency ω(t), represented as
$$\hat{H}_{\mathrm{HO}} = \frac{\hat{p}^{2}}{2m} + \frac{1}{2}\,m\,\omega^{2}(t)\,r^{2}.$$
While a general one-dimensional harmonic oscillator is defined over the entire region r ∈ (−∞, ∞), Equation (7) is meaningful only for positive r. In the next section, we will solve the Schrödinger equation of the system described by the Hamiltonian (6), and the quantum features of the system will be studied.
3. Theory and Results
3.1. Invariant Operator and Quantum Solutions
The Hamiltonian given in the last section is explicitly dependent on time as the magnetic field varies. Hence the system is a kind of TDHS; such systems have attracted wide interest in the physics community [5, 6, 17–26]. To derive quantum solutions of a TDHS, it is convenient to introduce an invariant operator [5, 6], because the quantum properties of such a system can be investigated via the eigenstates of the invariant operator. From the Liouville–von Neumann equation dÎ/dt = ∂Î/∂t + [Î, Ĥ]/(iℏ) = 0, it is possible to derive a quadratic invariant operator Î. Thus, considering Equation (6), we have the invariant operator as
where χ(t) is a complex classical solution of the following differential equation:
$$\ddot{\chi}(t) + \omega^{2}(t)\,\chi(t) = 0,$$
and ÎHO is the invariant operator of the system described by ĤHO [24]:
One can check, by direct differentiation of Equation (8) with respect to time, that Î does not vary with time.
Since the eigenstates of the invariant operator play a crucial role in the development of the quantum theory of TDHS, it is necessary to compute them from fundamental relations. Let us write the eigenvalue equation of the invariant operator as
We will derive the eigenstates ϕ(r, t) by evaluating this equation. The substitution of Equation (8) with Equation (10) into the above equation yields
where β(t) and Π(t) are time functions of the form
Notice that β(t) is always real. In the case χ(t) = c(t)e^{iy(t)}, where c(t) and y(t) are time-dependent real functions, we have β(t) = 2ċ(t)/c(t). On the other hand, for χ(t) = c₁e^{iy(t)} + c₂e^{−iy(t)}, where c₁ and c₂ are real constants, β(t) becomes
which is a more complicated expression. By putting r = ρ from Equation (12), we can rewrite the eigenvalue equation in the form
Now we let
where the constant s and the time function γ(t) are given by
Then, the substitution of Equation (17) in Equation (16) leads to
We easily derive the solution of this equation to be
where ₁F₁ is the confluent hypergeometric function. Thus, we have completely identified the solution ϕ(r, t) in Equation (12). After some rearrangement, the full expression of the normalized eigenstates becomes
Here, it can easily be shown that n must be a non-negative integer (n = 0, 1, 2, …), from the condition that physically allowed eigenstates cannot diverge as r grows [27]. While it is manifest that ν is independent of time, we can easily verify that the Wronskian Ω is also a time-constant real value. For convenience, we choose χ(t) in such a way that Ω is positive. This can always be done without loss of generality.
We see from Equation (23) that the eigenvalues are given by
According to the invariant operator theory of Lewis and Riesenfeld [5, 6], the wave functions ψn(r, t) that satisfy the Schrödinger equation are represented in terms of the eigenstates of the invariant operator. Hence, we can write the Schrödinger solutions in the form
where φn(t) are time-dependent phases. By inserting the above equation together with Equation (6) into the Schrödinger equation, we obtain the analytical forms of φn(t) such that
Thus, the complete radial wave functions of the system are identified. These wave functions are very useful for investigating quantum characteristics of the system. Recall that the expectation values of quantum observables are obtained via the use of wave functions.
3.2. Spectrum of Quantized Energy
We apply the quantization scheme developed previously to particular cases for a better understanding of the quantum features of the system. As an appropriate quantum observable worth investigating here, let us consider the radial part of the quantum energy. As is well known, the expectation values of the quantum energy are obtained from
$$E_n = \langle \psi_n(r, t)\,|\,\hat{H}\,|\,\psi_n(r, t) \rangle.$$
With the use of Equation (6) and the wave functions in Equation (27), we readily have
This is the general expression of the nth-order quantum energy. The time evolution of the quantum energy is determined by the form of B(t).
As an example, we choose a magnetic field that decreases with time as
$$B(t) = \frac{B_0}{(1 + kt)^{2}},$$
where B0 is the initial field and k is a positive constant which is relatively small (0 < k ≪ 1). If we put χ(t) as χ(t) = χ0(1 + kt)z(t), where χ0 is a real constant, we can confirm via the use of Equations (5), (9), and (31) that the differential equation that z(t) should obey is given by
$$(1 + kt)\,\ddot{z} + 2k\,\dot{z} + \left(\frac{qB_0}{2m}\right)^{2} \frac{z}{(1 + kt)^{3}} = 0.$$
From a direct evaluation, we see that the solution for z(t) is an exponential function of the form z(t) = e^{iqB₀/[2mk(1+kt)]}. Hence, a complex solution of Equation (9) is given by
$$\chi(t) = \chi_0\,(1 + kt)\, e^{\,iqB_0/[2mk(1+kt)]}.$$
In this case, Equation (30) becomes
While the first term is constant, the second term decreases with time. We see from the above equation that the quantum energy is independent of χ0. In general, the choice of any value for χ0 does not affect the time behavior of a quantum system [15]. The time evolution of En for this case is plotted in Figure 1 for various values of k. As the magnetic field gradually disappears with time according to Equation (31), En also decays. Figure 1 shows that En decreases more rapidly for large k. If we consider the fact that k determines the rate of decrease of the applied magnetic field, this consequence is natural and corresponds to the classical analysis.
Figure 1. Time evolution of the radial energy expectation values divided by (2n + ν + 1) with the choice of B(t) as Equation (31). The values we used are ℏ = 1, m = 1, q = 1, and B0 = 1. All these values are taken to be dimensionless for convenience.
Now, as another example, let us consider the case in which the time dependence of the external magnetic field is given by
$$B(t) = B_0\, e^{kt}.$$
In this case, the magnetic field increases exponentially with time, whereas the field in the previous case decreases. It is easy to show from a little evaluation that Equation (9) takes the form
$$\frac{d^{2}\chi}{d\tau^{2}} + \frac{1}{\tau}\,\frac{d\chi}{d\tau} + \chi = 0,$$
where τ = qB₀e^{kt}/(2mk). A complex solution of this equation is given by
$$\chi = \chi_0 \left[ J_0(\tau) + i\,N_0(\tau) \right],$$
where J0 and N0 are the zeroth-order Bessel functions of the first and second kind. We see from Figure 2 that the corresponding energy increases with time due to the amplification of the field, as expected. The rate of energy increase itself grows with time due to the exponential increase of the field.
Figure 2. Time evolution of the radial energy expectation values divided by (2n + ν + 1) with the choice of B(t) as Equation (35). The values we used are ℏ = 1, m = 1, q = 1, and B0 = 1. All these values are taken to be dimensionless for convenience.
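As a cross-check on the reconstructions above (this sketch is an editorial addition, not part of the paper), one can integrate the classical auxiliary equation χ̈ + ω²(t)χ = 0 numerically for both field profiles, assuming ω(t) = qB(t)/(2m), and verify that the Wronskian-type quantity Ω stays constant in time, as stated after Equation (23).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless parameters in the spirit of the figures: q = m = B0 = 1.
q = m = B0 = 1.0
k = 0.1

def omega(t, profile):
    # Assumed frequency omega(t) = q B(t) / (2m), reconstructed above.
    B = B0 / (1 + k * t) ** 2 if profile == "decay" else B0 * np.exp(k * t)
    return q * B / (2.0 * m)

def rhs(t, y, profile):
    # y = (Re chi, Im chi, Re chi', Im chi'); chi'' = -omega^2(t) chi
    w2 = omega(t, profile) ** 2
    return [y[2], y[3], -w2 * y[0], -w2 * y[1]]

for profile in ("decay", "growth"):
    y0 = [1.0, 0.0, 0.0, omega(0.0, profile)]   # chi(0) = 1, chi'(0) = i*omega(0)
    sol = solve_ivp(rhs, (0.0, 30.0), y0, args=(profile,), rtol=1e-10, atol=1e-12)
    chi = sol.y[0] + 1j * sol.y[1]
    dchi = sol.y[2] + 1j * sol.y[3]
    W = 2.0 * (chi.conj() * dchi).imag          # Wronskian-type invariant Omega
    print(f"{profile}: Omega stays in [{W.min():.6f}, {W.max():.6f}]")
```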
4. Conclusion
The quantum motion of a charged particle in an ionized plasma controlled by a time-dependent external magnetic field has been studied using the invariant operator method, which is applicable to TDHSs. If we consider that the time-varying magnetic field produces an electric field that acts as another source of force on the moving charged particle, the problem becomes considerably more complicated. The radial part of the equation of motion for the particle is represented in terms of a time-dependent angular frequency ω(t), as shown in Equation (4). Hence, the corresponding Hamiltonian, given in Equation (6) with Equation (7), is a kind of TDHS.
To see the quantum features of the system, the invariant operator was constructed through the method of Lewis and Riesenfeld [see Equation (8)]. This enabled us to manage the system in a relatively simple way, avoiding direct consideration of the time-dependent problem by means of a constant of motion of quadratic form. The normalized radial wave functions derived with the invariant operator are represented as Equation (27) with Equations (22) and (28). An interesting mathematical feature in this case is that the quantum solutions are expressed in terms of the complex classical solutions of Equation (9).
Considering the expression of the phases given in Equation (28), we can also define another invariant operator in the form Î′ = Î + ℏΩ/2, which seems slightly improved relative to Î. It is easy to show that the eigenvalue equation of Î′ results in Î′ϕ(r, t) = Λnϕ(r, t) with
$$\Lambda_n = \hbar\Omega\,(2n + \nu + 1).$$
Although we have used Î in order to study the quantum features of the system, Î′ may be the more natural invariant operator, since its eigenvalues are represented in terms of (2n + ν + 1), which appears in the phases of the wave functions [Equation (28)]. In either case, it is possible to derive exact quantum states by using Î or Î′.
The nth-order expectation value of the Hamiltonian is computed by taking advantage of the wave functions, as represented in Equation (30). This is the radial part of the quantized energy of the particle. To promote the understanding of our development, we considered particular cases characterized by the time-dependent magnetic fields given in Equations (31) and (35). We confirm from Figure 1 that En for the first example decreases with time as the magnetic field gradually vanishes, whereas, from Figure 2, the energy for the second example increases with time as the field grows. These consequences are consistent with the corresponding classical analyses.
All of the results in this work were obtained by treating the electromagnetic field as a classical background, without incorporating the fully quantized Yang–Mills theory. We believe that our theory is valid with high precision so long as we are interested only in the phenomenological quantum behavior of the charged particle, provided that the complex classical solutions χ and χ* of Equation (9) can be found for the given time dependence of B(t).
Conflict of Interest Statement
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No.: NRF-2013R1A1A2062907) and was supported by the International Research and Development Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology(MEST) of Korea (Grant No.: 2011-0030864).
1. Tatematsu Y, Saito T, Kiwamoto Y, Ito K, Abe H, Ishikawa M, et al. Cyclotron emission spectra from collisionless electrons resonantly heated by cyclotron waves in a magnetic mirror. Fusion Eng Design (2001) 53:229–36. doi: 10.1016/S0920-3796(00)00489-0
CrossRef Full Text
2. García T, de Posada E, Villagrán M, Ll JLS, Bartolo-Pérez P, Peña JL. Effects of an external magnetic field in pulsed laser deposition. Appl Surf Sci. (2008) 255:2200–4. doi: 10.1016/j.apsusc.2008.07.061
CrossRef Full Text
3. Jordan R, Cole D, and Lunney JG. Pulsed laser deposition of particulate-free thin films using a curved magnetic filter. Appl Surf Sci. (1997) 109–110:403–7. doi: 10.1016/S0169-4332(96)00760-X
CrossRef Full Text
4. Lewins JD. On the motion of charged particles in a varying magnetic field. Int J Eng Sci. (1995) 33:1491–505. doi: 10.1016/0020-7225(94)00124-3
CrossRef Full Text
5. Lewis HR Jr. Classical and quantum systems with time-dependent harmonic-oscillator-type Hamiltonians. Phys Rev Lett. (1967) 18:510–2. doi: 10.1103/PhysRevLett.18.510
CrossRef Full Text
6. Lewis HR Jr, Riesenfeld WB. An exact quantum theory of the time-dependent harmonic oscillator and of a charged particle in a time-dependent electromagnetic field. J Math Phys. (1969) 10:1458–73. doi: 10.1063/1.1664991
CrossRef Full Text
7. Lewins JD. Conservation of angular momentum in a varying magnetic field. Fusion Tech. (1998) 34:241–53.
8. Joshi HC, Kumar A, Singh RK, Prahlad V. Effect of a transverse magnetic field on the plume emission in laser-produced plasma: an atomic analysis. Spect Acta B Atom Spect. (2010) 65:415–9. doi: 10.1016/j.sab.2010.04.018
CrossRef Full Text
9. Harilal SS, O'Shay B, Tillack MS. Debris mitigation in a laser-produced tin plume using a magnetic field. J Appl Phys. (2005) 98:036102. doi: 10.1063/1.1999851
CrossRef Full Text
10. Harilal SS. Confinement and dynamics of laser-produced plasma expanding across a transverse magnetic field. Phys Rev E (2004) 69:026413. doi: 10.1103/PhysRevE.69.026413
Pubmed Abstract | Pubmed Full Text | CrossRef Full Text
11. Laroze D, Rivera R. An exact solution for electrons in a time-dependent magnetic field. Phys Lett A (2006) 355:348–51. doi: 10.1016/j.physleta.2006.03.002
CrossRef Full Text
12. Abdalla MS, Choi JR. Propagator for the time-dependent charged oscillator via linear and quadratic invariants. Ann Phys. (2007) 322:2795–810. doi: 10.1016/j.aop.2007.01.006
CrossRef Full Text
13. Vatansever E, Akinci U, Polat H. Time dependent magnetic field effects on the ±J Ising model. J Magn Magn Mater. (2013) 344:89–95. doi: 10.1016/j.jmmm.2013.05.025
CrossRef Full Text
14. Degli Esposti Boschi C, Ferrari L, Lewis HR. Reduction method for the linear quantum or classical oscillator with time-dependent frequency, damping, and driving. Phys Rev A (1999) 61:010101(R). doi: 10.1103/PhysRevA.61.010101
CrossRef Full Text
15. Gweon JH, Choi JR. Propagator and geometric phase of a general time-dependent harmonic oscillator. J Korean Phys Soc. (2003) 42:325–30. doi: 10.3938/jkps.42.325
CrossRef Full Text
16. Park TJ. Canonical transformations for time-dependent harmonic oscillators. Bull Korean Chem Soc. (2004) 25:285-8. doi: 10.5012/bkcs.2004.25.2.285
CrossRef Full Text
17. Dekker H. Classical and quantum mechanics of the damped harmonic oscillator. Phys Rep. (1981) 80:1–110. doi: 10.1016/0370-1573(81)90033-8
CrossRef Full Text
18. Lohe MA. Exact time dependence of solutions to the time-dependent Schrödinger equation. J Phys A Math Theor. (2009) 42:035307. doi: 10.1088/1751-8113/42/3/035307
CrossRef Full Text
19. Choi JR. Nonclassical properties of superpositions of coherent and squeezed states for electromagnetic fields in time-varying media. In: Lyagushyn S, editor. Quantum Optics and Laser Experiments. Rijeka: InTech. (2012). p. 25–48.
20. Choi JR. (ed.). Quantum unitary transformation approach for the evolution of dark energy. In: Dark Energy-Current Advances and Ideas, Kerala: Research SignPost (2009). p. 117–134.
21. Maamache M, Bekkar H. Evolution of Gaussian wave packet and nonadiabatic geometrical phase for the time-dependent singular oscillator. J Phys A Math Gen. (2003) 36:L359–64. doi: 10.1088/0305-4470/36/23/105
CrossRef Full Text
22. Choi JR, Gweon BH. Operator method for a nonconservative harmonic oscillator with and without singular perturbation. Int J Mod Phys B (2002) 16:4733-42. doi: 10.1142/S0217979202014723
CrossRef Full Text
23. Choi JR. Exact quantum state and relation between Berry's phase and Hannay's angle for general time-dependent harmonic oscillator perturbed by a singularity. Int Math J. (2003) 4:209–27.
24. Malkin IA, Man'ko VI, Trifonov DA. Coherent states and transition probabilities in a time-dependent electromagnetic field. Phys Rev D (1970) 2:1371–85. doi: 10.1103/PhysRevD.2.1371
CrossRef Full Text
25. Dodonov VV, Malkin IA, Man'ko VI. Even and odd coherent states and excitations of a singular oscillator. Physica (1974) 72:597–615. doi: 10.1016/0031-8914(74)90215-8
CrossRef Full Text
26. Trifonov DA. Exact solutions for the general nonstationary oscillator with a singular perturbation. J Phys A Math Gen. (1999) 32:3649–61. doi: 10.1088/0305-4470/32/19/314
CrossRef Full Text
27. Ikhdair SM, Sever R. Exact polynomial eigensolutions of the Schrödinger equation for the pseudoharmonic potential. J Mol Struct. (2008) 855:13–7. doi: 10.1016/j.theochem.2007.12.044
CrossRef Full Text
Keywords: ionized plasma, charged particle, quantum energy, wave function, time-dependent magnetic field
Citation: Choi JR and Maamache M (2014) Quantum features of a charged particle in ionized plasma controlled by a time-dependent magnetic field. Front. Phys. 2:45. doi: 10.3389/fphy.2014.00045
Received: 16 May 2014; Accepted: 15 July 2014;
Published online: 11 August 2014.
Edited by:
Oleg N. Kirillov, Helmholtz-Zentrum Dresden-Rossendorf, Germany
Reviewed by:
Wu-yen Chuang, National Taiwan University, Taiwan
Setsuro Fujiie, Ritsumeikan University, Japan
Nikolai Sinitsyn, Texas A&M University, USA
*Correspondence: Jeong Ryeol Choi, Department of Radiologic Technology, Daegu Health College, Yeongsongro 15, Buk-gu, Daegu 702-722, Republic of Korea e-mail:
Why Probability in Quantum Mechanics is Given by the Wave Function Squared
Born Rule: $\mathrm{Probability}(x) = |\mathrm{amplitude}(x)|^2$.
The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:
[Image: excerpt from Born's 1926 paper]
That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!
The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:
1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.
3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).
It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.
Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)
Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:
1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.
That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.
The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper:
Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics
Charles T. Sebens, Sean M. Carroll
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we give new reasons why that would be inadvisable. Applying lessons from this analysis, we demonstrate (using arguments similar to those in Zurek’s envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In particular, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
Chip is a graduate student in the philosophy department at Michigan, which is great because this work lies squarely at the boundary of physics and philosophy. (I guess it is possible.) The paper itself leans more toward the philosophical side of things; if you are a physicist who just wants the equations, we have a shorter conference proceeding.
Before explaining what we did, let me first say a bit about why there’s a puzzle at all. Let’s think about the wave function for a spin, a spin-measuring apparatus, and an environment (the rest of the world). It might initially take the form
(α[up] + β[down] ; apparatus says “ready” ; environment0). (1)
This might look a little cryptic if you’re not used to it, but it’s not too hard to grasp the gist. The first slot refers to the spin. It is in a superposition of “up” and “down.” The Greek letters α and β are the amplitudes that specify the wave function for those two possibilities. The second slot refers to the apparatus just sitting there in its ready state, and the third slot likewise refers to the environment. By the Born Rule, when we make a measurement the probability of seeing spin-up is |α|², while the probability for seeing spin-down is |β|².
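As a toy illustration (an editorial sketch, not from the original post) of what those Born-rule weights mean operationally, sampling many identically prepared spins reproduces |α|² as an empirical frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 0.6, 0.8j                 # illustrative amplitudes, |alpha|^2 + |beta|^2 = 1
p_up = abs(alpha) ** 2                  # Born-rule weight of the spin-up branch

outcomes = rng.random(100_000) < p_up   # simulate many identically prepared spins
print(f"Born rule p_up = {p_up:.2f}; empirical frequency = {outcomes.mean():.4f}")
```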
In Everettian quantum mechanics (EQM), wave functions never collapse. The one we’ve written will smoothly evolve into something that looks like this:
α([up] ; apparatus says “up” ; environment1)
+ β([down] ; apparatus says “down” ; environment2). (2)
This is an extremely simplified situation, of course, but it is meant to convey the basic appearance of two separate “worlds.” The wave function has split into branches that don’t ever talk to each other, because the two environment states are different and will stay that way. A state like this simply arises from normal Schrödinger evolution from the state we started with.
So here is the problem. After the splitting from (1) to (2), the wave function coefficients α and β just kind of go along for the ride. If you find yourself in the branch where the spin is up, your coefficient is α, but so what? How do you know what kind of coefficient is sitting outside the branch you are living on? All you know is that there was one branch and now there are two. If anything, shouldn’t we declare them to be equally likely (so-called “branch-counting”)? For that matter, in what sense are there probabilities at all? There was nothing stochastic or random about any of this process, the entire evolution was perfectly deterministic. It’s not right to say “Before the measurement, I didn’t know which branch I was going to end up on.” You know precisely that one copy of your future self will appear on each branch. Why in the world should we be talking about probabilities?
Note that the pressing question is not so much “Why is the probability given by the wave function squared, rather than the absolute value of the wave function, or the wave function to the fourth, or whatever?” as it is “Why is there a particular probability rule at all, since the theory is deterministic?” Indeed, once you accept that there should be some specific probability rule, it’s practically guaranteed to be the Born Rule. There is a result called Gleason’s Theorem, which says roughly that the Born Rule is the only consistent probability rule you can conceivably have that depends on the wave function alone. So the real question is not “Why squared?”, it’s “Whence probability?”
Of course, there are promising answers. Perhaps the most well-known is the approach developed by Deutsch and Wallace based on decision theory. There, the approach to probability is essentially operational: given the setup of Everettian quantum mechanics, how should a rational person behave, in terms of making bets and predicting experimental outcomes, etc.? They show that there is one unique answer, which is given by the Born Rule. In other words, the question “Whence probability?” is sidestepped by arguing that reasonable people in an Everettian universe will act as if there are probabilities that obey the Born Rule. Which may be good enough.
But it might not convince everyone, so there are alternatives. One of my favorites is Wojciech Zurek’s approach based on “envariance.” Rather than using words like “decision theory” and “rationality” that make physicists nervous, Zurek claims that the underlying symmetries of quantum mechanics pick out the Born Rule uniquely. It’s very pretty, and I encourage anyone who knows a little QM to have a look at Zurek’s paper. But it is subject to the criticism that it doesn’t really teach us anything that we didn’t already know from Gleason’s theorem. That is, Zurek gives us more reason to think that the Born Rule is uniquely preferred by quantum mechanics, but it doesn’t really help with the deeper question of why we should think of EQM as a theory of probabilities at all.
Here is where Chip and I try to contribute something. We use the idea of “self-locating uncertainty,” which has been much discussed in the philosophical literature, and has been applied to quantum mechanics by Lev Vaidman. Self-locating uncertainty occurs when you know that there are multiple observers in the universe who find themselves in exactly the same conditions that you are in right now — but you don’t know which one of these observers you are. That can happen in “big universe” cosmology, where it leads to the measure problem. But it automatically happens in EQM, whether you like it or not.
Think of observing the spin of a particle, as in our example above. The steps are:
1. Everything is in its starting state, before the measurement.
2. The apparatus interacts with the system to be observed and becomes entangled. (“Pre-measurement.”)
3. The apparatus becomes entangled with the environment, branching the wave function. (“Decoherence.”)
4. The observer reads off the result of the measurement from the apparatus.
The point is that in between steps 3. and 4., the wave function of the universe has branched into two, but the observer doesn’t yet know which branch they are on. There are two copies of the observer that are in identical states, even though they’re part of different “worlds.” That’s the moment of self-locating uncertainty. (The paper spells this moment out in equations, though I don’t think they add much here.)
You might say “What if I am the apparatus myself?” That is, what if I observe the outcome directly, without any intermediating macroscopic equipment? Nice try, but no dice. That’s because decoherence happens incredibly quickly. Even if you take the extreme case where you look at the spin directly with your eyeball, the time it takes the state of your eye to decohere is about 10⁻²¹ seconds, whereas the timescales associated with the signal reaching your brain are measured in tens of milliseconds. Self-locating uncertainty is inevitable in Everettian quantum mechanics. In that sense, probability is inevitable, even though the theory is deterministic — in the phase of uncertainty, we need to assign probabilities to finding ourselves on different branches.
So what do we do about it? As I mentioned, there’s been a lot of work on how to deal with self-locating uncertainty, i.e. how to apportion credences (degrees of belief) to different possible locations for yourself in a big universe. One influential paper is by Adam Elga, and comes with the charming title of “Defeating Dr. Evil With Self-Locating Belief.” (Philosophers have more fun with their titles than physicists do.) Elga argues for a principle of Indifference: if there are truly multiple copies of you in the world, you should assume equal likelihood for being any one of them. Crucially, Elga doesn’t simply assert Indifference; he actually derives it, under a simple set of assumptions that would seem to be the kind of minimal principles of reasoning any rational person should be ready to use.
But there is a problem! Naïvely, applying Indifference to quantum mechanics just leads to branch-counting — if you assign equal probability to every possible appearance of equivalent observers, and there are two branches, each branch should get equal probability. But that’s a disaster; it says we should simply ignore the amplitudes entirely, rather than using the Born Rule. This tension has caused real worry among the philosophers who think about such things.
Resolving this tension is perhaps the most useful thing Chip and I do in our paper. Rather than naïvely applying Indifference to quantum mechanics, we go back to those “simple assumptions” and try to derive Indifference from scratch. We were able to pinpoint one hidden assumption that seems quite innocent, but actually does all the heavy lifting when it comes to quantum mechanics. We call it the “Epistemic Separability Principle,” or ESP for short. Here is the informal version (see the paper for the pedantically careful formulations):
ESP: The credence one should assign to being any one of several observers having identical experiences is independent of features of the environment that aren’t affecting the observers.
That is, the probabilities you assign to things happening in your lab, whatever they may be, should be exactly the same if we tweak the universe just a bit by moving around some rocks on a planet orbiting a star in the Andromeda galaxy. ESP simply asserts that our knowledge is separable: how we talk about what happens here is independent of what is happening far away. (Our system here can still be entangled with some system far away; under unitary evolution, changing that far-away system doesn’t change the entanglement.)
The ESP is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you. It is certainly implicitly used by Elga (he assumes that credences are unchanged by some hidden person tossing a coin).
With this assumption in hand, we are able to demonstrate that Indifference does not apply to branching quantum worlds in a straightforward way. Indeed, we show that you should assign equal credences to two different branches if and only if the amplitudes for each branch are precisely equal! That’s because the proof of Indifference relies on shifting around different parts of the state of the universe and demanding that the answers to local questions not be altered; it turns out that this only works in quantum mechanics if the amplitudes are equal, which is certainly consistent with the Born Rule.
See the papers for the actual argument — it’s straightforward but a little tedious. The basic idea is that you set up a situation in which more than one quantum object is measured at the same time, and you ask what happens when you consider different objects to be “the system you will look at” versus “part of the environment.” If you want there to be a consistent way of assigning credences in all cases, you are led inevitably to equal probabilities when (and only when) the amplitudes are equal.
What if the amplitudes for the two branches are not equal? Here we can borrow some math from Zurek. (Indeed, our argument can be thought of as a love child of Vaidman and Zurek, with Elga as midwife.) In his envariance paper, Zurek shows how to start with a case of unequal amplitudes and reduce it to the case of many more branches with equal amplitudes. The number of these pseudo-branches you need is proportional to — wait for it — the square of the amplitude. Thus, you get out the full Born Rule, simply by demanding that we assign credences in situations of self-locating uncertainty in a way that is consistent with ESP.
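Here is a toy version of that counting (my sketch of the idea, not Zurek's actual construction): fine-grain each branch into equal-amplitude pseudo-branches, then apply Indifference to those.

```python
from fractions import Fraction
from math import lcm   # Python 3.9+

# Assumed branch weights |alpha|^2 and |beta|^2, written as exact fractions.
w_up, w_down = Fraction(1, 3), Fraction(2, 3)

# Fine-grain into N equal-amplitude pseudo-branches, each of weight 1/N.
N = lcm(w_up.denominator, w_down.denominator)
n_up, n_down = w_up * N, w_down * N   # pseudo-branches per outcome: 1 and 2

# Indifference over the N equal pseudo-branches reproduces the Born Rule.
print(n_up / N, n_down / N)           # 1/3 and 2/3
```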
We like this derivation in part because it treats probabilities as epistemic (statements about our knowledge of the world), not merely operational. Quantum probabilities are really credences — statements about the best degree of belief we can assign in conditions of uncertainty — rather than statements about truly stochastic dynamics or frequencies in the limit of an infinite number of outcomes. But these degrees of belief aren’t completely subjective in the conventional sense, either; there is a uniquely rational choice for how to assign them.
Working on this project has increased my own personal credence in the correctness of the Everett approach to quantum mechanics from “pretty high” to “extremely high indeed.” There are still puzzles to be worked out, no doubt, especially around the issues of exactly how and when branching happens, and how branching structures are best defined. (I’m off to a workshop next month to think about precisely these questions.) But these seem like relatively tractable technical challenges to me, rather than looming deal-breakers. EQM is an incredibly simple theory that (I can now argue in good faith) makes sense and fits the data. Now it’s just a matter of convincing the rest of the world!
96 Responses to Why Probability in Quantum Mechanics is Given by the Wave Function Squared
1. Moshe says:
Sean, perhaps this concern is beside the point, but let me rephrase my discomfort with the discrete nature of the story the MWI tells. As you explained extremely well here, the same physical system can have multiple descriptions, each appropriate for experiments done at a different length or energy scale. So suppose we look at scattering of protons and use our detector to measure some aspect of the final state. How many “worlds” do we have in the end? Do we think about all possible states of the final proton, or the vastly more complex story in terms of fluctuating quarks and gluons (which certainly interacted strongly with the measuring device)? Preferably those stories are equivalent in some sense, but I am not sure in what precise sense they are.
2. Milkshake 2 says:
Sean you are a cosmologist so you can be forgiven for taking Many-Worlds seriously.
All you have done is drop QM axiom #4 and replace it with amplitudes and ESP. But we may just as well drop other QM axioms and get other realist interpretations instead (e.g. Bohmian mechanics or GRW).
And it would be nice if you addressed the fact that our free choice of measurement leads to the set of possible outcomes, so that, in a sense, humans decide which universes are created in the splitting. But that’s odd!
3. Ken says:
I have a bit of a far-out question. In the path integral approach to QM, we see that two possibilities interfere — both contribute to the final amplitude before it is squared — only if they can converge onto the same physical state. All the ways of getting from the same state to the same state interfere. I assume that is equivalent to being entangled, but perhaps I am misunderstanding that. So the idea that two branches have split means, as I understand it, that there is no (or vanishingly small) possibility of them evolving to the same state.
Now, to throw in something I really know nothing about, I am vaguely aware that there are “bouncing universe” theories of cosmology. What I am wondering is if they involve different branches all converging on the same final state — which I imagine means non-unitary evolution, information being lost, which seems like it couldn’t be. But it seems like to be viable, these bouncing universe theories must get you back to a low-entropy post-bounce initial condition, and it’s a little hard to imagine how that could happen if that initial condition retained all the information specifying exactly which branch you were on before the bounce.
So that’s the vague understanding behind my question, but to state my question simply: in bouncing universe scenarios, do different branches converge on the same final state at the bounce? And if they do, doesn’t that mean they are entangled?
4. Sam says:
“Now it’s just a matter of convincing the rest of the world!” I would say not to worry about convincing anybody, since in an infinite number of universes you have already convinced everybody.
You just happen to be in the wrong infinity 🙂 .
5. Jerry says:
I am a layperson here also, a physician fascinated by cosmology; I have listened to all the Teaching Company courses related to physics and read all the cosmology books aimed at lay people. I have a reasonably good grasp on the wave function and collapse of the wave function for small particles, and how that leads to electron tunneling and other strange but true phenomena.
I absolutely cannot grasp the thought that there is a wave function for a complex large object. Sean used an orange in one of his lectures. Really? How about an animal? On a microscopic level the many parts of an orange or parts of an animal aren’t even close to each other. Can something larger than a molecule really have just one wave function?
I would be so appreciative if any of the physicists here could help me understand this question. Thank-you.
6. Dave Hooke says:
“a vanishingly small percentage of the 200+ comments actually addressed the point of the article”
What were the chances?
7. John says:
Jerry, I have two answers to your question. First, the simplest example of a wave function for a single particle confined to a 1-dimensional box is not just the wave function for the particle, it is the wave function for the particle confined to a box of a given size. Choose a box of a different size and you get a different wave function. The box can be a micron or a mile or lightyear in length.
But getting back to the wave function for the orange, the wave function would be the wave function for all the atoms in the orange: psi = psi(x1, x2, … xN). Then the probability of finding particle 1 near x1, particle 2 near x2, etc., is given by the magnitude of psi squared.
The wave function for the many-particle orange is a vastly more complicated object than the wave function for a single particle in a box. But even in the simple case of a lone particle in a box the wave function is not just for the particle it is for the box/particle system and the box can be enormous.
8. Jerry Salomone says:
Thank-you John-
That makes so much more sense to me than thinking of the wave function of the orange as a whole as being the same as that of an electron or proton.
Now, since the atoms inside the orange are interacting with each other- isn’t that a measurement? Isn’t that an “Observation”? Doesn’t that collapse the wave function for the orange as a whole? Even though we can’t know the position and momentum of each electron or quark in the orange, don’t we now know the position and momentum of the orange?
9. Sean Carroll says:
Moshe– The “how many worlds” question is a good one, when we think about realistic situations. (I’ve heard experts say it’s just not a well-defined question, but I haven’t completely understood the argument.) But there’s no ambiguity concerning e.g. protons vs. quarks. You just look at what is entangled with what. In an ordinary nucleon, the quarks and gluons are entangled with each other in a very particular way to make the lowest-lying state. That state just has a couple of remaining quantum numbers (spin, position) that can possibly entangle with the outside world and lead to decoherent branches.
Jerry– According to quantum mechanics of any sort, there is actually only one wave function for the entire universe. Each living being is a part of it, just like each atom or particle is.
10. UncleMonty says:
Does this analogy help Jerry: when you listen to a symphony orchestra, there’s just one soundwave detected by your ears. It’s the brain that interprets that wave as a combination of violins, flutes, horns, oboes etc., and the better your musical ear the more easily you can mentally separate the wave that came into your ear into its constituent causes. If your “ear” is especially discriminating, you can attend to individual harmonics of a single instrument–but this is all in the interpretation; the pressure wave (“soundwave”) could be decomposed into sinusoidal components in any number of ways. The analogy, then, is that there’s a single wave function for the entire universe, but we can interpret parts of it to be associated with various smaller things (like cats) and sub-things (like their whiskers) and sub-sub-things (molecules)–etc. etc.
11. Brett says:
The reliance on entanglement would work well with LQG. It would be interesting if entanglement, long ignored, was this grand mechanism responsible for dynamic interaction.
12. Zurek is definitely worth a read.
Experimentally studying the boundary of coherence and decoherence is vitally important. If anything, these studies have the practical benefit of showing how ‘macroscopic’ the entangled states we can manipulate can become.
I am curious if there are any known ways to stimulate emission of 3.55 keV photons (like those described here http://arxiv.org/abs/1402.2301)? In light of this discussion: what part of our universe can make such states if there are no known atoms (machinery) or coherent interactions with fields that generate them. Is it some complex quantum mechanical explanation, or is it more likely a semi-classical explanation for the presence of such an x-ray line?
13. Michael says:
I think that a lot of the problems that people have with the multiple worlds idea (and really to quantum mechanics in general) are related to time. Things are usually presented as distinct worlds branching or splitting to become separate worlds, (or collapse occurring) and this happens “as time goes by”. I will try to quickly put together a string of thoughts here, but certainly it won’t be very clear. Hopefully just writing something down clarifies things in my own mind a little bit. Sorry if I start rambling.
I think it is very useful to think about physics (and the nature of reality) from the basic point of view that time doesn’t flow at all, and what we usually think of as time is not really a singular uni-directional phenomenon.
Most quantum mechanics examples and experiments use microscopic particles. Time is easily shown to have no preferred direction for microscopic particles. The examples used to show this are clear only because of the small numbers of possible options in each direction of time. As soon as one direction starts to have many more possible states than the other, then you can tell which is the future and which is the past. The future just has more options than the past, that is why it is the future.
These options are actually the components of entropy. It isn’t that there is just more entropy in the future, rather, the future is simply defined by the fact that it is the direction of more options.
But what does that really mean? It probably means that in the ‘future’ direction, the universal wave function has more small scale structure. More parts of it behave independently, or have decoherence with other parts. This is the ‘splitting’ of worlds. These decoherent branches are the ‘options’ that define entropy. Without distinct options, entropy doesn’t make sense, and time doesn’t appear to flow in one direction.
Of course the ‘time flowing’ and ‘world splitting’ are just illusions resulting from our position in the system. All of the branches actually do interact with each other, it is just mostly in the direction that we call the ‘past’.
It is only because we ourselves are actually a small part of this large system that we can not see the time independence. We are constrained by the decoherent structure to only be able to observe toward the past direction. This is what makes quantum mechanics and relativity seem difficult and illogical to people.
Most of the difficult to grasp principles of quantum mechanics are made simpler if you think of time as just another direction. Take the EPR paradox, where the measurement of a state of an entangled particle “instantaneously” determines the result of a measurement separated by a distance of millions of light years (seemingly faster than the speed of light). The problem is the use of the term instantaneous. These particles can ‘move’ backward in time as easily as forward in time (just as they can ‘move’ up or down, or left or right).
If you think of ‘the outcome’ of the final measurement of one of the particles traveling backwards in time (physically with the particle), to the point where the entanglement occurred, and then forward in time with the other particle to when that one is measured (and vice versa), then there is never any action at a distance. All actions are local. This seems strange, but it is only because we can’t see the whole picture.
Quantum mechanical interactions are actually very similar to classical interactions, IF you consider time to be the same as the other spatial dimensions. The complication arises because time seems to have a property that differentiates it from the other dimensions: there are more options in one direction versus the other. However, this is also an illusion.
All of these dimensions are actually part of the same ‘thing’ which is the universal wave function. What we see as ‘THE time dimension’ is just whichever direction has the most options when seen from any particular spot. Time is defined by the entropy, which is defined by the decoherence branches of the wave function.
Time dilation and length contraction in special relativity are what you start to observe when more than one dimension starts to have a large number of options (decoherence branches). It is no longer so obvious which direction is the “time” dimension. You can then see that ‘before’ and ‘after’ are not definite things, but it is nonetheless always a consistent system because it is all one wave function.
While it isn’t totally clear how to put gravity and general relativity together with quantum mechanics, that is surely because of the issues of thinking about movement, velocity, and acceleration when time is not a distinct parameter.
Clearly some types of particles interact with other particles not only in the classical three dimensions, but rather in four dimensions, such that the time dimension for these interactions is not in exactly the same direction as the bulk of the surrounding particles. This can lead to seemingly stationary particles experiencing acceleration, such as gravity.
Applying this same kind of thought to the many worlds interpretation makes it easier to see that all of the ‘worlds’ do interact with each other, but mostly through the ‘past’ direction. The separate ‘worlds’ are only separate from our point of view. There is no need to worry about conservation of energy or mass or whatever people get hung up on. Everything is part of the same wave function, which is time independent.
14. Jim says:
Hope you will share results from the workshop on when and how branching occurs when you have time. I guess they’re technical issues to the faithful but may seem more fundamental to the agnostics.
Thanks for the great blog.
15. Mitchell Porter says:
A technical debunking of such arguments always ought to be possible. In the present case, it must have something to do with the use of these epistemic principles, “Indifference” vs “ESP”, but I still haven’t decoded it. What I want to do in this comment, is just to arm the reader with a general defense against this pernicious new trend in Many Worlds apologetics.
My suggested rule of thumb is this: if a Many Worlds theory *doesn’t* explain the Born rule by counting worlds, look upon it with suspicion, or just ignore it.
P.S. Sean cites Gleason’s theorem as a reason to think that probabilities in a quantum multiverse must come from the square of the amplitude. So please, Everett fans, why not try to come up with an exact and objective theory about how the wavefunction subdivides into worlds, that is somehow inspired by Gleason’s theorem? Rather than spreading confusion and an illusion of understanding.
16. vmarko says:
Mitchell Porter,
If there were a way to specify which part of the wavefunction is “a world”, it would be (more or less) straightforward to count how many of them are there, and use their frequencies as probabilities. Due to the separability axiom for the Hilbert space in QM, there would be at most countably infinitely many “worlds” in a given wavefunction, and the number of appearances of each particular “world” could be, well, counted.
But the main problem of MWI is that actually there is no way to specify which part of the wavefunction is “a world”. This is a serious problem of MWI, acknowledged by MWI fans (including even Sean, although he tries his best to avoid talking about it), and is called “the pointer basis problem”. It is the raison d’etre for all those additional axioms in the textbook version of QM, as compared to MWI. It lies at the core of the measurement problem and the Schrodinger cat paradox (see my previous comment).
MWI, as it stands, has no solution to this problem, and it can be resolved only by postulating additional axioms. These additional axioms will in turn kill the argument of parsimony that MWI fans are so fond of.
HTH, 🙂
17. Is there an experimental test for the MWI? Can it be falsified?
18. Stewart says:
I do not believe there is any logical pathway from Schrodinger’s equation describing a quantum system to the claim that there must be many universes in which every possible outcome of every “measurement” that has ever taken place is realized.
Instead, we need to treat Schrodinger’s equation as a model that works, not as absolute “gospel”. The Copenhagen Interpretation, as far as I’m concerned, is a model, and a very good one. And that’s all that we can ever hope to achieve in science. If we find a set of equations that accounts for observations, then we are doing good science. However, we shouldn’t overly extend those equations to a point wherein there is no logical pathway joining the two together.
For example, a typical optimization problem in first year calculus is finding the length of a rectangle that maximizes area, given certain constraints. Usually, we need to solve a quadratic equation for the length, and we get a positive solution, and a negative solution. The positive solution is the correct, physically relevant solution, and the negative solution is not physically relevant. We don’t take every mathematical solution seriously. Just because it comes out of the math, doesn’t mean it’s right. If, somehow, many worlds comes out of quantum mechanics, doesn’t mean it’s right. Our mathematics serve as useful models, and nothing more. There is no logical pathway from quadratic equation to negative length. In turn, there is no logical pathway from Schrodinger’s equation to many worlds.
19. Leo Vuyk says:
About HOW MANY multiverses, I would like to present some thoughts about how we could measure this, related to Max Tegmark’s question, “Is there a copy of you reading this article?”, inside other anti-material or material (Charge-Parity symmetric) COPY universes.
The idea:
Benjamin Libet measured the so-called electric Readiness Potential (RP) time to perform a volitional act, in the brains of his students, and the time of conscious awareness (TCA) of that act, which appeared to come 500 msec behind the RP. The “volitional act” was in principle based on the free choice to press an electric bell button. The results of this experiment still fuel an ongoing debate in broad layers of the scientific community, because the results are still (also in recent experiments) in firm contrast with the expected ideas of free will and causality. However, in this essay I propose the absurd but constructive possibility that we are not alone for decision making in a multiverse as an individual person, but we seem to be entangled, resulting in the possibility to initiate but also veto an act, which is even a base for considering, revolving, meditating, or pondering. Even Max Tegmark already suggested about the multiverse: “Is there a copy of you reading this article?” We could be instantly entangled with at least one instantly entangled anti-copy person living inside a Charge and Parity symmetric copy universe. In that case we could construct a causal explanation for Libet’s strange results. New statistical difference research on RPI and RPII of repeated Libet experiments described here could support these ideas. Wikipedia says: “Democracy is a form of government in which all eligible citizens participate equally”. Free will in a multiverse seems to be based on: all entangled copy persons, living in all CP-symmetric copy universes, have the same possibility to veto an act and participate equally.
20. Jack Maginnis says:
I was struck by the sponsorship of your upcoming workshop by the Templeton Foundation. Hope they don’t have one of their pets participating.
21. You guys might be interested in my observation that looks like a model of the particle-in-a-box problem.
22. Shmi Nux says:
A pragmatic question. Since so far MWI has no new predictions compared to the original collapse model, how can one hope to convince the adepts of other “formulations”? If I believe in objective collapse, with all the extra postulates, or in Bohmian mechanics, with its pilot wave, why should I change my mind based on logic alone, in defiance of the scientific approach, where experiment is the ultimate arbiter?
Clearly simplicity alone is not good enough, since [SU(3)xSU(2)xU(1)x 3 generations x 20+ parameters x dark matter x dark energy x GR x initial conditions] is anything but simple. This rather complicated and incomplete model won over much simpler alternatives thanks to many extensive cycles of modeling, experimenting, observing and revising. Why should EQM be different?
23. JimV says:
Let’s take the Schrodinger’s Cat case and postulate that usually the cat lives, but sometimes enough oxygen molecules tunnel their way out of the box so that the cat suffocates. Whether a specific O2 does or does not tunnel out of the box is a split, so there are many more universes in which the cat lives than those in which it dies, and the amplitudes measured over many such unethical experiments would be consistent with this.
Another case: a single electron is in an energy well. Sometimes (rarely) it bounces out of the well, but usually it does not. What if each different height it bounces is a split? Then again, there are many more universes in which it does not bounce high enough to get out than universes in which it does.
24. James Collins says:
I just watched you on YouTube Sixty Symbols talking about the “embarrassment” that there’s no consensus about “meaning” of QM, although it works perfectly. I was wondering that since we (well, you professionals) seem to agree that some deeper theory is needed that will unite GR and QM, is it possible that it’s just too early to try to fully understand QM? Didn’t it take two hundred years to begin to understand Newton’s gravity? Could this be like Einstein’s futile attempt at unification before all the fields and particles were discovered?
I imagine this probably seems very naive, and that everyone has thought of this many times!
25. Keith Allpress says:
There are three questions I needed to address, but I was unable to add them to my earlier comment about quantum recombination, where, for example, neutrons are split into separate states but then the state description is merged again into the original state.
So the first question is whether or not multiple worlds is at all reasonable, and it plainly isn’t. The whole problem with Everettian metaphysics is that it specifies a hierarchical tree ontology, whereas our friend mother nature prefers lattices. However, my denouncement of Everettianism did not actually address the justification for Born’s hypothesis, which is two additional questions.
Traditional mathematics was very poor at describing an oriented surface. Not so when bivectors are introduced. A bivector is an oriented surface and belongs to a much improved conceptual algebra for describing physics, known as “Geometric Algebra”. In particular the treatment of rotations is vastly improved, and rotational kinematics. Think particle physics, think rotational dynamics. (Think Relativity; again, think Lorentz boosts, which are rotations too.)
Multiplication of bivectors is a natural operation, and leads to measures of magnitude that are geometric, and also meaningful. The Schwarz inequalities are fundamental properties of such spaces, and from there, à la Pythagoras, you get to natural definitions of magnitude. This is the domain where conserved quantities appear. As an example, angular momentum appears as an area in orbital theory, if you recall Kepler’s law: equal areas traced in equal times. The mathematical-physical object behind it being a “rotor”.
It turns out that rotors tend to dominate all kinds of kinematical equations, which gives a direct dynamical link to the interpretation of bivector products, and their algebraic properties. You can spot that a mile away, when Planck’s constant appears, there is angular momentum along for the ride.
So there are two aspects to the Born rule. One is whether observables would take the form of a product. This has been strongly justified for entirely algebraic reasons.
The second question is whence the observations arise, which is indeed a question of interpretation.
Initially Born in fact suggested the function without the square, then suggested the square in a footnote. Schrodinger guessed at his equations too, as did others. But the question would remain the same regardless of whether it was a cube or a fourth or a half power that worked in practice. But of course it is the square that works, and so the answer lies where it always has in physics: in the investigation of the mathematics and the foundations therein that lead to it being successful as applied to the model.
Geometric algebra is new, but already it has made dramatic simplifications in the way that the physical viewpoint is expressed, and real insights have been revealed. This is obviously the way forward. I doubt very much that new age psycho-science can contribute anything useful.
The "Diamond Age"
First came the Stone Age, followed by the Bronze Age, which yielded to the Iron Age (AKA the "Industrial Age"). Enter the Diamond Age, with the ability not only to create nano-sized diamonds of high purity, but also to assemble materials atom by atom.
Important Forces
Vibration in CO2 Molecule
Important forces in nanoworld
Hydrogen bonding between molecules and silicon surface
Hydrogen bonds have about a tenth of the strength of an average covalent bond, and are being constantly broken and reformed in liquid water. If you liken the covalent bond between the oxygen and hydrogen to a stable marriage, the hydrogen bond has "just good friends" status. On the same scale, van der Waals attractions represent mere passing acquaintances!
Van der Waals (London) forces between molecules
Intermolecular attractions are attractions between one molecule and a neighboring molecule. All molecules experience intermolecular attractions, although in some cases those attractions are very weak. The attractions are electrical in nature. In a symmetrical molecule like hydrogen, there doesn't seem to be any electrical distortion to produce positive or negative parts; but that's only true on average. The electrons are mobile, and at any one instant they might find themselves towards one end of the molecule, making that end delta "-". The other end will be temporarily short of electrons and so becomes delta "+". An instant later the electrons may well have moved to the other end, reversing the polarity of the molecule. Now imagine a molecule which has a temporary polarity being approached by one which happens to be entirely non-polar just at that moment. As the right-hand molecule approaches, its electrons will tend to be attracted by the slightly positive end of the left-hand one.
This sets up an induced dipole in the approaching molecule, which is orientated in such a way that the + end of one is attracted to the - end of the other. An instant later the electrons in the left-hand molecule may well have moved to the other end. In doing so, they will repel the electrons in the right-hand one. There is no reason why this has to be restricted to two molecules. As long as the molecules are close together, this synchronized movement of the electrons can occur over huge numbers of molecules.
Quantum Mechanics
Matrix mechanics
In 1925 W. Heisenberg introduced matrix mechanics. His work was based on the correspondence principle of Bohr, which can be formulated as follows: In the limit of the quantum numbers approaching infinity the result of quantum theory should agree with that of the classical theory.
Heisenberg’s greatest contribution was the uncertainty principle, which made knowing both the position and the momentum of an electron impossible. There is a trade-off: one can be certain about its position or its momentum, but not both, which is related to the fact that the act of observing changes the system.
Wave mechanics
In 1926 E. Schrödinger introduced the equation obeyed by the de Broglie waves, and he demonstrated that the quantization conditions emerge from the solution of the eigenvalue problem for his wave equation. He applied his equation to the hydrogen atom and he found that both the quantization of angular momentum and the quantization of energy emerge from his equation. The Schrödinger equation describes the behavior of quantum particles by means of waves and thereby reconciles, in a consistent manner, the wave / particle duality.
Respecting the Forces of Nature
Those so-called "weak forces" (described above) make materials at nano-sizes behave quite differently than in our macro dimension. Combine the increased effect of thermal vibration with the so-called "weak" forces at this scale, and self-assembly becomes possible: the necessary parts are vibrated until they achieve the correct position, or the jostling of the parts breaks up the wrong positions and forces the parts to try again and again until they get it "right." Add quantum mechanics and you have a world very alien to the one we are accustomed to living in.
In my RET-NANO course at Drexel University I’m working with Patricia Reddington (Valenzuela) on research on improving the characteristics of lithium-ion batteries so that a “super” lithium battery is produced. “Super” would imply faster charge (and discharge) rates, increased energy storage, and vastly improved cycling, from the current ~5,000 to ~100,000 times. The nanotechnology approach changes not the already efficient lithium cathode, but the anode (graphite). Currently the anode material, graphite, is not a nanomaterial, and it degrades rather quickly from the stress imposed by the charging and discharging process. Another issue, a result of the intercalation of the graphite with lithium ions during charging, is the slow charge (and discharge) rate imposed by the relatively long distances ions must travel into the two-dimensional planes of the graphite anode.
• Lithium-ion batteries can handle hundreds of charge/discharge cycles.
• If you completely discharge a lithium-ion battery, it is ruined.
• There is a small chance that, if a lithium-ion battery pack fails, it will burst into flames.[i]
The study of lithium diffusion in carbon materials is of great interest in both theoretical and practical aspects. During the past two decades this process has been studied mainly in connection with the development of lithium-ion batteries. Such batteries have negative electrodes based on graphite and other carbon materials.[iii]
In Dr. Yury Gogotsi’s nanotechnology lab I am working with carbon nanotubes, filled with nano-sized (~5 nm) silicon particles using capillary action. Sonication is used to disperse the silicon particles into a solution. However, much like trying to put a camel through the eye of a needle, the original silicon particles must first be reduced in size.
This requires hydrofluoric acid (HF) and nitric acid (HNO3), sometimes diluted with deionized water (to slow the reaction enough so that there is time to stop it as the 5 nm size is attained). The first sample of silicon particles we “nano-sized” with our solutions was 0.003 grams of silicon; later we tried 0.006 grams. HF will clear off a layer of SiO2. HNO3 then oxidizes the next layer of Si, and HF then clears that layer of SiO2. One could compare this process to peeling an onion, layer by layer. Again, sonication is used to keep the silicon suspended in solution while this size reduction proceeds.
When the correctly sized silicon particles are produced, photoluminescence is clearly seen under 254 nm ultraviolet light; the color can range from red to green. To stop the reaction once the particle size is correct, methanol is added. The particles are filtered out of solution using a ceramic filter holding a polyvinylidene fluoride (PVDF) membrane filter, and washing of the silicon is continued until the pH is neutral.
Since silicon is very reactive in air (with oxygen, forming SiO2), it needed to be stabilized. For this we used 1-octadecene and ultraviolet cabinets, leading to photoinitiated hydrosilylation of the silicon; some of the 1-octadecene reacted with the four hydrogen atoms on the silicon (like carbon, silicon has a valence of four). Part of the octadecene molecule removed the hydrogen atoms; the rest, the organic C-H chain, then bonded with the silicon. The excess 1-octadecene molecules were removed in a vacuum oven at 90 degrees Celsius.
This product was then dispersed in toluene (which was not degassed and remained saturated with dissolved oxygen), capped to prevent evaporation, and again subjected to UV radiation. This time we wanted dissolved oxygen in the mixture, as the oxygen would then insert itself between the silicon and the organic chain, forming a secure double bond.
[i] How Stuff Works (accessed 7/21/2008) http://electronics.howstuffworks.com/lithium-ion-battery.htm
A simple discussion on Quantum Mechanics and "Quantum Teleportation".
Date: 6 January, 2015
Version: 1.5
By: Albert van der Sel
Status: Ready.
Some nice theories from physics... Let's explore "Quantum Teleportation"...
Let's take a look at something that's really interesting...: "Quantum Teleportation" (QT)...
Quantum Mechanics (QM) is a real workhorse in the arena of "field- and particle" physics. It works!
But countless physicists, philosophers (and who knows who else) have broken their heads on the "interpretations" of QM.
Indeed, the stochastic character of QM, and its "fuzziness" at times, is sometimes hard to "link" with "reality" (whatever that is).
Modern insights may have shed light on some effects which were formerly not well understood (decoherence replacing the "collapse", for example),
but plenty of effects remain which need much more study.
Let's try to find out what "Quantum Teleportation" actually is, and, let's try a very simple approach.
I suppose that I have to start with a general intro into QM. Otherwise, any talk about Teleportation would not make much sense.
Let's start. But I do not aim for a short note anymore. I'm afraid it will be rather lengthy (but not too long...)
QT is the "transfer" of the Quantum state of an observable of a Quantum System, like the "spin" of a qubit.
What a qubit is, will be explained later in this note, but for now, you may think of a particle like an electron,
which has a "spin" which is the superposition (sum) of two basis states ("up" and "down"), simultaneously.
So, the "state" of such a qubit "qA" in location "A", might be transferred to a qubit "qC" in location "C", which is for example 100km
remote from location "A".
Since only a "state" is transferred, QT has nothing in common with "Teleportation" of material objects.
As we will see later, this is not a "cloning" process, since the state of "qA" will be lost.
One prerequisite is the existence of a so-called non-classical "EPR" channel between entangled particles. To actually transfer the information
(bits) of the state of a qubit, a classical channel is required too (like laser, EM radiation, etc.).
So, since a classical channel is required, no "laws" from Einstein's relativity theories are challenged or broken.
As a matter of fact, you will see that QT is not that terribly "mysterious", after all. However, the "EPR" channel, up to this day, still is.
Although I consider this humble note to be "reasonable" for its content, it has to be labeled "fun stuff", since creating a (hopefully)
entertaining note is actually my main motive. But I tried to get the facts straight..., so..., I hope it all works out alright.
Chapter 1. Just a few milestones, to get to the notion of the "wavefunction".
1.1 Schrödinger's equation
1.2 Quanta.
1.3 Double slit experiment: interference patterns.
1.4 Superpositions, and wave-packets
Chapter 2. Hilbert space/Vector space. Or, the representation of a "state".
2.1 General representation of a "state vector"
2.2 Collapse of the State Vector, or Decoherence, or MWI, or something else...
2.3 Example of a 2-state vector: representation of a "qubit".
Chapter 3. Quantum Information versus Classical Information.
Chapter 4. Collapse of the statevector replaced by Decoherence.
Chapter 5. Some remarks on Commuting/Non commuting observables & Heisenberg & measurements.
5.1 Commuting and Non commuting observables.
5.2 Heisenberg uncertainty principle.
Chapter 6. Product states and Quantum Entanglement.
6.1 Simple "pure states" and simple "separable states".
6.2 Quantum Entangled states.
Chapter 7. Quantum Teleportation.
7.1 Original setup.
Chapter 1. Just a few milestones..., to get to the notion of the "wavefunction".
I was wondering about a "good" (and informative) start for a note like this for quite some time, really.
I could not find a really good one. Then I looked at the older literature on Quantum Mechanics (QM), like books from the '70s. They often start
with Schrödinger's equation, and then see how it works out on physical systems like the hydrogen atom.
More modern literature often starts out with describing Hilbert spaces and Dirac notation.
Still others start with some great examples of experimental observations from the late 1800s and early 1900s.
So, finally, I got an idea: why not just make a start using a nice 'soup' of all of those sorts of intros? Great!
Let's do it in "example" form. But, keep it short.
But please note this: if you see "math" you don't like, don't worry about it! It's really for illustration purposes only.
1.1 Schrödinger's equation
It was formulated (around 1924) by 'De Broglie', that there exists a "relation" between momentum (p) and wavelength (λ), in a "universal" way.
In fact, it's a rather simple equation (if you see it), but with rather large consequences. It's this: p = h / λ (where h is Planck's constant).
Now, "momentum", at that time, was considered to be a true 'particle-like' property, while 'wavelength' was understood to be a typical 'radiation-like' property,
which stuff like radio waves have. For example, a rolling bowling ball has quite some "momentum", and ultraviolet light has a certain "wavelength".
The formula of De Broglie is quite amazing, really. The consequence is thus, for example, that if you have a particle like an electron flying around,
you can associate a "wavelength" with it. So, what's going on here? Do we have a "matterwave" or something?
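To get a feel for the numbers, here is a tiny calculation (the example values are my own assumptions):

```python
# De Broglie's relation: wavelength = h / p = h / (m * v).
h = 6.626e-34                  # Planck's constant, in J*s

# An electron moving at about 1% of the speed of light:
m_e, v_e = 9.109e-31, 3.0e6    # mass in kg, speed in m/s
print(h / (m_e * v_e))         # ~2.4e-10 m, roughly the size of an atom

# A rolling bowling ball:
m_b, v_b = 7.0, 5.0            # assumed mass (kg) and speed (m/s)
print(h / (m_b * v_b))         # ~1.9e-35 m, immeasurably small
```

So the "matterwave" of an electron is comparable to atomic dimensions, while that of a bowling ball is far too small to ever notice.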
It didn't fully come out of the blue, of course. A few years before, scientists (most notably Planck and Einstein) independently found clues for the opposite, so to speak.
Namely, that radiation at certain experiments, seemed to behave like matter.
Planck and Einstein found in some observations that radiation exhibited a "corpuscular" character: radiation seemed to be emitted in "quanta" (discrete energy packets).
But not only that, it seemed that those quanta possessed "momentum" too, as was observed in certain experiments.
So, in a way, both sides were covered now. It was puzzling. People started to talk about the "particle-wave" duality, as if in certain circumstances
radiation behaves like matter, but in other circumstances matter behaves like radiation.
Inspired by this and other work, around 1926, Erwin Schrödinger published a mathematical equation about the evolution that a quantum system
undergoes with respect to time, when that system happens to be in some sort of force field.
This quantum system is represented by a "wave function" Ψ (r,t), which will be explained in a minute. The equation goes like this:
iℏ ∂Ψ(r,t) / ∂t = H Ψ(r,t)    (1)
((1): Note: this is a "condensed" notation. If the Hamiltonian "H" is expanded, kinetic and potential energy terms become visible.)
The "ih" can be considered to be just a constant, so we dont worry about it. What the equation really says is this:
the change of the system in time (notated by ∂Ψ(r,t)/∂t) is the effect of the Hamiltonian "H",
where the Hamiltonian "H" just stands for all forces and fields operating on the system.
It should make sense, doesn't it? A system changes in time due to the forces or fields acting on it.
Note: ∂Ψ(r,t)/∂t means that small deltas in time occur, and the corresponding change of the state Ψ(r,t) of the system is a result
of the energy acting on it..., or the Hamiltonian acting on it..., or the effects due to kinetic and potential energy...
Usually, it is supposed to be applied to microscopic systems, like for example elementary particles etc..
However, in our macroscopic world there are similarities too: if you apply a force to a ball, it starts to roll.
Now, here is the crux of Schrödinger's theorem: the solutions to the equation are wave functions like Ψ(r,t) = A·e^(i(k·r − ωt))
So, believe it or not, such solutions are thus indeed waves.
Actually, the Schrödinger equation applies to a "quantum system", which is usually perceived as a particle. Now, this too suggests that a particle is sort of
"smeared out" in space. However, such a view was never very satisfactory.
After many other publications, conferences etc., during the late twenties and thirties of the former century, most physicists gradually
adopted the view that the wave function Ψ can best be viewed as defining a probability distribution of the particle in space (via |Ψ|²).
This actually means that it is not so much viewed as a "true wave", but is better viewed as the likelihood (chance) of finding the particle
at a particular place.
For example, the solutions of Schrödinger's equation applied to the hydrogen atom show "electron orbitals", which are "distributions" of the electron
around the atomic nucleus (instead of circular orbits). Maybe you would like to "google" that further.
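As a small illustration of this "probability" view (my own sketch, using an arbitrary Gaussian wave packet):

```python
import numpy as np

# Treat |Psi|^2 as a probability density along one dimension.
x = np.linspace(-10, 10, 2001)
psi = np.exp(-x**2 / 4)                            # un-normalized Gaussian packet
psi = psi / np.sqrt(np.trapz(np.abs(psi)**2, x))   # normalize: total probability = 1

# Probability of finding the particle between x = 0 and x = 2:
mask = (x >= 0) & (x <= 2)
print(np.trapz(np.abs(psi[mask])**2, x[mask]))     # ~0.48
```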
1.2 Quanta.
Photo electric effect:
One of the many experiments that shows the quantum nature of light (or in general, ElectroMagnetic radiation), is the "Photo electric effect".
The experimental setup, is rather simple. But it's fair to say that the results of such experiments were a large stimulus for Quantum Mechanics.
These sorts of experiments were performed in the late 1800's and early 1900's, and clearly showed the "breakdown" of the classical field theories
like ElectroDynamics from Maxwell.
In short it is this: if you shine light on a sheet of metal, under certain conditions, electrons get freed from that metal.
If the frequency is too low, it will not happen. No matter how intense that light is, it will not work.
Only if the frequency is high enough will electrons be emitted. That happens even when the intensity is very low.
Such an observation is not in accordance with classical theories: if you shine light with a very high intensity (very bright), a lot
of energy is poured in, and it should expel electrons from the metal. But none are freed.
Yet if the intensity is very low, but the frequency high enough, electrons will be emitted from the metal.
It was perceived as really strange (at that time). It was not in accordance with a "continuous wave theory" like the one from Maxwell.
Around 1905, Einstein came up with a good explanation (partly based on thoughts of Planck). Instead of a "continuous wave", Einstein proposed
discrete packages, or quanta, or photons, with a precise energy E_photon = hν, where h is Planck's constant, and ν is the
frequency of the radiation.
So, if such a quantum has an energy that is equal to or higher than the "binding" energy of the outer-orbit electrons, it is able to knock one out
of the metal.
Einstein's reasoning explained why the energy of freed electrons was dependent only on the frequency of the incident light and not on its intensity.
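A small numerical sketch of that relation, E_kin = hν − W (the work function value here is an assumption, roughly that of sodium):

```python
# Photoelectric effect: kinetic energy of freed electrons is h*nu - W.
h = 6.626e-34        # Planck's constant, J*s
c = 3.0e8            # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

W = 2.3 * eV         # assumed work function, roughly that of sodium

for wavelength_nm in (650, 550, 450, 350):        # red ... ultraviolet
    nu = c / (wavelength_nm * 1e-9)               # frequency of the light
    E_kin = h * nu - W
    if E_kin > 0:
        print(f"{wavelength_nm} nm: electrons freed with {E_kin / eV:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: no electrons freed, however intense the light")
```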
I recommend downloading a small 'simulation' program (from colorado.edu), which illustrates the 'photo electric effect' brilliantly.
If you go here you can download it, and run it on your workstation.
But you need Java (since the program is a .jar file). If you have it, then use for example "sodium", and vary the frequency from red to violet,
while at each colour (frequency) also varying the "intensity" of that light.
While this example has no direct link to "wave function", it highlights another aspect of QM, namely that some entities, or properties,
come in "discrete" steps. This does not mean that QM deals with "discrete" properties or entities only. Oh no!
However, it's an example of how QM can differ from "continuous classical theories".
1.3 Double slit experiment: interference patterns.
Again, here we have a relatively simple experimental setup. However, in some cases, the results are quite puzzling.
But, generally speaking, the experiment is a strong "plus" for wave-particle duality, as we will see.
Please take a look at figure 1. The picture in the middle, looks like how Thomas Young performed his experiment, in the early nineteenth century.
He used an ordinary light source, and a screen with two narrow slits.
The light emitted will pass through those two narrow slits. Then, on a screen in the back, an "interference pattern" is shown; that is,
bands of lighter and darker regions can be observed.
At later times, these experiments were repeated, however this time using "true" particles, like electrons or neutrons.
Amazingly, a similar "interference pattern" is then observed.
Fig 1. The double slit experiment.
At this stage, we have to bypass certain angles of approach, like using Heisenberg's theorems, or viewpoints from "weak" measurements.
However, what we are going to see in a minute is already spectacular enough. But let's start with an easy explanation.
=> 1. The "classical wave" approach:
Using light, and the classical wave interpretation, the results ("interference pattern") is as expected.
When light passes the two slits, at each slit it will be "diffracted", and a spherical wavefront travels to the screen.
But keep in mind that there are two slits. Now, the waves are "sinusoidal" and have a certain fixed wavelength.
Even without any calculations whatsoever, you can imagine that those waves travel different distances before they reach the screen in the back.
Now, take a look at the right image of figure 1. In some cases, waves may "cancel out", because at a certain point at that screen,
it holds that the wave from slit 1 happens to have a max amplitude, while the wave from slit 2 is exactly the opposite. Therefore the sum
of those waves cancels out (the centre of a dark region).
At other points on the screen, it holds that the wave from slit 1 happens to have a max amplitude, while the wave from slit 2 is max too.
This will then be the case for the centre of the bright bands on the screen.
The whole effect is simply due to the fact that the waves from slit 1 and 2 have travelled different distances, where at some points
the difference will be n x λ (n an integer number), and thus they amplify (the center of the bright bands).
At other places the difference will be (n + ½) x λ, so the maxima and minima of the waves cancel out (the center of the dark bands).
And of course, there are regions where the waves partly positively or negatively add up (regions between the bright and dark bands).
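For those who like to see this numerically, below is a small Python sketch of the two-slit intensity. The wavelength, slit separation, and screen distance are illustrative assumptions, not values from the original experiment:

import numpy as np

# Two-slit interference, classical wave picture: for a point x on the screen,
# the path difference is ~ d*x/L, the phase difference is 2*pi*d*x/(lam*L),
# and the summed intensity is I ~ cos^2(phase/2).  Numbers are illustrative.
lam = 550e-9       # wavelength (m)
d   = 50e-6        # slit separation (m)
L   = 1.0          # distance from slits to screen (m)

x = np.linspace(-0.05, 0.05, 11)              # positions on the screen (m)
phase = 2 * np.pi * d * x / (lam * L)         # phase difference of the two waves
I = np.cos(phase / 2) ** 2                    # normalized intensity
for xi, Ii in zip(x, I):
    print(f"x = {xi*1000:6.1f} mm   I = {Ii:.2f}")   # ~1 = bright band, ~0 = dark band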
=> 2. Using light again, but now we use the "quanta of Einstein" approach:
Here, it gets interesting if we lower the intensity. So, suppose we have a situation where occasionally photons are emitted.
Next, we observe the screen in the back. Now, we can see the small "impacts" which at first sight, are randomly distributed on the screen.
However, as time goes by, we see that an interference pattern builds up, exactly as we saw above.
This is not so easy to explain at all. There is no way that a particular photon could interfere with another photon. No, the intensity is so low, that they
are emitted one after the other, with sufficiently large "gaps" between those emissions.
The following may seem unbelievable (at first). Many physicists say that the photons are interacting with themselves, within their own wave packets,
to produce the interference pattern.
So, you might be tempted to suppose that a quantum goes through one slit. However, the best view is that such a quantum goes through both slits simultaneously.
This is difficult to reconcile with a "matter/quantum - like" view of radiation.
But there is no good alternative to explain the "interference pattern", if we use photons which are emitted one after the other.
So, it really looks like, in some strange way, that each photon is interfering with itself.
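A toy way to mimic this "one photon at a time" buildup is to draw each impact position from the interference intensity, treated as a probability distribution. A Python sketch with the same illustrative numbers as before:

import numpy as np

# "One photon at a time": each impact is drawn from the interference
# intensity used as a probability distribution.  A few hits look random;
# many hits reproduce the banded pattern.  Sketch with illustrative numbers.
rng = np.random.default_rng(0)
lam, d, L = 550e-9, 50e-6, 1.0

x = np.linspace(-0.05, 0.05, 2000)
I = np.cos(np.pi * d * x / (lam * L)) ** 2
p = I / I.sum()                                   # intensity -> probabilities

for n in (10, 100, 10000):                        # photons emitted one after the other
    hits = rng.choice(x, size=n, p=p)             # impact positions on the screen
    counts, _ = np.histogram(hits, bins=20, range=(-0.05, 0.05))
    print(f"{n:6d} photons:", counts)             # the bands only emerge for large n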
But hold on. Now let's see what happens if we do not use light, but "true" particles like electrons.
=> 3. Using particles:
This completely resembles what we have seen in (2). So, using a very low intensity, where electrons one by one are emitted towards the slits,
after a while, an "interference pattern" is built up.
Again, we must say that an electron goes through both slits simultaneously.
Unfortunately, at this stage, my choice of words is not very "optimal". This is so since there are some "intricacies" with the "art of measurements",
like for example if we would use Heisenberg's theorems, or see what we have with commuting- and non commuting observables, and using "weak" measurements.
We have to leave that for a later moment.
However, what we discussed above, is really true, and at this point, quite amazing.
The discussion above partly reinforces the idea of "particle-wave" dualism. However, what it really shows is that we must "give up"
the idea of a photon or an electron having a definite location. The "location" of such an entity (like a photon, or electron) is not defined until it is observed.
You see that? We cannot even fully "escape" using a "probability distribution". The entity always seems to pass through both
slits. Here we get to a point where, even up to this day, physicists, philosophers, and many others, break their heads
on the results of this "seemingly" simple experiment.
Let's talk about such an electron again. It goes through both slits, so it acts like a wave? Thus, if so, you might then say
that it makes no sense to ask at which slit the electron passed through. Ok, so the electron behaves like a "wave" here.
Great stuff..., Yes?...No?...Yes!!!
If you don't fully understand it, Richard Feynman (more or less) said: Nobody understands Quantum Mechanics... And that's true.
1.4 Superpositions, and wave-packets
If you have read section 1.3, you remember the electron that passed through both slits. It sounds very strange, weird maybe, but that's the best
quantum description of the event. If you say, in this case, the electron is like a wave, then at least that makes it more understandable that the
interference patterns emerge, which can be explained by the fact that the electron interfered with itself...
But, if it interfered with itself, it just "looks" as if multiple 'waves' were 'superimposed'.
But there must be some connection with "probabilities" too.
The Schrödinger equation from section 1.1 might also help us understand the principle of "superposition".
It's just a linear partial differential equation, and there is nothing "fuzzy" about it at all.
Solutions for it, for several traditional "problems" have been worked out (like H-atom, particle in a box, a potential "well" etc...),
and it seems a nice starting point for QM.
Since the equation is linear, it means that superpositions (additions) of solutions, are solutions too.
Ask the mathematician who is nearest to you: he/she will absolutely confirm that !
So, if Ψ1 (r,t) and Ψ2 (r,t), are wavefunctions which are solutions for Schrödinger's equation, then
Ψ(r,t) = a·Ψ1(r,t) + b·Ψ2(r,t)
is a solution too (it's just math..., don't worry about it). In fact, any sum of solutions..., is a solution.
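If you like, you can let a computer algebra system confirm the linearity claim, here for the free-particle Schrödinger equation in 1D. A sympy sketch; the plane waves with the free dispersion ω = ℏk²/2m are my choice of test solutions, not something from this note:

import sympy as sp

# Check of linearity for the free-particle Schrodinger equation in 1D:
# if psi1 and psi2 are solutions, then a*psi1 + b*psi2 is a solution too.
x, t = sp.symbols('x t', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
a, b = sp.symbols('a b')
k1, k2 = sp.symbols('k1 k2', real=True)

def plane_wave(k):
    w = hbar * k**2 / (2 * m)                 # free-particle dispersion relation
    return sp.exp(sp.I * (k * x - w * t))

def residual(psi):
    # i*hbar dpsi/dt + (hbar^2 / 2m) d2psi/dx2 ; this is zero for a solution
    return sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2)

psi1, psi2 = plane_wave(k1), plane_wave(k2)
print(sp.simplify(residual(psi1)))                # 0
print(sp.simplify(residual(a*psi1 + b*psi2)))     # 0: the superposition solves it too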
Now, a solution like:
Ψ(r,t) = e^(i(k·r − ωt))
is a solution indeed, but it looks like a "flat" wavefront, which does not "normalize"!
Because, if we calculate this:
∫ALL SPACE d³r |Ψ(r,t)|² = ∞
it means this: if you would calculate, or actually "sum", the distribution of such a "flat" wavefront
over all space, you will end up with an infinite number. That can't be good. That can't describe a particle.
This is why physicists introduced the "Wave Packet". It's a superposition too, but for a free particle, the components add up
in such a way, that the packet is roughly localized, like some sort of "gaussian" distribution, with a maximum, and it quickly diminishes
at the fringes. So, the following is a good solution:

Ψ(r,t) = 1/√(2π) ∫ALL SPACE d³k g(k) e^(i(k·r − ωt))

where "g(k)" is a sort of gaussian function, which gives the wavepacket a maximum at its "center", and quickly lowers the amplitude
as you move "away" from the packet's center.
However, theoretically, even at large distances, the amplitude is not "0", but it sort of asymptotically nears "0" the further you go.
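A small numerical sketch (Python) of exactly this construction: summing plane waves with a gaussian weight g(k) produces a packet that is large near x = 0 and quickly diminishes at the fringes. The numbers are arbitrary illustrative units:

import numpy as np

# A wave packet as a sum of plane waves e^{ikx}, weighted by a gaussian g(k)
# centred at k0.  |psi|^2 is large near x = 0 and quickly diminishes at the
# fringes (without ever being exactly 0).  Arbitrary illustrative units.
k0, sigma_k = 5.0, 0.5
k = np.linspace(k0 - 5, k0 + 5, 400)
dk = k[1] - k[0]
g = np.exp(-(k - k0)**2 / (2 * sigma_k**2))       # the gaussian weight g(k)

for xi in (-20.0, -10.0, -5.0, -2.0, 0.0, 2.0, 5.0, 10.0, 20.0):
    psi = np.sum(g * np.exp(1j * k * xi)) * dk    # the superposition at position xi
    print(f"x = {xi:6.1f}   |psi|^2 = {abs(psi)**2:10.5f}")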
It's very important to realize, that such a wave packet is actually a sum of superimposed (summed up) waves.
Often, it is called "superposition of states", where each state represent a certain "probability" that the particle actually resides in.
Those waves (states) of the packet are coherent, which means you can see them as solutions to a "harmonic oscillator",
that is, sinusoidal-like waves in "shape".
That the component waves (states) of the packet are coherent, is often interpreted as meaning that they do not "spread".
However, a wave packet, in general, "naturally" spreads because it contains waves of different momenta and hence different velocities.
They all exist at the same time.
Remember the single electron that was used in the double slit experiment (1.3)? Here there was interference "with itself" meaning
interference from those superimposed waves.
Fig 2. Illustrating a Flat wave versus a wavepacket.
Let's now turn to a good way to "notate" or "describe" Quantum Systems (like a particle, photon, atom etc..), using the Dirac way
of handling stuff...
Chapter 2. Hilbert space/function space/vector space. Or, the representation of a "state".
Bohr, Bohm, Planck, Einstein, Pauli, Dirac, Heisenberg, Schrödinger, and many others, were the men who built
the original framework of QM, roughly in the period 1900-1950.
Originally, to work with QM, a "calculus" type of mathematics was used (like the partial differential equation of Schrödinger).
Somewhat later (in that same period), a "vector/matrix" type of approach to QM was introduced. In many ways, that was largely thanks to Dirac.
This indeed made operations in QM much more uniform, and way better to understand (I think).
However, the new work/discoveries from, say, 1950 up to now (2015), were enormous. Also in the field of Interpretations of QM.
For about the latter: we certainly will see some stuff about "Collapse of the state vector", "Decoherence", "Many World Interpretation" (MWI),
and the "no-nonsense" interpretation (as some new physicists apparently seem to view QM lately).
Now, here is a little about Hilbert space/function space/vector space analysis.
Essentially, it's a "vector type of calculus". Of course, if you dive into the professional literature, you will learn about
formal definitions of Hilbert spaces, complex spaces, conjugates, bras & kets, all of them with lots of theorems and corollaries (and proofs thereof).
But we keep it simple ! We only want to crack Quantum Teleportation, and to understand what it is, and therefore, we only
need the info to get there.
What is a "ket" (Dirac) or "vector" anyway? It's often defined as an entity with both a magnitude and direction. However, visualizing a vector
in 2 dimensional space (a plane), or 3 dimensional space, is really easy.
Take a look at figure 3, where we see a picture of a point in space (1,2,3), and a vector "pointing", from the origin "O", to that point.
Fig 3. A vector in R3 (3D space).
In figure 3, you see the vector (1,2,3) "going" from the origin "O" to the point (1,2,3) in space.
Most interesting is the fact, that we can say that to reach (1,2,3), we need to do "1 unit step" in the x direction, then "2 unit steps" in the y direction,
followed by "3 unit steps" in the z direction.
With a little imagination, you see the "unit vectors" (1,0,0), (0,1,0) and (0,0,1) positioned along the x, y, and z directions, respectively.
So, the vector (1,2,3) is equal to:
(1,2,3) = 1 x (1,0,0) + 2 x (0,1,0) + 3 x (0,0,1).
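As a tiny check in Python, just to mirror figure 3:

import numpy as np

# The decomposition of figure 3, literally: 1 step along x, 2 along y, 3 along z.
ex, ey, ez = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])
v = 1 * ex + 2 * ey + 3 * ez
print(v)        # [1 2 3]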
2.1 General representation of a "state vector"
Remember the "superpostion of waves", or "superpostion of states" from section 1.4?
Indeed, the quantum system (like a particle) is in een superpostion of such basis states.
Vector calculus provides a natural way to describe a quantum system in that way. Note that we now go to one of the "true" hearts
of the original (1900-1950) formulation of Quantum Mechanics!
So, if we let the states that all together form the superposition, be represented by basis vectors, then we can represent the Wavefunction,
or statevector, of a quantum system, in the following way:
|Ψ> = c1|a1> + c2|a2> + ... + cn|an> = ∑ ci|ai>

where |a1>, |a2> (etc..) are n (orthogonal) eigen- or basis vectors (n=2,3,4,5...)
Instead of fully writing down all unit vectors of the superposition, often the Greek ∑ "summation" symbol is used, to denote exactly that.
Those basis vectors are often called "eigen states", or "pure states".
So, a quantum system then, written in vector notation (the State vector), generally is a superposition of those "eigen states".
This is truly a core concept in the framework of Quantum Mechanics. Another one is the "measurement problem", which I will illustrate
using an example of what we have learned in this section, namely using the description of a "qubit". But, first something "special"....
From the former section, we know how to represent a State vector as a superposition of "eigen states" (unit vectors).
But, at a measurement, "something" happens.
As we talk about pure quantum systems here, if we do not observe it, we actually do not know the state of that system,
only that it is in a superposition of (possibly) many states at the same time.
Remember the electron we used with the double slit experiment of section 1.3? It seemed that it passed both slits.
We even had to give up our notion of "clear path" or "clear trajectory" here.
Now, the following might not strike you hard, initially, but it's really something!
Suppose I place a small detector at one of the slits, say slit 1. Maybe I see a reading from my detector, meaning I have detected
the electron.
But now: there won't ever be an interference pattern on the screen in the back.
Since I have located (or measured) the position of the electron, the "superposition" of all possible locations of the wavepacket
suddenly collapsed into a specific location, one of the possible eigenstates of that observable.
There are many other examples. The next one is a bit "blown up", but I want to make a point clear, even with an exaggerated example.
Suppose we send out a photon. We should regard it (for now) as a spherical wave, since we do not know anything of its location.
Now, somewhere in space, I have located a detector. If I find a reading, meaning I found (detected) the photon,
all possible locations "collapsed" into that single point.
As a more quantitative and realistic example:
Suppose I have the following 2 state quantum system:
|Ψ > = a . |a> + b . |b>
The system is in a superposition (or linear combination) of the eigen vectors |a> and |b>.
The coefficients "a" and "b" should determine the actual state of the system. However... are we really allowed to talk that way?
QM tells us that we only know that |Ψ > is in a superposition of the eigen vectors |a> and |b>.
And we have not observed, or measured, anything yet!
Now, just as was the case with the electron, if we perform a measurement on the system, then we always find
the system to be in:
state |a> or in state |b>.
This might be perceived as quite weird.
Actually, some folks formulated it this way: our measurement "destroyed" the former quantum system. It (the quantum system) is something else now.
This is very close to the famous "measurement problem" in QM. It seems that our measurement was quite "disturbing" to the system.
Note: many people also call it a "strong measurement", or "perturbative measurement".
Did you notice the "collapse of the State vector"? From |Ψ> = a·|a> + b·|b>, the system collapsed into either |a> or |b>.
The stuff clearly shows us that QM is probabilistic in nature.
Moreover, experiments have shown that those coefficients (a and b) relate to the probability of finding the system in state |a> or in state |b>.
Now, the probability of finding the state (after measurement) to be in |a> or in |b>, must add up to "1" (or 100%).
Of course, we can only find |a> or |b>, so the total chance, added up, must be 100%.
However, the chance to find |a>, or to find |b>, is individually less than 1. It could be (for example) 30% and 70% respectively.
This can only be effectively determined by doing many experiments and simply counting how often you have found |a> or |b>.
The only thing we really can say is this: all probabilities added, must be "1" (or 100%), and mathematically this equates to:
|a|² + |b|² = 1 (or 100%)
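You can mimic this counting experiment in a few lines of Python. The 30%/70% split is just the example mentioned above, and the "measurement" is simulated as a random draw with those probabilities:

import numpy as np

# Simulated measurements on |psi> = a|a> + b|b>: every single run yields
# |a> OR |b>; only the frequencies over many runs reveal |a|^2 and |b|^2.
rng = np.random.default_rng(1)
a, b = np.sqrt(0.3), np.sqrt(0.7)             # so |a|^2 = 0.3 and |b|^2 = 0.7
outcomes = rng.choice(["|a>", "|b>"], size=10000, p=[abs(a)**2, abs(b)**2])
print("fraction |a>:", np.mean(outcomes == "|a>"))    # ~0.30
print("fraction |b>:", np.mean(outcomes == "|b>"))    # ~0.70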
The interpretation of this phenomenon has long been a true issue for physicists and other scientists, like philosophers.
It is still not fully understood; however, the so-called "Decoherence theory" provided us with a somewhat simpler way to digest it.
I will touch on "Decoherence" a bit later on.
What we have seen here is often called "The Copenhagen Interpretation", and it's great if you would google somewhat further on this.
It should be clear, that once a quantum system "collapsed" (or decohered) into an eigen vector, and you would immediately perform
a measurement again, then you will find that same eigen vector again. At least, that would be obvious from the theory presented thus far.
Maybe all this stuff was a bit "abstract", so let's take a look at a real world "2 state" quantum system in the next section: a qubit.
We already have seen the statevector of a two state system. This is also often called a "qubit", as shorthand for "quantum bit",
since "qubits" are used in the new technique of "Quantum Computing".
Here is a general representation:
|Ψ > = a1 |a> + a2 |b>
where |a> and |b> are of course the eigen states (base unit vectors), and a1 and a2 are just numbers (the coefficients).
Now, since qubits are used in quantum computing, the |a> and |b> vectors are often "rewritten" as |0> and |1>.
There are two reasons for that.
One is to emphasize the "computing" element, since in traditional computing, ordinary "bits" (0,1) are the most fundamental units, of course.
Secondly, often particles with "spin" (we will come to that in a minute) are used, as the physical entities, and such a spin
can be "up" or "down", which is often expressed as |0> and |1>, or sometimes also as |↑> and |↓>
There is of course an enormous difference between classical bits (used in regular computing) and "qubits".
- A regular bit can only be "0" or "1".
- A quantum bit, or qubit, is a superposition of |0> and |1>, and those "span", in principle, an infinite number of resultant states.
You might still be amazed. Well, the different combinations, which two basis vectors can "span" (with coefficients where |a1|² + |a2|² = 1 ),
can be visualized as a circle. Please take a look at figure 4.
Fig 4. Ψ as all possible combinations of |0> and |1>: the Bloch sphere.
In figure 4, you see the two example Ψ Statevectors, in green. But these are just two examples. Any combination of the basis vectors |0> and |1>,
can "span" a Ψ statevector. That is, a1 |a> + a2 |b> will define a "circle" of possible eindpoints of the statevector.
Just keep in mind that a1 and a2 are numbers where the real part is < 1, that is, it needs to hold that |a1|2 + |a2|2 = 1.
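A tiny Python sketch of that "circle" of statevectors, using the parametrization a1 = cos(θ), a2 = sin(θ), which is an illustrative choice that automatically satisfies |a1|² + |a2|² = 1:

import numpy as np

# The "circle" of qubit statevectors: a1 = cos(theta), a2 = sin(theta)
# guarantees a1^2 + a2^2 = 1 for every theta.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for theta in np.linspace(0, np.pi / 2, 5):
    psi = np.cos(theta) * ket0 + np.sin(theta) * ket1
    norm = np.sum(np.abs(psi)**2)
    print(f"theta = {theta:.2f}   psi = {psi.round(3)}   norm = {norm:.1f}")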
Since we have two independent eigen vectors (|0> and |1>), it automatically defines this vectorspace as 2 dimensional (or a plane).
So, many folks visualize state |0> as the unit vector (1,0) (along the x-axis), and |1> as the unit vector (0,1) (along the y-axis).
In accordance with the theory of section 2.2, if you would perform a "measurement" on the qubit, you will find |0> or |1>
since the State vector will "collapse" (or "decohere") into one of those states.
=> (1) What physical system can be a Qubit?:
Researchers might consider any system or entity that could potentially work as a qubit, like for example molecules, atoms, ions, photons,
electrons, or any other particle or system. Thus, in principle, any system having an observable quantity which has at least two
eigen states, could be a "candidate" for a qubit. Again, unmeasured, the system would be in a superposition of those states.
Often, a spin 1/2 particle, is a good choice, like a "trapped" electron (quantum dot).
The big "enemy" of researchers is decoherence with the "environment", since that will alter the state of the qubit.
=> (2) Limitations:
As noted before, this is not a "book" on Quantum Mechanics. Lots of stuff is omitted in this note, and the concepts that are shown,
are simplified to a high degree. For example, some "States" cannot be represented by a "ket" or Statevector.
Chapter 3. "Quantum Information" versus "Classical Information".
We are now ready to compare some aspects of Quantum Information, to "Classical" information.
To narrow this down a bit, let's compare information of qubits to "classical bits".
Maybe not all of the items below will be immediately clear, but don't worry about it.
If you would consider a special case of Classical Information, like for example "bits" in a digital computer memory,
then some obvious observations can be made.
For example, the state of a computer register, is completely "known" at a certain moment. It might contain a bitstring like "1001 1101",
and this can be read, copied to tape, or disk, or another computer, or multiple disks etc..
=> (1) For Classical information, it is generally true that:
• you can read a register, or memory location, without altering it. Its "state" will not be altered.
• you can "clone" classical bits without any problem.
• you can "broadcast" classical bits to many destinations without any problem.
• If "power keeps on" and futher you do nothing at all, a memory location keeps that same bit string for years.
• if you don't know the state (content) of a register (or memory location) you can still copy it.
=> (2) For Quantum information, it is generally true that:
• You don't know the "value" of a qubit. It's "unknown". You only know it can be represented by a superposition of states.
• If try to "read" it, you interact with it, which "alters" it state.
• You cannot "copy" a qubit, since you cannot read or measure it precisely. It's a superposition, unless you "collaps" or "project" it.
• In general, if you "measure" a superposition of states, you get different outcomes all the time. At best you can get (over time) an expectation value.
• In general, if you "measure" a Quantum System, you get "a value "which you might consider as "classical" but it does not represent the former (fuzzy) Quantum state. That is lost.
• Quantum Information cannot be fully converted precisely to classical information, and the other way around.
How about that? Would you (largely) agree with the listings above?
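As a miniature illustration of the contrast in the listings above (a sketch only; the "measurement" is simulated as a random draw with the Born-rule probabilities):

import numpy as np

# Classical bits: reading and copying changes nothing.
register = "10011101"
copy1, copy2 = register, register             # exact clones; original untouched
print(register, copy1, copy2)

# A qubit a|0> + b|1>: "reading" it yields a single collapsed outcome, and
# the coefficients (a, b) of the superposition cannot be recovered from it.
rng = np.random.default_rng(2)
a, b = 0.6, 0.8                               # |a|^2 + |b|^2 = 1
readout = rng.choice([0, 1], p=[a**2, b**2])  # the "measurement"
print("qubit readout:", readout, "(the pair (a, b) is lost)")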
Before we go to "Teleportation", we need need some basic information about a fantastic quantum phenomenon: Entanglement.
Then, we need just a tiny bit more knowledge of vector calculus or "ket" algbra.
Then, in chapter 7, we will finally deal with the true subject of this note: Quantum Teleportation.
Chapter 4. "Collapse of the state vector" replaced by "Decoherence".
Next, we spend a few words on a couple of terms that were mentioned before: "Collapse of the State vector (or wavefunction)" and "Decoherence".
=> Collapse of the Statevector / Copenhagen Interpretation:
In chapter 2, we have seen a few examples of the so-called "collapse of the Statevector". If an actual measurement on a quantum system is done,
out of many possible results, just one is "selected" in "some magical way".
An observable, initially in a superposition of different eigenstates, appears to reduce to a single value of the states,
after interaction with an observer (that is: if it's being measured).
This idea was much debated in the late 20s and 30s of the last century. Eventually the idea evolved into the so-called "Copenhagen Interpretation",
and while you could not say that it had true "advocates", it's a fact that many just accepted it as a "workable solution" for the Theory and observations.
(You may note that I completely bypass any thoughts about Hidden Variables, as an alternative for the collapse).
=> Decoherence:
In the late 70s, 80s and early 90s of the last century, research got a renewed momentum to find a good "internal explanation" for the presumed "collapse".
That said, much earlier, some folks (like Von Neumann in 1932) already investigated ideas that are quite close to "decoherence".
However, during the "renewed investigations", it was finally argued that a process called "decoherence" provides for a reasonable explanation
for just an "apparant" Collapse. There is no "sudden collapse", or "sudden reduction", it only appears to be so.
During that time, it was more and more realized that a quantum system is generally "embedded" in the environment. This is even true if you do not
measure anything at all. But it is especially true, if you let the quantum system interact with a "measuring device".
In QM, each state that contributes to the superposition of the State vector, needs to be in a coherent state.
Remember from section 1.4, that the waves (states) of the "wave packet" are coherent, which means they are similar like harmonics.
You can see them as solutions to a "harmonic oscillator", that is, sinusoidal waves in "shape".
The theory of Decoherence states, that while the quantum system "nears" the measuring device, the different components entangle more and more with the many quantum systems
of the measuring device (or in general: the environment), and a process called "einselection" takes place. It means that most entangled waves (or "coupled" waves) leak out
to the environment, until a pure state is left over. This process is thus responsible for an apparent collapse.
Nowadays, it's "almost" fully accepted that "measurements" involve entanglement of the quantum system with the environment,
and the former idea of the sudden "reduction", or "collapse", of the statevector, is replaced by decoherence.
Zurek in particular wrote nice articles that go into depth on the theory of Decoherence, while still being nice reading.
I can recommend this article (arxiv.org), although it's quite "spicy" (technical).
Other great stuff can be found here: Quantum Decoherence (www.ipod.org.uk)
Sometimes you may think that QM is kind of "fuzzy". Actually, generally speaking, it's not "fuzzy".
If you look at a wave packet (of section 1.4), a lot of waves are superposed, with a sort of gaussian probability distribution.
It might look "fuzzy", but it's just a superposition, and we can even capture it in an equation.
So is it really fuzzy? Probably not, but what might look "fuzzy" is the "probability distribution".
Such statements are not really exact. Besides that, different folks may have different interpretations too.
Please consider again the formalism of section 2.2, where we represented the Statevector (of a quantum system) as a superposition of eigenstates.
In such a case, if you observe the quantum system (or perform a measurement), then we find an eigen state with a certain probability.
That's not fuzzy, since you can find a well defined value. What does make people wonder, is that finding a certain value due to a measurement,
is associated with a "probability". Indeed, QM is intrinsically "stochastic".
However, some say that the theory of "decoherence" removed the "stochastic" character of QM. That's not true, since the original superpositions
still are part of the theory. But, again, different folks may have different interpretations.
And, what we have not seen here much, is that at many places in QM, discrete "quantum numbers" play an important role.
Now, I need to make a little "refinement" on what I have presented so far:
Actually, when we observe a quantum system, an "observable" is involved. This is an important term. In general, a quantum system (like a particle),
may have multiple observable "properties", like position, or momentum, or spin, or polarization etc...
The question is: if we measure for example the "spin" of a particle, then the statevector for that observable will collapse (decohere) to an eigen state.
So, did we now collapse the "whole of the particle", or just that particular property? In general, this is not so easy to answer.
However, in some cases, it's not difficult. Suppose you measure the orientation of the spin of an electron, and you find the state "up" (|↑>),
then it's still an electron! However, that specific observable decohered (collapsed).
In some other cases, it's not so easy. If you "measure" the position of a photon, then it's "gone" (meaning that maybe some electron in the material of a screen,
absorbed the energy quantum, and was freed, or went into a higher atomic orbital..).
5.1 Commuting and Non Commuting observables.
How many observables may a quantum system have? Indeed. It can be quite a lot.
Only in the very simplest case does one observable define the state of a Quantum system.
But in general, more observables are needed to "define" or "pin down", a quantum system. Just take a look at a particle. It has a
position in space (wave packet), a momentum, and possibly it may have a "spin" too.
Up to now, we have "acted" as if one observable defines a quantum system. Now, we know that in general that is not true.
Measurements on observables "A" and "E", can be interpreted as having Operators acting on those observables. Often, those Operators
get a similar naming as the observables (with an additional token like a circumflex, hyphen etc.., but that depends a bit really).
1. Commuting observables (operators):
It is said that operators (or the associated observables) commute if the following holds:
Â Ê Ψ = Ê Â Ψ
In the equation above, you see that in this case the order of taking measurements does not matter!
We are not going to mathematically prove it, but the following may sound plausible.
It can be true if both observables can be expanded (or written) in a common set of eigen vectors.
Often, people say that A and E have a "mutual eigenbasis".
Actually, a better way to describe it is this: two observables "commute" if both can be measured simultaneously, with definite values,
and thus both (sort of) further "pinned down" the state of the quantum system.
It's also often said that the observables are "compatible", which is the same as "commute".
Unfortunately, it's all pretty abstract for now. But if you believe that both observables have a "mutual eigenbasis", then both
can be measured and definite values can be found at the same time.
You can also say this:
- This is only so if Ψ is an eigenvector of both operators.
- Two observables can be known simultaneously only if they have a common set of eigen vectors.
2. Non commuting observables (operators):
There are also Non commuting observables (or Operators). It's also often said that those observables are "incompatible".
Two incompatible observables cannot have a common set of eigenstates (no "mutual eigenbasis").
Say we have A and E again. It can be proven that it's simply not possible to find a well defined value for A, and a well defined value
for E at the same time (simultaneously).
Furthermore, the order of performing measurements now is relevant, and Â Ê Ψ ≠ Ê Â Ψ
Rephrasing Non commuting operators in "brute force language":
Gaining knowledge of one observable through a measurement, destroys information about the other.
3. Further explanation:
If Ψ really would be an eigenvector of both operators (observables), so both observables have a mutual set of eigenvectors,
then you can hopefully see that measuring A and E would always produce an eigen vector in the same vector space.
Finding a particular eigenvector was already a matter of probability, but the system stays in the same vectorspace.
So, the second measurement will again produce an eigenvector, with a certain probability.
But it boils down to the fact that you can find precise values for A and E, even at the same time.
If the observables are incompatible, both have different eigenvectors. And a perturbative measurement on A, disrupts everything
so much, that the uncertainty in E increases highly. A perturbative measurement on E disrupts everything so much, that the uncertainty in A increases highly.
You cannot have precise values for both observables.
I realize that this is not a good attempt to try to explain the differences between commuting and non-commuting operators.
Using math, it could be explained way better. But I try to avoid math as much as possible.
Some examples:
• Adding numbers commutes, since in any addition you can reposition terms as you like, e.g. a+b+c=a+c+b
• Rotating an object over two axes (thus two operations), does not commute. Turning 90° about the x-axis,
and then 90° about the z-axis, will produce a different orientation of the object than the other way around.
• Position and momentum do not commute. If you measure precisely the position of a photon, the momentum is destroyed.
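The matrix examples can be checked directly in code. A short Python sketch, using the standard Pauli matrices σx and σz (spin observables along x and z) as stand-ins for two incompatible observables:

import numpy as np

# Diagonal matrices commute; the Pauli matrices sigma_x and sigma_z
# (spin observables along x and z) do not.
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
D1, D2 = np.diag([1, 2]), np.diag([3, 4])

print(np.array_equal(D1 @ D2, D2 @ D1))       # True : order does not matter
print(np.array_equal(sx @ sz, sz @ sx))       # False: order matters
print(sx @ sz - sz @ sx)                      # the non-zero commutator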
5.2 Heisenberg uncertainty principle.
Some historians have researched where the "principle" actually first originated.
They have reasons to believe that actually Pauli was first to express ideas of such content, while others see evidence
in papers of Wiener to believe that actually "Wiener was first...."
Sometimes the Heisenberg uncertainty principle is associated with the "precision of measurement".
That is not a good view, really.
Here I am not saying that there does not exist a "measurement problem" in QM. There absolutely exists a "measurement problem" in QM,
which constitutes one of the hearts of QM.
A remark on the measurement problem in QM:
If you would measure the temperature of a large swimming pool, your device will not (really) have any relevant influence
on the swimming pool. In fact, the measuring device is so small compared to that large body of water, that it is absolutely
correct to say that your measurement did not "disturb" the swimming pool at all. It was not a "perturbative" measurement.
The large swimming pool was not changed by your measurement.
However, QM is used in the world of elementary particles, photons, atoms etc.. These quantum systems are very small,
and any measurement you perform will be "perturbative" to some extent.
In fact, your measurement will change the state of the quantum system. And even one of the "core" descriptions in QM, namely
that a State vector "collapse" or "decoheres" to an eigen state when a measurement is performed, actually "sort of" confirms
that your measurement is "perturbative".
It's a core problem, and even up to this day, different perspectives are published in a whole range of scientific articles.
Note: "weak" measurements.
A "perturbative" measurement is also often referred to as a "strong" measurement.
Contrary to such "disturbing" measurements, in 1988 a couple of physicists published an article on "weak" measurements,
in which they stated that in principle, extremely weakly coupled measurements could be performed, which, run over a longer period,
could give information about the "undisturbed" quantum system.
This idea raised quite some debate in the scientific community, which is not over yet.
A remark on the Heisenberg uncertainty principle:
The Heisenberg uncertainty principle is simply built into QM. Once you deal with non commuting observables (or operators),
you "get it for free". In fact, it's really not directly related to the "measurement problem".
One of the most used examples of the Heisenberg uncertainty principle, is when you consider the pair of "non commuting" observables
position (x) and momentum (p).
In fact, from the moment that you propose a "wave packet" to express the "position" of a particle, the "Heisenberg uncertainty principle"
is built in automatically. It arises from the wave properties which are inherent in the QM description of nature. Really !
The Heisenberg inequality relation for position (x) and momentum (p) is:
Δx · Δp ≥ ℏ/2

and the other way around (Δp · Δx ≥ ℏ/2).
It has been proven, theoretically and experimentally, for many quantum systems.
For example, if you go to the following wikipedia article, and go to the "particle in a box" problem, the inequality will be derived in a fairly simple way.
And it's just based on the wave mechanics of QM ! So, please see this example (wikipedia)
Please note that the derivation has nothing (!) to do with "precision" of measurements. It's just built deep into the description of QM.
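You can also check the inequality numerically for a gaussian wave packet, which is known to sit exactly at the minimum of the bound. A Python sketch, with ℏ = 1 and an arbitrary illustrative width:

import numpy as np

# Numerical check of dx * dp >= hbar/2 for a gaussian packet, with hbar = 1.
# A gaussian is the minimum-uncertainty case, so the product comes out ~0.5.
sigma = 1.3
x = np.linspace(-20, 20, 4001)
h = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * h)                    # normalize the packet

dx = np.sqrt(np.sum(x**2 * psi**2) * h)               # <x> = 0, so dx = sqrt(<x^2>)
dpsi = np.gradient(psi, h)
dp = np.sqrt(np.sum(dpsi**2) * h)                     # <p> = 0; <p^2> = int |psi'|^2
print(f"dx*dp = {dx*dp:.4f}  (bound: 0.5)")           # ~0.5000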
Next, a discussion on "Quantum Entanglement" is on the list of "things to do".
6. Product states and Quantum Entanglement.
We already have seen some "State vector" examples. I will provide a few more simple examples here, in a minute.
Technically, QM distinguishes between "pure states" and "mixed states" (and even others), which can be quite confusing, really.
Since this note is very simple, I don't go into that. What we have seen thus far are pure states, since we just have
a "State vector" as a linear combination of eigenvectors, which are in superposition simultaneously.
I want to keep that image, so to speak, as not to introduce too much complexity.
A mixed state is often a statistical mixture of pure states, where a more complex interference might be present,
and it has certain complexities in interpreting coefficients as probabilities.
Mixed states introduce an additional layer of complexity, and are often used in "ensemble" studies.
However, some say that "mixed states" can only be described using density matrices.
Here we keep it simple (and thus less general...).
6.1 Simple pure states and simple separable states.
1. A few examples of (simple/pure) State vectors:
- A state vector with a basis of two eigenstates (a qubit):
|Ψ> = a|0> + b|1>
- A state vector with a basis of three eigenstates:
|Ψ> = c1|a1> + c2|a2> + c3|a3>
- A state vector with a basis of n eigenstates:
|Ψ> = c1|a1> + c2|a2> + ... + cn|an> = ∑ ci|ai>
2. A few example of "product states":
We will first take a look at the normal "product state". If we have two systems, and we want to describe their "joint state",
an outer product is used to accomplish that.
Suppose we have two qubits like:
1 > = a|0> + b|1>
2 > = c|0> + d|1>
|Ψ> = |φ1 > ⊗ |φ2 > = ac|0>|0> + bd|1>|1> + ad|0>|1> + bc|1>|0> =
ac|00> + bd|11> + ad|01> + bc|10>
The equation above, is a way to describe the combined state of both qubits.
It's just a product state of all components, and those components are not further "correlated".
Note that for example |00> is no more than a short way to mean |0>|0>.
Actually, what we have seen above, is the "Quantum Way" for the following simple mathematical equation:
(a + b) x (c + d) = ac + ad + bc + bd
Whenever you have a situation where it is possible to write:
|Ψ> = |φ1 > ⊗ |φ2 >
and |φ1> and |φ2> are any n-dimensional pure states, then we can speak of "separable" systems,
since their joint state is a simple product of both states.
It's really just the same as the simple mathematical analogy, that:
"ac + ad + bc + bd" is separable into "(a + b) x (c + d)".
6.2 Quantum Entangled states.
"Entanglement" in Quantum Mechanics (QM) plays a crucial role here, so let's spend a few words on that subject now.
Here again, we describe two or more quantum systems (like particles, photons), with a sort of common state vector, like we
have seen above. However, this time, the systems are "not separable".
That is, we cannot simply write it as a product state, like |Ψ> = |φ1> ⊗ |φ2>
It means that we cannot "separate" the systems "just like that"!
Their "intertwinement" is so high, that we speak of "correlated" systems, with respect to the "observable" (often spin, polarization).
Automatically, since we cannot separate the individual states anymore, a measurement on one particle affects the other as well.
That is the "crux" of "entanglement". Don't think lightly about that. Although it's a common feature in Nature, as we know now,
it has important consequences, and produces lots of brain fire-crackers for "philosophers" and others who want to understand nature.
Just take a look at this. Suppose two correlated particles are in close vicinity. Now, we move one of those systems further and further away
from its partner particle. It's commonly accepted that it still holds that: a measurement on one particle affects the other as well.
Now, if the distance is very large, even so large, that some sort of "signal" between the partners would have to violate the speed of light barrier,
then we might be very puzzled.
Indeed, it's experimentally established that both "collapse" to another state, simultaneously, even if their distance is larger
than some "magical" signal could cover in time. So, the partners cannot "inform" each other about a change in state.
This is the effect that Einstein called "spooky action at a distance".
Noteworthy is the article "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" (1935)
by A. Einstein, B. Podolsky, and N. Rosen which resulted in what later was called "The EPR paradox".
As the authors argue, a number of results of QM seem to be inadequate to describe physical reality. As we can interpret it now,
most of their arguments actually refer to the effects of entanglement, collapse of the state vector, and other aspects of QM.
It's certainly worth it, to perform a web search, and browse through some articles which explain the original article.
It's certain that the last word has not been spoken yet on entanglement, and many suggestions have been put forward as to why we
still do not fully comprehend it, like for example:
- We do not correctly describe the entangled state, and/or interpret it wrongly.
- SpaceTime issues related to "Entanglement" are not (yet) understood.
- Strange "hidden variables" are at work here, and sadly, we don't know of those yet...
- Nature sometimes just works "non-locally" in certain events. This is the non-locality principle.
There are some interesting studies into the "nature" of what exactly "binds" the partners of an entangled system.
The "non-locality" principle is widely accepted. However, at this moment it does not deliver the "mechanics" of the strange binding.
In chapter 7, we will see that it is often called "The EPR channel", as a tribute to the famous EPR article.
Some further studies try to explain the EPR channels using further maths on Hilbert spaces, and the resulting "channel maps" thereof.
One consequence seems to be, that "time reversals" are necessary to keep the theory consistent.
Needless to say, that if true, it would be very spectacular.
As an example, you might browse through this article (arxiv.org), but it's quite hard to read.
Other studies try a lattice approach. Still others use any other conceivable way to pursue the EPR channel further.
One other amazing path, is the ER=EPR wormhole suggestion. Some folks have put forward, that at the Planck scale, wormholes exist,
at the ultimate smallest scale of SpaceTime. The EPR wormholes then, supposedly connect entangled particles.
However, many physicists are sceptical, and would like much more "body" in those models. Many articles have appeared, pro and contra,
since the original ideas were published.
As an example, you might browse through this article (arxiv.org), but here too, it's not easy to read.
One thing is for sure though: physicists do not accept any "faster than light" signalling, to explain it.
(Note: Even in the case of Planck-scale wormholes, the theory uses "non-traversable" wormholes, or maybe, the theory leads
to non-traversable wormholes. This should evade superluminal signalling.)
A well known example of entangled particles:
Here is a well-known example, namely a singlet state of two particles (particle 1 and particle 2) with respect to their spin.
|Ψ1,2> = 1/√2 . ( |↑1↓2> − |↓1↑2> )
So, the wavefunction above describes the "common" state of such an entangled system with respect to their spin.
Do you see that it is a superposition of the states |↑1↓2> and |↓1↑2>?
If a measurement is done, and you find particle 1 to be "up" (↑1), then it follows that particle 2 must be ↓2.
Since "obviously" the wavefunction "collapsed" into the state |↑1↓2>, it automatically follows
that particle 1 is "up" and particle 2 is "down".
But both spins might take any value along a particular axis, as long as you have not measured them. This strange "thing" is a hard part of QM
to understand.
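For two qubits there is a handy "separable vs entangled" test: reshape the four coefficients into a 2x2 matrix M; the state is a product state exactly when det(M) = 0. This determinant test is a standard two-qubit criterion, not something from this note. A Python sketch applying it to the singlet (writing ↑ as 0 and ↓ as 1):

import numpy as np

# Two-qubit separability test: put the coefficients of |00>,|01>,|10>,|11>
# into a 2x2 matrix M; the state is a product state exactly when det(M) = 0.
# For the singlet, det(M) = 1/2, so no |phi1> (x) |phi2> exists: entangled.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
product = np.kron([0.6, 0.8], [0.8, 0.6])                # a separable state, for contrast

for name, state in (("singlet", singlet), ("product", product)):
    det = np.linalg.det(state.reshape(2, 2))
    verdict = "entangled" if abs(det) > 1e-12 else "separable"
    print(f"{name}: det = {det:+.3f} -> {verdict}")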
An entangled system may "exist" for some time, but often, quite quickly, a particle interacts with the environment, at which point decoherence
will end the (two-system) entanglement. So, it's not easy to separate the members of a singlet state, for example, hundreds of meters, or a few km (or further), apart.
People working in the field of "Quantum Computing" often spend quite some time investigating how to "battle" Decoherence so that
they can preserve their qubits and gates for a longer period of time.
So, what do we have "qubit wise" now?:
1. One qubit:
|Ψ> = a|0> + b|1>
2. Two qubits:
The general state of a "two qubit system" is given by the quantum state:
|Ψ> = a00|00> + a01|01> + a10|10>+ a11|11>
Here, the system exists in the states |00>, |01>, |10> and |11>, at the same time (simultaneously).
If the coefficients factor like in section 6.1 (a00=ac, a01=ad, etc..), then it is a "product state" of two single qubits, and it's "separable".
3. Two strong entangled qubits (Bell states):
Two qubits (or more qubits) can be in an "entangled" state.
There are different possibilities here. Especially, the so-called "Bell states" denote strongly entangled systems.
You know that the "parts" of an entangled system might be separated, in different regions, for example in different corners of a Lab,
or different rooms, or maybe 500m apart. Usually, folks use characters like Alice and Bob as two
imaginary persons, each holding one part of the entangled system. That's why often a subscript "A" and "B" is added to a substate of the
superposition, to denote "where it is" (at Alice or at Bob). Be warned though, many folks just leave it out just as easily.
Here they are:
|Φ+A,B> = 1/√2 . ( |0>A |0>B + |1>A |1>B )
|Φ−A,B> = 1/√2 . ( |0>A |0>B − |1>A |1>B )
|Ψ+A,B> = 1/√2 . ( |0>A |1>B + |1>A |0>B )
|Ψ−A,B> = 1/√2 . ( |0>A |1>B − |1>A |0>B )
The most important thing to remember here is, for example, if you look at "|0>A |0>B", it means that
if the system would collapse into that state, Alice found |0>, and Bob must have found |0> too.
Such "pure" maximally entangled states could be used as a noiseless "EPR channel" (see below) in Quantum Teleportation.
However, the "sender" and "receiver" might only be able to share a "mixed" entangled state (a noisy channel), due to decoherence.
In most articles on entanglement and qubits, you will often see representations like the ones we already know, such as the Bell states above.
Keep those in mind.
Maybe we can try Quantum Teleportation now...
7. Quantum Teleportation.
Finally, we are "ready" to get into Quantum Teleportation (QT).
This will be a relatively short chapter. This is so, since we already have covered so much ground above.
For example, we don't need to explain "qubits", "the state vector", "entanglement" etc..., anymore.
Quantum Teleportation is not about the "teleportation" of matter, like for example a particle.
It's about teleporting the information which we can associate with that particle, like the state of its spin.
For example, the state of the system described by equation 1 above.
A corollary of "Quantum Information Theory" says, that "unknown" Quantum Information cannot be cloned.
This means that if you would succeed in teleporting Quantum Information to another location,
the original information is lost. This is also often referred to as the "no-cloning theorem".
It might seem rather bizarre, since in the classical world, many examples exist where you can simply copy
unknown information to another location (e.g. copying the content of a computer register, to another computer).
In QM, it's actually not so bizarre, because if you look at equation 1 again, you see an example of an unknown state.
It's also often called a "qubit", as the QM representative of a classical "bit".
Unmeasured, it is a superposition of the basis states |0> and |1>, using coefficients "a" and "b".
Indeed, unmeasured, we do not know this state. If you would like to "copy" it, you must interact with it,
meaning that in fact you are observing it (or measuring it), which means that it flips into
one of its basis states. So, it would fail. Hence, the "no-cloning theorem" of unknown information.
Note that if you would try to (strongly) "interact" with a qubit, it collapses (or flips) from the superposition
into one of the basis states.
Instead of the small talk above, you can also formally work with an Operator on the qubit, which tries to copy it,
and then it can be proven that it can't be done.
One of the latest "records" in achieved distances, over which Quantum Teleportation succeeded, is about 150 km.
What is it, and what might an experimental setup look like?
Again, we have Alice and Bob. Alice is in Lab1, and Bob is in Lab2, which is about 100km away from Alice.
Suppose Alice is able to create an "entangled 2 particle system", with respect to the spin.
So, the state might be written as |Ψ> = 1/√2 ( |01> + |10> ), just like equation 3 above.
It's very important to realize, that we need this equation (equation 3) to describe both particles,
just as if "they are melted into one entity".
As a side remark, I'd like to mention that actually four of such (Bell) states would be possible, namely:
|Φ1> = 1/√2 ( |00> + |11> )
|Φ2> = 1/√2 ( |00> − |11> )
|Φ3> = 1/√2 ( |01> + |10> )
|Φ4> = 1/√2 ( |01> − |10> )
In the experiment below, we can use any of those, to describe an entangled pair in our experiment.
Now, let's return to the experimental setup of Alice and Bob.
Let's call the particle which Alice keeps "particle 2", and the one which Bob gets "particle 3".
Why not 1 and 2? Well, in a minute, a third particle will be introduced. I like to call that one "particle 1".
This new particle (particle 1) is the particle whose "state" will be teleported to Bob's location.
At this moment, only the entangled particles 2 and 3, are both at Alice's location.
Next, we move particle 3 to Bob's location. The particles 2 and 3, remain entangled, so they stay
strongly correlated.
After a short while, particle 3 arrived at Bob's Lab.
Next, a new particle (particle 1), a qubit, is introduced at Alice's location.
In the picture below, you see the actions above represented by the subfigures 1, 2, and 3.
The particles 2 and 3 are of course still entangled. This situation, or non-local property, is often also expressed
(or labeled) as an "EPR channel" between the particles.
This is presumably not to be understood as a "real channel" between the particles, like in the sense of a
channel in the classical world.
In chapter 2, we tried to see what physicists are suggesting today about which physical principles may
be the source of the "EPR channel"/non-locality phenomenon.
Let's return to the experimental setup again. Suppose we have the following:
-The entangled particles, Particles 2 and 3, are collectively described by:
|Ψ2,3> = 1/√2 ( |01> − |10> )
-The newly introduced particle, Particle 1 (a qubit), is described like we already saw in equation 1, thus by:
|φ1> = a|1> + b|0>
Also note the subscripts, which may help in distinguishing the particles.
At a certain moment, when particles 1 and 2 are "really close" (as in subfigure 4 of the figure above),
we have a 3 particle system, which has to be described using a product state, as in:
| θ123> = |φ1> ⊗ |Ψ2,3 >
Such a product state, does not imply a "strong" measurement or interaction, so the entanglement still holds.
Remember, we "are" still in the situation as depicted in subfigure 4 of the figure above.
We now try to rewrite our product state in a more convenient way. If the product is expanded,
and some rearrangements are done, we get an interesting end result.
It's quite a bit of math, and does not add value to our understanding, I think, so I will present this end result
in a sort of "pseudo Ket" equation:
| θ123> = |φ1> ⊗ |Ψ2,3>

= ½ x [ |Φ+1,2> (−a |0>3 + b |1>3) + |Φ−1,2> ( a |0>3 + b |1>3) + |Ψ+1,2> ( a |1>3 − b |0>3) + |Ψ−1,2> (−a |1>3 − b |0>3) ]     (equation 5)

Note the Bell-state factors "|Φ±1,2>" and "|Ψ±1,2>".
We have managed to "factor out" the state of particles 1 and 2 into those four Bell terms. At the same time,
the state of particle 3 "looks like" a superposition of four qubit states. Indeed. Actually, it is a superposition.
Now, Alice performs a measurement on particle 1 and particle 2. For example, she uses a laser, or EM radiation,
to alter the state of the (1,2) pair.
This will result in the fact that the (1,2) pair will collapse (or "flip") into one of the Bell states.
It will immediately have an effect on Particle 3, and Particle 3 will collapse (or be projected, or flip) into one
of the four qubit states as we have seen in equation 5 above.
Of course, the Entanglement is gone, and so is the EPR channel.
Now note this: while Alice made her measurement, a quantum gate recorded the resulting "classical" bits
that came out of that measurement on Particles 1 & 2.
Before that measurement, nothing was changed at all. Particle 1 still had its original ket equation |φ1> = a|1> + b|0>.
We only smartly rearranged equation 4 into equation 5, that's all.
Now, it's possible that you are not aware of the fact that "quantum gates" do exist, which function as experimental devices
by which we can "read out" the classical bits that resulted from the measurement of Alice.
This is depicted in subfigures 5 and 6 in the figure above.
These bits can be transferred in a classical way, using a laser, or any sort of other classical signalling,
to Bob's Lab, where he uses a similar gate to "reconstruct" the state of Particle 3, exactly as the state of
particle 1 was directly before Alice's measurement.
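The whole protocol can be simulated in a few dozen lines of numpy. A sketch under textbook conventions, not a description of the lab setup: I use the Bell pair (|00> + |11>)/√2 for particles 2 and 3, and the standard correction rule "apply X if the second classical bit is 1, then Z if the first bit is 1". This differs in detail from the singlet used above, but the logic is the same:

import numpy as np

rng = np.random.default_rng(3)

# Particle 1: a random "unknown" qubit a|0> + b|1> (the state to teleport).
v = np.array([rng.normal() + 1j * rng.normal(), rng.normal() + 1j * rng.normal()])
phi1 = v / np.linalg.norm(v)

# Particles 2 and 3: the shared Bell pair (|00> + |11>)/sqrt(2).
bell23 = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Three-particle product state, ordering |q1 q2 q3>.
state = np.kron(phi1, bell23)

# Alice's Bell measurement on particles 1 and 2: rotate the (1,2) pair from
# the Bell basis to the computational basis (CNOT then Hadamard), then measure.
H    = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2   = np.eye(2)
CNOT = np.array([[1,0,0,0], [0,1,0,0], [0,0,0,1], [0,0,1,0]])
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

amps = state.reshape(4, 2)                   # rows: (m1,m2) outcome; columns: particle 3
p = np.sum(np.abs(amps)**2, axis=1)
p /= p.sum()
outcome = rng.choice(4, p=p)                 # Alice's two classical bits, as 0..3
m1, m2 = outcome >> 1, outcome & 1

# Bob: particle 3 has collapsed; the two classical bits select the correction.
bob = amps[outcome] / np.linalg.norm(amps[outcome])
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
if m2: bob = X @ bob
if m1: bob = Z @ bob

print("classical bits sent to Bob:", m1, m2)
print("fidelity with particle 1's original state:", round(abs(np.vdot(phi1, bob))**2, 6))  # 1.0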
It's an amazing experiment. But it has become a reality in various real experiments.
-Note that such an experiment cannot work without an EPR channel, or, one or more entangled particles.
It's exactly this feature which will see to it that Particle 3 will immediately respond (with a collapse)
to a measurement far away (in our case: the measurement of Alice on particles 1 & 2).
-Also note that we need a classical way to transfer bits, which encode the state of Particle 1, so that Bob
is able to reconstruct the state of Particle 3 into the former state of Particle 1.
This can only work using a classical signal, thus QT does NOT breach Einstein's laws.
-Also note that the "no cloning" theorem was also respected here, since just before Bob was able to
reconstruct the state of Particle 1 onto Particle 3, the state of the original particle (particle 1)
was "destroyed" in Alice's measurement.
Well, that's about it.
Hope you liked it! |
5ae3e8de910220d2 | Relational quantum mechanics
From Wikipedia, the free encyclopedia
This article is intended for those already familiar with quantum mechanics and its attendant interpretational difficulties. Readers who are new to the subject may first want to read the introduction to quantum mechanics.
Relational quantum mechanics (RQM) is an interpretation of quantum mechanics which treats the state of a quantum system as being observer-dependent, that is, the state is the relation between the observer and the system. This interpretation was first delineated by Carlo Rovelli in a 1994 preprint, and has since been expanded upon by a number of theorists. It is inspired by the key idea behind Special Relativity, that the details of an observation depend on the reference frame of the observer, and uses some ideas from Wheeler on quantum information.[1]
The physical content of the theory is not to do with objects themselves, but the relations between them. As Rovelli puts it: "Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".[2]
The essential idea behind RQM is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may appear to be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, RQM argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by RQM that this applies to all physical objects, whether or not they are conscious or macroscopic (all systems are quantum systems). Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. The proponents of the relational interpretation argue that the approach clears up a number of traditional interpretational difficulties with quantum mechanics, while being simultaneously conceptually elegant and ontologically parsimonious.
History and development
Relational quantum mechanics arose from a historical comparison of the quandaries posed by the interpretation of quantum mechanics with the situation after the Lorentz transformations were formulated but before special relativity. Rovelli felt that just as there was an "incorrect assumption" underlying the pre-relativistic interpretation of Lorentz's equations, which was corrected by Einstein's deriving them from Lorentz covariance and the constancy of the speed of light in all reference frames, so a similarly incorrect assumption underlies many attempts to make sense of the quantum formalism, which was responsible for many of the interpretational difficulties posed by the theory. This incorrect assumption, he said, was that of an observer-independent state of a system, and he laid out the foundations of this interpretation to try to overcome the difficulty.[3]
The idea has been expanded upon by Lee Smolin[4] and Louis Crane,[5] who have both applied the concept to quantum cosmology, and the interpretation has been applied to the EPR paradox, revealing not only a peaceful co-existence between quantum mechanics and special relativity, but a formal indication of a completely local character to reality.[6][7]
David Mermin has contributed to the relational approach in his "Ithaca interpretation."[8] He describes "correlations without correlata", meaning that relations have more concrete existence than the objects being related. The name "Zero Worlds" has also been popularized by Garret.[9]
The problem of the observer and the observed
This problem was initially discussed in detail in Everett's thesis, The Theory of the Universal Wavefunction. Consider observer O, measuring the state of the quantum system S. We assume that O has complete information on the system, and that O can write down the wavefunction |\psi\rangle describing it. At the same time, there is another observer O', who is interested in the state of the entire O-S system, and O' likewise has complete information.
To analyse this system formally, we consider a system S which may take one of two states, which we shall designate |\uparrow \rangle and |\downarrow \rangle , ket vectors in the Hilbert space H_S. Now, the observer O wishes to make a measurement on the system. At time t_1, this observer may characterize the system as follows:
| \psi \rangle = \alpha|\uparrow\rangle + \beta|\downarrow\rangle
where |\alpha|^2 and |\beta|^2 are probabilities of finding the system in the respective states, and obviously add up to 1. For our purposes here, we can assume that in a single experiment, the outcome is the eigenstate |\uparrow\rangle (but this can be substituted throughout, mutatis mutandis, by |\downarrow\rangle). So, we may represent the sequence of events in this experiment, with observer O doing the observing, as follows:
\begin{matrix} t_1 & \rightarrow & t_2 \\ \alpha |\uparrow\rangle + \beta |\downarrow\rangle & \rightarrow & |\uparrow\rangle \end{matrix}
This is observer O's description of the measurement event. Now, any measurement is also a physical interaction between two or more systems. Accordingly, we can consider the tensor product Hilbert space H_S \otimes H_{O}, where H_{O} is the Hilbert space inhabited by state vectors describing O. If the initial state of O is |init\rangle, some degrees of freedom in O become correlated with the state of S after the measurement, and this correlation can take one of two values: |O_{\uparrow}\rangle or |O_{\downarrow}\rangle where the direction of the arrows in the subscripts corresponds to the outcome of the measurement that O has made on S. If we now consider the description of the measurement event by the other observer, O', who describes the combined S+O system, but does not interact with it, the following gives the description of the measurement event according to O', from the linearity inherent in the quantum formalism:
\begin{matrix} t_1 & \rightarrow & t_2 \\ \left( \alpha |\uparrow\rangle + \beta |\downarrow\rangle \right) \otimes |init\rangle & \rightarrow & \alpha |\uparrow\rangle \otimes |O_{\uparrow}\rangle + \beta |\downarrow\rangle \otimes |O_{\downarrow}\rangle \end{matrix}
Thus, on the assumption (see hypothesis 2 below) that quantum mechanics is complete, the two observers O and O' give different but equally correct accounts of the events t_1 \rightarrow t_2.
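A minimal numerical sketch of this bookkeeping (an illustration added here, not part of the formalism above): O assigns S a collapsed eigenstate, while O' assigns the compound S+O system an entangled state that is not factorisable in any basis. The amplitudes and pointer states below are arbitrary choices for the example.

```python
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # illustrative amplitudes, |a|^2 + |b|^2 = 1
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
O_up, O_down = up, down                         # observer pointer states, chosen orthonormal

# O's account after the measurement: S alone, in a definite eigenstate.
state_for_O = up

# O''s account: linear (Schrodinger) evolution of the compound S+O system.
state_for_Oprime = alpha * np.kron(up, O_up) + beta * np.kron(down, O_down)

# A pure state of two qubits is factorisable iff its 2x2 amplitude matrix has
# rank 1, i.e. a single nonzero Schmidt coefficient.
schmidt = np.linalg.svd(state_for_Oprime.reshape(2, 2), compute_uv=False)
print("O's state for S:", state_for_O)
print("Schmidt coefficients of O''s state:", schmidt)  # two nonzero values -> entangled
```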
Central principles
Observer-dependence of state
According to O, at t_2, the system S is in a determinate state, namely spin up. And, if quantum mechanics is complete, then so is his description. But, for O', S is not uniquely determinate: it is entangled with the state of O — note that O''s description of the situation at t_2 is not factorisable no matter what basis is chosen. But, if quantum mechanics is complete, then the description that O' gives is also complete.
Thus the standard mathematical formulation of quantum mechanics allows different observers to give different accounts of the same sequence of events. There are several ways to respond to this perceived difficulty. It could be described as an epistemic limitation: observers with full knowledge of the system, we might say, could give a complete and equivalent description of the state of affairs, even though obtaining this knowledge is impossible in practice. But complete for whom? What makes O's description better than that of O', or vice versa? Alternatively, we could claim that quantum mechanics is not a complete theory, and that by adding more structure we could arrive at a universal description (the troubled hidden variables approach). Yet another option is to give a preferred status to a particular observer or type of observer, and assign the epithet of correctness to their description alone. This has the disadvantage of being ad hoc, since there are no clearly defined or physically intuitive criteria by which this super-observer ("who can observe all possible sets of observations by all observers over the entire universe"[10]) ought to be chosen.
RQM, however, takes the point illustrated by this problem at face value. Instead of trying to modify quantum mechanics to make it fit with prior assumptions that we might have about the world, Rovelli says that we should modify our view of the world to conform to what amounts to our best physical theory of motion.[11] Just as forsaking the notion of absolute simultaneity helped clear up the problems associated with the interpretation of the Lorentz transformations, so many of the conundrums associated with quantum mechanics dissolve, provided that the state of a system is assumed to be observer-dependent — like simultaneity in Special Relativity. This insight follows logically from the two main hypotheses which inform this interpretation:
• Hypothesis 1: the equivalence of systems. There is no a priori distinction that should be drawn between quantum and macroscopic systems. All systems are, fundamentally, quantum systems.
• Hypothesis 2: the completeness of quantum mechanics. There are no hidden variables or other factors which may be appropriately added to quantum mechanics, in light of current experimental evidence.
Thus, if a state is to be observer-dependent, then a description of a system would follow the form "system S is in state x with reference to observer O" or similar constructions, much like in relativity theory. In RQM it is meaningless to refer to the absolute, observer-independent state of any system.
Information and correlation
All systems are quantum systems
Consequences and implications
In our system above, O' may be interested in ascertaining whether or not the state of O accurately reflects the state of S. We can draw up for O' an operator, M, which is specified as:
M\left(|\uparrow \rangle \otimes |O_{\uparrow} \rangle \right) = |\uparrow \rangle \otimes |O_{\uparrow} \rangle
M\left(|\uparrow \rangle \otimes |O_{\downarrow} \rangle \right) = 0
M\left(|\downarrow \rangle \otimes |O_{\uparrow} \rangle \right) = 0
M\left(|\downarrow \rangle \otimes |O_{\downarrow} \rangle \right) = |\downarrow \rangle \otimes |O_{\downarrow} \rangle
with an eigenvalue of 1 meaning that O indeed accurately reflects the state of S. So there is a 0 probability of O reflecting the state of S as being |\uparrow\rangle if it is in fact |\downarrow\rangle, and so forth. The implication of this is that at time t_2, O' can predict with certainty that the S+O system is in some eigenstate of M, but cannot say which eigenstate it is in, unless O' itself interacts with the S+O system.
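The eigenvalue claim is easy to verify numerically. Below is a small sketch (an illustration added here, with arbitrary amplitudes) confirming that the post-measurement entangled state is an eigenstate of M with eigenvalue 1.

```python
import numpy as np

alpha, beta = 0.6, 0.8                      # illustrative amplitudes, 0.36 + 0.64 = 1
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Basis order: |up, O_up>, |up, O_down>, |down, O_up>, |down, O_down>.
# M projects onto the subspace where O's record agrees with the spin of S.
M = np.diag([1.0, 0.0, 0.0, 1.0])

state = alpha * np.kron(up, up) + beta * np.kron(down, down)
print(np.allclose(M @ state, state))        # True: eigenstate of M, eigenvalue 1
```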
An apparent paradox arises when one considers the comparison, between two observers, of the specific outcome of a measurement. In the problem of the observer and the observed section above, let us imagine that the two experimenters want to compare results. It is obvious that if the observer O' has the full Hamiltonians of both S and O, he will be able to say with certainty that at time t_2, O has a determinate result for S's spin, but he will not be able to say what O's result is without interacting with O, and hence breaking the unitary evolution of the compound system (because he doesn't know his own Hamiltonian). The distinction between knowing "that" and knowing "what" is a common one in everyday life: everyone knows that the weather will be like something tomorrow, but no-one knows exactly what the weather will be like.
But, let us imagine that O' measures the spin of S, and finds it to have spin down (and note that nothing in the analysis above precludes this from happening). What happens if he talks to O, and they compare the results of their experiments? O, it will be remembered, measured a spin up on the particle. This would appear to be paradoxical: the two observers, surely, will realise that they have disparate results.
However, this apparent paradox only arises as a result of the question being framed incorrectly: as long as we presuppose an "absolute" or "true" state of the world, this would, indeed, present an insurmountable obstacle for the relational interpretation. However, in a fully relational context, there is no way in which the problem can even be coherently expressed. The consistency inherent in the quantum formalism, exemplified by the "M-operator" defined above, guarantees that there will be no contradictions between records. The interaction between O' and whatever he chooses to measure, be it the S+O compound system or O and S individually, will be a physical interaction, a quantum interaction, and so a complete description of it can only be given by a further observer O'', who will have a similar "M-operator" guaranteeing coherency, and so on out. In other words, a situation such as that described above cannot violate any physical observation, as long as the physical content of quantum mechanics is taken to refer only to relations.
Relational networks
An interesting implication of RQM arises when we consider that interactions between material systems can only occur within the constraints prescribed by Special Relativity, namely within the intersections of the light cones of the systems: when they are spatiotemporally contiguous, in other words. Relativity tells us that objects have location only relative to other objects. By extension, a network of relations could be built up based on the properties of a set of systems, which determines which systems have properties relative to which others, and when (since properties are no longer well defined relative to a specific observer after unitary evolution breaks down for that observer). On the assumption that all interactions are local (which is backed up by the analysis of the EPR paradox presented below), one could say that the ideas of "state" and spatiotemporal contiguity are two sides of the same coin: spacetime location determines the possibility of interaction, but interactions determine spatiotemporal structure. The full extent of this relationship, however, has not yet fully been explored.
RQM and quantum cosmology
The universe is the sum total of all that is in existence. Physically, a (physical) observer outside of the universe would require the breaking of gauge invariance,[12] and a concomitant alteration in the mathematical structure of gauge-invariance theory. Similarly, RQM conceptually forbids the possibility of an external observer. Since the assignment of a quantum state requires at least two "objects" (system and observer), which must both be physical systems, there is no meaning in speaking of the "state" of the entire universe. This is because this state would have to be ascribed to a correlation between the universe and some other physical observer, but this observer in turn would have to form part of the universe, and as was discussed above, it is impossible for an object to give a complete specification of itself. Following the idea of relational networks above, an RQM-oriented cosmology would have to account for the universe as a set of partial systems providing descriptions of one another. The exact nature of such a construction remains an open question.
Relationship with other interpretations
The only group of interpretations of quantum mechanics with which RQM is almost completely incompatible is that of hidden variables theories. RQM shares some deep similarities with other views, but differs from them all to the extent to which the other interpretations do not accord with the "relational world" put forward by RQM.
Copenhagen interpretation
RQM is, in essence, quite similar to the Copenhagen interpretation, but with an important difference. In the Copenhagen interpretation, the macroscopic world is assumed to be intrinsically classical in nature, and wave function collapse occurs when a quantum system interacts with macroscopic apparatus. In RQM, any interaction, be it micro or macroscopic, causes the linearity of Schrödinger evolution to break down. RQM could recover a Copenhagen-like view of the world by assigning a privileged status (not dissimilar to a preferred frame in relativity) to the classical world. However, by doing this one would lose sight of the key features that RQM brings to our view of the quantum world.
Hidden variables theories
Bohm's interpretation of QM does not sit well with RQM. One of the explicit hypotheses in the construction of RQM is that quantum mechanics is a complete theory, that is it provides a full account of the world. Moreover, the Bohmian view seems to imply an underlying, "absolute" set of states of all systems, which is also ruled out as a consequence of RQM.
We find a similar incompatibility between RQM and suggestions such as that of Penrose, which postulate that some process (in Penrose's case, gravitational effects) violate the linear evolution of the Schrödinger equation for the system.
Relative-state formulation
The many-worlds family of interpretations (MWI) shares an important feature with RQM, that is, the relational nature of all value assignments (that is, properties). Everett, however, maintains that the universal wavefunction gives a complete description of the entire universe, while Rovelli argues that this is problematic, both because this description is not tied to a specific observer (and hence is "meaningless" in RQM), and because RQM maintains that there is no single, absolute description of the universe as a whole, but rather a net of inter-related partial descriptions.
Consistent histories approach
In the consistent histories approach to QM, instead of assigning probabilities to single values for a given system, the emphasis is given to sequences of values, in such a way as to exclude (as physically impossible) all value assignments which result in inconsistent probabilities being attributed to observed states of the system. This is done by means of ascribing values to "frameworks", and all values are hence framework-dependent.
RQM accords perfectly well with this view. However, the consistent histories approach does not give a full account of the physical meaning of framework-dependent values; that is, it does not explain how there can be "facts" if the value of any property depends on the framework chosen. Incorporating the relational view into this approach resolves the problem: RQM provides the means by which the observer-independent, framework-dependent probabilities of various histories are reconciled with observer-dependent descriptions of the world.
EPR and quantum non-locality
The EPR thought experiment, performed with electrons. A radioactive source (center) sends electrons in a singlet state toward two spacelike separated observers, Alice (left) and Bob (right), who can perform spin measurements. If Alice measures spin up on her electron, Bob will measure spin down on his, and vice versa.
RQM provides an unusual solution to the EPR paradox. Indeed, it manages to dissolve the problem altogether, inasmuch as there is no superluminal transportation of information involved in a Bell test experiment: the principle of locality is preserved inviolate for all observers.
The problem
In the EPR thought experiment, a radioactive source produces two electrons in a singlet state, meaning that the sum of the spin on the two electrons is zero. These electrons are fired off at time t_1 towards two spacelike separated observers, Alice and Bob, who can perform spin measurements, which they do at time t_2. The fact that the two electrons are a singlet means that if Alice measures z-spin up on her electron, Bob will measure z-spin down on his, and vice versa: the correlation is perfect. If Alice measures z-axis spin, and Bob measures the orthogonal y-axis spin, however, the correlation will be zero. Intermediate angles give intermediate correlations in a way that, on careful analysis, proves inconsistent with the idea that each particle has a definite, independent probability of producing the observed measurements (the correlations violate Bell's inequality).
This subtle dependence of one measurement on the other holds even when measurements are made simultaneously and a great distance apart, which gives the appearance of a superluminal communication taking place between the two electrons. Put simply, how can Bob's electron "know" what Alice measured on hers, so that it can adjust its own behavior accordingly?
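The quantitative content of the Bell violation mentioned above can be sketched in a few lines. The script below (an illustration added here; this is standard quantum mechanics, not anything specific to RQM) evaluates the singlet correlation E(a, b) = -cos(a - b) at the usual CHSH measurement angles and shows that the CHSH combination exceeds the classical bound of 2.

```python
import numpy as np

# Singlet-state correlation for spin measurements along coplanar directions
# at angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2                     # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4           # Bob's two measurement settings

# CHSH combination; any local hidden-variable model satisfies |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))                               # 2*sqrt(2) ~ 2.83: Bell inequality violated
```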
Relational solution
In RQM, an interaction between a system and an observer is necessary for the system to have clearly defined properties relative to that observer. Since the two measurement events take place at spacelike separation, they do not lie in the intersection of Alice's and Bob's light cones. Indeed, there is no observer who can instantaneously measure both electrons' spin.
The key to the RQM analysis is to remember that the results obtained on each "wing" of the experiment only become determinate for a given observer once that observer has interacted with the other observer involved. As far as Alice is concerned, the specific results obtained on Bob's wing of the experiment are indeterminate for her, although she will know that Bob has a definite result. In order to find out what result Bob has, she has to interact with him at some time t_3 in their future light cones, through ordinary classical information channels.[13]
The question then becomes one of whether the expected correlations in results will appear: will the two particles behave in accordance with the laws of quantum mechanics? Let us denote by M_A(\alpha) the idea that the observer A (Alice) measures the state of the system \alpha (Alice's particle).
So, at time t_2, Alice knows the value of M_A(\alpha): the spin of her particle, relative to herself. But, since the particles are in a singlet state, she knows that
M_A(\alpha)+M_A(\beta)=0 ,
and so if she measures her particle's spin to be \sigma, she can predict that Bob's particle (\beta) will have spin -\sigma. All this follows from standard quantum mechanics, and there is no "spooky action at a distance" yet. From the "coherence-operator" discussed above, Alice also knows that if at t_3 she measures Bob's particle and then measures Bob (that is, asks him what result he got) — or vice versa — the results will be consistent:
M_A(B) = M_A(\beta).
Finally, if a third observer (Charles, say) comes along and measures Alice, Bob, and their respective particles, he will find that everyone still agrees, because his own "coherence-operator" demands that
M_C(A)=M_C(\alpha) and M_C(B)=M_C(\beta)
while knowledge that the particles were in a singlet state tells him that
M_C(\alpha)+M_C(\beta) = 0.
Thus the relational interpretation, by shedding the notion of an "absolute state" of the system, allows for an analysis of the EPR paradox which neither violates traditional locality constraints, nor implies superluminal information transfer, since we can assume that all observers are moving at comfortable sub-light velocities. And, most importantly, the results of every observer are in full accordance with those expected by conventional quantum mechanics.
Derivation
A promising feature of this interpretation is that RQM offers the possibility of being derived from a small number of axioms, or postulates based on experimental observations. Rovelli's derivation of RQM uses three fundamental postulates. However, it has been suggested that it may be possible to reformulate the third postulate into a weaker statement, or possibly even do away with it altogether.[14] The derivation of RQM parallels, to a large extent, quantum logic. The first two postulates are motivated entirely by experimental results, while the third postulate, although it accords perfectly with what we have discovered experimentally, is introduced as a means of recovering the full Hilbert space formalism of quantum mechanics from the other two postulates. The two empirical postulates are:
• Postulate 1: there is a maximum amount of relevant information that may be obtained from a quantum system.
• Postulate 2: it is always possible to obtain new information from a system.
We let W\left(S\right) denote the set of all possible questions that may be "asked" of a quantum system, which we shall denote by Q_i, i \in W. We may experimentally find certain relations between these questions: \left\{\land, \lor, \neg, \supset, \bot \right\}, corresponding to {intersection, orthogonal sum, orthogonal complement, inclusion, and orthogonality} respectively, where Q_1 \bot Q_2 \equiv Q_1 \supset \neg Q_2 .
From the first postulate, it follows that we may choose a subset Q_c^{(i)} of N mutually independent questions, where N is the number of bits contained in the maximum amount of information. We call such a question Q_c^{(i)} a complete question. The value of Q_c^{(i)} can be expressed as an N-tuple sequence of binary valued numerals, which has 2^N = k possible permutations of "0" and "1" values. There will also be more than one possible complete question. If we further assume that the relations \left\{\land, \lor\right\} are defined for all Q_i, then W\left(S\right) is an orthomodular lattice, while all the possible unions of sets of complete questions form a Boolean algebra with the Q_c^{(i)} as atoms.[15]
The second postulate governs the event of further questions being asked by an observer O_1 of a system S, when O_1 already has a full complement of information on the system (an answer to a complete question). We denote by p\left(Q|Q_c^{(j)}\right) the probability that a "yes" answer to a question Q will follow the complete question Q_c^{(j)}. If Q is independent of Q_c^{(j)}, then p=0.5, or it might be fully determined by Q_c^{(j)}, in which case p=1. There is also a range of intermediate possibilities, and this case is examined below.
If the question that O_1 wants to ask the system is another complete question, Q_b^{(i)}, the probability p^{ij}=p\left(Q_b^{(i)}|Q_c^{(j)}\right) of a "yes" answer has certain constraints upon it:
1. 0 \leq p^{ij} \leq 1,
2. \sum_{i} p^{ij} = 1,
3. \sum_{j} p^{ij} = 1.
The three constraints above are inspired by the most basic of properties of probabilities, and are satisfied if
p^{ij} = \left|U^{ij}\right|^2,
where U^{ij} is a unitary matrix.
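These constraints are straightforward to check numerically: if p^{ij} = |U^{ij}|^2 for a unitary U, the resulting matrix of probabilities is doubly stochastic. The sketch below (an illustration added here) draws a random unitary and verifies constraints 2 and 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary from the QR decomposition of a complex Gaussian matrix.
N = 4
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(Z)

P = np.abs(U) ** 2                          # candidate probabilities p^{ij} = |U^{ij}|^2
print(np.allclose(P.sum(axis=0), 1.0))      # each column sums to 1 (constraint 2)
print(np.allclose(P.sum(axis=1), 1.0))      # each row sums to 1 (constraint 3)
```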
• Postulate 3: if b and c are two complete questions, then the unitary matrix U_{bc} associated with their probability described above satisfies the equality U_{cd} = U_{cb}U_{bd}, for all b, c and d.
This third postulate implies that if we set a complete question |Q^{(i)}_c \rangle as a basis vector in a complex Hilbert space, we may then represent any other question |Q^{(j)}_b \rangle as a linear combination:
|Q^{(j)}_b \rangle = \sum_i U^{ij}_{bc} |Q^{(i)}_c \rangle.
And the conventional probability rule of quantum mechanics states that if two sets of basis vectors are in the relation above, then the probability p^{ij} is
p^{ij} = |\langle Q^{(i)}_c | Q^{(j)}_b \rangle|^2 = |U_{bc}^{ij}|^2.
The Heisenberg picture of time evolution accords most easily with RQM. Questions may be labelled by a time parameter t \rightarrow Q(t), and are regarded as distinct if they are specified by the same operator but performed at different times. Because time evolution is a symmetry in the theory (it forms a necessary part of the full formal derivation of the theory from the postulates), the set of all possible questions at time t_2 is isomorphic to the set of all possible questions at time t_1. It follows from the derivation above, by standard arguments in quantum logic, that the orthomodular lattice W(S) has the structure of the set of linear subspaces of a Hilbert space, with the relations between the questions corresponding to the relations between linear subspaces.
It follows that there must be a unitary transformation U \left( t_2 - t_1 \right) that satisfies:
Q(t_2) = U \left( t_2 - t_1 \right) Q(t_1) U^{-1} \left( t_2 - t_1 \right)
U \left( t_2 - t_1 \right) = \exp({-i \left(t_2 - t_1 \right)H})
where H is the Hamiltonian, a self-adjoint operator on the Hilbert space, and the unitary matrices U \left( t_2 - t_1 \right) form an abelian group.
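As a concrete check, the sketch below (an illustration added here, using a Pauli-matrix Hamiltonian chosen purely for the example) evolves a "question" operator in the Heisenberg picture according to the two equations above.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices: sigma_z plays the Hamiltonian H, sigma_x the question Q(t_1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

t1, t2 = 0.0, 0.7
U = expm(-1j * (t2 - t1) * sz)              # U(t_2 - t_1) = exp(-i (t_2 - t_1) H)

Q_t2 = U @ sx @ U.conj().T                  # Q(t_2) = U Q(t_1) U^{-1}, U unitary
print(np.round(Q_t2, 3))                    # sigma_x rotated in the x-y plane
```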
See also
1. ^ Wheeler (1990): pg. 3
2. ^ Rovelli, C. (1996), "Relational quantum mechanics", International Journal of Theoretical Physics, 35: 1637-1678.
3. ^ Rovelli (1996): pg. 2
4. ^ Smolin (1995)
5. ^ Crane (1993)
6. ^ Laudisa (2001)
7. ^ Rovelli & Smerlak (2006)
8. ^ Mermin, N.D. (1996, 1998).
9. ^ Garret, R. (Jan 2011) "The Quantum Conspiracy: What Popularizers Of QM Don't Want You To Know" (slides, video).
10. ^ Page, Don N., "Insufficiency of the quantum state for deducing observational probabilities", Physics Letters B, Volume 678, Issue 1, 6 July 2009, 41-44.
11. ^ Rovelli (1996): pg. 16
12. ^ Smolin (1995), pg. 13
13. ^ Bitbol (1983)
14. ^ Rovelli (1996): pg. 14
15. ^ Rovelli (1996): pg. 13
• Bitbol, M.: "An analysis of the Einstein–Podolsky–Rosen correlations in terms of events"; Physics Letters 96A, 1983: 66-70
• Crane, L.: "Clock and Category: Is Quantum Gravity Algebraic?"; Journal of Mathematical Physics 36; 1993: 6180-6193; arXiv:gr-qc/9504038.
• Everett, H.: "The Theory of the Universal Wavefunction"; Princeton University Doctoral Dissertation; in DeWitt, B.S. & Graham, R.N. (eds.): "The Many-Worlds Interpretation of Quantum Mechanics"; Princeton University Press; 1973.
• Finkelstein, D.R.: "Quantum Relativity: A Synthesis of the Ideas of Einstein and Heisenberg"; Springer-Verlag; 1996
• Garret, R.: "Quantum Mysteries Disentangled" (pdf), Nov 2001, revised Aug 2008.
• Floridi, L.: "Informational Realism"; Computers and Philosophy 2003 - Selected Papers from the Computer and Philosophy conference (CAP 2003), Conferences in Research and Practice in Information Technology, '37', 2004, edited by J. Weckert and Y. Al-Saggaf, ACS, pp. 7–12.
• Laudisa, F.: "The EPR Argument in a Relational Interpretation of Quantum Mechanics"; Foundations of Physics Letters, 14 (2); 2001: pp. 119–132; arXiv:quant-ph/0011016
• Laudisa, F. & Rovelli, C.: "Relational Quantum Mechanics"; The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.);online article.
• Mermin, N.D.: "The Ithaca Interpretation of Quantum Mechanics"; Pramana , 51 (1996): 549-565, arXiv:quant-ph/9609013.
• Mermin, N.D.: "What is Quantum Mechanics Trying to Tell us?"; American Journal of Physics, 66 (1998): 753-767, arXiv:quant-ph/9801057.
• Rovelli, C. & Smerlak, M.: "Relational EPR"; Preprint: arXiv:quant-ph/0604064.
• Rovelli, C.: "Relational Quantum Mechanics"; International Journal of Theoretical Physics 35; 1996: 1637-1678; arXiv:quant-ph/9609002.
• Smolin, L.: "The Bekenstein Bound, Topological Quantum Field Theory and Pluralistic Quantum Field Theory"; Preprint: arXiv:gr-qc/9508064.
• Wheeler, J. A.: "Information, physics, quantum: The search for links"; in Zurek,W., ed.: "Complexity, Entropy and the Physics of Information"; pp 3–28; Addison-Wesley; 1990.
External links |
e5a140cc4ea59eec | Tuesday, April 29, 2008
Pop goes the housing bubble
Figure via Calculated Risk. Larger version here
I believe the outlines of the bust are becoming as visible as the bubble itself was to any astute observer a few years ago. But no bottom yet! If I had to guess, I'd say we are going to give back most of the integral over the curve from 1997 to 2007 or so, net of overall inflation during that period (say 20-30%). In other words, extend the blue trend line beyond the early nineties, and integrate your favorite curve minus this trend line from 1997-2007 to get the overvaluation. (Note the graph is of year over year price changes in nominal dollars, not absolute price.)
Or, just look at the figure below to see that prices might have to drop 30-40% to return to consistency with the long term trend.
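As a sketch of the estimate being described (with made-up year-over-year numbers purely for illustration; these are not Case-Shiller data), cumulating real appreciation minus an assumed long-run trend gives the implied give-back:

```python
import numpy as np

# Hypothetical real YoY price changes, 1997-2007, invented for illustration only.
yoy_real = np.array([2, 3, 4, 5, 6, 7, 8, 9, 8, 4, -2]) / 100.0
trend = 0.01                                # assumed long-run real trend, ~1%/yr

# Integral of the curve minus the trend line, in log-price terms.
log_overvaluation = np.sum(yoy_real - trend)
drop = 1 - np.exp(-log_overvaluation)
print(f"implied decline to revert to trend: ~{100 * drop:.0f}%")  # ~35% here
```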
As we've discussed before, house price increases tend to be quite modest if measured in real dollars. (Except, of course, for bubbles and special cases :-)
The Case-Shiller national index will probably be off close to 12% YoY (will be released in late May). Currently (as of Q4) the national index is off 10.1% from the peak.
The composite 10 index (10 large cities) is off 13.6% YoY. (15.8% from peak)
The composite 20 index is off 12.7% YoY. (14.8% from peak)
A poem for the West
To get a feel for the reaction of average Chinese towards criticism from the West, read the excerpt below from a poem now circulating on the internet. (See also video version at bottom.) If you are not familiar with some of the historical references, you might ask someone like Howard Zinn to explain them to you.
by Anonymous
...When we closed our doors, You smuggled drugs to open markets.
When we tried to put the broken pieces back together again, Free Tibet you screamed, It was an Invasion!
When we tried Communism, You hated us for being Communist.
When we embrace Capitalism, You hate us for being Capitalist.
When we had a billion people, You said we were destroying the planet.
When we tried limiting our numbers, You said we abused human rights.
When we were poor, You thought we were dogs.
When we loan you cash, You blame us for your national debts.
When we build our industries, You call us Polluters.
When we buy oil, You call it exploitation and genocide.
When you go to war for oil, You call it liberation. ...
This NYTimes article and this Time magazine blog post are good examples of how poorly the Chinese worldview is understood here.
The poem was erroneously attributed to Duo-Liang Lin, an emeritus professor of physics at SUNY Buffalo. Professor Lin writes that he is not the author and doesn't know who is.
On 4/25/2008 at 7:56 AM, Duoliang Lin wrote:
Dear Friends,
Thank you for your enthusiastic praise and support. Several of you have asked for my authorization for translation into Chinese and/or reprinting. Since this was an anonymous poem circulating in the email, I suppose that the author would not mind to be quoted, translated or reprinted. But I was not the author of the poem. Please see below.
This is to clarify that the poem circulated in the email recently was not my work. I received it via email last week. There was no author shown. I read it with great interest and was impressed very much. I then decided to share it with my friends through my email network. Apparently some of them forwarded it to their friends, and in a few days, it has reached a large number of readers. Because my email is set with a signature block, some of the recipients assumed that I was the author. This is a misunderstanding and I should not be credited for its success.
I appreciate compliments from many within the last few days, but I must say that I am not the one to be credited. I am trying to trace back the email routes to see if I can find the original author.
I was informed today that it was also quoted in Wall Street Journal: There has been a poem by an anonymous author circulating in the internet recently. I feel relieved because I was not cited as the author. Thank you for your attention.
Here is a nice observation, originally due to Henry Kissinger, which appeared in the comments below:
...America needs to understand that a hectoring tone evokes in China memories of imperialist condescension and is not appropriate in dealing with a country that has managed 4,000 years of uninterrupted self-government.
As a new century begins, the relations between China and the United States may well determine whether our children will live in turmoil even worse than the 20th century or whether they will witness a new world order compatible with universal aspirations for peace and progress.
Sunday, April 27, 2008
Soros tells it like it is
The Financial Crisis: An Interview with George Soros with Judy Woodruff on Bloomberg TV.
According to his estimates (see below), housing starts would have to go to zero (he gives a normal value of 600k per annum, but I think the correct number is about twice that) for several years to compensate for the inventory that will flood the market due to foreclosures. So, no recovery in the near future, and continued pressure on the dollar. He also has some nice things to say about academic economics :-)
Judy Woodruff: You write in your new book, The New Paradigm for Financial Markets,[1] that "we are in the midst of a financial crisis the likes of which we haven't seen since the Great Depression." Was this crisis avoidable?
Woodruff: How can so many smart people not realize this?
Soros: In my new book I put forward a general theory of reflexivity, emphasizing how important misconceptions are in shaping history. So it's not really unusual; it's just that we don't recognize the misconceptions.
Woodruff: Who could have? You said it would have been avoidable if people had understood what's wrong with the current system. Who should have recognized that?
Soros: The authorities, the regulators—the Federal Reserve and the Treasury—really failed to see what was happening. One Fed governor, Edward Gramlich, warned of a coming crisis in subprime mortgages in a speech published in 2004 and a book published in 2007, among other statements. So a number of people could see it coming. And somehow, the authorities didn't want to see it coming. So it came as a surprise.
Woodruff: The chairman of the Fed, Mr. Bernanke? His predecessor, Mr. Greenspan?
Soros: All of the above. But I don't hold them personally responsible because you have a whole establishment involved. The economics profession has developed theories of "random walks" and "rational expectations" that are supposed to account for market movements. That's what you learn in college. Now, when you come into the market, you tend to forget it because you realize that that's not how the markets work. But nevertheless, it's in some way the basis of your thinking.
Woodruff: How much worse do you anticipate things will get?
Soros: Well, you see, as my theory argues, you can't make any unconditional predictions because it very much depends on how the authorities are going to respond now to the situation. But the situation is definitely much worse than is currently recognized. You have had a general disruption of the financial markets, much more pervasive than any we have had so far. And on top of it, you have the housing crisis, which is likely to get a lot worse than currently anticipated because markets do overshoot. They overshot on the upside and now they are going to overshoot on the downside.
Woodruff: You say the housing crisis is going to get much worse. Do you anticipate something like the government setting up an agency or a trust corporation to buy these mortgages?
Soros: I'm sure that it will be necessary to arrest the decline because the decline, I think, will be much faster and much deeper than currently anticipated. In February, the rate of decline in housing prices was 25 percent per annum, so it's accelerating. Now, foreclosures are going to add to the supply of housing a very large number of properties because the annual rate of new houses built is about 600,000. There are about six million subprime mortgages outstanding, 40 percent of which will likely go into default in the next two years. And then you have the adjustable-rate mortgages and other flexible loans.
Problems with such adjustable-rate mortgages are going to be of about the same magnitude as with subprime mortgages. So you'll have maybe five million more defaults facing you over the next several years. Now, it takes time before a foreclosure actually is completed. So right now you have perhaps no more than 10,000 to 20,000 houses coming into the supply on the market. But that's going to build up. So the idea that somehow in the second half of this year the economy is going to improve I find totally unbelievable.
Woodruff: When you talk about currency you have more than a little expertise. You were described as the man who broke the Bank of England back in the 1990s. But what is your sense of where the dollar is going? We've seen it declining. Do you think the central banks are going to have to step in?
Soros: Well, we are close to a tipping point where, in my view, the willingness of banks and countries to hold dollars is definitely impaired. But there is no suitable alternative so central banks are diversifying into other currencies; but there is a general flight from these currencies. So the countries with big surpluses—Abu Dhabi, China, Norway, and Saudi Arabia, for example—have all set up sovereign wealth funds, state-owned investment funds held by central banks that aim to diversify their assets from monetary assets to real assets. That's one of the major developments currently and those sovereign wealth funds are growing. They're already equal in size to all of the hedge funds in the world combined. Of course, they don't use their capital as intensively as hedge funds, but they are going to grow to about five times the size of hedge funds in the next twenty years.
Are you Gork?
Slide from this talk.
Survey questions:
1) Could you be Gork the robot? (Do you split into different branches after observing the outcome of, e.g., a Stern-Gerlach measurement?)
2) If not, why? e.g.,
I have a soul and Gork doesn't
Decoherence solved all that! See previous post.
I don't believe that quantum computers will work as designed, e.g., sufficiently large algorithms or subsystems will lead to real (truly irreversible) collapse. Macroscopic superpositions larger than whatever was done in the lab last week are impossible.
QM is only an algorithm for computing probabilities -- there is no reality to the quantum state or wavefunction or description of what is happening inside a quantum computer.
Stop bothering me -- I only care about real stuff like the Higgs mass / SUSY-breaking scale / string Landscape / mechanism for high-Tc / LIBOR spread / how to generate alpha.
Wednesday, April 23, 2008
Feynman and Everett
A couple of years ago I gave a talk at the Institute for Quantum Information at Caltech about the origin of probability -- i.e., the Born rule -- in many worlds ("no collapse") quantum mechanics. It is often claimed that the Born rule is a consequence of many worlds -- that it can be derived from, and is a prediction of, the no collapse assumption. However, this is only true in a particular limit of infinite numbers of degrees of freedom -- it is problematic when only a finite number of degrees of freedom are considered.
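A toy version of the finite-N issue (my own illustration, not from the talk): for N spins each prepared in a|↑⟩ + b|↓⟩ with |a|^2 = 0.9, counting branches weights every outcome string equally, while the Born rule weights strings by their squared amplitudes. The two measures disagree sharply about the typical observed frequency of "up", and the disagreement persists at any finite N.

```python
import numpy as np
from scipy.stats import binom

N, p_born = 100, 0.9                        # 100 spins, Born probability of "up" = 0.9
k = np.arange(N + 1)                        # number of "up" outcomes in a branch

branch_measure = binom.pmf(k, N, 0.5)       # equal weight per branch: C(N, k) / 2^N
born_measure = binom.pmf(k, N, p_born)      # Born weight carried by those branches

print((k * branch_measure).sum() / N)       # ~0.5: typical branch-count frequency
print((k * born_measure).sum() / N)         # ~0.9: Born-rule expected frequency
```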
Today I noticed a fascinating paper on the arXiv posted by H.D. Zeh, one of the developers of the theory of decoherence:
Feynman's quantum theory
H. D. Zeh
(Submitted on 21 Apr 2008)
A historically important but little known debate regarding the necessity and meaning of macroscopic superpositions, in particular those containing different gravitational fields, is discussed from a modern perspective.
The discussion analyzed by Zeh, concerning whether the gravitational field need be quantized, took place at a relativity meeting at the University of North Carolina in Chapel Hill in 1957. Feynman presents a thought experiment in which a macroscopic mass (source for the gravitational field) is placed in a superposition state. One of the central points is necessarily whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman's (early) thoughts on many worlds and his exposure to Everett's ideas, which apparently occurred even before their publication (see below).
Nowadays no one doubts that large and complex systems can be placed in superposition states. This capability is at the heart of quantum computing. Nevertheless, few have thought through the implications for the necessity of the "collapse" of the wavefunction describing, e.g., our universe as a whole. I often hear statements like "decoherence solved the problem of wavefunction collapse". I believe that Zeh would agree with me that decoherence is merely the mechanism by which the different Everett worlds lose contact with each other! (And, clearly, this was already understood by Everett to some degree.) Incidentally, if you read the whole paper you can see how confused people -- including Feynman -- were about the nature of irreversibility, and the difference between effective (statistical) irreversibility and true (quantum) irreversibility.
Zeh: ... Quantum gravity, which was the subject of the discussion, appears here only as a secondary consequence of the assumed absence of a collapse, while the first one is that "interference" (superpositions) must always be maintained. ... Because of Feynman's last sentence it is remarkable that neither John Wheeler nor Bryce DeWitt, who were probably both in the audience, stood up at this point to mention Everett, whose paper was in press at the time of the conference because of their support [14]. Feynman himself must have known it already, as he refers to Everett's "universal wave function" in Session 9 – see below.
... Toward the end of the conference (in the Closing Session 9), Cecile DeWitt mentioned that there exists another proposal that there is one "universal wave function". This function has already been discussed by Everett, and it might be easier to look for this "universal wave function" than to look for all the propagators. Feynman said that the concept of a "universal wave function" has serious conceptual difficulties. This is so since this function must contain amplitudes for all possible worlds depending on all quantum-mechanical possibilities in the past and thus one is forced to believe in the equal reality [sic!] of an infinity of possible worlds.
Well said! Reality is conceptually difficult, and it seems to go beyond what we are able to observe. But he is not ready to draw this ultimate conclusion from the superposition principle that he always defended during the discussion. Why should a superposition not be maintained when it involves an observer? Why “is” there not an amplitude for me (or you) observing this and an amplitude for me (or you) observing that in a quantum measurement – just as it would be required by the Schrödinger equation for a gravitational field? Quantum amplitudes represent more than just probabilities – recall Feynman’s reply to Bondi’s first remark in the quoted discussion. However, in both cases (a gravitational field or an observer) the two macroscopically different states would be irreversibly correlated to different environmental states (possibly including you or me, respectively), and are thus not able to interfere with one another. They form dynamically separate “worlds” in this entangled quantum state.
... Feynman then gave a resume of the conference, adding some "critical comments", from which I here quote only one sentence addressed to mathematical physicists:
Feynman: "Don't be so rigorous or you will not succeed."
(He explains in detail how he means it.) It is indeed a big question what mathematically rigorous theories can tell us about reality if the axioms they require are not, or not exactly, empirically founded, and in particular if they do not even contain the most general axiom of quantum theory: the superposition principle. It was the important lesson from decoherence theory that this principle holds even where it does not seem to hold. However, many modern field theorists and cosmologists seem to regard quantization as of secondary or merely technical importance (just providing certain "quantum corrections") for their endevours, which are essentially performed by using classical terms (such as classical fields). It is then not surprising that the measurement problem never comes up for them. How can anybody do quantum field theory or cosmology at all nowadays without first stating clearly whether he/she is using Everett’s interpretation or some kind of collapse mechanism (or something even more speculative)?
Previous posts on many worlds quantum mechanics.
Tuesday, April 22, 2008
Deep inside the subprime crisis
Moody's walks Roger Lowenstein (writing for the Times Sunday magazine) through the construction, rating and demise of a pool of subprime mortgage securities. Some readers may have thought the IMF was exaggerating when it forecast up to $1 trillion in future losses from the credit bubble. After reading the following you will see that it's not an implausible number, and it will be clear why the system is paralyzed in dealing with (marking to market) the complicated securities (CDOs, etc.) that are contaminating the balance sheets of banks, investment banks, hedge funds, pension funds, sovereign wealth funds, etc. around the world.
Here's a quick physicist's calculation: roughly 10 million houses sold per year, assume that 10% of these mortgages are bad and will cost the issuer $100k to foreclose and settle. That means $100B per year in losses. Over the whole bubble, perhaps $300-500B in losses, which is more or less what the IMF estimates as the residential component of credit bubble losses (the rest of the trillion comes from commercial and corporate lending and consumer credit).
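The arithmetic, spelled out (same round numbers as above, nothing more):

```python
homes_sold_per_year = 10_000_000            # rough US sales volume
bad_fraction = 0.10                         # assume 10% of mortgages go bad
loss_per_foreclosure = 100_000              # assumed cost to the issuer, dollars

annual_loss = homes_sold_per_year * bad_fraction * loss_per_foreclosure
print(f"${annual_loss / 1e9:.0f}B per year")                              # $100B
print(f"${3 * annual_loss / 1e9:.0f}B-${5 * annual_loss / 1e9:.0f}B over the bubble")
```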
The internet bubble, with irrational investors buying shares of pet food e-commerce companies, was crazy. Read the excerpts below and you'll see that our recent housing boom was even crazier and at an unimaginably larger scale. (Note similar bubbles in the UK, Spain and in China.)
The best predictor, going forward, of mortgage default rates (not just subprime, but even prime mortgages) in a particular region will likely be the decline in home prices in that region. The incentive for a borrower to default on his or her mortgage is the amount by which they are "upside down" on the loan -- the amount by which their indebtedness exceeds the value of the home. Since we can't forecast price declines very well -- indeed, it's a nonlinear problem, with more defaults leading to more price declines, leading to more defaults -- we can't price the derivative securities built from those mortgages.
Efficient markets! ;-)
The figure above compares Case-Shiller data on the current bust (magenta) to the bust of the 80s-90s (blue). (Click for larger version.) You can see we have some way to go before all the fun ends.
Wall Street (Oliver Stone):
Monday, April 21, 2008
Returns to elite education
In an earlier post I discussed a survey of honors college students here at U Oregon, which revealed that very few had a good understanding of elite career choices outside of the traditional ones (law, medicine, engineering, etc.). It's interesting that, in the past, elite education did not result in greater average earnings once SAT scores are controlled for (see below). But I doubt that will continue to be the case today: almost half the graduating class at Harvard now head into finance, while the top Oregon students don't know what a hedge fund is.
NYTimes: ...Recent research also suggests that lower-income students benefit more from an elite education than other students do. Two economists, Alan B. Krueger and Stacy Berg Dale, studied the earnings of college graduates and found that for most, the selectivity of their alma maters had little effect on their incomes once other factors, like SAT scores, were taken into account. To use a hypothetical example, a graduate of North Carolina State who scored a 1200 on the SAT makes as much, on average, as a Duke graduate with a 1200. But there was an exception: poor students. Even controlling for test scores, they made more money if they went to elite colleges. They evidently gained something like closer contact with professors, exposure to new kinds of jobs or connections that they couldn’t get elsewhere.
“Low-income children,” says Mr. Krueger, a Princeton professor, “gain the most from going to an elite school.”
I predict that, in the future, the returns to elite education for the middle and even upper middle class will resemble those in the past for poor students. Elite education will provide the exposure to new kinds of jobs or connections that they couldn't get elsewhere. Hint: this means the USA is less and less a true meritocracy.
It's also interesting how powerful the SAT (which correlates quite strongly with IQ, which can be roughly measured in a 12 minute test) is in predicting life outcomes: knowing that a public university grad scored 99th percentile on the SAT (or brief IQ test) tells you his or her expected income is equal to that of a Harvard grad (at least that was true in the past). I wonder why employers (other than the US military) aren't allowed to use IQ to screen employees? ;-) I'm not an attorney, but I believe that when DE Shaw or Google ask a prospective employee to supply their SAT score, they may be in violation of the law.
Saturday, April 19, 2008
Trading on testosterone
Take with a boulder-sized grain of salt. Cause and effect? Only an eight-day interval? Couldn't that have been an exceptional period over which aggressiveness paid off?
NYTimes: MOVEMENTS in financial markets are correlated to the levels of hormones in the bodies of male traders, according to a study by two researchers from the University of Cambridge (newscientist.com).
Wednesday, April 16, 2008
Crossfit: cult or ultimate training?
Having played a lot of sports and done a lot of physical training, it's not often that I see something in the gym that shocks me.
But recently I came across the Crossfit training system. It's based around short, hyper intense workouts using basic bodyweight gymnastic moves (pushups, pullups, burpees, rope climbing), olympic and power lifts (cleans, jerks, presses, squats) and track sprints and rowing. The goal is to engage the large muscle groups and push them to both anaerobic and aerobic failure at the same time. For experienced athletes, the idea of using olympic lifts for cardiovascular stress training seems over the top, but anyone who can survive this is going to get very, very fit.
The founder of Crossfit, former gymnast Greg Glassman, is the guru behind this movement. He rails against bodybuilders who lack functional strength, and runners, cyclists and triathletes who are so specialized that they lack overall athleticism. (He doesn't have any bad words for ultimate fighters, though, some of whom use his system :-) The point I think Glassman overlooks is that the traditional training methods are meant to minimize injury and allow regular performance by an average person. It's telling that Glassman, 49, doesn't Crossfit train anymore. (See this NYTimes profile from a few years ago; the followup reader discussion is very good.)
If you have any athletic background at all (endurance training doesn't count -- it's gotta be something with a little explosiveness and testosterone ;-), watch the videos and tell me you are not freaked out.
More video:
Uneven Grace mov wmv
(check out the women doing 30 clean and jerks with 85lbs in 5-7 minutes!)
GI Jane mov wmv
(pushup, burpee, pullup -- basic, but so brutal. Greg Amundson is a badass!)
Interview: Coach Greg Glassman
CFJ: What’s wrong with fitness training today?
Coach Glassman: The popular media, commercial gyms, and general public hold great interest in endurance performance. Triathletes and winners of the Tour de France are held as paradigms of fitness. Well, triathletes and their long distance ilk are specialists in the world of fitness, and the forces of combat and nature do not favor the performance model they embrace. The sport of competitive cycling is full of amazing people doing amazing things, but they cannot do what we do. They are not prepared for the challenges that our athletes are. The bodybuilding model of isolation movements combined with insignificant metabolic conditioning similarly needs to be replaced with a strength and conditioning model that contains more complex functional movements with a potent systemic stimulus. Sound familiar? Senior citizens and U.S. Marine Combatant Divers will most benefit from a program built entirely from functional movement.
CFJ: What about aerobic conditioning?
Coach Glassman: I know you’re messing with me – trying to get me going. Look, why is it that a 20-minute bout on the stationary bike at 165 bpm is held by the public to be good cardiovascular work, whereas a mixed-mode workout keeping athletes between 165-195 bpm for twenty minutes inspires the question, ”what about aerobic conditioning?” For the record, the aerobic conditioning developed by CrossFit is not only high-level, but more importantly, it is more useful than the aerobic conditioning that comes from regimens comprised entirely of monostructural elements like cycling, running, or rowing. Now that should start some fires! Put one of our guys in a gravel shoveling competition with a pro cyclist and our guy smokes the cyclist. Neither guy trains by shoveling gravel, so why does the CrossFit guy dominate? Because CrossFit’s workouts better model high-demand functional activities. Think about it – a circuit of wall ball, lunges and deadlift/highpull at max heart rate better matches more activities than does cycling at any heart rate.
What good is happiness if it can't buy money?
This NYTimes article covers recent results in happiness research, which shows that money does buy happiness after all ;-) The new data seem to show a stronger correlation between average happiness and economic development than earlier studies which had led to the so-called Easterlin paradox. One explanation for the divergence between old and new data is that people around the world are now more aware of how others in developed countries live, thanks to television and the internet. That makes them less likely to be content if their per capita incomes are low (see the hedonic treadmill below). The old data showed surprisingly little correlation between average income and happiness, but 30-50 years ago someone living in Malawi might have been blissfully unaware of what he or she was missing. See the article for links to the research papers and a larger version of the figure. Also see these reader comments from the Times, which range from the "happiness is a state of mind" variety to "money isn't everything but it's way ahead of whatever is in second place."
In previous posts we've discussed the hedonic treadmill, which is based on the idea of habituation. If your life improves (e.g., move into a nicer house, get a better job, become rich), you feel better at first, but rapidly grow accustomed to the improvement and soon want even more. This puts you on a treadmill from which it is difficult to escape. The effect is especially pernicious if you adjust your perceived peer group as you progress (rivalrous thinking) -- there is always someone else who is richer and more successful than you are! Note, the hedonic treadmill is not inconsistent with an overall correlation between happiness and income or wealth. It just suggests diminishing returns due to psychological adjustment.
Monday, April 14, 2008
John Wheeler, dead at 96
John Archibald Wheeler, one of the last great physicists of a bygone era, has died. He outlived most of his contemporaries (Bohr, Einstein, Oppenheimer) and even some of his students, like Feynman.
I mentioned the history of black holes in general relativity in an earlier post on J.R. Oppenheimer:
Friday, April 11, 2008
Young and Restless in China
Looks like a fascinating documentary, profiling nine young people trying to make it in modern China. Among those profiled are a US-educated entrepreneur, a hip hop artist, an environmental lawyer and a migrant factory worker. It's meant to be a longitudinal study like Michael Apted's Up series in the UK, so look for future installments. Interview with the filmmaker on the Leonard Lopate show. (I highly recommend Lopate's podcasts -- he's the sharpest interviewer I've found in arts, literature and contemporary culture. Not exactly your guy for science or economics, though.)
More clips from the film.
PS Forget about Tibet. The vast majority of (Han) Chinese consider it part of China. Let's restore the Navajo nation to its pre-European contact independence before lecturing China about Tibet.
Wednesday, April 09, 2008
$1 trillion in losses?
We've had $200B in write-downs so far. The Fed has taken about $300B of shaky debt onto its balance sheet. The IMF is talking about a global bailout of the US economy.
It took Japan well over a decade to clean up its banking system after their property bubble burst. I doubt the US is going to take all of its bitter medicine at once. Whither the dollar?
The figure below first appeared on this blog in 2005:
Financial Times: IMF fears credit crisis losses could soar towards $1 trillion
Losses from the credit crisis by financial institutions worldwide are expected to balloon to almost $1 trillion (£507 billion), threatening to trigger severe economic fallout, the International Monetary Fund said yesterday.
In a grim assessment of the deepening crisis delivered days before ministers from the Group of Seven leading economies meet in Washington, the IMF warns governments, central banks and regulators that they face a crucial test to stem the turmoil.
“The critical challenge now facing policymakers is to take immediate steps to mitigate the risks of an even more wrenching adjustment,” it says in its twice-yearly Global Financial Stability Report.
The IMF sounds an alert over the danger that banks’ escalating losses, along with credit market uncertainties, could prompt a vicious downward spiral as they weaken economies and asset prices, leading to higher unemployment, more loan defaults and still deeper losses. “This dynamic has the potential to be more severe than in previous credit cycles, given the degree of securitisation and leverage in the system,” the Fund argues.
It says that it is clear that global financial upheavals are now more than just a shortage of ready funds, or liquidity, but are rooted in “deep-seated fragilities” among banks with too little capital. This “means that its effects are likely to be broader, deeper and more protracted”, the report concludes.
“A broadening deterioration of credit is likely to put added pressure on systemically important financial institutions,” it adds, saying that the risks have increased of a full-blown credit crunch that could undercut economic growth.
The warning came as Kenneth Rogoff, a former chief economist at the IMF and currently Professor of Economics at Harvard University, said that there was a “likely possibility” that the Fund will have to coordinate a global policy package to prop up the US economy. “They [the US] would not go for a conventional bail-out from the IMF. The IMF could not afford it – they have around $200 billion, which the US would burn through in a matter of months. It would be a package where various countries would try and prop up global demand to cushion the US economy.” He added: “The US is going to be looking for help to prevent this banking and housing problem from getting worse.”
The report also highlights the threat posed by the rapid spread of the credit crisis from its roots in the US sub-prime home loans to more mainstream lending markets worldwide.
While banks have so far declared losses and writedowns over the crisis totalling $193 billion, the IMF expects the ultimate toll to reach $945 billion.
Global banks are expected to shoulder about half of the total losses – between $440 and $510 billion – with the rest being borne by insurance companies, pension funds, hedge funds and money market funds, and other institutional investors, predominantly in the US and Europe.
Most of the losses are expected to stem from defaults in the US, with $565 billion written off in sub-prime and prime mortgages and a further $240 billion to be lost on commercial property lending. Losses on corporate loans are projected to mount to an eventual $120 billion and those on consumer lending to $20 billion.
Monday, April 07, 2008
The New Math
Alpha magazine has a long article on the current state of quant finance. It may be sample bias, but former theoretical physicists predominate among the fund managers profiled.
I've always thought theoretical physics was the best training for applying mathematical techniques to real world problems. Mathematicians seldom look at data, so are less likely to have the all-important intuition for developing simple models of messy systems, and for testing models empirically. Computer scientists generally don't study the broad variety of phenomena that physicists do, and although certain sub-specialties (e.g., machine learning) look at data, many do not. Some places where physics training can be somewhat weak (or at least uneven) include statistics, computation, optimization and information theory, but I've never known a theorist who couldn't pick those things up quickly.
Physicists have a long record of success in invading other disciplines (biology, computer science, economics, engineering, etc. -- I can easily find important contributions in those fields from people trained in physics, but seldom the converse). Part of the advantage might be pure horsepower -- the threshold for completing a PhD in theoretical physics is pretty high. However, a colleague once pointed out that the standard curriculum of theoretical physics is basically a collection of the most practically useful mathematical techniques developed by man -- the high points and greatest hits! Someone trained in that tradition can't help but have an advantage over others when asked to confront a new problem.
Having dabbled in fields like finance, computer science and even biology, I've come to consider myself as a kind of applied mathematician (someone who applies mathematical ideas to the real world) who happens to have had most of his training from working on physical systems. I suspect that physicists who have left the field, as well as practitioners of biophysics, econophysics, etc. might feel the same way.
Readers of this blog sometimes accuse me of a negative perspective towards physics. Quite the contrary. Although I might not be optimistic about career prospects within physics, or the current state of the field, I can't think of any education which gives a richer understanding of the world, or a greater chance of contributing to it.
...Finkelstein, who also grew up in Kharkov, has a Ph.D. in theoretical physics from New York University and a master’s degree in the same discipline from the Moscow Institute of Physics and Technology. Before joining Horton Point as chief science officer, he was head of quantitative credit research at Citadel Investment Group in Chicago.
Most of the 12 Ph.D.s at Horton Point’s Manhattan office are researching investment strategies and ways to apply scientific principles to finance. The firm runs what Finkelstein, 54, describes as a factory of strategies, with new models coming on line all the time. “It’s not like we plan to build ten strategies and sit on them,” he says. “The challenge is to keep it going, to keep this factory functioning.”
Along with his reservations about statistical arbitrage, Sogoloff is wary of quants who believe the real world is obliged to conform to a mathematical model. He acknowledges the difficulty of applying scientific disciplines like genetics or chaos theory — which purports to find patterns in seemingly random data — to finance. “Quantitative work will be much more rewarding to the scientist if one concentrates on those theories or areas that attempt to describe nonstable relationships,” he says.
Sogoloff sees promise in disciplines that deal with causal relationships rather than historical ones — like mathematical linguistics, which uses models to analyze the structure of language. “These sciences did not exist five or ten years ago,” he says. “They became possible because of humongous computational improvements.”
However, most quant shops aren’t exploring such fields because it means throwing considerable resources at uncertain results, Sogoloff says. Horton Point has found a solution by assembling a global network of academics whose research could be useful to the firm. So far the group includes specialists in everything from psychology to data mining, at such schools as the Beijing Institute of Technology, the California Institute of Technology and Technion, the Israel Institute of Technology.
Sogoloff tells the academics that the goal is to create the Bell Labs of finance. To align both parties’ interests, Horton Point offers them a share of the profits should their work lead to an investment strategy. Scientists like collaborating with Horton Point because it combines intellectual freedom with the opportunity to test their theories using real data, Sogoloff says. “You have experiments that can be set up in a matter of seconds because it’s a live market, and you have the potential for an amazing economic benefit.” ...
Friday, April 04, 2008
Credit crisis for pedestrians
Here is a 40 minute discussion of the credit crisis on NPR's Fresh Air. The "expert" is a law professor with a tenuous grasp of finance, a love of regulation and an axe to grind against Wall St. and former Senator Phil Gramm. Terry Gross, ordinarily an astute interviewer, can't seem to get beyond concepts like big bets at a big casino by unregulated fat cats. 8-/
Tuesday, April 01, 2008
Hsu scholarship at Caltech
I donated a number of shares in my previous startup (SafeWeb, Inc., acquired by Symantec in 2003) to endow a permanent undergraduate scholarship in memory of my father. In the course of setting up the scholarship I had to assemble a brief bio of my dad, which I thought I would post here on the Internet, to preserve for posterity.
The first recipient of the scholarship was a student from Shanghai, who had won a gold medal in the International Physics Olympiad. The second recipient was a woman from Romania. I encourage all of my friends in the worlds of technology and finance to give back to the institutions from which they received their educations.
Cheng Ting Hsu Scholarship
This scholarship was endowed on behalf of Cheng Ting Hsu by his son Stephen Hsu, Caltech class of 1986. It is to be awarded in accordance with Institute policies to the most qualified international student each year. Preference is to be given to applicants from Chinese-speaking countries: China (including Hong Kong), Taiwan and Singapore. Also, preference should be given, if possible, to those with outstanding academic qualifications (such as, but not limited to, performance in national-level competitions in math, physics or computer science or other similar distinction).
If the recipient is a continuing (rather than incoming) student, academic qualification can be based on GPA at Caltech, or other outstanding performance (such as, but not limited to, performance on competitive exams such as those in computer programming or mathematics, or outstanding research work).
Cheng Ting Hsu was born December 1, 1923 in Wenling, Zhejiang province, China. His grandfather, Zan Yao Hsu was a poet and doctor of Chinese medicine. His father, Guang Qiu Hsu graduated from college in the 1920's and was an educator, lawyer and poet. Cheng Ting was admitted at age 16 to the elite National Southwest Unified University, which was created during WWII by merging Tsinghua, Beijing and Nankai Universities. This university produced numerous famous scientists and scholars such as the physicists C.N. Yang and T.D. Lee. Cheng Ting studied aerospace engineering (originally part of Tsinghua), graduating in 1944. He became a research assistant at China's Aerospace Research Institute and a lecturer at Sichuan University. He also taught aerodynamics for several years to advanced students at the air force engineering academy.
In 1946 he was awarded one of only two Ministry of Education fellowships in his field to pursue graduate work in the United States. In 1946-1947 he published a three-volume book, co-authored with Professor Li Shoutong on the structures of thin-walled airplanes. In January, 1948, he left China by ocean liner, crossing the Pacific and arriving in San Francisco. In March of 1948 he began graduate work at the University of Minnesota, receiving his masters degree in 1949 and PhD in 1954. During this time he was also a researcher at the Rosemount Aerospace Research Institute in Minneapolis.
In 1958 Cheng Ting was appointed associate professor of aerospace engineering at Iowa State University. He was one of the founding faculty members of the department and became a full professor in 1962. During his career he supervised about 30 Masters theses and PhD dissertations. His research covered topics including jet propulsion, fluid mechanics, supersonic shock waves, combustion, magneto-hydrodynamics, vortex dynamics (tornados) and alternative energy (wind turbines). He published widely, in scientific journals ranging from physics to chemistry and aerodynamics.
Professor Hsu retired from Iowa State University in 1989 due to ill health, becoming Professor Emeritus. He passed away in 1996.
General Chemistry
Intermolecular Forces
Be prepared to describe and identify the intermolecular/interparticle forces present in matter and relate these forces to the relative magnitude of physical properties such as vapor pressure, boiling point, melting point, viscosity, surface tension, and solubility.
Chemical Equilibrium
Be prepared to develop an equilibrium constant expression for a chemical reaction. Know the significance of a large Keq versus a small Keq. Know how to apply Le Châtelier's Principle to perturbations of an equilibrium system. Make sure you could calculate a reaction quotient (Q), and determine the direction of "shift" to reach equilibrium and determine the new equilibrium concentrations of all species in a chemical reaction.
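For instance, the Q-versus-Keq comparison can be sketched in a few lines of Python; the reaction (A + B ⇌ C), the Keq, and the concentrations below are all made-up illustration values:

```python
# Reaction quotient for A + B <=> C, where Q = [C] / ([A][B]).
def reaction_quotient(conc_A, conc_B, conc_C):
    return conc_C / (conc_A * conc_B)

K_eq = 10.0  # assumed equilibrium constant for illustration
Q = reaction_quotient(conc_A=0.50, conc_B=0.20, conc_C=0.10)

if Q < K_eq:
    print(f"Q = {Q:.2f} < Keq: reaction shifts right (toward products)")
elif Q > K_eq:
    print(f"Q = {Q:.2f} > Keq: reaction shifts left (toward reactants)")
else:
    print("Q = Keq: already at equilibrium")
```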
Gas Laws
Be prepared to work problems using the ideal gas law. Know how to rearrange this expression to solve for gas density or to find the molecular weight or mass of a gas. Know under what conditions one can utilize Boyle’s Law, Charles’ Law, Avogadro’s Law, or the Combined Gas Law and be able to solve problems related to these laws.
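As an illustration of rearranging PV = nRT with n = m/M to get M = mRT/PV, here is a small Python sketch; the sample gas data are hypothetical:

```python
R = 0.08206  # L·atm/(mol·K)

def molar_mass(mass_g, P_atm, V_L, T_K):
    # From PV = nRT with n = m/M:  M = m R T / (P V)
    return mass_g * R * T_K / (P_atm * V_L)

# e.g., 1.40 g of gas occupying 1.00 L at 1.00 atm and 273 K:
print(f"M = {molar_mass(1.40, 1.00, 1.00, 273.0):.1f} g/mol")  # ~31.4
```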
Know the key points of the Kinetic Molecular Theory for an Ideal Gas and know under what conditions gases tend to behave non-ideally. Know, in general terms, what corrections have to be made to the ideal gas law under non-ideal conditions.
Basic Quantitation
Be able to write and balance a chemical equation. Be able to perform basic stoichiometry problems, including limiting reagent problems.
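A limiting-reagent check is just a moles-to-coefficient comparison; a minimal Python sketch with assumed amounts:

```python
# The reactant with the smallest (moles available / stoichiometric
# coefficient) ratio runs out first and limits the reaction.
def limiting_reagent(moles, coeffs):
    # moles and coeffs are dicts keyed by species name
    return min(moles, key=lambda s: moles[s] / coeffs[s])

# e.g., 2 H2 + O2 -> 2 H2O with 3.0 mol H2 and 2.0 mol O2:
print(limiting_reagent({"H2": 3.0, "O2": 2.0}, {"H2": 2, "O2": 1}))  # H2
```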
Know how to calculate enthalpy changes associated with chemical reactions by utilizing the reaction stoichiometry. Know how to apply Hess’ Law.
Describe how one could make a specific volume of a desired solution of known concentration or prepare a given volume of a dilute solution (concentration given) from a more concentrated solution.
Atomic Structure and Periodic Properties
You should be able to describe the quantum mechanical view of the atom and assign quantum numbers for electrons within an atom. You should be able to generate electron configurations for atoms or ions. You should be able to identify and describe the importance of “valence” electrons. You should know the origins of paramagnetic versus diamagnetic behavior. You should know the meaning of “effective nuclear charge” and be able to express how the “n” value and the ENC give rise to the periodic trends of atomic size and ionization energy.
Kinetics
You should be able to describe ways to "speed up" a chemical reaction. You should be able to draw and/or interpret a reaction energy diagram (i.e., be able to identify: reactants, intermediates, transition states, activation energies, rate-determining step, overall energy changes…). You should know the general form for a rate law and be able to generate a rate law from experimental data. You should know the concept of "order" with respect to an individual reactant and the overall reaction. You should be able to evaluate the plausibility of suggested reaction mechanisms by comparison to a known rate law.
Organic Chemistry
Nucleophilic Substitution
What characteristics of an alkyl halide are important in determining whether nucleophilic displacement will proceed by an SN1 or an SN2 reaction, and why do these characteristics play a part? What characteristics of the attacking nucleophile, the leaving group, and the solvent play a part? How would one determine experimentally (two ways) whether a reaction of this sort would be SN1 or SN2?
Electrophilic Addition of Alkenes
Upon addition of unsymmetrical molecules across an unsymmetrical carbon-carbon double bond, what factors govern the regio-selectivity for the molecular fragments and the final structure of the product? How can one control the selectivity? Use mechanisms and intermediate stability in your explanations.
Aromatic Substitution
Be able to discuss electrophilic aromatic substitution with emphasis on how and why the inductive effects of ring substituents affect structures of products and the rate of the reactions. Know the 5 reactions that fall in this category and ways of modifying the groups already on the benzene ring.
Structural Determination from Spectroscopy and Spectrometry
Be able to suggest a structure for a substance based on NMR, FTIR, and mass spectrometry data. Alternatively, be able to draw the above spectra if molecular structures are given. Know the most important absorption values for proton NMR and IR.
Grignard Reactions
Be able to discuss the Grignard reaction, identifying starting materials, different products and when each product would be expected, and the complexities of actually running Grignard reactions. Show the mechanisms by which the products are formed.
Types of Reactions
Be able to identify reactions as addition, condensation, hydrolysis, elimination, rearrangement, substitution, oxidation or reduction.
Buffers
Be prepared to describe the components of a buffer and know which factors determine "buffering capacity". You should know the basic biologic mechanisms for buffering the body's fluids and the reasons behind the requirement for buffering of the body's fluids. You should know how one would select and then prepare a buffer for use in the lab. (Implicit in this is that you must know the Henderson-Hasselbalch equation and be able to work problems utilizing this equation.) You should be able to generate and/or interpret titration curves for both monoprotic and polyprotic acids.
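For the Henderson-Hasselbalch calculation itself, pH = pKa + log10([A-]/[HA]), a short Python sketch (the acetate-buffer numbers are used only as an example):

```python
import math

def buffer_pH(pKa, conc_base, conc_acid):
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
    return pKa + math.log10(conc_base / conc_acid)

# Acetate buffer, pKa = 4.76 (illustrative values):
print(f"pH = {buffer_pH(4.76, 0.10, 0.10):.2f}")  # equal parts -> pH = pKa
print(f"pH = {buffer_pH(4.76, 0.20, 0.10):.2f}")  # 2:1 base:acid -> 5.06
```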
Hemoglobin and Myoglobin
You should be able to discuss the structural and functional similarities and differences between hemoglobin and myoglobin. Be able to discuss in detail the various species that influence hemoglobin’s affinity for oxygen (i.e. cooperativity of O2 binding, H+, CO2, 2,3-DPG). You should also be able to compare and contrast important isoforms of hemoglobin including HbA, HbS, HbF, and HbA1c.
Enzyme Regulation and Signal Transduction
Be able to discuss different ways that enzymes’ activities are regulated, including both reversible and irreversible regulation. This means that you should be able to discuss competitive, noncompetitive, and uncompetitive inhibitors as well as allosteric effectors. You should know the roles of kinases and phosphatases. You should understand the concepts of zymogen activation and feedback inhibition. You should also be able to describe the signal transduction cascade responsible for generating the second messenger cAMP and the activation of Protein Kinase A.
Physical Chemistry I
Be prepared to discuss various thermodynamic variables (e.g., heat, work, temperature, internal energy, enthalpy, entropy, Gibbs free energy, etc.) and how they relate to one another. Know the steps in the Carnot Cycle and how to calculate efficiency. Be able to determine the sign (i.e. positive, negative, or zero) of q, w, ΔT, ΔU, ΔH, ΔS and ΔG for an isothermal or reversible adiabatic expansion or compression of a gas.
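Carnot efficiency is a one-line calculation, η = 1 − Tc/Th; a quick Python sketch with assumed reservoir temperatures:

```python
def carnot_efficiency(T_hot, T_cold):
    # eta = 1 - Tc/Th, temperatures in kelvin
    return 1.0 - T_cold / T_hot

# e.g., hot reservoir at 500 K, cold reservoir at 300 K:
print(f"eta = {carnot_efficiency(500.0, 300.0):.0%}")  # 40%
```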
Quantum Mechanics and Electronic Spectroscopy
Be prepared to discuss what information is contained in a wavefunction and how this information is obtained (energies, probability densities). You should know the general form of the Schrödinger equation and how it is used with particular reference to a “particle in a box” and the Hydrogen atom. Be able to discuss how “particles in a box” may be used to model electronic spectra of conjugated organic molecules.
Physical Chemistry II
Reaction Equilibrium
Be able to write the equilibrium expression for an ideal gas mixture (the K_P° equation). Be prepared to discuss the relationship between Gibbs free energy and the equilibrium constant. Be able to write the equilibrium expression for a nonideal system (examples: saturated aqueous solution of a salt, weak acid aqueous solution, autoprotolysis of water).
Kinetic Molecular Theory
Be prepared to discuss the Boltzmann distribution and the various physical properties of an ideal gas. You should be able to calculate the root-mean-square speed of a gas at room temperature. You should know the dependence of collision frequency and mean-free-path upon the various physical properties.
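The root-mean-square speed is v_rms = sqrt(3RT/M) with M in kg/mol; a Python sketch using N2 at 298 K as an assumed example:

```python
import math

R = 8.314  # J/(mol·K)

def v_rms(T_K, M_kg_per_mol):
    # v_rms = sqrt(3 R T / M)
    return math.sqrt(3 * R * T_K / M_kg_per_mol)

# N2 (0.028 kg/mol) at room temperature (298 K):
print(f"v_rms = {v_rms(298.0, 0.028):.0f} m/s")  # ~515 m/s
```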
Inorganic Chemistry
Crystal Field Stabilization Energy
Be prepared to explain what is meant by the term “crystal field stabilization energy,” CFSE. Be able to calculate the CFSE for octahedral and tetrahedral complexes. Be able to predict whether particular coordination complexes are high or low spin. Be able to relate the CFSE to the color of coordination complexes.
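A rough CFSE bookkeeping sketch in Python (octahedral complexes only, pairing energy ignored; the d-electron counts are just examples):

```python
# Octahedral CFSE in units of the splitting Delta_o:
# each t2g electron contributes -0.4, each eg electron +0.6.
def cfse_octahedral(n_d, low_spin):
    t2g = eg = 0
    if low_spin:
        t2g = min(n_d, 6)  # strong field: fill t2g completely first
        eg = n_d - t2g
    else:
        # weak field (high spin): fill all five orbitals singly, then pair
        order = (["t2g"] * 3 + ["eg"] * 2) * 2
        for i in range(n_d):
            if order[i] == "t2g":
                t2g += 1
            else:
                eg += 1
    return -0.4 * t2g + 0.6 * eg

# d6 (e.g., Fe(II)): high spin vs. low spin
print(cfse_octahedral(6, low_spin=False))  # -0.4 Delta_o (t2g^4 eg^2)
print(cfse_octahedral(6, low_spin=True))   # -2.4 Delta_o (t2g^6)
```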
Group Theory
Be able to determine the symmetry point group for a given compound. Be prepared to predict the vibrational spectra (infrared and Raman) of compounds from character tables. Be able to write a representation (the characters) of parts of a molecule (orbitals, bonds, angles, SALCs), given the point group, and to use the character tables to identify the irrep that corresponds to this representation.
Atomic Properties
Be prepared to describe trends in ionization energy, electronegativity, size, polarizability, metallicity, and electron affinity. Be able to account for exceptions to broad trends. Be prepared to predict Hard/Soft acid base behavior and solubility based on these trends.
Be prepared to identify the oxidation states of all elements in compounds and ions. Be able to balance Redox half reactions and complete reactions and to calculate reduction potentials given tables of standard reduction potentials and initial cell concentrations.
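Cell potentials away from standard conditions come from the Nernst equation, E = E° − (RT/nF) ln Q; a Python sketch with an assumed Daniell-cell example:

```python
import math

R, F = 8.314, 96485.0  # J/(mol·K), C/mol

def cell_potential(E0, n, Q, T=298.15):
    # Nernst equation: E = E0 - (R*T/(n*F)) * ln(Q)
    return E0 - (R * T / (n * F)) * math.log(Q)

# Daniell cell (E0 = 1.10 V, n = 2) with Q = [Zn2+]/[Cu2+] = 10:
print(f"E = {cell_potential(1.10, 2, 10.0):.3f} V")  # ~1.070 V
```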
Analytical Chemistry
Be able to describe the instrumental set-up for both Gas and Liquid Chromatography. That should include the types of injectors, columns, and detectors. As part of the discussion you should be able to delineate characteristic advantages and disadvantages of each component. For LC, be able to distinguish between normal phase and reversed phase separation. You should be able to tell what type of instrument is best for different types of samples. An ability to discuss the instrumentation and capabilities of the hyphenated techniques of GC-MS and LC-MS is also important.
Atomic and Molecular Spectroscopy
Be able to describe the instrumental set-up for the following types of spectrometers: AA, ICP-AE, MS, UV-Vis, and IR (both dispersive and FT). You should have an intuitive feel for what’s happening to the atoms or molecules during the analysis process.
NMR, IR, and Mass Spec
Know numerical values for NMR and IR spectra. Be able to go from spectra to molecular structure and vice versa for proton NMR, carbon-13 NMR and IR. Understand the role of decoupling and shift reagent experiments in elucidating complex NMR spectra.
Laser Spectroscopy and Fluorescence
Be able to discuss the parts of a laser and how a laser works. Be able to discuss the various processes a molecule may undergo following electronic excitation (including fluorescence, phosphorescence, intersystem crossing, etc.) and how these processes and their time scales relate to fluorescence spectra.
Updated 2009
Oregon State University Special Collections & Archives Research Center
“A Liking for the Truth: Truth and Controversy in the Work of Linus Pauling.”
February 28, 2001
Video: Keynote Address: "Timing in the Invisible," Part 1, Ahmed Zewail
John Westall: It gives me great pleasure this morning to introduce the lead speaker for today's symposium, Professor Ahmed Zewail, Linus Pauling Professor of Chemical Physics and Professor of Physics at the California Institute of Technology, and Nobel Laureate in Chemistry in 1999.
No one can be more appropriate to lead off a symposium on Linus Pauling than Ahmed Zewail. While Pauling pioneered the understanding of the structure of the chemical bond, and won a Nobel Prize for that, half a century later Zewail has pioneered the understanding of the dynamics of the chemical bond, and won a Nobel Prize for that. At the risk of oversimplification, let me try to convey to you in just a few words the excitement Zewail's work has caused in the last decade of the 20th century, just as I imagine Pauling's work did in the earlier part of the 20th century.
Early work on bonding and structure can be seen in the early part of the 19th century when Berzelius provided a fairly satisfactory explanation of what has become known today as ionic bonding. By the early part of the 20th century, G.N. Lewis had provided the early framework for covalent bonding. When Pauling got involved in the late ‘20s and early ‘30s, there were still large questions unanswered about molecular geometry and the relation of chemical bonding to the emerging atomic theory. Not until Pauling's groundbreaking papers on the nature of the chemical bond in 1931 did the world see how the nature of the structure of molecules could be understood from the ground up with geometry consistent with the fundamental atomic theory.
Pauling's success was based on his skill as an experimentalist in the field of x-ray crystallography, his mastery of theory, and his ability to make the appropriate approximation at the appropriate time. A similar evolution and groundbreaking event can be seen in our understanding of our dynamics of the chemical bond. Dynamics encompasses vibrational motions that make or break the chemical bond. Dynamics is at the heart of chemical reactivity, and at the heart of chemistry itself. By the end of the 19th century, Arrhenius had related chemical reaction rates to temperature. Whereas Arrhenius had considered ensembles of molecules, in the 1930s it was Eyring and Polanyi who devised a transition state theory - that is, a state midway between reactants and products, and on a time scale related to the vibrations of the molecules. This time scale is on the order of 10 to 100 femtoseconds, and that is a very, very short time. A femtosecond is to a second as a second is to about 30 million years. [3:56]
For the rest of the 20th century, the world worked with this theory of Eyring, but no one ever dreamed of actually trying to observe one of these transition states at such a very short time scale. But that is exactly what Zewail set out to do, to observe the transition state and, in an extraordinary set of experiments in the late 1980s, he succeeded. By using an ingenious combination of experimental technology, theory, and appropriate approximations, he developed what amounts to a high speed camera to image molecules at the femtosecond time scale of chemical reactions, and was actually able to view chemical bonds as they were breaking.
The impact of this work has been tremendous. Rudolph Marcus, a Nobel Prize-winner from the early 1990s, has said of Zewail: "Zewail's work has fundamentally changed the way scientists view chemical dynamics. The study of chemical, physical, and biological events that occur on the femtosecond time scale is the ultimate achievement of over a century of effort by mankind, since Arrhenius, to view dynamics of the chemical bond as it actually unfolds in real time."
Ahmed Zewail is currently Linus Pauling Professor of Chemistry and Professor of Physics at the California Institute of Technology, and Director of the National Science Foundation Laboratory for molecular science. He received his B.S. and M.S. degrees from Alexandria University in Egypt, and his Ph.D. from the University of Pennsylvania. He was appointed to the faculty of Caltech in 1976, and in 1990 was appointed Linus Pauling Chair at the Institute. Professor Zewail has numerous honorary degrees from all over the world - he is a member of several national academies of science, he has won top awards from national chemistry and physics organizations, only one of which I'll mention, the 1997 Linus Pauling Award from the Northwest sections of the American Chemical Society, which was awarded in Portland. [6:39]
Pauling's work culminated in the Nobel Prize for chemistry in 1954 with the citation for his research into the nature of the chemical bond and its application for the elucidation of the structure of complex substances. Zewail won the Nobel Prize in chemistry in 1999 with the citation for his studies on the transition states of chemical reactions using femtosecond spectroscopy. It is now my great pleasure to introduce for today's keynote lecture Professor Ahmed Zewail. [7:20]
Ahmed Zewail: It's a great pleasure to be here and to be part of this wonderful centennial. In fact, I think with what you've heard from John I should really sit down now, as you have the whole story of why I'm going to talk today. It's really very special to me, because I do have a very special relationship to Linus Pauling, not only that I sit in his chair at Caltech, but for a variety of reasons. As a matter of fact, I fly tomorrow morning from here because at Caltech we also are celebrating the centennial of Linus Pauling, and I do chair the session in the morning.
Everybody knew about Pauling, and everybody knew about Pauling's achievements, but I think, as Steve said, there's something about Pauling in terms of his personality and his way of doing things that people really don't know about. It comes out in the biographies much better, but from my daughters to people I have seen at Caltech who are in the late stage of their lives, who just get so excited when they meet Linus Pauling. He was a fair man, a very decent human being, and the word I use around our house, he was a very civilized human being. When you sit with Linus, you can talk about the world, and you can enjoy talking to him about the world, but I always find in him civility and decency.
His scientific contributions are very well known, but I would like to make a link between some of the work that Pauling has achieved in chemistry and biology. It is appropriate to tell you that I didn't know of Pauling when I arrived at Caltech, all that I knew was his name; I arrived at Caltech in 1974. But this is one point that I want to touch on, especially since we have two historians with us today who wrote about the biography of Linus; that's his relation to Caltech. When I arrived in 1974, there were rumors around the campus that Linus was not too happy about his departure from Caltech, and of the incident that took place. Things get messed up in the public press a lot and people say things that are not really true, because I dug into some of the historical minutes of the faculty and what Lee DuBridge said. But as a young assistant professor I didn't know any better so I thought it just didn't make any sense, that Pauling, who really was and still is a giant in chemistry, was not visiting Caltech. I was fortunate to be tenured very shortly at Caltech, when I became part of the establishment, so I thought we should do something about it.
And so, I had the great honor to chair and to organize…I like this photo of Linus, actually, giving a lecture at Caltech; this was February 28, 1986, and that was his 85th birthday. I thought that this was a great occasion to bring Linus to campus. He came back and was just so excited; Linus Jr. was with us, and could see his father speaking from the heart. I gave him the stage in the evening to say whatever he wanted to say about Caltech, and he did give it to us. It was just a very special occasion, and I also organized the 90th birthday for Linus at Caltech, and I think if Linus did not come back to Caltech to share his great moments, it would have been a mistake in the history of Caltech and science. I even crowned him the Pharaoh of Chemistry, and I believe that he loved this picture. I think it's here, isn't it? It cost me about $500 to do this, because I had to go to Hollywood and try to fit his face into one of Ramses II.
If you want to learn about all of this, I also edited a book on Linus called The Chemical Bond: Structure and Dynamics, and that is really the focus of my lecture today. I want to take you from the wonderful and important era, which Jack Dunitz will be talking to you about, of structures that are static, meaning there is no way of looking at the movement of the atoms in these structures, into a case where we try to understand the dynamics of how these atoms move as a bond breaks and forms. By the way, the highest honor I received was from Linus, when he wrote to me saying he considered me his friend. Linus Jr. must recognize his father's handwriting. In 1991, he sent me a book, and wrote to me "To my friend Ahmed." I treasure this. [14:39]
In his lecture at Caltech, Linus showed the structure of sodium chloride that was proposed by Mr. [William] Barlow in 1898. This is the structure of table salt. I am showing this because there is an analogy between the beginning of Linus' work on structural chemistry, and what we were trying to do in dynamic chemistry. Looking at the structure of sodium chloride with two atoms probably seems trivial, especially during the era of the Braggs and the work that was done with x-ray crystallography. When we started it was also with table salt, sodium chloride, sodium iodide, to try and understand the dynamics of this chemical bond, but also we received a lot of criticism in the early days, that this was trivial; these are two atoms, and not much is going to be found from this. Of course, you know the work on structure by Linus has led, for example, to the structure of hemoglobin. The picture is not as pretty as you normally see, but this is the protein structure of deoxyhemoglobin. And in fact you will see at the end of my talk that we can also look at the dynamics of proteins, DNA and the like. [16:24]
It is remarkable to me that I read very recently - I didn't know that Linus wrote this - but he received the Nobel Prize in 1954 and shortly after the award he was asked to reflect on what chemistry will be in the coming 50 years. Remember, he spent all this time studying the structure, the architecture, of molecules, but he had the vision, and I think that this was the first time that anybody has seen this Scientific American article, that "the half century we are just completing has seen the evolution of chemistry from a vast but largely formless body of empirical knowledge into a coordinated science. The new ideas about electrons and atomic nuclei were speedily introduced into chemistry, leading to the formulation of a powerful structural theory which has welded most of the great mass of chemical fact into a unified system. What will the next 50 years bring?" This is Pauling speaking 50 years ago, mid-century. "We may hope that the chemists of the year 2000 will have obtained such penetrating knowledge of the forces between atoms and molecules that he will be able to predict the rate of any chemical reaction."
So even though Pauling was immersed in static structures and their study, his eye was on the next century, and maybe the next century Nobel Prize. That is precisely what I want to talk to you about today, is how we look at the forces that control the motions of atoms and molecules, whether it's in chemical or biological systems, and can we distill some concepts out of this, just like Pauling was trying to understand hybridizations and so on, that teach us something about the dynamics of chemical bonding, and hence the forces and how to predict rates and the dynamics. [19:05]
The work I will talk about relates to the Nobel Prize that we received in 1999, and incidentally, there is an incident in history here that I think I told some of the speakers at dinner last night. When we received the Nobel Prize, Caltech had a big party, as they did for Linus, and one of my colleagues, Vince McKoy, noted that Linus was born February 28, 1901, and I was born February 26, 1946, so two days away from Linus. He received the prize in 1954, we received the prize in 1999, and both in October, so we were both almost 50 years of age when we received the Nobel Prize. This is remarkable where Caltech is concerned, because both Pauling and myself started at Caltech as Assistant Professors, they did not "buy us from the outside," as they say. I thought you would be intrigued by this: when the prize was announced, it was everywhere because our work also touches on physics, and so many of the physics magazines wrote detailed articles, and here is a distinguished one from England; it surprised me, as you will see, that England would write something like this. "Ahmed Zewail receives the 1999 Nobel Prize for Chemistry…" and it goes on to say "laser-based techniques that allow the motion of atoms inside molecules to be followed during chemical reactions." It goes on, very complimentary. And then it says "Zewail was born in Egypt in 1496." I told the Nobel committee in Sweden that it is really remarkable that it took them 500 years to give me the Nobel Prize.
Here is the journey in time. I think for the people in the audience who are not used to seeing this number, you certainly hear and you know by now… Here could be 12 or 15 billion years of the Big Bang, and then you come down to our lifespan, which is about 100 years or so - your heart beats in one second. But to go from here [present day] to there [Big Bang] is about 10^15, and I am going to take you from the heart into a molecule inside the heart, or eye specifically, and you have to decrease by 15 orders of magnitude to see the beats of this molecule, as you see the beats of your heart. The timescale is fast, and I wrote a review which is probably useless, but I thought it's interesting that if you go from this age of the universe, and you count back from the age of the Earth to the human lifespan to your heart (1 second), and then you go to the microscopic world (sub-second), into how molecules rotate, vibrate, and how the electrons move. In this whole microscopic world here, we reach 10^-15 or so seconds, where on the opposite end you reach 10^15, and the remarkable thing is the heart is the geometric average of the two. As humans, we are in a very unique position. [23:50]
It's very difficult to study history; I thought it was very easy, but it turns out it is very difficult, and it takes some time to get the facts. I showed this in the Nobel lecture, because it gives five milestones, snapshots, of an evolution over about six millennia. The first time we know of the concept of measuring time is the calendar, which was developed in 4200 BC. In fact some historians say that it is 4240 BC, so it is almost 6000 years ago, which we knew what a year is, what a day is, what a month is. What an hour is was measured using sundials in about 1500 BC. You can use a shadow from an obelisk or a sundial and know what the time of day is. The mechanical clock was developed in Europe around 1500 AD, and all of this was still in seconds or longer, so up until then we could only measure as accurately as a second.
One of the most famous experiments in the history of measurements was done by Eadweard Muybridge in 1887. Mr. [Leland] Stanford, the Governor of California, was very interested in horses and hired Muybridge to find out the actual nature of the motion of the horse as it gallops. Mr. Muybridge designed this camera, which had a shutter speed of about 1/1000th of a second. This was done in Palo Alto, where there is a plaque commemorating it. In so doing, as the horse galloped the shutter was opened, and Muybridge was able to take a snapshot of the horse proving the contention of Mr. Stanford, that all four legs are off the ground at the same time. I believe this was the very beginning of fast photography and motion pictures. In the 1800s, this was a very important development.
To go into the world of molecules, and to be able to see transition states, just as Muybridge saw the transition state of the horse moving, you need an increase of about 12 orders of magnitude in time resolution, and we only can do this with these lasers, as you will see in a few minutes. This was done in 1980, and it may be called femtoscopic. [27:16]
So, why is it that this is important to chemistry and biology - the dynamics of the chemical bond? It turns out, the dynamics of any chemical bond, whether it is in a chemical system or a protein or DNA, is totally controlled by the fundamental vibrational time scale, as you heard from John. This vibrational time scale, and there is some philosophy that one can dwell on here, determined by Planck's constant and so on, but fundamentally, two atoms connected by a spring will move in a spring-motion on a time scale of 10^-13 seconds, and the shortest possible time for any molecule in this universe will be 10^-14 seconds. All the way from hydrogen molecules to protein molecules, the time scale will be from 10^-14 seconds to 10^-12 seconds. Molecules can rotate in the laboratory without you seeing it in a picosecond to a nanosecond, 10^-12 to 10^-9 seconds, and everything longer than this we consider in the Stone Ages; it's not interesting to us. So on that time scale you will see many interesting phenomena in chemistry and biology that happen.
This is the end of time resolution for chemistry and biology, because if you look here, even molecules that are linking undergo collisions on a time scale of 10^-14 seconds. A molecule can break a bond and make a bond on this time scale as well. The eye has a molecule called rhodopsin which divides and allows you to see, and that happens in 200 femtoseconds. The way we get photosynthesis to work, and the electron to transfer inside the green plant, is on the order of femtoseconds. So this is the fundamental time scale, and if we were to understand the dynamics of the chemical bond we must understand this time scale. Luckily, that is the reason behind all of this, and to be totally truthful, this is the way we were thinking about it. We were making real approximations - what I like to call the "back of the envelope" type of calculation - and we didn't go to big computers and do quantum calculations and all of that stuff, because I always feel that there is beauty in the simplicity of science, and if we are not able to distill it into the essence of science, it seems to me that we are fuzzy about it.
What I just told you is general to any molecular system; if you think of the chemical binding energies that we have, that Pauling would have calculated with his slide rule, and then if I activate a chemical bond, I can calculate the energy in this bond. And if I have the energy I can calculate the speed at which the atoms should move in principle. For all chemical and biological systems, you'll find that this speed is about one kilometer per second. Even in the spring that I mentioned, the two atoms would collide with each other at about one kilometer per second, or 10^5 centimeters per second. If we're going to look on the molecular scale to understand the dynamics of the chemical bond, we have to understand that the distance scale is about an Angstrom, or 10^-8 centimeters. You can see, without any fancy quantum mechanics at this point, potentials and forces and all of that, that if you put these two numbers together (10^5 and 10^-8) you come up with a time resolution of 10^-13 seconds.
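A few lines of Python reproduce this back-of-the-envelope estimate (the numbers are simply the round values quoted above):

```python
# Atoms moving ~1 km/s across a ~1 angstrom bond:
speed = 1e5      # cm/s  (~1 km/s)
distance = 1e-8  # cm    (~1 angstrom)
print(f"t ~ {distance / speed:.0e} s")  # ~1e-13 s
```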
Therefore, our idea is simple: if we can reach a time resolution just like x-ray crystallography and electron diffraction, if we can get the de Broglie wavelengths or the spatial resolution less than the bond distance, you would be able to see bonds and atoms. Here, we can also say that if our time resolution is 10^-14 seconds, then we can freeze this motion, we can do what Muybridge did, because now this time scale will be shorter by a factor of ten than the time it takes the atoms to move in the molecule. More scientifically, we say that the time resolution here is shorter than the vibrational period of the molecule, or the rotational period of the molecule. And that, in its totality, defines the field of femtochemistry, and now there is something called femtobiology, and so on. [33:07]
So this is a simple picture of the problem, but for the physicists in the audience the problem is much more intriguing and much more fundamental. It is remarkable that Pauling recognized, right after Schrödinger wrote his famous wave equation in 1926 in Zurich - which we shall celebrate in April - very quickly recognized that if you think of matter as particles, you can also think of them as waves. De Broglie, with brilliant intuition, used the quantization equation and Einstein equation and came up with this very simple thesis that lambda is equal to h over p, which Einstein reviewed. So for each momentum of a particle, there is an associated wave, lambda. When people study matter as light, and we made this mistake too, they are always thinking of Schrödinger's Eigen states, the wave character of the system, wave mechanics, Schrödinger wave equation. But that is not what I am interested in, because what I am interested in is to be able to see the atoms in motion, and to understand the fundamentals of these dynamics. I don't want to go to the delocalized wave picture and calculate Schrödinger equations and come at the end without a simple picture describing the particle behavior of these atoms in motion, moving from one configuration into the other.
So how can we make matter behave in a fundamental way as a particle and not as a wave? If we look at what is known as the uncertainty principle, which is seen here, that you cannot do precise measurements of the position of the particle and momentum simultaneously. Similarly, you cannot make a precise measurement of time and energy simultaneously. We understand this from Mr. Heisenberg, there is no problem. But the chemist had difficulty understanding how we can shorten the time, Δt, because this would be at the expense of the energy, ΔE, and therefore we ruined the quantum state of the molecule, the nature of this molecule.
What was not recognized was that this is in fact a beautiful way of locating atoms in space, because in a crude way if you think of how Δt·ΔE ~ ħ and Δx·Δp ~ ħ, don't think of this alone, just think of the two together, and say that the momentum and energy are related (which they are). If you combine the two, you'll find that if you shorten the time, Δt, you can make Δx very small. That is the only way in this universe that we know of to localize atoms in space. We are not in violation of the uncertainty principle here, but if you do it cleverly, combining time and energy, you can show clearly that you will localize these nuclei to better than a tenth of an Angstrom. The chemical bond distance is about an Angstrom, so we have a factor of ten in space and should be able to see the motion of these atoms as the bond breaks and forms.
It has a long history in physics, and in fact it goes back to the 1800s - there is an experiment in 1801 by Mr. [Thomas] Young on light, not on matter. If you take two slits, with light behind them, then you will see these fringes, and it is very similar to what we can now do with matter and molecules. We can take a femtosecond pulse, which is acting like the light, and we can take all these wave functions of the molecules - this, by the way, is two atoms and a spring moving - and we can take these waves and try to do this interference experiment on this matter, and all of a sudden you see what's called a "wave packet," meaning that we localize all of these vibration states in one place, which becomes a classical picture. We can speak of a spring in this point in space, and that it is coming back, and we can see this motion.
Early in the field, it was thought that this energy is too broad and ugly, and there were occasionally objections over the uncertainty principle. Actually, without the uncertainty principle, it would be impossible to see the nuclei moving on this time scale.
This, by the way, was also of concern to the big minds in 1926. This letter was from [Hendrik] Lorentz to Schrödinger, pointing out that this idea of connecting the classical mechanics of Newton, and understanding the particle instead of the wave behavior, "you will be unable to construct wave packets." This was 1926, and in 1980 not only can you do it on chemical systems and create this wave packet, but the field has expanded so much that you can also do it on liquids, solids, gas phase, and clusters - this has been observed by many people around the globe. [40:45]
The way we do this experimentally is tricky, because there are no cameras like Muybridge's that will allow you to open and close mechanically in a femtosecond; it's just impossible. The way we do this is to use an idea of creating pulses of light in femtoseconds and then delay, as a series of pulses - so this red here is delayed from the yellow - and the first one, the yellow, will set off the clock and give us a time zero for the change. With a whole series of pulses we can "photograph" what is happening here as a function of time. We do it this way, using the speed of light, because it turns out that when we delay this pulse in relation to this one by one micron in the laboratory, it's equivalent to 3.3 femtoseconds because of the speed of light: 300,000 km per second. [41:54]
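The micron-to-femtosecond conversion is easy to check; a tiny Python sketch of the delay-line arithmetic:

```python
# Optical delay line: extra path length / speed of light = time delay.
c = 3.0e8  # m/s
path_microns = 1.0
delay_s = (path_microns * 1e-6) / c
print(f"{path_microns} micron -> {delay_s * 1e15:.1f} fs")  # ~3.3 fs
```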
Quantum Monte Carlo
Quantum Monte Carlo is a large class of computer algorithms that simulate quantum systems with the idea of solving the quantum many-body problem. They use, in one way or another, the Monte Carlo method to handle the many-dimensional integrals that arise. Quantum Monte Carlo allows a direct representation of many-body effects in the wave function, at the cost of statistical uncertainty that can be reduced with more simulation time. For bosons, there exist numerically exact and polynomial-scaling algorithms. For fermions, there exist very good approximations and numerically exact exponentially scaling quantum Monte Carlo algorithms, but none that are both.
In principle, any physical system can be described by the many-body Schrödinger equation as long as the constituent particles are not moving "too" fast; that is, they are not moving near the speed of light. This includes the electrons in almost every material in the world, so if we could solve the Schrödinger equation, we could predict the behavior of any electronic system, which has important applications in fields from computers to biology. This also includes the nuclei in Bose–Einstein condensates and superfluids such as liquid helium. The difficulty is that the Schrödinger equation involves a wave function of three times the number of particles (three spatial coordinates for each particle) and is difficult to solve even using parallel computing technology in a reasonable amount of time. Traditionally, theorists have approximated the...
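To make the idea concrete, here is a minimal variational Monte Carlo sketch (an illustration, not any particular production algorithm) for the 1D harmonic oscillator with ħ = m = ω = 1: a Gaussian trial wave function ψ(x) = exp(−αx²) is sampled with Metropolis moves, and the local energy is averaged; α = 0.5 recovers the exact ground-state energy of 0.5:

```python
import random, math

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=1):
    # Metropolis sampling of |psi|^2 ~ exp(-2*alpha*x^2), averaging the
    # local energy E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2).
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < math.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x**2 * (0.5 - 2 * alpha**2)
    return e_sum / n_steps

for a in (0.3, 0.5, 0.7):
    print(f"alpha = {a}: E ~ {vmc_energy(a):.3f}")
```

The statistical noise shrinks with more steps, and the variational minimum over α sits at the exact ground state, which is the basic trade-off the paragraph above describes.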
Divine Neutrality, Blog. Science, Philosophy
Measurement Problem
December 11th, 2007
A commonplace computational practice in quantum mechanics generates the most profound conceptual challenge to the theory. The challenge is called the measurement problem. Here are some quotes summarizing the problem.
“The quantum measurement paradox… stated succinctly… In quantum mechanics all possibilities… are left open whereas in … experience a definite outcome always (occurs).”
A. J. Leggett in Foundations of Physics. 18, 939 (1988)
“How is the measuring instrument prodded into making up its mind which value it has observed?”
Bryce S. Dewitt, Physics Today 23, 30 (1970)
“Some explanation must be provided for the fact that the Hilbert-space vector… collapses onto a certain eigenvector during a measurement process…”
J. Bub, Nuovo Cimento v. 57, Nr.2, 503 (1968)
The probability amplitudes evolve deterministically until a measurement is made: the measurement stops the evolution. What is the essential element that changes the evolution of the system from being in a state
|S> = (superposition sum of many states |n>),
into being in a state, say, |n=3>, one from among the many in the superposition?
Marvin Chester, never published
In quantum mechanics amplitudes for events progress quite deterministically – until a measurement is made. Then the amplitudes for all but the measured event are simply discarded. And the universe begins anew starting with the conditions found in the measurement. So at each measurement the old universe disappears and a new universe begins!
I offer below an animation to portray the idea. Press the particle release button to inject a vertically polarized particle into the magnetic field gradient region. Each press of the release button yields a new particle.
Made visible, here by the red ball-and-pole icons, is something intrinsically invisible and not measurable; the computational element called the particle amplitude. The particle amplitude is split into two by the field gradient through which the particle passes. From this amplitude split arises the particle’s potential to materialize in one or the other detector. But even though it has a 50% chance of appearing in each detector, only one detector registers. That detection tells us that the particle is known, with 100% certainty, to be in the registering detector. So the universe must be reset – from 50/50 amplitude split before detection to certainty for one of the amplitudes at detection.
Thus, on measurement, the particle’s spatial probability distribution is revised and from these new initial conditions a new universe begins its deterministic evolution. In the figure each press of the release button repeats the experiment with a new particle. The counters accumulate the detection events thus revealing the statistics. On repeating the experiment many times one finds particles are registered as often in one detector as the other. This is how the pre-detection 50/50 amplitude split reveals itself experimentally.
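A toy simulation of the repeated experiment (assuming an ideal 50/50 amplitude split and perfect detectors) shows how the statistics emerge only over many runs, just as the counters in the animation do:

```python
import random

# Each run registers in exactly one detector; only the accumulated
# counts reveal the pre-detection 50/50 amplitude split.
counts = {"up": 0, "down": 0}
for _ in range(10_000):
    counts["up" if random.random() < 0.5 else "down"] += 1
print(counts)  # roughly 5000 each
```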
The figure shows an idealized laboratory measurement. But measurement events are taking place interminably everywhere. When a photon of sunlight falling on a leaf gets absorbed in photosynthesis, that is a measurement event. The leaf is a photodetector. Every chemical reaction is a measurement event; the reactants disappear and the products appear. Isn’t every inelastic scattering a measurement event?
An important feature of the measurement problem is thus: What constitutes a measurement? Is any elemental process that proceeds irreversibly a measurement? These are topics to be explored in further posts.
1.
I always thought that the “measurement problem” was simply a critique of the “projection postulate.” The act of measurement, whatever it is, must somehow be describable as a process taking place according to the rules of QM, but since these dynamics are unitary and hence reversible, how do you get out of them the irreversible effect of projection? All attempts to derive the projection/collapse postulate seem to founder. My problem with it is not so serious since I do not believe in the universal applicability of the Schrödinger equation – time must also be quantized so p.d. equations like that of Schrödinger can at best be only semi-classical approximations – nor the concomitant universality of irreversibility: though I realize this begs the question.
- Stephen
2.
What a pleasure to read your comment. It is at the core of the issue. I would summarize what I understand you to say thusly:
1. The ‘projection postulate’: measurement suspends the reversible dynamics of state evolution forcing the state to become a mere projection of its original self – an intrinsically irreversible event.
2. How can we understand the world if quantum mechanics does not have “universal applicability”?
I have some questions to offer for consideration that might, at least, let us acquire a physical intuition on the matter.
- marvin chester
3.
I concur with 1. As for 2 I have to admit upon reflection that I do not believe that the dynamical portion of QM yet has “universal applicability.”
If one assumes that the temporal evolution of a quantum results in the action of a one-parameter group via unitary operators on the Hilbert space of "states" of the quantum, then abstract theorems of von Neumann et al. will force the usual form of Schrödinger dynamics. But what an idealization this is, even without relativity! Firstly, how can time be a continuum in the quantum deep? Secondly, could the Hilbert space itself not change from instant to instant, making moot the entire contraption?
The operationalist view of the Hilbert space is that its vectors represent descriptors of experimental acts performable upon the quantum and are not strictly speaking attributes of the quantum itself. Quanta are generally not objects in the sense that they have objective states of being. For instance, a superposition does not describe the state of an ordinary object but rather a particular experimental arrangement. Nothing is observed or measured (indeed nothing is observable or measurable) until some such superposition is resolved or collapsed. So we don’t see any dials or meters registering anything until this happens. And this happens after the Schrödinger ball is over, assuming it even started. This collapse or contraction of the Hilbert space is just a change in the experimenter’s repertoire of available experiments concomitant with that particular “run” of the experiment: it is the weeding out of possible outcomes rather than the promiscuous multiplexing of worlds. Another experimenter, or the same one on a different occasion, might find a different eigenspace is selected.
This view of collapse appears to divorce it from dynamics. It is a more primitive sort of kinematical phenomenon possibly underlying actual dynamics, of which Schrödinger’s is just a semiclassical approximation. I think this is not far from some views of Penrose.
- Stephen
4.
If time weren’t continuous would that solve the measurement problem?
What would it mean physically for the Hilbert space to "change from instant to instant"?
You raise such fabulously interesting issues.
- marvin chester
|
91719c64b2f85724 |
Measuring the Energy of Bound and Unbound Particles
In quantum physics, you can solve for the allowable energy states of a particle, whether it is bound (trapped in a potential well) or unbound (having enough energy to escape).
Take a look at the potential in the following figure. The dip, or well, in the potential means that particles can be trapped in it if they don't have too much energy.
A potential well.
The particle's kinetic energy summed with its potential energy is a constant, equal to its total energy:
$$\frac{p^2}{2m} + V(x) = E$$
If its total energy is less than V1, the particle will be trapped in the potential well, as you see in the figure; to get out of the well, the particle’s kinetic energy would have to become negative to satisfy the equation, which is impossible according to classical mechanics.
Quantum-mechanically speaking, there are two possible states that a particle with energy E can take in the potential given by the figure — bound and unbound.
Bound states happen when the particle isn’t free to travel to infinity — it’s as simple as that. In other words, the particle is confined to the potential well.
A particle traveling in the potential well you see in the figure is bound if its energy, E, is less than both V1 and V2. In that case, the particle classically moves between x1 and x2. Quantum mechanically, however, there is a small but nonzero probability of finding the particle outside this region, because the wave function decays exponentially beyond the classical turning points rather than vanishing there.
A particle trapped in such a well is represented by a wave function, and you can solve the Schrödinger equation for the allowed wave functions and the allowed energy states. You need to use two boundary conditions (the Schrödinger equation is a second-order differential equation) to solve the problem completely.
Bound states are discrete — that is, they form an energy spectrum of discrete energy levels. The Schrödinger equation gives you those states. In addition, in one-dimensional problems, the energy levels of a bound state are not degenerate — that is, no two energy levels are the same in the entire energy spectrum.
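To see this concretely, here is a minimal numerical sketch (my own illustration, not from the article; it assumes a finite square well with hbar = m = 1 and made-up values for the depth and width) that finds the discrete bound-state energies from the standard even/odd matching conditions:

```python
import numpy as np
from scipy.optimize import brentq

# Finite square well: V = -V0 for |x| < a, V = 0 outside (hbar = m = 1).
# Bound states (-V0 < E < 0) satisfy transcendental matching conditions.
V0, a = 10.0, 1.0   # illustrative depth and half-width

def even(E):   # even-parity condition: k*tan(k*a) = kappa
    k, kappa = np.sqrt(2 * (E + V0)), np.sqrt(-2 * E)
    return k * np.tan(k * a) - kappa

def odd(E):    # odd-parity condition: -k*cot(k*a) = kappa
    k, kappa = np.sqrt(2 * (E + V0)), np.sqrt(-2 * E)
    return -k / np.tan(k * a) - kappa

def levels(cond, n=4000):
    Es = np.linspace(-V0 + 1e-9, -1e-9, n)
    vals = [cond(E) for E in Es]
    roots = []
    for E1, E2, v1, v2 in zip(Es, Es[1:], vals, vals[1:]):
        # keep sign changes that are genuine roots, not tan() poles
        if v1 * v2 < 0 and abs(v1 - v2) < 50:
            roots.append(brentq(cond, E1, E2))
    return roots

print("even-parity levels:", levels(even))
print("odd-parity levels: ", levels(odd))
```

Each printed energy is one allowed level: finitely many, all distinct, exactly as described above.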
If a particle’s energy, E, is greater than the potential (V1 in the figure), the particle can escape from the potential well. There are two possible cases: V1 < E < V2 and E > V2.
Case 1: Particles with energy between the two potentials (V1 < E < V2)
If V1 < E < V2, the particle in the potential well has enough energy to overcome the barrier on the left but not on the right. The particle is thus free to move off to negative infinity, so its classically allowed x region runs from negative infinity up to x2.
Here, the allowed energy values are continuous, not discrete, because the particle isn’t completely bound. The energy eigenvalues are not degenerate — that is, no two energy eigenvalues are the same.
The Schrödinger equation,
$$-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + V(x)\psi(x) = E\psi(x),$$
is a second-order differential equation, so it has two linearly independent solutions; however, in this case, only one of those solutions is physical and doesn't diverge.
The wave function in this case turns out to oscillate for x < x2 and to decay rapidly for x > x2.
Case 2: Particles with energy greater than the higher potential (E > V2)
If E > V2, the particle isn’t bound at all and is free to travel from negative infinity to positive infinity.
The energy spectrum is continuous and the wave function turns out to be a sum of a wave moving to the right and one moving to the left. The energy levels of the allowed spectrum are therefore doubly degenerate.
|
b9e7e5b501f523f0 |
Is randomness a reflection of our lack of knowledge, or is the behavior of the universe truly random?
Or in other words,
are the allegations by EPR about hidden variables in quantum theory justifiable? What evidence could prove or disprove EPR?
The question is very vast and more philosophical than physical in nature. It also mixes some concepts through each other like randomness, probability and determinism. It needs a clear exposition of the various terms and a narrowing down of the question like "Are current models of the universe deterministic?" otherwise the question is too broad and not really about physics specifically. In the meantime, I advise reading this text to clear up ideas a bit. – Raskolnikov Dec 30 '10 at 14:51
The question body doesn't quite make sense... besides, is this asking about randomness in general, or non-determinism in quantum mechanics specifically? In the former case it's off topic and in the latter case it really needs to be reworded. – David Z Dec 30 '10 at 23:20
Does the sun rise in the east? Do eggs harden when boiled? You have to be a little more precise in your question. Otherwise you're just being poetic. – user346 Jan 4 '11 at 15:47
4 Answers
This is a very general question, and can be answered from several perspectives. I shall try to give an overview so you can perhaps research the areas that interest you a bit more.
Firstly, the most fundamental interpretation of probability (as considered by most mathematicians) is Bayesian probability. This effectively states that probability measures the state of knowledge of an observer.
This view has interesting ties with physics, in particular quantum mechanics. One could consider the random outcome of a QM measurement (wavefunction collapse) from a frequentist approach, but it is often more appealing philosophically to consider it as a state of knowledge. (The famous thought experiment of Schrödinger's cat is a good example - until one opens the box, we can only say it is an "alive-dead" cat!)
Interestingly, Bayesian probability does not explicitly preclude determinism (or non-determinism). Our current understanding of quantum mechanics does, however. In other words, even knowing the state of a system perfectly at a given time, we cannot predict the outcome of every measurement on it at a future time. This most famously upset Albert Einstein, who spent many years of his life looking for a more fundamental deterministic theory - a so-called hidden-variables theory. Since then, however, we have learnt of Bell's theorem, which implies the non-existence of local hidden variables, suggesting that there is no more fundamental theory that "explains away" the non-determinism of QM. This is however a very contentious issue, and in any case does not rule out the existence of non-local hidden variable theories - the most famous of which is Bohm's interpretation.
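To make Bell's theorem a bit more concrete (this is my own illustration, not part of the answer): the singlet-state correlation between spin measurements along directions a and b is E(a, b) = -cos(a - b), and the CHSH combination of four such correlations is bounded by 2 for any local hidden-variable model, while quantum mechanics reaches 2*sqrt(2):

```python
import numpy as np

# CHSH: quantum correlation of a singlet pair is E(a, b) = -cos(a - b).
# Any local hidden-variable model obeys |S| <= 2; these angles give 2*sqrt(2).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2              # Alice's two measurement angles
b1, b2 = np.pi / 4, 3 * np.pi / 4    # Bob's two measurement angles

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828... > 2, so no local hidden-variable model fits
```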
In summary, this issue is far from settled, and creates a lot of contention between different groups of physicists as well as philosophers today.
The "local" weasel in Bell's theorem leaves open the possibility that everything is predestined, but makes it in principle impossible to every know all the underling data needed to resolve any given "random" event (because it might be stored in some location outside your historical lightcone and still matter. Grrrr...). So there is a metaphysical escape for those who don't like God throwing dice, but us mere mortals are stuck with randomness in our lives. – dmckee Dec 30 '10 at 21:34
Well, the Schrödinger equation is deterministic, so the whole of QM is a nonlocal hidden variable theory. – mbq Dec 31 '10 at 10:28
@mbq: The Schrödinger equation might be deterministic, but wavefunction collapse is not. A measurement projects the state onto a single eigenspace of the observable. The information about the rest of the unobserved state is lost, and the loss happens randomly. – Jerry Schirmer Jan 1 '11 at 17:44
@Jerry This is because (as always) measurement ruins isolation -- yet the larger system composed of measured system and measuring system stays deterministic. – mbq Jan 1 '11 at 17:52
@Sklivvz Sure, but there is no contradiction here. – mbq Jan 1 '11 at 23:04
There is a fundamental randomness in the universe but we can often treat things as deterministic. For example, we can accurately predict the path of a projectile provided we know the initial velocity and the gravitational acceleration. However, every measurement has uncertainty due to the accuracy and precision of the instruments used to make the measurements. From these measurements, our predictions also have uncertainty.
Uncertainty becomes a fundamental problem at extremely small scales. You should read up on the Uncertainty Principle for a detailed explanation of this but I will attempt to put it simply. To make a measurement, you actually have to interact with the object. For example, to see in the dark you may use a torch. This will shine light at objects, which will be scattered and reflected and your eyes will detect the reflected light. Here light interacts with the object you are observing. At a large scale this doesn't change much, but at extremely small scales the energy carried by light is enough to change the system significantly. So the action of observing necessarily implies that you are changing the system so that you can never measure something exactly. It is important to realise that this is a fundamental law of the universe, not just that our equipment is not good enough. I recommend searching for Dr Quantum videos - it is an animated series that explains these concepts. Due to these limitations, we have to model things like the position of a particle as a probability distribution.
Another important thing with regards to determinism is radioactive decay. We can predict very well how much of a radioactive substance will decay in a certain time. It is simply an exponential decay. However, if we extract a single atom, we have no idea when it will decay. This is completely random - the decay of an atom is indeterministic and not at all affected by environmental factors. Again, our models are reduced to probability.
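The decay example is easy to simulate (a sketch of my own, with an arbitrary decay constant): each individual lifetime is an unpredictable random draw, yet the ensemble reproduces the smooth exponential law.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam, N0 = 0.1, 100_000   # arbitrary decay constant and number of atoms

# Each atom decays at an independent, random exponential time.
lifetimes = rng.exponential(scale=1 / lam, size=N0)

for t in (0.0, 5.0, 10.0, 20.0):
    survived = int(np.sum(lifetimes > t))
    predicted = N0 * np.exp(-lam * t)
    print(f"t={t:5.1f}  simulated {survived:6d}  exponential law {predicted:9.1f}")
```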
I think there is another level at which this question is being asked. Randomness of a symbol string means there is no formal data compression algorithm that reduces the string to some significantly smaller form. The shortest description of a string, meaning the shortest program that outputs it and then halts when run on some fixed Turing machine, is its Kolmogorov complexity. A string counts as random if no program significantly shorter (in length, in bits, etc.) than the string itself produces it. A set of $n$ coin tosses will produce $N~=~2^n$ possible binary configurations, and for $n$ large only a small fraction of those $N$ binary strings can have such short descriptions. The Kolmogorov complexity is related to Chaitin's halting probability, which is itself not computable in general, and which gives a bound on the number of strings which are "halting."
This leads to the undefinability of randomness. To define randomness you would need some algorithm which can decide that a string is random: given a string $S$ there would have to exist a Turing machine which determines $Rand(S)~=~T\vee F$. However, that is not mathematically possible. This means randomness is not computable.
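Although Kolmogorov complexity itself is uncomputable, an off-the-shelf compressor gives a crude upper bound on description length, which makes the idea tangible (my own sketch):

```python
import os
import zlib

# A compressor upper-bounds description length: patterned strings shrink
# a lot, while typical random strings barely shrink at all.
samples = {
    "random": os.urandom(10_000),
    "patterned": b"01" * 5_000,
}

for name, s in samples.items():
    ratio = len(zlib.compress(s, level=9)) / len(s)
    print(name, round(ratio, 3))   # random ~1.0, patterned << 1
```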
Ok, what if we were to use some sort of metaphysical observation (not a measurement) and then use this observation to choose when to measure, thus perhaps partially aligning the collapse with some higher order? I would assume the wavefunction would collapse according to the same 'random' order, but perhaps because of the partial alignment we could somehow influence the outcome of said measurement.
|
ca616993b3de9b9f | Friday, March 29, 2019
Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.
In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is Hund's rule that tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester in university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, as $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
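(For reference, the filling order referred to here is the Madelung ordering: increasing $n+L$, with ties broken by smaller $n$. A quick sketch of my own that generates it:)

```python
# Madelung (aufbau) ordering: sort orbitals by n + l, then by n.
orbitals = [(n, l) for n in range(1, 8) for l in range(n)]
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

names = "spdfghi"
for n, l in orbitals[:12]:
    print(f"{n}{names[l]}: holds {2 * (2 * l + 1)} electrons")
```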
Except that it doesn't prove it: the derivation is only valid for the hydrogen atom. The Hamiltonian for an atom of nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)
$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$
The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.
The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).
Band structure in a NPN-transistor
Also in that case, you pretend there is only one electron in the world that feels the periodic electric potential created by the nuclei and all the other electrons which don't show up anymore in the wave function but only as charge density.
For atoms you could try to make a similar story by taking the inner electrons into account, saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to the nucleus than the outer ones. But this sounds more like a daydream than a controlled approximation.
In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are long-wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D, the Fermi gas is not the whole story; there is the Luttinger liquid as well).
But for atoms, I don't see how you would invoke such RG arguments.
So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):
$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$
where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})= O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
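As an aside, the Thomas-Fermi model itself is numerically very tame. A minimal shooting sketch for the dimensionless TF equation $\phi'' = \phi^{3/2}/\sqrt{x}$ with $\phi(0)=1$, $\phi(\infty)=0$ (my own illustration, not part of the Lieb-Simon analysis) recovers the well-known initial slope near $-1.588$ that enters the $Z^{7/3}$ energy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Thomas-Fermi equation: phi'' = phi^{3/2}/sqrt(x), phi(0) = 1.
# Too shallow an initial slope diverges upward, too steep crosses zero;
# bisection homes in on the decaying solution in between.
def rhs(x, y):
    phi, dphi = y
    return [dphi, max(phi, 0.0) ** 1.5 / np.sqrt(x)]

def blow(x, y):  return y[0] - 10.0   # stop if phi grows past 10
def cross(x, y): return y[0]          # stop if phi crosses zero
blow.terminal = cross.terminal = True

def endpoint(slope, xmax=40.0):
    sol = solve_ivp(rhs, [1e-8, xmax], [1.0, slope], events=[blow, cross],
                    rtol=1e-9, atol=1e-11)
    return sol.y[0, -1]

lo, hi = -2.0, -1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if endpoint(mid) > 0:   # still positive: slope too shallow
        hi = mid
    else:                   # crossed zero: slope too steep
        lo = mid

print("phi'(0) ~", round(0.5 * (lo + hi), 3))   # about -1.588
```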
On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.
The advantage of this approach is that in the above Hamiltonian, you can absorb the $Z$ of the electron-nucleon interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.
Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.
Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. But there is still some way to go, even 150 years after Mendeleev.
Saturday, March 16, 2019
Smokescreen (Nebelkerze): the CDU proposal for "no upload filters"
Sorry, this is one of the occasional posts about German politics, originally written in German. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international day of protest), now that the CDU party has proposed how to implement it in German law, albeit so unspecifically that all the problematic details are left out. Here is the post.
Maybe I am too dense, but I do not see where exactly the progress is supposed to be compared to what is being discussed at the EU level, except that the CDU proposal is so unspecific that all the internal contradictions disappear into the fog. At the EU level, too, the proponents say that one should much rather acquire licenses than filter. That in itself is not new.
What is new, at least in this Handelsblatt article (I have not found it anywhere else), is the mention of hash sums ("digital fingerprint") - or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would also strangle the whole procedure at birth, since only the original file would be protected (which would be trivial to establish anyway), while every form of derived work would fall completely through the cracks, and one could "free" works by a trivial modification. Otherwise we are back at the dubious filters that rely on AI technology which does not exist even today.
The other point is the blanket license. I would then no longer have to conclude contracts with all rights holders, but only with a "VG Internet" collecting society. But there again the big question is who it is supposed to apply to. The intended targets are, of course, once again YouTube, Google and FB. But how do you write that down? That is, after all, the central stumbling block of the EU directive: everyone needs a blanket license, unless they are non-commercial (who is, really?), or (younger than three years, with few users and little revenue), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television - with a few big broadcasters and such - only somehow different" view that is so happily propagated by people who look at the internet from a distance, because it effectively flattens everything else. What about forums or photo hosters? Would they all have to acquire a blanket license (which would have to be priced high enough to cover all film and music rights of the entire world)? What prevents this from ending up as a "whoever operates a service on the internet must first acquire a paid internet license before going online" law, which at any non-trivial license fee would be the end of all grassroots innovation?
It would of course also be interesting to see how the revenue of the VG Internet gets distributed. One would have to be a rogue to suspect that large parts of it would not end up with, for example, the press publishers. That would finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn so much" law. Then the license fee had best be a percentage of revenue, in effect an internet tax.
And I will not even start on where this leads when all European countries cook up their own implementation soup this drastically.
All in all, a rather successful coup by the CDU, one that may manage to take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything into an unspecific cloud of fog, while all the problematic rules are likely to hide in the details.
Wednesday, March 06, 2019
Challenge: How to talk to a flat earther?
Further down the rabbit hole: over lunch I finished watching "Behind the Curve", a Netflix documentary on people believing the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary. Sun and moon are much closer and fly above this disk, while the stars are on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and airlines must be part of the conspiracy, as they know that you cannot fly directly between continents on the southern hemisphere (really?).
These people are happily using GPS for navigation but have a general mistrust in the science (and their teachers) of at least two centuries.
Besides the obvious "I don't see any curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.
So here is my challenge: Which experiment would you conduct with them to convince them? Warning: Everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render such an observation inconclusive. The sun standing at a different declination (height) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).
My personal solution is to point to the observation that the declination of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is right above you, but it has to go down the further south you get. I cannot see how this could be reconciled with a dome projection.
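To make this quantitative: on a globe the altitude of Polaris equals your latitude exactly, while on a flat disk with the star at any fixed height above the pole the altitude falls off like an arctangent, which cannot match a straight line. A small sketch of mine (the height h is an arbitrary assumption of the flat model):

```python
import numpy as np

R = 6371.0   # km, sets the ground-distance scale
h = 5000.0   # km, an assumed height of Polaris over the flat disk

print("latitude | sphere altitude | flat-disk altitude")
for lat in (80, 60, 45, 30, 10):
    d = np.radians(90 - lat) * R          # ground distance from the pole
    flat = np.degrees(np.arctan2(h, d))   # what the flat model predicts
    print(f"{lat:8d} | {lat:15d} | {flat:18.1f}")
```

Whatever h one picks, the flat-disk curve can agree with the observed straight line at isolated latitudes only; travelers at a few different latitudes can check this with nothing more than a protractor.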
How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra high-altitude planes. You are allowed to make use of a phone and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It should not involve lots of money or higher math.
Tuesday, February 12, 2019
Bohmian Rhapsody
Visits to a Bohmian village
Over all of my physics life, I have been under the local influence of some Gaulish villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes as well as when thinking about foundations, see below) and even some other physicists are starting to use their language.
Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll having long-term positions and many others frequently visiting. As you probably know, a bit later I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received at least amongst our peers and about which I am still a bit proud.
Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). And once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a notes paper that I put on the arXiv today.
What Bohmians don't like about the usual approach to quantum mechanics (termed Copenhagen, lacking a better word) is that you are not allowed to talk about so many things and that the observer plays such a prominent role, determining via a measurement what aspect is real and what is not. They think this is far too subjective. So rather, they want quantum mechanics to be about particles that are then allowed to follow trajectories.
"But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term, has a conserved current
$$j = \bar\psi\nabla\psi - (\nabla\bar\psi)\psi.$$
So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field
$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$
What this buys you is that if you have a bunch of particles that are initially distributed like the probability density and follow the flow of the velocity field, they will also later be distributed like $|\psi |^2$.
What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never come to a situation where any experimental outcome differs from what the Copenhagen prescription predicts.
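As a concrete illustration (my own sketch, with $\hbar=m=1$ and a freely spreading Gaussian packet, for which the exact Bohmian trajectories are known in closed form), one can integrate the velocity field directly:

```python
import numpy as np

sigma = 1.0   # initial packet width (hbar = m = 1)

def psi(x, t):
    # Freely spreading Gaussian packet; normalisation cancels in psi'/psi.
    s = sigma**2 + 1j * t
    return np.exp(-x**2 / (2 * s)) / np.sqrt(s)

def v(x, t, eps=1e-6):
    # Bohmian velocity field v = Im(psi'/psi), via a finite difference.
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

dt, T = 0.001, 5.0
x0 = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])
x = x0.copy()
for t in np.arange(0.0, T, dt):
    x = x + v(x, t) * dt   # simple Euler step

print(x)                                      # integrated trajectories
print(x0 * np.sqrt(1 + (T / sigma**2)**2))    # exact: x0*sqrt(1+(t/sigma^2)^2)
```

The particles simply ride along with the spreading of $|\psi|^2$, which is the general pattern: an ensemble that starts $|\psi|^2$-distributed stays $|\psi|^2$-distributed.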
The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say this is already a show-stopper (given what we know about quantum field theory, whose raison d'être is locality), but let's ignore this aesthetic concern.
What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ and the like, which are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described by local probability densities), work out here. The problem with those is that they are often phrased in terms of spin degrees of freedom, which have Hamiltonians that are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.
But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, to get something non-commuting, which you need in order to use entanglement as a specific quantum resource). So here is my favourite example:
You start with two particles each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe for each particle if it is in the left or the right half of the interval.
From symmetry considerations (details in my paper) you can see that each particle is with the same probability on the left and the right. But they are anti-correlated when measured at the same time. But when measured at different times, the correlation oscillates like the cosine of the time difference.
From the Bohmian perspective, for the static initial state, the velocity field vanishes everywhere; nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box. (How the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom and, remember, everything depends on everything; but somehow it has to work, since you want to produce the correlations that are predicted by the Copenhagen approach.)
The trajectory of the second particle depending on its initial position
This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way.
And here is where it becomes problematic: If you could see the Bohmian particle moving you could decide if the other particle has been measured (it would oscillate) or not (it would stand still). No matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude you must not be able to look at the second particle and see if it oscillates or not.
Bohmians tell you that you cannot, because all you are supposed to observe about the particles are their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you can't, because the first observation disturbs the particle so much that it invalidates the original state.
As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohm literature, but I found it nowhere so clearly demonstrated as in this two-particle system).
My conclusion is that the Bohm approach adds something (the particle positions) to the wave function but then in the end tells you you are not allowed to observe this or have any knowledge of this beyond what is already encoded in the wave function. It's like making up an invisible friend.
PS: If you haven't seen "Bohemian Rhapsody", yet, you should, even if there are good reasons to criticise the dramatisation of real events.
Thursday, January 17, 2019
Has your password been leaked?
Today, there was news that a huge database containing 773 million email address / password pairs has become public. On Have I Been Pwned you can check if any of your email addresses is in this database (or any similar one). I bet it is (mine are).
These lists are very probably the source of the spam emails that have been around for a number of months, in which the spammer claims they broke into your account and tries to prove it by telling you your password. Hopefully, this is only a years-old LinkedIn password that you changed aeons ago.
To make sure, you actually want to search not for your email but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small perl script that checks for your password without telling anybody by doing a calculation locally on your computer. You can find it on GitHub.
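The perl script works offline against the downloaded list; for comparison, the same site's Pwned Passwords service offers a k-anonymity API where only the first five hex characters of the SHA-1 hash ever leave your machine. A sketch of that check (mine, not the author's script):

```python
import getpass
import hashlib
import urllib.request

# k-anonymity range query: send only the first 5 hex chars of the SHA-1
# digest; the suffix comparison happens locally.
password = getpass.getpass("password to check: ")
digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
prefix, suffix = digest[:5], digest[5:]

url = f"https://api.pwnedpasswords.com/range/{prefix}"
with urllib.request.urlopen(url) as response:
    body = response.read().decode()

for line in body.splitlines():
    tail, _, count = line.partition(":")
    if tail == suffix:
        print(f"found {count.strip()} times in known breaches")
        break
else:
    print("not found in the database")
```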
Friday, October 26, 2018
Interfere and it didn't happen
I am a bit late for the party, but I also wanted to share my two cents on the paper "Quantum theory cannot consistently describe the use of itself" by Frauchiger and Renner. After sitting down and working out the math for myself, I found that the analysis in this paper and the blog post by Scott Aaronson (including many of the 160+ comments, some by Renner) share a lot with what I am about to say, but maybe I can still contribute a slight twist.
Coleman on GHZ
My background is the talk "Quantum Mechanics In Your Face" by Sidney Coleman, which I consider the best argument why quantum mechanics cannot be described by a local and realistic theory (from which I would conclude it is not realistic). In a nutshell, the argument goes like this: Consider the three qubit state
$$\Psi=\frac 1{\sqrt 2}(\uparrow\uparrow\uparrow-\downarrow\downarrow\downarrow)$$
which is both an eigenstate with eigenvalue -1 for $\sigma_x\otimes\sigma_x\otimes\sigma_x$ and an eigenstate with eigenvalue +1 for $\sigma_x\otimes\sigma_y\otimes\sigma_y$ or any permutation. This means that, given that the individual outcome of measuring a $\sigma$-matrix on a qubit is $\pm 1$, when measuring all three in the x-direction there will be an odd number of -1 results, but if two spins are measured in the y-direction and one in the x-direction there is an even number of -1's.
The latter tells us that the outcome of one x-measurement is the product of the two y-measurements on the other two spins. But multiplying this for all three spins we get, in shorthand, $XXX=(YYY)^2=+1$, in contradiction to the -1 eigenvalue for all x-measurements.
The conclusion is (unless you assume some non-local conspiracy between the spins) that one has to take serious the fact that on a given spin I cannot measure both $\sigma_x$ and $\sigma_z$ and thus when actually measuring the latter I must not even assume that $X$ has some (although unknown) value $\pm 1$ as it leads to the contradiction. Stuff that I cannot measure does not have a value (that is also my understanding of what "not realistic" means).
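The eigenvalue claims are easy to verify numerically; a short check of my own (not from the talk):

```python
import numpy as np

# Verify: psi = (|uuu> - |ddd>)/sqrt(2) has eigenvalue -1 for X.X.X
# and +1 for X.Y.Y in any order.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def triple(a, b, c):
    return np.kron(np.kron(a, b), c)

psi = (triple(up, up, up) - triple(down, down, down)) / np.sqrt(2)

for label, ops in [("XXX", (X, X, X)), ("XYY", (X, Y, Y)),
                   ("YXY", (Y, X, Y)), ("YYX", (Y, Y, X))]:
    M = triple(*ops)
    print(label, np.round(np.vdot(psi, M @ psi).real, 6))
```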
Frauchiger and Renner
Now to the recent Nature paper. In short, they are dealing with two qubits (by which I only mean two-state systems). The first is in a box L' (I will try to use the somewhat unfortunate nomenclature from the paper) and the second is in a box L (L stands for lab). For L, we use the usual z-basis of $\uparrow$ and $\downarrow$ as well as the x-basis $\leftarrow = \frac 1{\sqrt 2}(\downarrow - \uparrow)$ and $\rightarrow = \frac 1{\sqrt 2}(\downarrow + \uparrow)$. Similarly, for L' we use the basis $h$ and $t$ (heads and tails, as it refers to a coin) as well as $o = \frac 1{\sqrt 2}(h - t)$ and $f = \frac 1{\sqrt 2}(h+t)$. The two qubits are prepared in the state
$$\Phi = \frac{h\otimes\downarrow + \sqrt 2 t\otimes \rightarrow}{\sqrt 3}$$.
Clearly, a measurement of $t$ in box L' implies that box L has to contain the state $\rightarrow$. Call this observation A.
Let's re-express $\rightarrow$ in the x-basis:
$$\Phi =\frac {h\otimes \downarrow + t\otimes \downarrow + t\otimes\uparrow}{\sqrt 3}$$
From which one concludes that an observer inside box L that measures $\uparrow$ concludes that the qubit in box L' is in state $t$. Call this observation B.
Similarly, we can express the same state in the x-basis for L':
$$\Phi = \frac{2 f\otimes \downarrow+ f\otimes \uparrow - o\otimes \uparrow}{\sqrt 6}$$
From this one can conclude that measuring $o$ for the state of L' implies that L is in the state $\uparrow$. Call this observation C.
Using now C, B and A one is tempted to conclude that observing L' to be in state $o$ implies that L is in state $\rightarrow$. When we express the state in the $ht\leftarrow\rightarrow$-basis, however, we get
$$\Phi = \frac{f\otimes\leftarrow+ 3f\otimes \rightarrow + o\otimes\leftarrow - o\otimes \rightarrow}{\sqrt{12}}.$$
so with probability 1/12 we find both $o$ and $\leftarrow$. Again, we hit a contradiction.
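All of these expansions, including the 1/12, take a few lines to check numerically (my own sketch):

```python
import numpy as np

# Amplitude of o (x) left in Phi = (h|down> + sqrt(2) t|right>)/sqrt(3).
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
down, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])

right = (down + up) / np.sqrt(2)
left = (down - up) / np.sqrt(2)
o = (h - t) / np.sqrt(2)

Phi = (np.kron(h, down) + np.sqrt(2) * np.kron(t, right)) / np.sqrt(3)
print(np.dot(np.kron(o, left), Phi) ** 2)   # 1/12 = 0.08333...
```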
One is tempted to use the same way out as above in the three-qubit case and say one should not argue about counterfactual measurements that are incompatible with measurements that were actually performed. But Frauchiger and Renner found a set-up which seems to avoid that.
They have observers F and F' ("friends") inside the boxes that do the measurements in the $ht$ and $\uparrow\downarrow$ bases, whereas later observers W and W' measure the state of the boxes, including the observers F and F', in the $of$ and $\leftarrow\rightarrow$ bases. So, at each stage of A, B, C the corresponding measurement has actually taken place and is not counterfactual!
Interference and it did not happen
I believe the way out is to realise that, at least from a retrospective perspective, this analysis stretches the language, and in particular the word "measurement", to the extreme. In order for W' to measure the state of L' in the $of$-basis, he has to interfere the contents, including F', coherently, such that no leftover information from F''s measurement of $ht$ remains. Thus, when W''s measurement is performed, one should not really say that F''s measurement has in any real sense happened, as no possible information is left over. So it is in any practical sense counterfactual.
To see the alternative, consider a variant of the experiment where a tiny bit of information (maybe the position of one air molecule or the excitation of one of F''s neurons) escapes the interference. Let's call the two possible states of that qubit of information $H$ and $T$ (not necessarily orthogonal) and consider instead the state where that neuron is also entangled with the first qubit:
$$\tilde \Phi = \frac{h\otimes\downarrow\otimes H + \sqrt 2 t\otimes \rightarrow\otimes T}{\sqrt 3}$$.
Then, the result of step C becomes
$$\tilde\Phi = \frac{f\otimes \downarrow\otimes H+ o\otimes \downarrow\otimes H+f\otimes \downarrow\otimes T-o\otimes\downarrow\otimes T + f\otimes \uparrow\otimes T-o \otimes\uparrow\otimes T}{\sqrt 6}.$$
We see that now there is a term containing $o\otimes\downarrow\otimes(H-T)$. Thus, as long as the two possible states of the air molecule/neuron are actually different, observation C is no longer valid and the whole contradiction goes away.
This makes it clear that the whole argument relies of the fact that when W' is doing his measurement any remnant of the measurement by his friend F' is eliminated and thus one should view the measurement of F' as if it never happened. Measuring L' in the $of$-basis really erases the measurement of F' in the complementary $ht$-basis.
Wednesday, October 17, 2018
Bavarian electoral system
Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations in the rules would have worked out (assuming the actual distribution of votes). So, first, here is how the seats are actually distributed: Each voter gets two ballots: On the first ballot, each party lists one candidate from the local constituency, and you can select one. On the second ballot, you can vote for a party list (it's even more complicated because there, too, you can select individual candidates to determine their position on the list, but let's ignore that for today).
Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) gets elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is not of particular concern here). Let's call the rest the "reduced total". The seats are distributed according to each party's fraction of this reduced total.
Of course the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer-method: You first distribute the integer parts. This clearly leaves fewer seats open than the number of parties. Those you then give to the parties where the rounding error to the integer below was greatest. Check out the wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
Because this is what happens in the next step: Remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats that each party is supposed to get in step two according to the fraction of votes. Now, it can happen that a party has won more direct candidates than the seats allocated to it in step two. If that happens, more seats are added to the total number of seats and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the others, leading to that party winning almost all direct candidates (in Bavaria this happened to the CSU, which won all direct candidates except five in Munich and one in Würzburg, which were won by the Greens).
A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So the rounding and seat-adding procedure happens seven times.
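The Hare-Niemeyer step is short enough to state in code; a minimal sketch of my own with made-up vote counts (the real analysis is in the perl script linked at the end):

```python
from math import floor

def hare_niemeyer(votes, seats):
    """Largest-remainder method: hand out the integer parts of the quotas,
    then give the leftover seats to the largest fractional remainders."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda p: quotas[p] - alloc[p],
                          reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc

# Made-up numbers, not the actual Bavarian returns:
print(hare_niemeyer({"A": 4160, "B": 3380, "C": 2460}, 10))
```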
Sunday's election resulted in the following distribution of seats:
After the whole procedure, there are 205 seats distributed as follows
• CSU 85 (41.5% of seats)
• SPD 22 (10.7% of seats)
• FW 27 (13.2% of seats)
• GREENS 38 (18.5% of seats)
• FDP 11 (5.4% of seats)
• AFD 22 (10.7% of seats)
You can find all the vote totals on this page.
Now, for example, one can calculate the distribution without districts, just throwing everything into a single super-district. Then there are 208 seats, distributed as
• CSU 85 (40.8%)
• SPD 22 (10.6%)
• FW 26 (12.5%)
• GREENS 40 (19.2%)
• FDP 12 (5.8%)
• AFD 23 (11.1%)
You can see that in particular the CSU, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, and that the last three parties would benefit from giving up districts.
But then there is actually an issue of negative vote weight: The Greens are particularly strong in Munich, where they managed to win 5 direct seats. If those seats had instead gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU, increasing the weight of Oberbayern compared to the other districts, which would then have been beneficial for the Greens, as they are particularly strong in Oberbayern. So if I give all the direct candidates to the CSU (without modifying the total numbers of votes), I get the following distribution:
221 seats
• CSU 91 (41.2%)
• SPD 24 (10.9%)
• FW 28 (12.6%)
• GREENS 42 (19.0%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)
That is, the Greens would have gotten a higher fraction of the seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!
The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition) but still, the constitutional court does not like (predictable) negative weight of votes. Let's see if somebody challenges this election and what that would lead to.
The perl script I used to do this analysis is here.
The above analysis in the last point is not entirely fair, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they led in the constituencies they won leads to an almost zero effect:
Seats: 220
• CSU 91 41.4%
• SPD 24 10.9%
• FW 28 12.7%
• GREENS 41 18.6%
• FDP 12 5.4%
• AFD 24 10.9%
Letting the Greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up all of central Munich's more left-leaning voters; do I hear somebody say "Gerrymandering"?) yields
Seats: 217
• CSU 90 41.5%
• SPD 23 10.6%
• FW 28 12.9%
• GREENS 41 18.9%
• FDP 12 5.5%
• AFD 23 10.6%
Or letting them win all but Moosach and Würzburg-Stadt, where their leads were smallest:
Seats: 210
• CSU 87 41.4%
• SPD 22 10.5%
• FW 27 12.9%
• GREENS 40 19.0%
• FDP 11 5.2%
• AFD 23 11.0% |
46a0e4fbb0df9ebd | I don't understand something about the Heisenberg and interaction picture, in my notes the time evolution of operators for the Heisenberg and interaction picture is derived, by inserting them into the Schrödinger equation...
My question is: Doesn't the SE only give the time evolution of states and not operators?
In my notes the time-dependent state $U\psi (x,t=0)$ is inserted into the SE, and the time evolution of the $U$ operator is derived ($i \hbar \frac{\partial U}{\partial t}=H\, U$), with the following argument given at the end:
"Because this ($i \hbar \frac{\partial (U \psi)}{\partial t} = H\, U\psi$) holds for any wavefunction, the equation above must hold also for the operators themselves."
$\uparrow$ I don't understand this.
It is a change of view. In the Schrödinger picture your operators are not time dependent (or only explicitly time dependent) but your states evolve with time. In the Heisenberg picture, on the other hand, your states are fixed but the operators depend on time. There you replace the Schrödinger equation with the Heisenberg equation
$$\frac{d}{dt} A = \frac{i}{\hbar} [H,A] +\partial_t A $$
which describes the same physics.
You can think of it via a classical analogue:
From David J. Toms' "The Schwinger Action Principle and Effective Action":
There is an analogy between the two pictures in quantum mechanics and the treatment of classical mechanics in a rotating frame which might make things a bit clearer. A vector in a rotating frame can be considered from two points of view. We may either consider the vector to be rotating with respect to a set of fixed axes (analogous to the Schrödinger picture with the state vector evolving in time but with the base kets fixed) or consider the vector to be fixed but take the coordinate axes to be rotating (analogous to the Heisenberg picture with the state vector fixed). In order that these two different views describe exactly the same physics, in the second case the axes must rotate in the opposite direction to that of the vector in the first case.
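A quick numerical sanity check of this equivalence (my own sketch, with an arbitrary two-level Hamiltonian): expectation values agree whether you evolve the state or the operator.

```python
import numpy as np
from scipy.linalg import expm

# Two-level toy model (hbar = 1): compare the Schrödinger and Heisenberg
# pictures. U(t) = exp(-i H t), A_H(t) = U(t)^dagger A U(t).
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)

t = 2.0
U = expm(-1j * H * t)

schrodinger = np.vdot(U @ psi0, A @ (U @ psi0)).real        # <psi(t)|A|psi(t)>
heisenberg = np.vdot(psi0, U.conj().T @ A @ U @ psi0).real  # <psi0|A_H(t)|psi0>
print(schrodinger, heisenberg)   # identical up to rounding
```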
• I get that... I don't understand how the Heisenberg equation is derived... – Luka8281 Mar 3 '17 at 10:09
• Look at: en.wikipedia.org/wiki/…. But to get this time evolution operator you follow the way you described. Since it should not depend on the choice of $\psi (t_0)$, it gives an equation for $U$. In other words, the time evolution should be a basic principle depending on the system, i.e. $H$, not on a specific wave function. – Alpha001 Mar 3 '17 at 10:26
|
dcd8c99042d3b947 | Abraham Meets Abraham from a Parallel Universe
And he [Abraham] lifted up his eyes and looked, and, lo, three men stood over against him… (Gen. 18:2) On this blog, we often discuss the collapse of the wavefunction as the result of a measurement. This phenomenon is called by some physicists the "measurement problem." There are several reasons why the collapse of the wavefunction – part and parcel of the Copenhagen interpretation of quantum mechanics – is called a problem. Firstly, it does not follow from the Schrödinger equation and is added ad hoc. Secondly, nobody knows how it happens or how long it takes to collapse the wavefunction. This is not to mention that any notion that the collapse of the wavefunction is caused by human consciousness, as proposed by von Neumann, leading to Cartesian dualism, is anathema to physicists. [...] |
77e36b92ab31a881 | Introduction to Doug's RSt
Discussion of Larson Research Center work.
Moderator: dbundy
Posts: 150
Joined: Mon Dec 17, 2012 9:14 pm
Re: Introduction to Doug's RSt
Post by dbundy » Wed Oct 19, 2016 2:10 pm
Now that our RSt has a basis for calculating the discrete levels of energy transitions observed in Hydrogen, we need a scalar motion model of the atom that serves to explain the changes inducing the transitions, as the vector motion model of LST theory does. For right or wrong, whether it's the Bohr model of electron particles orbiting a nucleus, or the Schrödinger model of electron waves inhabiting shells around a nucleus, the LST's atomic model provides the LST community with a physical interpretation of the word "transition." However, this is much more difficult to achieve in a scalar motion model.
In our RSt, where the unit of elementary scalar motion from which higher combinations are derived, the S|T unit, is an oscillating volume of space and time, understanding what accounts for the observed atomic energy transitions in the combinations identified as atoms is not easy.
This is especially challenging given that, in our theory, the electron has no identity and consequently no properties, as an electron, until it is created in the disintegration of the atom, or in the process of its ionization. Fortunately, however, we have the scalar motion equation to help.
Recall, that the basic scalar motion equation is,
S|T = 1/2+1/1+2/1 = 4|4 num (natural units of motion),
And our basic energy, or inverse motion equation is,
T|S = 1/2+1/1+2/1 = 4|4 num (natural units of inverse motion),
Now, this terminology and notation will be a complete mystery to those who have not read the previous posts, so reading and understanding those posts first is a prerequisite for the study of what follows, and while we are on the subject, let me emphasize the tentative nature of all the conclusions presented thus far. It may be necessary to eat a lot of humble pie from time to time, during the development of our RSt, for several reasons, but if so, it won't be the first time. It has been said before and bears repeating: It takes courage to develop a physical theory, not to mention a new system of physical theory. Larson was an incredibly courageous man, as well as an intelligent and honest investigator. Perhaps those of us who try to follow his lead appreciate that fact more than most.
With that said, we have taken on the challenge of dealing with units of energy, with dimensions E=t/s, as well as motion, with dimensions v=s/t, and have dared to cross over the line of LST physics, which cannot brook the existence of entities over the speed of light, which are known as "tachyons," in that system. Nevertheless, the T units in our RSt are just such units, but because the dimensions of these units are actually the inverse of less than unit speed units, no known laws of LST physics are broken.
In retrospect, venturing into this unexplored realm of apparent over-unity, which seems so iconoclastic, appears to be the natural and compelling evolution of physical thought. So much so that one marvels that the world had to wait so long for the Columbus-like pioneer Larson to show science the way. However, incredible as it is, the entire LST community, save just a few, has no idea yet that our understanding of the nature of space and time has been revolutionized. They cannot, as yet, recognize that time is the inverse of space, even though it's as plain as the nose on your face, as soon as someone points out how it can be.
By the same token, the mathematics of the new system is just as iconoclastic. As we consider the basic scalar motion equation, n(S|T)=n4|n4, and its inverse, n(T|S)=n4|n4, and graph their simple magnitudes, we find that its also possible to formulate a basic scalar energy equation, where
S*T = n2.
In the previous posts above, I've explained how S|T units combine into entities identified with the observed first family of the LST's standard model (sm), and these combos combine into entities that are identified as protons and neutrons, which combine into elements of the periodic table, or Wheel of Motion:
The symbolic representation of the S|T units that, as preons, combine to form the fermions and bosons of our RSt is a reflection of the S|T equation, making it possible to graphically represent them and their combinations as protons and neutrons, along with their respective magnitudes of natural units of motion. On this basis, the S|T magnitudes for the proton, neutron and electron combos are:
P = 46|46 num,
N = 44|44 num,
E = 18|18 num
The magnitude of the Hydrogen atom (Deuterium isotope) is then the sum of these three:
H = 46+44+18 = 108|108 num.
At this point, however, representing the constituents of the atom as combos of S|T triplets (see previous posts above), becomes cumbersome and we need to condense the symbols from 20 triangles to 4 triangles, in the form of a tetrahedron, as shown below:
The top triangle of the tetrahedron is the odd man out for the nucleons, so that for the proton this is the down quark, but for the neutron it is the up quark.
The numbers in the four triangles are the net magnitudes of the quarks and the electron. So, the number of the down quark at the top of the proton is -1, because the magnitude of the inner term of its S|T equation is 2/1, (-2+1 = -1), whereas the number of its two up quarks is 2, because the inner terms of their S|T equations are both 3/5, (-3+5=2). For the neutron, the number of the single up quark at the top is 2, while the magnitude of the two down quarks below it is -1, given the inner terms of their S|T equations.
The inner term of the electron's S|T equation is 6/3, or -3, (-6+3=-3), so that the net motion of the three entities combined as the Deuterium atom balances out at 3-3 = 0, or neutral in terms of charge, as shown in the graphic above.
In this way, each element of the Wheel could be represented by the numbers of its tetrahedron symbol, if there were some need to do so, but what is more useful is the S|T equation itself. Expanding the equation for Deuterium:
D = 27/54+27/27+54/27 = 108|108 num,
but factoring out 3^3, we get:
D = 3^3(1/2+1/1+2/1) = 108|108 num.
To take advantage of this factorization we can represent it by making the S|T symbols of the notation bold:
S|T = 3^3(1/2+1/1+2/1) = 108|108 num
D = S|T= 108; He = 2(S|T) = 216; Li = 3(S|T) = 324, etc.
This way, we can easily write the S|T equation for any element, X, given its atomic number, Z:
XZ = Z(S|T)
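As a sanity check on the bookkeeping, the compact notation reduces to simple arithmetic (a sketch of my own; the constituent values are the ones given above):

```python
# S|T bookkeeping from this post: P = 46, N = 44, E = 18 num, so the
# Deuterium combo is 108 num and element Z scales it by Z.
P, N, E = 46, 44, 18
D = P + N + E   # 108

for name, Z in (("D", 1), ("He", 2), ("Li", 3)):
    print(name, Z * D, "num")
```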
However, while this should prove to be quite helpful as compact notation, there is still more to consider. Recall that the units on the world-line chart actually represent the expanding/contracting radius of an oscillating volume, and as such, its magnitude is the square root of 3, not 1. This factor expands the relative magnitudes involved considerably, which we will investigate more later on.
For now, I want to draw your attention to the scalar energy equation,
S*T = n^2.
Recall that for the S*T unit,
S*T = 1/(n+1) * (n*n) * (n+1)/1 = n^2.
So when n = 1,
S*T = (1/2)*(1*1)*(2/1) = ((1/2)(2/1)(1*1)) = ((2/2)(1*1)) = 1*1 = 1^2,
but, if we invert the multiplication operation, we get:
S/T = (1/2)/(1/1)/(2/1) = ((1/2)(1/1))(1/2) = (1/2)(1/2) = 1/2²,
which we want to do when we wish to view the S and T cycles in terms of energy, so that:
E = hν ---> T/S = 1/(S/T) = n²,
where n is the number of cycles in a given S|T unit.
Now, this may seem to be contrived, and perhaps it is, especially when we invert the operation of the inner term of the S*T equation from division (n/n) to multiplication (n*n). However, if we don't invert it, the equation will always equal 1, while if we do invert it, then dividing 'T' cycles by 'S' cycles (T/S) yields the correct answer, T/S = n².
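To make the two worked examples above easy to check, here is a minimal sketch of my reading of them (not canonical notation):

```python
# With the inner term taken as a product, S*T reduces to n^2; with
# every operation a division, S/T reduces to 1/(n+1)^2, so dividing
# T cycles by S cycles inverts it.
def s_times_t(n: int) -> float:
    return (1 / (n + 1)) * (n * n) * ((n + 1) / 1)

def s_over_t(n: int) -> float:
    return (1 / (n + 1)) / (n / n) / ((n + 1) / 1)

print(s_times_t(1))  # 1.0   -> 1^2
print(s_over_t(1))   # 0.25  -> 1/2^2, i.e. T/S = 2^2
```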
Now, this brings us to something else that needs to be clarified. In the chart showing the correlation between the quadratic equation of the S|T units and the line spectra of Hydrogen, given the Rydberg equation, the frequency of the S|T units is shown as decreasing with increasing energy, rather than increasing as it should. This is problematic, to say the least, but I think it can be resolved when we consider that the "direction" reversals of each S and T unit always remain at the 1/2 and 2/1 ratio, even though their combined magnitude is greater in the absolute sense; that is, while the space/time (time/space) ratio of 1/2, 2/4, 3/6, 4/8, ...n/2n remains constant at n/2n = 1/2, the absolute magnitude of 2n - n increases as n: 1, 2, 3, 4, ...2n-n.
Therefore, showing the reversing "direction" arrows in the graphic as increasing in length as the absolute magnitude increases is an incorrect representation of the physical picture. The correct representation would show the number of arrows increasing, as S|T units are combined, with their lengths (i.e. their periods) remaining constant, so that, as the quadratic energy increases, the number of 1/2 periods in a given S|T combo increases. The frequency of the unit is then that of a frequency mixer, containing both the sum and difference frequencies of its constituent S|T units.
Of course, this is not exactly what is observed, but upon further investigation, we may be able to resolve the discrepancy. At least it is consistent with our theoretical development.
In the meantime, for the energy conversion of the S|T equation of the Hydrogen atom, where n = 1, with 3³ factored out, we get:
S/T = (1/(3³n))² = 1/27², and T/S = (3³n)² = 27² = 729,
when we put the 3³ factor back in, so that we get the actual number of 1/2 and 2/1 cycles, or S and T units, contained in the Hydrogen atom.
There are a lot of tantalizing clues to follow in the investigation of these equations and their relation to the conventions of the LST particle physics community, which uses units of electron volts for energy, dividing those units by the speed of light to attain units of momentum, and dividing them by the speed of light squared to attain units of mass. They even divide the reduced Planck constant by eV to attain a unit of time, and multiply that result by c to get the unit of space, according to Wikipedia:
Measurement    Unit       SI value of unit
Energy         eV         1.602176565(35)×10⁻¹⁹ J
Mass           eV/c²      1.782662×10⁻³⁶ kg
Momentum       eV/c       5.344286×10⁻²⁸ kg⋅m/s
Time           ħ/eV       6.582119×10⁻¹⁶ s
Distance       ħc/eV      1.97327×10⁻⁷ m
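These conversions follow directly from two constants; here is a minimal sketch reproducing the table's values (using the table's CODATA-era numbers):

```python
# Reproduce the table above from the standard definitions:
# mass = E/c^2, momentum = E/c, time = hbar/E, distance = hbar*c/E.
E_EV = 1.602176565e-19  # J per eV
C = 299_792_458.0       # m/s
HBAR = 1.054571726e-34  # J*s

print("mass     eV/c^2  :", E_EV / C**2, "kg")      # ~1.7827e-36
print("momentum eV/c    :", E_EV / C, "kg*m/s")     # ~5.3443e-28
print("time     hbar/eV :", HBAR / E_EV, "s")       # ~6.5821e-16
print("distance hbar*c/eV:", HBAR * C / E_EV, "m")  # ~1.9733e-7
```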
It'll take a while to untangle these units and see how they correspond to the S|T units of motion, but in the meantime, we can use the progress achieved so far to analyze the periodic table of elements, showing why Larson's four 4n² periods define it, using the LRC model of the atom explained so far.
Re: Introduction to Doug's RSt
Post by dbundy » Thu Oct 20, 2016 10:12 am
Hi Jan,
Thanks for the comment. The math is so far over my head that I could never discuss it intelligently. You said,
Following Nehru's approach, the Schrodinger equation was assimilated into RS and spectroscopic calculations are possible right now without changes to the methods used today.
I'm not sure what you are referring to here. Who assimilated the wave equation into the RS? Where are these RS calculations using today's methods? I don't understand.
You also wrote:
To develop the RS's own methods to calculate spectroscopy, the connection between the wave function and Larson's triplets has to be understood first. Have you dug into this so far?
The wave-function is based on 3D spherical harmonics, in other words, vector motion on a surface, while scalar motion is not any kind of vector motion. By definition, it is a change in scale, or size. If you are referring to the preon triplets as Larson's triplets in the LRC's RSt, then I would say that there is a remote connection that can be seen in the Lie algebras employed in QM, but, again, the disconnect is fundamental, because of the difference in the definitions of motion.
Using the wave equation, LST physicists are at a loss to understand the nature of quantum spin. By their own admission, they haven't a clue how to account for it, and when it comes to iso-spin, it's even worse. However, the root of their problem is, again, the definition of motion, which manifests itself in the fundamental understanding of the relation between numbers, geometry and physics.
When Larson changed our understanding of the nature of space and time, he changed everything, including our understanding of the nature of numbers, because the same principle of reciprocity that is not recognized as fundamental in LST physics is also not recognized as fundamental in LST algebra (meaning numbers). Thanks to Larson, we can now see the connection between 3D geometry and 3D numbers, and, as Raoul Bott so famously proved, Larson's postulate that there are no physical phenomena beyond the third dimension is established.
What this means is that the use of the modern algebra, while able to weave sophisticated and intricate, dare I say baroque, edifices out of fundamental concepts like magnitudes, dimensions and directions, cannot help us to get where we need to get because of errors of definition.
The clearest example I think I can point to is the vain attempt to use octonions in the string theory of QM. John Baez co-authored a 2011 article in Scientific American, entitled, The Strangest Numbers in String Theory, with the subtitle: "A forgotten number system invented in the 19th century may provide the simplest explanation for why our universe could have 10 dimensions."
He is referring to the attempt to use 7 imaginary numbers, with a real number, 8 numbers in all, to invent a 3D number system. The problem is, of course, that the use of even one imaginary number, to compensate for the lack of understanding that Larson provides us, has taken us down the wrong path in understanding magnitudes, dimensions and directions. To be sure, it has been very fruitful, and, together with the concept called "real" numbers, has led to Western society's undreamed-of advances in technology.
Nevertheless, it has also led to the continuous/discrete impasse now plaguing the LST community, which string theory's attempt to go beyond three dimensions was hoped to solve. However, nothing but a correction in the fundamental understanding of motion and numbers can do that. Non-pathological 1, 2 and 3D algebras are possible, but only if they are based on the correct understanding of points, lines, areas and volumes, generated by scalar motion over time (space).
The reason Baez et al. think octonions are needed is that the vector space of the Lie algebra associated with the 3D Lie group runs out of dimensions to use after two dimensions. In other words, they can't use complex numbers in the SU(3) group, because it takes two dimensions for one complex number. (I'm ignoring the non-geometric meaning of "dimension" used by mathematicians.)
Now, Larson's reciprocal system, when applied to numbers, opens up a whole new world, where the three (four counting zero) dimensions of physical magnitudes, in two "directions" replace the foundation of modern vector algebra, based on imaginary numbers, with a scalar algebra, based on real (i.e. integer) numbers, which correspond to geometric points, lines, areas and volumes, generated over time (space).
These algebras do not lose the vital properties of 0D algebra, as they increase in dimension, because the dimensions of the unit itself change, going from 0 to 1 to 2 to 3 dimensions, in a completely different manner than the vector algebra does, which goes from real (0D), to complex (1D), to quaternions (2D), to octonions (3D), via imaginary numbers, losing the properties of algebra in the process.
In other words, unlike the vector algebras, our higher-dimensional scalar algebras are each as ordered, commutative and associative as our 0D scalar algebra. This is a huge change in the foundation of the mathematics employed in the two systems of theory.
Of course, that doesn't mean that we can't employ the LST algebras and calculus to advantage in the RST, to give us insight into the physics of vector motion that we can use in the development of the physics of scalar motion, but it means that we always have to understand the difference.
Re: Introduction to Doug's RSt
Post by dbundy » Mon Nov 07, 2016 8:35 am
One of the LST challenges that Larson's RSt cannot meet is the diminishing size of the atom from the beginning of an energy level to the end, where it terminates in the noble element. So not only does a viable physical theory have to be able to account for the line spectra of the elements, which organizes the 117 elements into the 4n² periods we find, but it also has to account for this shrinkage in the diameter of the atom, as it gets more and more massive, within each period.
The LST theory accounts for it, by asserting that the orbiting electrons are pulled in towards the nucleus, as its mass increases, which seems very reasonable. However, there is no atomic nucleus in RST-based atoms, and also no orbiting electrons to pull closer in.
In the atom of Larson's RSt, there are two magnetic rotations and one electrical rotation (m1-m2-e, my notation, not his), and since the electrical rotation is one-dimensional, it takes n² electrical rotations to equal one two-dimensional m2 magnetic unit.
This structures the elements into the four periods quite nicely, without resorting to the spectral data at all. And, as it turns out, the LST's QM theory of spectral lines gets the number of elements in the periods wrong, as shown by Le Cornec's distribution of atomic ionization potentials.
The good news is that the LRC's RSt does appear to account for the decreasing size of the atom, as its mass increases in each period, even as we hope to unlock the mystery of the atomic spectra as well, but more on the mass and size issue later.
At this point, the immediate challenge is how to account for the atomic spectra, given the scalar motion model that has no nucleus and no cloud of electrons surrounding it. In the LRC's RSt, the scalar motion model of the atom consists of combinations of motion called S|T units, which act as preons to the standard model (sm) of observed particles, namely the fermions, classified as quarks and leptons, and the bosons, as discussed previously in the above posts.
With the equation of motion, we can write the total motion of the Hydrogen atom as,
S|T = 27/54+27/27+54/27 = 108|108 num,
where the proton & neutron contribution of total motion is,
S|T = 21/42+21/24+48/24 = 90|90,
and the contribution of the electron is,
S|T = 6/12+6/3+6/3 = 18|18.
Assuming an electron absorbs a photon of lowest energy, we get:
e + γ = (6/12+6/3+6/3) + (3/6+3/3+6/3) = 9/18+9/6+12/6 = 30|30,
which is only 12 num higher than the electron, but the middle term, 9/6, is now 6-9 = -3. So, while the relative motion imbalance (the electrical charge) of the excited electron (as part of the atom) is unchanged, the scalar energy has increased by 3, a quantum jump in the energy (9x6 = 54, divided by 6x3 = 18, given our scalar energy equation discussed above).
This is, in effect, equivalent to the quantum energy transition of the LST's Bohr model, where the ground state electron is excited to the next higher orbit, at least qualitatively speaking. It remains to be understood quantitatively, at this point, but it looks promising.
Nevertheless, our earlier analysis of the energy transitions was based on the sequence of single S|T units (1², 2², 3², ... n²), which worked out quite well, but here, the sequence has to be based on the assumption of basic boson triplets as the unit, where the sequence is:
3, 6, 9, ...3n
so the photon motion equation is actually:
S|T = 3(1/2+1/1+2/1) = 3/6+3/3+6/3 = 12|12 num.
Hence, the magnitude of n quantum transitions is,
e + nγ = (6/12+6/3+6/3) + n(3/6+3/3+6/3),
So for n =
0, the middle term of e + nγ = 6/3 = -3, and energy = 6x3 = 18
1, the middle term of e + nγ = 9/6 = -3, and energy = 9x6 = 54
2, the middle term of e + nγ = 12/9 = -3, and energy = 12x9 = 108
3, the middle term of e + nγ = 15/12 = -3, and energy = 15x12 = 180
4, the middle term of e + nγ = 18/15 = -3, and energy = 18x15 = 270
in natural units of energy, we might say.
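The series above can be generated mechanically; here is a minimal sketch, with each S|T term encoded as a (space, time) pair of my own devising:

```python
# Add n photon triplets to the electron triplet, term by term, then
# report the middle term's net value and its "scalar energy" product.
electron = [(6, 12), (6, 3), (6, 3)]
photon   = [(3, 6), (3, 3), (6, 3)]

def excite(n: int):
    combo = [(s + n * ps, t + n * pt)
             for (s, t), (ps, pt) in zip(electron, photon)]
    mid_s, mid_t = combo[1]
    return mid_t - mid_s, mid_s * mid_t  # net charge term, energy

for n in range(5):
    net, energy = excite(n)
    print(f"n={n}: middle term net {net}, energy {energy}")
# net is -3 every time, with energies 18, 54, 108, 180, 270
```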
One would expect that this would lead to a very simple, Hydrogen-like explanation of the line spectra of the elements, but, of course, this is far from the case. In fact, the LST community's much-hyped solution, using Schrödinger's wave equation, works only in principle, since they can't use the separation of variables technique to solve the equation analytically for multi-electron atoms.
However, according to the work of amateur investigator Franklin Hu, "If you create a simple graph of the line frequencies and intensities for Helium, a striking and predictable pattern appears which suggests that the spectra and the intensity can be calculated using simple formulas based on the Rydberg formula. This pattern also appears in lithium and beryllium."
Unfortunately, Hu lacks a suitable physical theory to explain the pattern, but we hope to supply that part.
Stay tuned for more developments coming soon.
Re: Introduction to Doug's RSt
Post by rossum » Tue Nov 08, 2016 2:59 am
Hello Doug,
Thanks for your answer, and sorry for my late reply; I have been extremely busy these days.
You wrote:
In my post I was referring to K.V.K. Nehru's article “Quantum Mechanics as the Mechanics of the Time Region”. In this article he says:
...[h]ence the Schrödinger equations can be admitted as legitimate governing principles for arriving at the possible wave functions of an hypothetical particle of mass m traversing the time region, with or without potential energy functions as the case may be.
Nehru's argumentation for the Schrödinger equation in the RS seems to me pretty solid, so I took it as generally accepted among the RS community, though I'm aware I may be wrong on this point.
In the same article he proposed several potentials (given in an image not reproduced here), so it is (technically) possible to solve the Schrödinger equation for this potential and get both the spectral lines (as energy differences of the solutions) and their intensities (overlaps between the solutions). Although this potential removes the need for renormalization, it is unfortunately incorrect: I analysed it, and the solutions give incorrect energies for any combination of coefficients. The potential curve simply has the wrong shape. As a result, e.g., chemical bonds would not break under high temperature, etc.
What I really meant was that anyone can now calculate the spectra (even for molecules) using Nehru's approach, but probably nobody except me has tried.
You also wrote:
I don't quite understand: I would say that, on the contrary, the nature of spin is to a certain point quite well understood: the best way to see the nature of spin in modern physics is to use the Foldy–Wouthuysen transformation of the Dirac Hamiltonian. In this way one gets a number of terms in which the magnetic field (intensity) generated by an electron is a result of the wavy nature of the "particle". The same is true for composite particles like neutrons, as quarks have charge (naively said). It all gets obscured only when relativity comes into play: people want to attribute the Lorentz invariance to relativity instead of to the nature of waves. (Waves in general are Lorentz invariant via their speed, a fact people tend to ignore or don't know.) In short, it is not possible to have a "moving charge" or a general electromagnetic wave without the magnetic component, i.e. spin.
To the rest of your post: I have many comments, but they are quite long for posting; maybe a Skype talk would be more appropriate some day. One thing I need to mention here, however, is that string theory is definitely not a mainstream theory, and it is far from being accepted. I think the whole theory is fundamentally flawed. For now the mainstream is quantum field theory (kind of the opposite of string theory).
Re: Introduction to Doug's RSt
Post by dbundy » Tue Nov 15, 2016 10:45 am
Hi rossum.
Thanks for getting back to me on this. I knew you were referring to K.V.K.'s article, but he was never able to make any progress along the lines he suggested. But, as you write, anyone can now calculate the spectra (even for molecules) using Nehru's approach.
This is news to me. Have you posted the calculations somewhere? If so, please point me to them.
As far as the nature and origin of quantum spin goes, the LST community doesn't have a clue. All they know is that it is a conundrum yet to be solved. How can a particle with no spatial extent possess "intrinsic angular momentum?" It can't, but even if it could, the need to rotate its spin axis through 720 degrees to return the particle to its original state is a complete mystery.
In the LRC's RSt, however, there is no spin axis, as the 3D oscillation is not a spherical wave, but a pulsation of volume, if you will, and the 720 degree cycle is easily explained.
But, again, the important thing for us to understand is how Larson has revolutionized the nature of space and time and the phenomenon of motion. Until we recognize that a repetitive change in scale constitutes motion and follow the consequences, the science of theoretical physics will ever be bound to the science of vector motion and mathematics will forever be hampered by imaginary numbers.
Re: Introduction to Doug's RSt
Post by rossum » Thu Nov 24, 2016 10:28 am
Hello Doug,
dbundy wrote: This is news to me. Have you posted the calculations somewhere? If so, please point me to them.
No, I didn't, but I can show you the problem here. In the following image you can see Nehru's potential (arbitrarily scaled), the classical electrostatic potential (the one usually used to calculate the hydrogen atom energies) and the harmonic potential (usually used as an approximation in molecular dynamics, etc.).
[image: pot.png, comparing Nehru's potential with the classical electrostatic and harmonic potentials]
Nehru didn't write how to calculate the coefficients in his potential, but if there are some correct coefficients, it should at least be possible to fit them to experimental data. For this we could use the first and the last line in the Balmer series from the experiment, just to check the feasibility of this potential. However, the lines in Nehru's potential do not converge to a certain value but rise to infinity instead. Note that in the harmonic potential (i.e. x²) the solutions are spaced equally, whereas in the classical potential (i.e. 1/x) they converge to some value. Below are images graphically showing the energies for these potentials.
Harmonic potential levels:
[image: harmonic_levels.jpeg]
Classical electrostatic potential levels:
[image: levels.png]
From the images above it should be obvious that the levels in Nehru's potential will be closer to the harmonic potential than the classical potential and will diverge instead of converging. I even used an online differential equation solver to get the eigenfunctions in analytical form, but after seeing the result I didn't bother to calculate the eigenvalues (relative energies in atomic units). Here is the input and output from the solver:
[image: npot_diff.gif, the differential equation input]
[image: npot_s.gif, the analytical solution output]
Where $U(a,b)$ is Kummer's function of the second kind and $L_n^m(a)$ are Laguerre functions. Now, the solution in the classical potential is $N_{nl}\,e^{-\rho/2}\,\rho^l\,L_{n-l-1}^{2l+1}(\rho)$, where $L_{n-l-1}^{2l+1}(\rho)$ is a Laguerre polynomial, $\rho = \frac{2r}{n a_0}$ and $N_{nl} = \frac{2}{n^2}\sqrt{\frac{(n-l-1)!}{[(n+l)!]^3}}$. If we cut away everything we possibly can by adjusting constants and focus only on the "main quantum number" (i.e. set the Laguerre polynomial/function to a constant), we see that in the case of the classical potential the energy increases from a negative value to 0, but in the case of Nehru's potential it rises to infinity, very similarly to the harmonic potential.
Although I didn't calculate the energies for the hydrogen atom using Nehru's potential, I was able to analyse the possible solutions and show that they must in any case have the wrong tendency. However, this is relevant only to the part that was proposed for the 'electronic' potential. I didn't analyse the rest of the potentials, so they may or may not be correct.
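To make the tendency concrete, here is a minimal numerical sketch (my own, using textbook formulas) contrasting how Coulomb-like levels converge while harmonic-like levels diverge:

```python
# Coulomb levels -R/n^2 pile up toward a finite series limit (the
# Balmer limit), while harmonic levels (n + 1/2), in units of
# hbar*omega, are equally spaced and grow without bound.
RYDBERG_EV = 13.6

for n in range(1, 6):
    coulomb = -RYDBERG_EV / n**2  # spacing shrinks, converges to 0
    harmonic = n + 0.5            # spacing stays constant, diverges
    print(n, round(coulomb, 3), harmonic)
```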
Re: Introduction to Doug's RSt
Post by dbundy » Wed Nov 30, 2016 8:46 am
Thanks for the detailed explanation, rossum. I've been out of pocket for weeks now and don't know when I'll be able to reply, but will try to as soon as possible.
Our RSt Goes Cosmic
Post by dbundy » Wed Feb 01, 2017 7:28 am
I finally have more time to devote to this discussion and explanation of the LRC's RSt. The development of the topic to this point has been focused on the material sector of the theoretical universe, leading to rossum's post on the utility of applying the LST community's equations to obtain the atomic spectra.
As he indicates in the above post, this is a problematic approach, to say the least. We would like to find a scalar motion based solution, using the new scalar math, and I think we will eventually, as I have made some progress that is encouraging, along that line.
The trouble is, however, it's been months since I've been able to give it any attention, and in the meantime, to my delight, a presentation of the work of Randell Mills and his energy generator, based on his Hydrino theory, which has a very interesting energy spectrum, has been taken on the road. He first presented it in Washington, D.C., then in London, and this month he will present it in California. It's very impressive.
I corresponded with Randell a year or so ago, discussing his hydrino theory somewhat, proffering some insight into the phenomenon in RST terms, but, of course, he was not interested, and I can certainly understand why. However, his theory invokes a new model of the electron, using the same spherical harmonics as QM does to derive the atomic orbitals we've discussed above, but he does so by turning the orbitals into electrons!
That's right. In his theory, the electron is treated as a two-dimensional surface of a sphere, surrounding the nucleus, called an "orbitsphere," with charges and currents flowing upon the surface, according to spherical wave equations. Fortunately, this approach eliminates the acceleration problem of orbiting electrons, as did the Bohr model, but without the position vs. momentum issue arising from the dual wave/particle concept of the electron, inherent in quantum mechanics.
This enables Mills to use the classical laws of Newton and the equations of Maxwell to solve the quantum mechanical experiments such as the dual slit phenomena giving rise to the particle/wave enigma. However, Mills' triumph of classical vs quantum mechanics is based on a photon and electron momentum interaction, which is necessarily problematic for "point-like" entities that actually do (must) have extent of some kind, and consequently cannot carry "charge" without flying apart, unless something ad hoc, like "Poincaré stresses," is postulated to hold them together.
The enigma of how elementary particles can exist that have no extent (radius = 0), yet can have something called "quantum spin" and angular momentum, is the most fundamental mystery plaguing the LST community. If these "particles" can't exist in the theory, then arguing how their properties interact to produce observed phenomena is a little like putting the cart before the horse.
Nevertheless, the electron in Mills' theory, not only has extent, it changes form from a two-dimensional surface of a sphere, in its bound form, to a two-dimensional disk in its free form. Electron spin is then conceptualized as a disk flipping like a tossed coin.
The Hydrino theory enables Mills to postulate that many inverse excited states of 1/n exist for electrons in the atom, in addition to the known n excited states, found in the spectroscopy field. However, since these new states are viewed as fractional, rather than inverse states, the LST community rejects the idea categorically.
Of course, inverse states are readily acceptable to the RST community, and the experimental/engineering evidence indicating that they are real, is a highly motivating factor in our research.
The first thought is that the anti-Hydrogen atom of our RSt accounts for Mills' results. Recall that the standard model-like chart of S|T unit combos, includes the anti-particles (read "inverse particles") of the leptons and quarks, which combine to form the proton, neutron and electron, making up the Hydrogen atom.
Accordingly, these inverse versions of the leptons and quarks combine to form inverse protons, neutrons and electrons, making up the inverse Hydrogen atom, as shown below:
In this image I just underlined, instead of overlined, the particle labels to indicate the inverse nature of the particles, but, as can easily be seen, the inverse Hydrogen atom is formed from the inverse-quarks and the inverse-electron (positron) particles. Thus, the excited states of the positron in the inverse-Hydrogen atom would conform to the same calculations as shown for the Hydrogen atom, but in the inverse "direction."
We will follow the RST community's convention and designate these inverse entities as c (for cosmic sector) entities. We will also need to transform the "S|T Periodic Magnitudes" chart, used to graph the S|T combinations for Hydrogen excited states, into its c version, the "T|S Periodic Magnitudes" chart, as shown below:
This operation reverses the "direction" of the unit progression's plot, in order to show that the space and time oscillations of the S (red) units and the T (blue) units are progressing inversely.
While this graphical representation of the transformation of the ms S and T units into the cs S and T units is straightforward enough, the same is not true for the equivalent transformation of the mathematics. Because our conventional mathematics treats inverse integers as fractions of a positive whole, confusion results when we try to use it for calculations in the cs.
Consequently, we have to modify the conventional mathematics once again, if we want consistent results. For instance, normally, if we want to express the difference between two inverse integers, such as 1/2² - 1/1², the result would be .25 - 1 = -.75 on our calculators. While this is the correct answer, given the fractional view of inverse integers, it's unsuitable for our purpose, where the correct answer is the inverse of 4-1 = 3, and that inverse is not a fraction of a positive unit, but three whole inverse units.
As shown in the graphic above, we will modify the conventional mathematics by coloring negative integers red and dispensing with the denominator above the numerator notation, just as is common practice for conventional, non-inverse, positive integers, where the denominator below the numerator is omitted.
On this basis, 4-1 = 3 is equivalent to (1/4) - (1/1) = 1/3 = -3 inverse units, not one third of 1 non-inverse unit. To be consistent, we could also color positive numbers blue and limit the plus and minus signs to indicating addition and subtraction operations only, but we won't normally do that, if the meaning is clear without it.
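As a rough illustration of this whole-unit arithmetic, here is a minimal sketch (the encoding into signed integers is my own reading of the red/blue convention):

```python
# Treat an inverse integer 1/n as n whole inverse units, with the
# minus sign standing in for the red coloring described above.
def inverse_diff(m: int, n: int) -> int:
    """(1/m) - (1/n) in whole inverse units, per the convention above."""
    return -(m - n)

print(inverse_diff(4, 1))  # -3: "1/3", i.e. three whole inverse units
```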
So, with this much understood, we can see that our RSt works out as nicely for the c sector as it does for the m sector, except there is one obvious difference: the T|S units are superluminal; that is, they constitute the dreaded "tachyons" of the LST community, which are usually fatal for their theories.
However, they are an integral part of RST-based theories, like ours, because the fundamental definition of motion, as an inverse relation between space and time, defined in a universal space and time expansion, changes everything. Scalar motion can be motion in time as well as space, but it's not the motion of things, or vector motion.
In the case of our theory, i.e. the LRC's RSt, all physical entities consist of units of both space and time motion in combination. The difference between the S|T units of the ms and the T|S units of the cs is reflected in the number line of mathematics, as we have been discussing it: On one side of the unit datum, space/time ratios are the inverse of the other side, time/space ratios, and magnitudes increase in the opposite "direction." However this can be misleading, when we don't realize the reciprocal nature of ratios, and its effect on our perspective.
The bottom line is that increasing from 0 to 1/1 (light speed) in magnitude, in the ms, is no different than increasing from 0 to 1/1 in magnitude, in the cs, except for "direction." The increase from 0 to unit speed in the ms is in the "direction" of decrease from unit speed, in the cs and vice-versa:
0 --> 1/1 <-- 0
But when we don't understand this, the two, inverse, units of magnitude appear to us as two units of increasing magnitude:
0 --> 1/1 --> 2/1
and this leads us to mistakenly conclude that no superluminal (i.e. > 1/1) velocities are possible. They are possible all right, but only when the ratio of space and time is inverted, as preposterous as that may sound to the uninformed.
However, because Larson's new Reciprocal System of Physical Theory unveils the mystery of the true nature of space and time, it enables us to understand the exciting, life-changing phenomena Mills calls Hydrinos, where the 1/n² magnitudes of atomic spectra become n² magnitudes, with earth-shaking consequences.
Too bad we weren't fast enough to predict it, let alone produce it.
(More on this later)
Our RSt Goes Cosmic
Post by dbundy » Thu Feb 02, 2017 11:03 am
It's really a shame that Randell Mills wasn't a student of Larson. He has developed a new general theory of physics under the LST, the "Grand Unified Theory of Classical Physics (GUT-CP)," which he claims, as the name implies, unifies the "forces" of physics, and even postulates a fifth "force."
However, as Larson insisted, the LST scientists ignore the fact that the definition of force eliminates the possibility of so-called autonomous forces, such as appear in the legacy system. Force, by definition, is a quantity of acceleration, and acceleration is a time (space) rate of change in the magnitude of motion.
This is a critical point to understand, but one which cannot be acknowledged, without destroying the foundation of the LST, the research program of which is to identify the fewest number of interactions (forces) among the fewest number of particles, which constitute physical reality.
This is typical of the challenges the LST community faces. They recognize that they are "stuck," and that they need a revolution in their understanding of the nature of space and time, but unless they can see what is hidden in plain sight, that time is the reciprocal of space, the definition of motion, and that physical reality is nothing but motion, they will never get "unstuck." It's a classic example of a "you can't get there from here" type of crisis. They've painted themselves into a theoretical corner.
But the pitifully few current followers of Larson's work do not have the wherewithal of understanding or physical resources to conceive of, and conduct, a crucial experiment that would convince the LST community, in part or in whole, that space/time reciprocity is the key to understanding fundamental physical reality. It's the human nature of things, I suppose, but in the meantime, LST scientists like Mills come along and do remarkable things with the old system of theory.
Recall that in introducing the LRC's RSt, we hearkened back to Balmer and Rydberg to show the mysterious role of the number 4 in their breakthrough discoveries, and how that same number is fundamental in our RSt (S|T = 4|4).
But now, the insight it gives us into the Rydberg equation goes to the heart of Mills' work as well, because Balmer's empirically derived constant, the one that started it all, as re-configured by Rydberg in his formula for the atomic spectra of Hydrogen, holds for the Hydrinos too, but in inverted form.
The Rydberg formula,
1/λ = R(1/n₁² - 1/n₂²),
which was inverted for convenience, can be re-inverted to give us a formula for the cosmic sector,
λ = 1/R(n₁² - n₂²).
This is why the formula given for calculating the energy of Hydrino reactions is just 13.6 eV times the difference between the 1/n² "fractional" levels of the Hydrino. This so-called Rydberg energy is just the ionization energy of Hydrogen, the wavelength of which is the inverse of the R term in the formula, the Rydberg constant, which Rydberg obtained by dividing Balmer's constant by four.
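For concreteness, here is a minimal sketch of this bookkeeping using only the standard Rydberg/Bohr relation (the 13.6 eV scale is standard; the hydrino reading of it is Mills', not established physics):

```python
# Transition energies as 13.6 eV times a difference of 1/n^2 terms.
RYDBERG_EV = 13.6

def transition_energy(n1: int, n2: int) -> float:
    """eV released dropping from level n2 to level n1 (n1 < n2)."""
    return RYDBERG_EV * (1 / n1**2 - 1 / n2**2)

print(transition_energy(2, 3))  # ~1.89 eV, first Balmer line
print(transition_energy(1, 2))  # ~10.2 eV, Lyman-alpha
```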
But instead of explaining it this way, physicists write it as [equation not reproduced here], forcing us poor dummies to dig out the meat for ourselves, provided we can deal with their intimidation! Grrrr!
There is much more to say about this, but I want to stick to our proposition that the fractional values of n, in Mills' Hydrino theory, are actually the n values of inverse, or c-Hydrogen, as explained in the previous post. Of course, neither the point-like electron of the Bohr model, nor Mills' orbitsphere modification of it in his work, both of which are based on the vector motion of the LST community's theories, can be used in our work, but, at the same time, the scalar motion model of our RSt has to accommodate the experimental results of Mills' work.
In his SunCell, the Hydrinos are formed when atomic Hydrogen transfers energy to a catalyst, in what are called "resonant collisions." In this rare instance of atomic collision, the two atoms momentarily orbit one another, exchanging energy harmonically, like the two rods of a tuning fork. This extracts the energy from the Hydrino, shrinking the size of the orbit of its orbitsphere, in quantum decrements of 13.6 eV times n².
Of course, because the LST has no inverse cosmic sector, and because they have no notion of scalar motion, the vector motion magnitudes of their system are limited to c speed, and the velocity of the orbitsphere, surrounding the nucleus, is thus limited to c speed.
The interesting aspect of this situation, however, is that the size of the Hydrino is consequently reduced to a small fraction of stable (i.e. n = 1 or unit) Hydrogen, reaching a limit equal to 1/137 = α. Moreover, though I don't understand the logic behind it yet, theoretically, the nucleus of the Hydrino is transformed into a positron at this point!
This is very interesting for us, because, for the c-Hydrogen to form in our model, a transformation from m-particles to c-particles has to occur, and I have no idea how that could happen, but maybe there is a clue waiting for us in Mills' GUT.
(stay tuned)
Re: Introduction to Doug's RSt
Post by Sun » Tue Jun 27, 2017 6:50 am
Hello Doug,
Thank you for your presentation.
Let me use the notation a-c-b, for my own convenience, to represent your equation.
Am I correct that you assume everything starts from one net displacement, 1/2 and 2/1? Are particles consequences of combining variable numbers of 1/2 and 2/1 with variable numbers of 1/1? Does 1/1 represent unit motion? Do a, c, b stand for each dimension of motion?
How did you get 2S|T = 2/4 + 2/1 + 2/1? Why is it not S|T + SUDR = 1/2+1/1+2/1+2/1?
Leveling Up
August 7, 2019
If there’s anyone left still following along after all the extended silences and affirmative responses to the “are you still in school” question, it’s time to check in with what I’m doing now. While I did, in fact, graduate with some degrees a while back there’s still a long way to go on the path from “person with a degree” to…wherever it is I’m going. After the eight-year MD/PhD debacle I told myself “no more garbage hybrid programs,” so when I found out Child Neurology was kind of a combination of Pediatrics and Neurology, I immediately ran away decided to sign up. Algorithmic matchmaking subsequently placed me in the Child Neurology residency program at UCSD.
Now, I’ve put another year experience point into this whole doctor-doctor career, which is to say, I’ve been in San Diego for over twelve months now. How “real” of a doctor I am largely depends on perspective. While I’m no longer a student, as a resident I am not quite autonomous, making me something of an apprentice. Is a Padawan still a Jedi? They carry lightsabers…
Sitting in the hospital with a badge that says “MD” while having no idea what to do in the electronic medical record (EMR), writing orders that other, more experienced doctors are constantly checking, but still being the primary physician for the patient…is the essence of intern year, also known as the first year of residency. Accordingly, mine went by in a blur of hospital workrooms and progress notes, leaving behind miscellaneous projects-in-progress and about half a dozen unfinished posts.
In the breakdown of how the years of life are traded after medical school, medical degree, medical license, and medical specialty are all separate but related things. A medical degree is worth very little on its own, but is a prerequisite for the other two. A medical license is administrated by a state medical board and grants the ability to do things like write and sign prescriptions and see patients independently—essentially to “practice medicine” in the most general sense. Obtaining a license requires a medical degree, passing all of those Step exams I used to talk about, and completing some amount of clinical training working under someone else’s license, the specifics of which vary by state.
Every “type” of doctor you might have heard about is a specialty or subspecialty, and a license to practice medicine does not a specialist make. For that distinction one must trade additional years of life by completing a residency (the first year of which is called internship, just in case this all was making too much sense). Regardless of the licensing requirements, newly-minted medical school graduates are nearly always signed up to be in a residency for longer: Residencies have a variable length starting at three years and increasing from there. Those years qualify one to take the specialty boards of whatever residency was just completed: Family, Internal Medicine, etc. Then there’s subspecialties like Cardiology or Neurosurgery, which require more training called a fellowship and passing yet another exam, the subspecialty boards.
As for me personally, I passed the Steps, finished intern year, and the next, uh, step, is to apply for a medical license in the short term, and complete Child Neuro residency in the longer term—the specifics of which will require a wholly separate explanation.
Type I Hypersensitivity
November 6, 2018
I had another allergic reaction, once again after specifically inquiring about a dish's peanut content. I noticed suspiciously peanut-like "macadamia nut" pieces that, to my companion, tasted an awful lot like peanuts, but after being assured they were not peanuts, the first bite determined that was a lie. Perhaps it was the sauce? I haven't seen poke with peanut sauce before, but hey, there's always a first time.
One might think it would be obvious that someone asking about a food ingredient due to a peanut allergy would be interested to know if some other component of the dish was, say, peanut. I'm beginning to wonder if restaurateurs in San Diego know what peanuts are.
Regardless, having been reassured that there were no peanuts, I took a big, delicious bite, telling my friend, “I would know by now if there was…” Which is precisely when I began noticing the reaction. It was fast and bad. I didn’t have my EpiPen with me because of style: my messenger bag is breaking down, starting with the zippers. Instead, I slammed two diphenhydramine and excused myself to the bathroom to throw up while we waited for the check.
Flashback: one month ago
I did an elective rotation in Allergy and Immunology, in part due to self-interested curiosity. One of the many fantastic attending physicians I worked with was Dr. Stephanie Leonard, who specializes in food allergies and has a peanut allergy herself (one of the many cool things she's involved in is a research study about de-sensitizing kids to peanuts). She was incredibly patient, giving me the chance to ask years' worth of allergy questions. It turns out items like peanuts, peas, and lentils are more closely related in their antigenicity than in their taxonomic classification, a factoid that finally explains my off-limits list.
As allergy kindred spirits, we talked about some of our recent dining mishaps. I told her the story of my first week in San Diego, innocently admitting that I have never used my EpiPen, only diphenhydramine. Her subsequent scolding was swift and culminated in me getting four new EpiPens (actually a different type of epinephrine autoinjector, but same idea).
Back home and throwing up several more times in the bathroom, I heard Dr. Leonard scolding me in my mind. Already flushed with shame from ruining dinner (and, I suppose, that whole anaphylaxis thing), I found an EpiPen. Ever the scientist, I did use one that was three years expired. Jabbed it into my thigh…click…count ten seconds…withdraw. I noticed that somehow I had managed to bend the needle while it was in my leg. A drop of blood pooled from the injection site as I waited. It grew larger and began to run. And, miraculously, my symptoms started to subside: the constricting airway, vomiting, cramping, and prickling of early hives all faded away.
About ninety minutes later, we got Wendy’s.
Note: Type 1 hypersensitivity is an immune system response that involves an immediate allergic reaction provoked by exposure to a specific antigen, such as a peanut. It’s also the process involved in milder allergies, like hay fever.
Pager Trouble
August 20, 2018
Some unfortunate soul
Intern Insecurities
August 11, 2018
Now that I’ve finished my first month as an intern, I can admit I was afraid that the first time a nurse asked me a question, I would freeze, burst into tears, and run away. So far, so good.
Perhaps the most awkward experience this month was not, in fact, showing first-time mothers how to breastfeed an infant, but when my attending insisted that I introduce myself as "Doctor" Castello. It felt immediately pretentious and I shied away from it like a horse from a particularly imposing shrub. I quickly realized there was a practical element to this instruction: when I introduced myself as a doctor, I stopped getting dirty looks from moms who apparently thought I was about to steal their newborn. Still, it's weird. I don't feel like I know anything.
Game. Set. Match.
March 21, 2018
Game news dominated my Facebook feed on Friday night, when, after years of "up and coming" status, my alma mater UMBC secured a place in history by knocking out UVA in March Madness. If, like my med school colleagues, you managed to miss what will likely be the biggest sportsball upset of the year, the Retrievers are now the first 16th-seeded team to beat a number 1 seed in the NCAA men's college basketball tournament (a feat accomplished a mere twenty years earlier by the Harvard women's team). As a fringe benefit, excited UMBC-related updates effectively obliterated the slew of posts concerning another moderately significant event from earlier that day. Checkmate!
Set up for a St. Patrick’s Day get-together occupied most of Friday. Hosting something is sort of tradition for us, and this would be the final incarnation prior to an impending major life change. We made a concerted effort to avoid re-watching The Boondock Saints yet again and in doing so, neglected to find a relevant substitute and wound up not watching anything at all. Consequently we found ourselves playing games, inadvertently amplifying social interaction.
Match Day played out a little differently for me than most, resolving the Schrödinger equation not with a bang but a whimper. To say that a loud party for this particular piece of news “wasn’t my thing” is a gross understatement, which shouldn’t come as a surprise given the set-up in my previous post on the matter. Sometime soon after waking I developed a resting hand tremor that didn’t resolve until later that evening. I got the email in the car on the way back from Costco, immediately showed Rachel, and spent the next eight hours cleaning the house, ignoring my phone, and mentally preparing to break the news to our family. After limping through the family conversations, I told my friends at the party, and finally began to triage the vast number of notifications that had accumulated–including a voicemail from the program I’ll belong to for the next five years. Sorry about that.
…and now via this most circuitous route we arrive at the part you already skipped ahead to read because it was set as a heading in its own paragraph and highlighted in bold:
I matched to UCSD for child neurology.
While I’m not interested in disclosing the particulars of my actual rank list (feel free to stop prying, post-interview surveys…), I will say that this program was among my top choices. In addition to working at an institution with a fantastic AI division, I’m going to be living in what I think is one of the most beautiful cities in the country. If you didn’t know I was applying to child neurology, well, let’s chalk it up to that whole five years of silence thing for the time being.
A Match Made to Profession
February 28, 2018
You finished medical school. Time to get a job, right? Wrong! Finishing allopathic med school gets one an MD but does not allow one to actually practice medicine. For that you have to complete what is more or less a paid apprenticeship, termed a “residency” because it involves essentially living at the hospital for the duration. The first year of residency is known as an “internship,” in order to keep things nice and straightforward for everyone following along at home. Intern year is a trial-by-fire Immersive Learning Experience where your decisions suddenly matter; in between breakdowns you take Step 3 and collect a medical license. How many additional residency years are required after the internship depends on the specialty you are pursuing (this is the big-category stuff: family, pediatrics, internal medicine, surgery, psychiatry, and so on). After that, you can go on to do a “fellowship” where you get additional training in a particular subspecialty (this is the specific stuff: cardiology, endocrinology, colorectal surgery, child psychiatry, and so on).
For my video gaming friends, it’s a tech tree, where the points you spend to level up are earned by trading years of your life. Once you’ve knocked out a bunch of the low-level skills you unlock class selection (mage, warrior, etc.). Once you’ve committed to a class you then have additional options to further adjust the class to your playstyle (berserker, battlemage, 1H vs 2H weapons, etc.).
Okay, back to the whole “getting a job” thing. You need an internship to take the last test to get a license, the internship comes bundled with a residency in some medical specialty, the residency unlocks the possibility of a fellowship. Getting a residency involves picking a specialty, filling out a centralized application, paying to send it to a bunch of places, waiting for those places to invite you to interview, and going on a bunch of interviews. Here is where a reasonable person might expect job offers to arrive, but no! That is not what happens.
Instead of simply getting a job offer, there is a process known as ~~OkCupid~~ "The Match" where all the applicants in a specialty create a list ranking the residency programs where they interviewed, and all those places create a list ranking the applicants. In March, some people press "go" on a legally-binding algorithm that attempts to connect all the applicants to a residency spot. Hopefully one of your top places also thinks you're a top applicant so you ~~get lucky~~ have a place to work come July 1st.
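For the curious, here is a minimal sketch of the applicant-proposing deferred-acceptance idea behind this kind of match (the real NRMP algorithm is a more elaborate variant; the names and data here are invented):

```python
# Applicants propose down their rank lists; programs tentatively hold
# their best proposer so far and bump weaker ones (capacity 1 here).
def match(applicant_prefs: dict, program_prefs: dict) -> dict:
    free = list(applicant_prefs)               # unmatched applicants
    next_choice = {a: 0 for a in applicant_prefs}
    held = {}                                  # program -> applicant

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                           # list exhausted: unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        rank = program_prefs[p]
        if a not in rank:
            free.append(a)                     # program didn't rank a
        elif p not in held:
            held[p] = a                        # tentatively accept
        elif rank.index(a) < rank.index(held[p]):
            free.append(held[p])               # bump weaker tentative match
            held[p] = a
        else:
            free.append(a)                     # rejected; try next choice

    return {a: p for p, a in held.items()}

print(match(
    {"alice": ["ucsd", "umbc"], "bob": ["ucsd"]},
    {"ucsd": ["bob", "alice"], "umbc": ["alice"]},
))  # {'bob': 'ucsd', 'alice': 'umbc'}
```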
Are there enough spaces in the medical specialties for everyone who wants one? No, no there are not. As a result it is possible to fail to match, in which case you win the right to participate in The Scramble, rebranded by People Who Are Not Medical Students as the “Supplemental Offers and Acceptance Program (SOAP).” This involves spending the next several days frantically applying to everywhere you can think of in the hopes of getting an open residency spot in your specialty of choice at some place that also didn’t match a student, or failing that, an open spot for any specialty anywhere.
Finally, at the end of the week, The Match sends out an email telling the people who did match exactly where they matched. After that we all go back to our regularly scheduled rotations because it's still March and there are several months of school remaining, only now we also can fret about moving to wherever. At some point after graduation, the no-longer-students take their shiny new (but useless) MDs to the residency program, where on July 1st they become interns and realize they know absolutely nothing…but that's another story.
Thursday, April 25, 2019
Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?
The task of scientists is to find useful descriptions for our observations. By useful I mean that the descriptions are either predictive or explain data already collected. An explanation is anything that is simpler than just storing the data itself.
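As a toy illustration of that compression view of explanation (my own example, not from the post):

```python
# 1000 noisy samples of a line reduce to two fitted parameters plus
# small residuals: the "theory" is far cheaper than the raw data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1000)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, x.size)

slope, intercept = np.polyfit(x, y, 1)    # the "theory": two numbers
residuals = y - (slope * x + intercept)   # what the theory leaves over

print(slope, intercept, residuals.std())  # ~3.0, ~1.0, ~0.1
```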
A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree with that. But troubles start when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:
1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?
If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work.
My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever because falsifiable predictions become an inconvenient career risk.
2. How practical should a falsification be?
Some hypotheses are falsifiable in principle, but not falsifiable in practice. Even in practice, testing them might take so long that for all practical purposes they’re unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still collect data when the heat death of the universe chokes your ambitious research agenda.
Personally, I think predictions for observations that are not presently measurable are worthwhile because you never know what future technology will enable. However, it makes no sense working out details of futuristic detectors. This belongs into the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.
3. What even counts as a hypothesis?
In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, e.g. symmetries or functional relations. But neither theories nor principles by themselves lead to predictions.
To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.
In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.
Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, e.g., by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories.
This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.
4. Falsifiability is necessary but not sufficient.
A lot of hypotheses are falsifiable but just plain nonsense. Really arguing that a hypothesis must be science just because you can test it is typical crackpot thinking. I previously wrote about this here.
5. Not all aspects of a hypothesis must be falsifiable.
It can happen that a hypothesis which makes some falsifiable predictions leads to unanswerable questions. An often named example is that certain models of eternal inflation seem to imply that besides our own universe there exist an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics. If you take the theory at face value then the question what a particle does before you measure it is not answerable.
There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong into scientific research. Scientists should leave such topics to philosophers or priests.
This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.
1. You might find interesting Lee McIntyre's book The Scientific Attitude (see my review), which spends quite a lot of time on the demarcation issue (either between science and non-science or between science and pseudoscience).
2. Still, if a proposed mechanism as a hypothesis were especially odd but there were no other reasonable explanation yet, would crazy ideas also be considered science? (Susskind's words, adapted.)
1. If the crazy ideas pass experimental verification, then those ideas are considered proven science.
3. How would you classify an analysis of this kind? A nice argument to account for the Born rule within MWI ("Less is More: Born's Rule from Quantum Frequentism"). I believe it's fair to say the paper concludes that the experimental validity of the Born rule implies the universe is necessarily infinite. If we never observe a violation of the Born rule, would this hypothesis qualify as science?
1. This seems to be a perfect case of falsifiability. If a Born-rule violation is never observed, the theory is not proven, but there is a confidence level, maybe some form of statistical support, for it. If the Born rule is found to be violated, the theory is false, or false outside some domain of observation. Since a quantum gravity vacuum is not so far well defined, and there are ambiguities such as with Boulware vacua, it could be the Born rule is violated in quantum gravity.
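(To make the "statistical support" notion concrete, here is a minimal sketch, assuming n independent, identically prepared measurements and zero observed violations; the function name is mine, and the 3/n shorthand is the standard "rule of three".)

def violation_upper_bound(n, confidence=0.95):
    # Solve (1 - p)^n = 1 - confidence for p: an upper confidence bound
    # on the per-trial violation probability when no violations are seen.
    return 1 - (1 - confidence) ** (1.0 / n)

for n in (10**3, 10**6, 10**9):
    print(n, violation_upper_bound(n))  # approximately 3/n for large n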
2. Science has defied categories since the start. If anyone is responsible for defining science in the modern context it is probably Galileo. Yet we have different domains of science that have different criteria for what is meant by testable. A paleontologist never directly experiences the evolution of life in the past, but these time capsules called fossils serve to lead to natural selection as the most salient understanding of speciation. Astronomy studies objects and systems at great distances, where we will only ever visit some tiny epsilon of the nearest with probes. So we have to make enormous inferences about things. From the parallax of stars, to Cepheid variables, to the redshift of galaxies, to the luminosity of type Ia supernovae, we have this chain of meter sticks to measure the scale of the universe. We measure not the Higgs particle or the top quark, but the daughter products from which we infer the existence of these particles and fields. We do not make observations that are as direct as some purists would like.
As Eusa and Helbig point out, there are aspects of modern theories which have unobservable aspects. Susskind does lean heavily on the idea of theories that are of a nature "it can't be any other way." General relativity predicts a lot of things about black hole interiors. That is a big toughy. No one will ever get close to a black hole that could be entered before being pulled apart; the closest is Sgr A* at 27k light years away. Even if theoretical understanding of black hole interiors is confirmed in such a venture, that will remain a secret held by those who entered the black hole. It is plausible that aspects of black hole interiors can have some indirect physics with quantum black holes, but we will not be generating quantum black holes any day soon.
Testability and falsifiability are the gold standard of science. Theories that have their predictions confirmed are at the top. Quantum mechanics is probably the modern physics that is the most confirmed. General relativity has a good track record, and the detection of gravitational radiation is a big feather in the GR war bonnet. Other physics such as supersymmetry are really hypotheses and not theories in a rigorous sense. Supersymmetry also is a framework that one puts phenomenology on. So far all that phenomenology of light SUSY partners looks bad. When I started graduate school I was amazed that people were interested in SUSY at accelerator energies. At first I thought it was properly an aspect of quantum gravitation. I still think this may be the case. At best some form of the split SUSY Arkani-Hamed proposes may play a role at low energy, which I think might be 1/8th SUSY or something. So these ideas are an aspect of science, but they have not risen to the level of a battle-tested theory. IMO string theory really should be called the string hypothesis; it is not a theory --- even if I might think there may be some stringy aspect to nature.
There is a certain character in a small country sandwiched between Austria, Germany and Poland who has commented on this and ridicules the idea of falsifiability. I just checked his webpage and sure enough he has an entry on this. I suppose his pique on this is because he holds to an idea about the world that producing 35 billion tons of carbon in CO_2 annually into the atmosphere has no climate influence. He upholds a stance that has been falsified; the evidence for AGW is simply overwhelming, and by now any scientific thinker should have abandoned climate denialism. Curious how religion and ideology can override reason, even among the best educated.
3. Lawrence Crowell wrote:
This assumption may not be the case. The theory of Hawking radiation has been verified in supersonic-wave-based analog black holes in the lab. Yes, entangled virtual quanta have been extracted from the vacuum and made real.
The point to be explored in the assumptions that underlie science is whether such a system using Hawking radiation can be engineered to greatly amplify the process of virtual-energy realization, to the point where copious energy is extracted from nothing. When does such a concept become forbidden, as a violation of the conservation of energy, to consider as being real? In this forbidden case, it is not so much the basic science of the system, but the point where the quantity of its energy production becomes unthinkable, since the conservation of energy is inviolate.
4. There are optical analogues of black holes and Hawking radiation. Materials that trap light can be made to appear black-hole-like. This property can be tuned with a reference beam of some type. There is no "something from nothing" here. The energy comes from the energy employed to establish the BH analogue. Black holes have a time-like Killing vector, which in a Noether-theorem sense means there is a constant of motion for energy. Mass-energy is conserved.
Another example: GR says a lot about what goes on inside the event horizon of a black hole, which (classically) is by definition non-observable. But of course this is not a mark against GR. Similarly, the unobservability of other universes in (some types of) the multiverse is not a mark against the theories which have the multiverse as a consequence, as long as they are testable in other ways.
It is not GR per se, that is responsible for the event horizon (or the singularity) of the modern 'relativistic' black hole. Rather it is the Schwarzschild solution to the GR field equations that produces both of those characteristics.
If Schwarzschild had incorporated the known fact that the speed of light varies with position in a gravitational field we probably wouldn't be talking about black holes.
2. Here another culprit: the renormalisation group itself, as David Tong says here (pdf, p. 62): “The renormalisation group isn't alone in hiding high-energy physics from us. In gravity, cosmic censorship ensures that any high curvature regions are hidden behind horizons of black holes while, in the early universe, inflation washes away any trace of what took place before. Anyone would think there's some kind of conspiracy going on....”
3. Phillip,
I believe I have said this before but here we go again:
1) What happens inside a black hole horizon is totally observable. You just cannot come back and tell us about it.
2) We have good reason to think that the inside of a black hole does play a role for our observations and that, since the black hole evaporates, it will not remain disconnected.
For these reasons the situation with black holes is very different from that of postulating other universes which you cannot visit and that are and will remain forever causally disconnected.
4. I think other pocket cosmologies and black hole interiors are actually fairly comparable. The interior of a black hole probably has some entanglement role with the exterior world. We might have some nonlocal phenomena with other pocket worlds, or these pockets may interact.
There is some data coming about that could upend a fair amount of physics and cosmology. The CMB data is compatible with a Hubble parameter H = 67 km/sec-Mpc, while data from galaxies out to z > 8 indicates H = 74 km/sec-Mpc. The error bars on these data sets do not overlap. Something odd is happening. This could mean possibly three things, four if I include something completely different we have no clue about.
The universe is governed by phantom energy. The vacuum energy evolves as dρ/dt = -3H(p + ρ) with p = wρ, and for w < -1 we have dρ/dt = -3H(1 + w)ρ > 0. This means the observable universe will in time cease to expand merely exponentially; instead the scale factor diverges at a finite time. This is the big rip.
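(To fill in the step with standard FLRW algebra, nothing beyond the equations quoted above: integrating dρ/ρ = -3(1 + w)da/a gives ρ ~ a^{-3(1+w)}, which grows with a when w < -1. Inserting this into the Friedmann equation (å/a)^2 = 8πGρ/3 gives å ~ a^{-(1+3w)/2}, an exponent larger than 1 for w < -1, so a(t) reaches infinity in finite time.)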
One possibility is this pocket world interacted with another at some point. If the two regions had different vacuum energy then maybe some of that from the other pocket spilled into this world. The region we observe out to around 12 billion light years and beyond the cosmic horizon then had this extra vacuum energy fill in sometime in the first few hundred million years of this observable world.
Another is that quantum states in our pocket world have some entanglement with quantum states in the inflationary region or in other pocket regions. There may then be some process similar to the teleportation of states that is increasing the vacuum energy of this pocket. It might be this happens generally, or it occurs under different conditions the pocket is in within the inflationary spacetime. Susskind talks about entangled black holes, and I think more realistically there might be entanglement of a few quantum states on a black hole with some quantum states on another, maybe in another pocket world or cosmology, and then another set entangled with a BH elsewhere and there is then a general partition of these states that is similar to an integer partition. If so then it is not so insane to think of the vacuum in this pocket world entangled with vacua elsewhere.
The fourth possibility is one that no one has thought of. At any rate, we are at the next big problem in cosmology. This discrepancy in the Hubble parameter from CMB and from more recent galaxies is not going away.
5. Regarding the fourth possibility...
The CMB tells us about the state that the universe existed in when it was very young. There is no reason to assume that the expansion of the universe is constant. The associated projections about the proportions of the various types of matter and energy that existed at that early time are no longer reliable, since the expansion rate of the universe has increased. It is likely that the proportions of the various types of matter and energy that exist now have changed from their primordial CMB state. This implies that there is a vacuum-based variable process in place that affects the proportions of the various types of matter and energy as an ongoing activity, one that has always existed and that has caused the Hubble parameter derived from the CMB to differ from its current measured value.
6. We ultimately get back to this problem with what we mean by energy in general relativity. I wrote the following on Stack Exchange on how a restricted version of FLRW dynamics can be derived from Newton's laws:
The ADM space-plus-time approach to general relativity results in the constraints NH = 0 and N^iH_i = 0, which are the Hamiltonian and momentum constraints respectively. The Hamiltonian constraint, or what is energy on a contact manifold, means there is no definition of energy in general relativity for most spacetimes. The only spacetimes where energy is explicitly defined are those with an asymptotically flat region, such as black holes or Petrov type D solutions. In a Gauss's-law setting for a general spacetime there is no naturally defined surface where one can identify mass-energy. Either the surface can never contain all mass-energy, or the surface has diffeomorphic freedom that makes it inappropriate (coordinate-dependent, non-covariant, etc.) for defining an observable such as energy.
The FLRW equations, though, are a case with H = 0, with kinetic and potential parts

E = 0 = ½må^2 - 4πGmρa^2/3

for a the scale factor in the distance x = ax_0, where x_0 is some ruler distance chosen by the analyst, not by nature. Further, å = da/dt for time t in the Hubble frame. From there the FLRW equations can be seen. The density has various dependencies: for matter ρ ~ a^{-3}, for radiation ρ ~ a^{-4}, and for the vacuum ρ is generally assumed to be constant.
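(Spelling out the step, a standard exercise with x_0 set to 1: E = 0 gives ½å^2 = 4πGρa^2/3, i.e. (å/a)^2 = 8πGρ/3, the flat-space Friedmann equation; inserting ρ ~ a^{-3}, ρ ~ a^{-4}, or constant ρ then yields the familiar matter-, radiation-, and vacuum-dominated expansion laws.)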
The question is then what we mean by a vacuum. The Hamiltonian constraint has the quantum mechanical analogue in the Wheeler-DeWitt equation HΨ[g] = 0, which looks sort of like the Schrödinger equation HΨ[g] = i∂Ψ/∂t, but where i∂Ψ/∂t = 0. The time-like Killing vector is K_t = K∂/∂t, and we can think of this as a case where the timelike Killing vector is zero. This generally is the case, and the notable cases where K_t is not zero are black holes. We can however adjust the WDW equation with the inclusion of a scalar field φ, and the Hamiltonian can be extended to include this with HΨ[g, φ] = 0, such that there is a local oscillator term with a local meaning to time. This however does not extend everywhere, unless one is happy with pseudotensors. The FLRW equation is sort of such a case; it is appropriate for the Hubble frame. One needs a special frame, usually tied to the global symmetry of the spacetime, to identify this.
However, transformations can lead to troubles. Even with black holes there are Boulware vacua, and one has no clear definition of what is a quantum vacuum. I tend to think this may be one thing that makes quantum gravitation different from other quantum fields.
5. But isn't it advantageous for proponents of something like string theory to not have anything that is falsifiable... and to continue with the hope the "results" are just around the corner... and for the $$ to keep flowing... forever?
6. >1. How long should it take to make a falsifiable prediction or postdiction ...
>2. How practical should a falsification be?
It doesn't make much difference how long it takes, the real question is how much work, and/or time, and/or money it should take to develop an executable falsifiable outcome.
In the final analysis this comes down to whether we should pay person A, B, or C to work on hypotheses X, Y or Z. It is a relative-value question, and at times it is very difficult to rank hypotheses in a way that lets us sort them.
This is especially true when "beauty" and "naturalness" can generate enthusiasm among researchers; those can render the people that know the most about the prospects for hypotheses X, Y or Z incapable of properly ranking them; their bias is to vote for the hypothesis that would be most pleasing if it were true, instead of the hypothesis most likely to be true, most testable, or that would take the fewest personnel-hours to pursue.
In the end there is a finite amount of money-per-year, thus a finite number of personnel-hours, equipment, lab space, computer time and engineering support. In the end it is going to be portioned out, one way or another.
The problem is in judging the unknowns:
1) How many $ are we away from an executable falsifiable proposal?
2) How much time and money will it cost?
3) How likely is a proof/refutation?
4) How much impact will a proof/refutation of the hypothesis have on the field in question?
Ultimately we need stats we are unlikely to ever develop!
In such a case, one solution is to sidestep the reasoning and engage in something like the (old) university model: Professors get paid to work on whatever they feel like, as long as they want, in return for spending half their week teaching students. That can include some amount for experimentation and equipment. "Whatever they want" can include the work of other researchers; so they can collaborate and pool resources. This kind of low-level "No Expectations" funding can be provided by governments.
Additional funding would not be provided until the work was developed to the point that the above "unknowns" are plausibly answered; meaning when they DO know how to make a falsifiable proposal for an experiment.
As for the thousands of dead-ends they might engage in: That's self-regulating; they would still like to work on something relevant with experiments. But if their goal is just to invent new mathematics or whatever that bear no relationship to the real world; that's fine. Not all knowledge requires practical application.
7. It's nice that you toy in your own terms with the familiar philosophical notion of under-determination by experience (remarked on by the physicist Duhem as early as the 19th century, and leveraged against Popper and positivist philosophies in the 1950s). Maybe the problem is more widespread than you think, and I would be tempted to add a (6): coming up with clear-cut falsification criteria requires assuming an interpretative and methodological framework.
To take just the most extreme cases, one should exclude the possibilities that experimenters are systematically hallucinating, and other radical forms of skepticism. But this also includes a set of assumptions that are part of scientific methodology on how to test hypotheses, what kinds of observations are robust, what statistical analysis or inductive inferences are warranted, etc.
These things are shared by scientists because they belong to the same culture, and the general success of science brings confidence in them. Yet all these methodological and interpretative principles are not, strictly speaking, falsifiable.
Now the problem is: if what counts as falsification rests on non-falsifiable methodological assumptions, how can anything be absolutely falsifiable? And I think the answer is that nothing is strictly falsifiable, but only relative to a framework that is acceptable for its general fruitfulness.
1. " one should exclude the possibilities that experimenters are systematically hallucinating,"
Yes, we're all hallucinating that our computers, which confirm the quantum behaviour of the electron quadrillions of times a second, are working; and that our car satnavs, which confirm time dilation in GR trillions of times a second, are working.
*Real* scientists are the only people who are *not* hallucinating.
2. Steven Evans,
You're missing the point. I'm talking about everything that you have to implicitly assume to trust experimental results; in your example, the general reliability of computers and the fact that they indeed do what you claim they do.
I personally don't doubt it; such doubt seems absurd, of course. The point is that any falsification ultimately rests on many other assumptions; there's no falsification simpliciter.
3. @Steven Evans maybe you're under the impression that I'm making an abstract philosophical point that is not directly relevant to how science works or should work. But no: take the OPERA experiment that apparently showed that neutrinos travel faster than light. It took several weeks for scientists to understand what went wrong, and why relativity was not falsified. If anything, this shows that falsifying a theory is not a simple recipe, just a matter of observing that the theory is false. (And the bar can be more or less high depending on how well the theory is established, so pragmatic epistemic-cost considerations enter into the picture.)
My point is simply this: what counts as falsification is not a simple matter; a lot of assumptions and pragmatic aspects come in. Do you disagree with this?
4. @Quentin Ruyant
You are. Take 1 kilogram of matter, turn it into energy. Does the amount of energy equal mc^2? Put an atomic clock on an orbiting satellite. Does it run faster than an atomic clock on the ground by the amount predicted by Einstein? Building the instruments is presumably tricky, checking the theories not so much. OPERA was a mistake, everybody knew it was a mistake.
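(As a rough numerical sketch of the two checks just named, using textbook figures only; the 45 and 7 microseconds/day are the standard approximate GR and SR contributions at GPS altitude, not tied to any particular experiment.)

c = 299_792_458.0  # speed of light in m/s
m = 1.0            # one kilogram of matter
print(f"E = mc^2 = {m * c**2:.3e} J")  # roughly 9e16 joules

gr_gain = 45.0  # microseconds/day gained from gravitational blueshift
sr_loss = 7.0   # microseconds/day lost to velocity time dilation
print(f"net gain ~ {gr_gain - sr_loss:.0f} microseconds/day")  # ~38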
Where there is an issue, the issue is not a subtle point about falsifiability; it is far more mundane - people telling lies about there being empirical evidence to support universal fine-tuning or string theory. Or people claiming the next-gen collider is not a hugely expensive punt. The people saying this are frauds. In the medical or legal professions they would be struck off and unable to practise further.
8. "To make predictions you always need a concrete model..."
The problem is that qualitative (concrete) modeling is a lost art in modern theoretical physics. All of the emphasis is on quantitative modeling (math). The result is this:
It is a waste of time to develop more quantitative model variants on the same old concrete models, but what is desperately needed are new qualitative models. All of the existing quantitative models are variations on qualitative models that have been around for the better part of a century (the big bang and quantum theory). The qualitative models are the problem.
Unfortunately, with its mono-focus on quantitative analysis, modern theoretical physics does not appear to have a curriculum or an environment conducive to properly evaluating and developing new qualitative models.
I want to be clear that I am not suggesting the abandonment of quantitative for qualitative reasoning. What is crucial is a rebalancing between the two approaches, such that in reflecting back on one another, the possibility of beneficial, positive and negative feedback loops is introduced.
The difficulty in achieving such a balance lies in the fact that qualitative modeling is not emphasized, if taught at all, in the scientific academy. Every post-grad can make new mathematical models. Nobody even seems to think it necessary to consider, let alone construct, new qualitative models.
At minimum, if the qualitative assumptions made a century ago aren't subject to reconsideration, "the crisis in physics" will continue.
9. Thanks for stating this so clearly.
10. Some non-falsifiable hypotheses are not optional. These are known as axioms or assumptions (aka religion) and no science is possible without them. For instance, cosmology would be dead without the unverifiable assumption (religious belief) that the laws of physics are universal in time and space.
Science = Observation + Assumptions, Facts Selection, Extrapolations, Interpretations…
Assumptions, Facts Selection, Extrapolations, Interpretations… = Sum of Axiomatic Beliefs
Sum of Axiomatic Beliefs = Religion …therefore,
Science = Observation + Religion
11. The demarcation problem of science versus pseudo science was of course pondered long before Karl Popper. Aristotle for one was quite interested in solving it. No indication that this quandary will ever be satisfactorily resolved. Though I do consider Popper’s “falsifiability” heuristic to be reasonably useful, I’m not hopeful about the project in general.
I love when scientists remove their science hats in order to put on philosophy hats! It’s an admission that failure in philosophy causes failure in science. And why does failure in philosophy cause failure in science? Because philosophy exists at a more fundamental level of reality exploration than science does. Without effective principles of metaphysics, epistemology, and value, science lacks an effective place to stand. (Apparently “hard” forms of sciences are simply less susceptible than “personal” fields such as psychology, though physics suffers here as well given that we’re now at the outer edges of human exploration in this regard.)
I believe that it would be far more effective to develop a new variety of philosopher rather than try to define a hard difference between “science” and “pseudo science”. The sole purpose of this second community of philosophers would be to develop what science already has — respected professionals with their own generally accepted positions.
Though small initially, if scientists were to find this community’s principles of metaphysics, epistemology, and value useful places from which to develop scientific models, this new community should become an essential part of the system, or what might then be referred to as “post puberty science”.
1. One problem with this proposal is the use of the word "metaphysics". To me this carries connotations of God, religion, angels, demons and magic. It means "beyond physics," and in the world today it is synonymous with the "supernatural" (i.e. beyond natural) and used to indicate faith which is "beyond testable or verifiable or falsification".
I hear "metaphysics" and I run for the hills.
Unless their position on metaphysics is there are no metaphysics, I cannot imagine why I would have any professional respect for them. Their organization would be founded on a cognitive error.
I think it is likely possible to develop a "science of science" by categorizing and then generalizing what we think are the failures of science; and why.
From those one might derive or discover useful new axioms of science, self-evident claims upon which to rest additional reasoning about what is and is not "science".
Part of the problem may indeed be that we have not made such axioms explicit; and instead we rely on instinct and absorption of what counts as self-evident. That is obviously an approach ripe for error, and difficult to correct without formal definitions. Having something equivalent to the family tree of logical fallacies could be useful in this regard.
But that effort would not be separate from science, it would just be a branch of science, science modeling itself. That should not cause a problem of recursiveness or infinite descent; and we have an example of this in nature: each of us contains a neural model of ourself, which we use for everything from planning our movements to deciding what we'd enjoy for dinner, or what clothing we should buy, or what career to pursue.
Science can certainly model science, without having to appeal to anything above or beyond science. To some extent this has already been done. Those efforts could be revisited, revised, and expanded.
2. Dr. Castaldo,
I think you’d enjoy my good friend Mike Smith’s blog. After reading this post of Sabine’s he wrote an extensive post on the matter as well, and did so even before I was notified that Sabine had put up this one! I get the sense that you and he are similarly sharp. Furthermore I think you’d enjoy extensively delving into the various mental subjects which are his (and I think my) forte. Anyway I was able to submit the same initial comment to both sites. He shot back something similarly dismissive of philosophy btw.
On metaphysics, I had the same perspective until a couple years ago. (I only use the “philosophy” modifier as a blogging pseudonym.) Beyond the standard speech connotation, I realized that “metaphysics” is technically meant to refer to what exists before one can explore physics… or anything really. A given person’s metaphysics might be something spiritual, for example, and thus faith-based. My own metaphysics happens to be perfectly causal. The metaphysics of most people seems to fluctuate between the two.
Consider again my single principle of metaphysics, or what I mean to be humanity’s final principle of metaphysics: “To the extent that causality fails (in an ontological sense rather than just epistemically mind you), nothing exists for the human to discover.”
All manner of substance dualists populate our soft sciences today. Furthermore many modern physicists seem to consider wave function collapse to ontologically occur outside of causality, another instance of supernaturalism. I don’t actually mind any of this however. Some of them may even be correct! But once (or if) my single principle of metaphysics becomes established, these people would then find themselves in a club which resides outside of standard science. In that case I’m pretty sure that the vast majority of scientists would change their answer in order to remain in our club. (Thus I suspect that very few physicists would continue to take an ontological interpretation of wave function collapse, and so we disciples of Einstein should finally have our revenge!)
Beyond this clarification for the “metaphysics” term, I’m in complete agreement. Science needs a respected community of professionals with their own generally accepted principles of how to do science. It makes no difference if these people are classified as “scientist”, “philosopher”, or something else. Thus conscientious scientists like Sabine would be able to get back to their actual jobs. Or they might become associated professionals if they enjoy this sort of work. And there’s plenty needed here since the field is currently in need of founders! I hope to become such a person, and by means of my single principle of metaphysics, my two principles of epistemology, and my single principle of axiology.
3. I don't get the distinction. There is much to be said for the "shut up and compute" camp; though I don't like the name.
It is an approach that works, and has worked for millennia. We never had to know the cause of gravity in order to compute the rules of gravity. We may still not know the cause of gravity; there may be no gravitons, and I admit I am not that clear on how a space distortion translates into an acceleration.
Certainly when ancient humans were building and sculpting monoliths, they ran a "shut up and compute" operation; i.e. it makes no difference why this cuts stone, it does. The investigation can stop there.
Likewise, I don't have to believe in magic or the supernatural to believe the wavefunction collapses for reasons that appear random to me, or truly are random, or are in principle predictable but would require so much information to predict that prediction is effectively impossible.
That last is the case in predicting the outcome of a human throwing dice: Gathering all the information necessary to predict the outcome before the throw begins would be destructive to the human, the dice, and the environment!
"Shut up and compute" says ignore why, just treat the wavefunction collapse as randomized according to some distribution described by the evolution equations, and produce useful predictions of the outcomes.
Just like we can ignore why gravity is the way it is, why steel or titanium is the way it is, why granite is the way it is. We can test all these things to characterize what we need to know about them in order to build a skyscraper. Nor do we need to know why earthquakes occur. We can characterize their occurrence and strength statistically and successfully use that to improve our buildings.
Of course I am not dissing the notion of investigating underlying causations and developing better models of what contributes to material strength, or prevents oxidation, or lets us better predict earthquakes or floods.
But I am saying that real science does not demand causality; it can and has progressed without it. Human brains are natural modeling machines. I don't need a theory of why animals migrate on certain paths to use that information to improve my hunting success, and thus my survival chances. We didn't need to be botanists or geneticists to understand enough to start the science of farming and selective breeding for yields. It is possible to know that some things work reliably without understanding why they work reliably.
To my mind, it is simply false to claim that without causality there is nothing to know. There is plenty to know, and a true predictive science can (and has) been built resting on foundations of "we don't know why this happens, but it does, and apparently randomly."
4. Well let’s try this Dr. Castaldo. I’d say that there are both arrogant and responsible ways to perceive wave function collapse. The arrogant way is essentially the ontological stance, or “This is how reality itself IS”. The responsible way is instead epistemological, or “This is how we perceive reality”. The first makes absolute causal statements while the second does not. Thus the first may be interpreted as “arrogant”, with the second “modest”.
I’m sure that there are many here who are far more knowledgeable in this regard than I am, and so could back me up or refute me as needed, but I’ve been told that in the Copenhagen Interpretation of QM essentially written by Bohr and Heisenberg, they did try to be responsible. This is to say that they tried to be epistemic rather than ontological. But apparently the great Einstein would have none of it! He went ontological with the famous line, “God does not play dice”. So what happens in a psychological capacity when we’re challenged? We tend to double down and get irresponsible. That’s where the realm of physics seems to have veered into a supernatural stance, or that things happen in an ontological capacity, without being caused to happen.
So my understanding is that this entire bullshit dispute is actually the fault of my hero Einstein! Regardless I’d like to help fix it by means of my single principle of metaphysics. Thus to the extent that “God” does indeed play dice, nothing exists to discover. And more importantly, if generally accepted then the supernaturalists which reside in science today, would find that they need to build themselves a club which is instead populated by their own kind! :-)
12. @Philosopher Eric
You seem to have an inflated view of philosophy and philosophers. I fully agree with you in so far as one ought not ignore what philosophers do and say. Excelling in other fields will constrain one from interrogating the work of philosophers. Those who make that choice ought to accept their decision and refrain from the typical contemptuous language seen so often.
I have spent the last thirty years studying the foundations of mathematics. To be quite frank about it, I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience. From what I can tell, the main emphasis of philosophers in this arena over the last century has been to justify science as a preferred world view by crafting mathematics in the image of their belief systems.
Their logicians are even more pathetic. Hume's account of skepticism is good philosophy. It is also unproductive. To represent a metaphysical point of view and then invoke a distinction between syntax and semantics to claim one is not doing metaphysics is simply deceptive. We have a great deal of progress with no advancement.
You are correct that such matters cannot be sorted out without digging into the philosophical development of the subject matter. But what you are likely to find are people running around saying, "I don't believe that!". So what one has are contradictory points of view and different agendas.
That is what philosophers and their logicians have given to mathematics.
Should you disagree with me, what is logic without truth? One can claim that one is only studying "forms". But once one believes they have identified a correct form, one defends one's claims from the standpoint of belief. Philosophers and their logicians can never get away from metaphysics whether they care to admit it or not. But their pretensions to the contrary are simply lies.
Science fails because of naive beliefs with respect to truth, reality, and the inability to accept epistemic limitations. Philosophers have shown just as much willingness to fail along those same lines.
1. mls,
Thanks for your reply. I’ve dealt with a number of professional philosophers online extensively, and from that can assure you that they don’t consider me to inflate them. Unfortunately most would probably say the opposite, and mind you that I try to remain as diplomatic with them as possible. Your disdain for typical contemptuous language is admirable. They’re a sensitive bunch. Aren’t we all?
What I believe must be liberated in order to improve the institution of science is merely the subject matter which remains under the domain of philosophy. Thus apparently we’ll need two distinct forms of “philosophy”. One would be the standard cultural form for the artist in us to appreciate. But we must also have a form that’s all about developing a respected community with its own generally accepted understandings from which to found the institution of science.
So you’re a person of mathematics, and thus can’t stand how various interests defile this wondrous language — this monument of human achievement — by weaving it into their own petty interests? I hear you there. But then consider how inconsistent it would be if mathematics were instead spared. I believe that defiling things to our own interests needs to become acknowledged as our nature. I think it’s standard moralism which prevents us from understanding ourselves.
I seek to “fix science” not for that reason alone, but rather so that it will be possible for the human to effectively explore the nature of the human. I’d essentially like to help our soft sciences harden. Once we have a solid foundation from which to build, which is to say a community of respected professionals with their own associated agreements, I believe that many of your concerns would be addressed.
What is logic without truth? That’s exactly what I have. I have various tools of logic (such as mathematics), but beyond just a single truth, I have only belief. The only truth that I can ever have about Reality is that I exist. It is from this foundation that I must build my beliefs as effectively as I can.
2. " I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience."
Wiles proved, via an isomorphism between modular forms and semi-stable elliptic curves, that there are no positive integer solutions to x^3 + y^3 = z^3.
Now, back in "reality", take some balls arranged into a cube, and some more balls arranged into another cube, put them all together and arrange them into a single cube. You can't. Why is that do you think?
3. Steven,
It seems to me that two equally sized cubes stacked do not, by definition, form a cube. Nor do three. Eight of them, however, do. It’s simple geometry. But I have no idea what that has to do with mls’ observation about the lunacy of people who believe that mathematics exists beyond subjective experience, or the mathematical proof that you’ve referred to. I agree entirely with mls — I consider math to merely be a human-invented language rather than something that exists in itself (as platonists and such would have us believe.) Do you agree as well? And what is the point of your comment?
4. @Steven Evans
Should you take the time to learn about my views, you would find that I am far more sympathetic with core mathematics than not. Get a newsgroup reader and load headers for sci.logic back to January 2019. Look for posts by "mitch".
I doubt that you will have much respect for what you read, but you will find an account of truth tables based upon the affine subplane of a 21-point projective plane. Since there is a group associated with this affine geometry, this basically marries Klein's Erlangen program with symbolic logic in the sense of well-formedness criteria (that is, logical constants alone do not make a logical algebra). But this is precisely the kind of thing committed logicists will reject.
Now, Max Black presented a critical argument against mathematical logicians based upon a "symmetric universe". My constructions are similarly based upon symmetry considerations -- except that I am using tetrahedra oriented with labeled vertices.
Who knew that physicists had been inventing all sorts of objects on the basis of similar ideas, although they use continuous groups because they must ultimately relate to physical measurement?
For the last two weeks I have been associating collineations in that geometry with finite rotations in four dimensions using Petrie polygon projections of tesseracts.
And, as other posts in that newsgroup show, any 16 element group which carries a 2-(16,6,2) design can be mapped into this affine geometry.
So, I happen to think that logicians and philosophers have turned left and right into true and false. You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up.
5. @Philosopher Eric
I don't mean stack the cubes(!), I mean take 2 cubes of balls of any size, take all the balls from both cubes and try to rearrange them into a single cube of balls. You can't, whatever the sizes of the original 2 cubes. The reason we know you can't do this is because of Wiles' proof of Fermat's Last Theorem: there are no positive integer solutions to x^3 + y^3 = z^3
The point is that this is maths existing in reality, in contradiction to what you wrote - whether you know Wiles' theorem or not, you can't take 2 cubes-worth of balls and arrange them into a single cube.
There are 2 reasons that this theorem applies to reality:
1) The initial abstraction that started mathematics was the abstraction of number. So it is not a surprise when mathematical theorems, like Wiles', can be reapplied to reality.
2) Wiles' proof depends on 350 years-worth of abstractions upon abstractions (modular forms, semi-stable elliptic curves) from the time of Fermat, but the reason Wiles' final statement is still true is because mathematics deals with precise concepts. (Contrast with philosophy, which largely gets nowhere because they try to write "proofs" in natural language - stupid idea.)
TL;DR: Maths often applies to reality because it was initially an abstraction of a particular characteristic of reality.
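(A quick brute-force check, a toy illustration of the claim above rather than a proof; the search limit is arbitrary.)

# Search for positive integers a, b, c with a^3 + b^3 = c^3 below a small limit.
limit = 200
cubes = {n**3 for n in range(1, 2 * limit)}  # covers every possible c in range
solutions = [(a, b) for a in range(1, limit)
             for b in range(a, limit)
             if a**3 + b**3 in cubes]
print(solutions)  # [] -- no two cubes of balls rearrange into one cube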
6. "You must forgive me for critcizing physicists who publish cool mathematics as science without a single observation to back it up."
Fair criticism, and it is the criticism of the blog author's "Lost In Math" book. But that's not what you wrote originally. You wrote originally that it was lunacy to consider any maths as being real. O.K., arrange 3 balls into a rectangle. How did it go? Now try it with 5 balls, 7 balls, 11 balls, 13 balls,.. What shall we call this phenomenon in reality that has nothing to do with maths? Do you think there is a limit to the cases where the balls can't be arranged into a rectangle? My money is on not. But maths has nothing to do with reality. Sure.
7. Okay Steven, I think that I now get your point. You’re saying that because the idea you’ve displayed in mathematics is also displayed in our world, that maths must exist in reality, or thus be more than a human construct. And actually you didn’t need to reference an esoteric proof in order to display your point. The same could be said of a statement like “2 + 2 = 4”. There is no case in our world where 2 + 2 does not equal 4. It’s true by definition.
But this is actually my point. Mathematics exists conceptually through a conscious mind, and so is what it is by means of definition rather than by means of the causal dynamics of this world. It’s independent of our world. This is to say that in a universe that functions entirely differently from ours, our mathematics would still function exactly the same. In such a place, by definition, 2 + 2 would still equal 4.
We developed this language because it can be useful to us. Natural languages such as English and French are useful as well. It’s interesting to me how people don’t claim that English exists independently of us, even though just as many “true by definition” statements can be made in it.
I believe it was Dr. Castaldo who recently implied to me that “Lost in Math” doesn’t get into this sort of thing. (My own copy of the book is still on its way!) In that case maybe this could be another avenue from which to help the physics community understand what’s wrong with relying upon math alone to figure out how our world works?
8. " that maths must exist in reality,"
You've got it the wrong way round. Maths is an abstraction of a property in physical reality. Even before humans appeared, it was not possible to arrange 5 objects into a rectangle.
" And actually you didn’t need to reference an esoteric proof "
The point is that modular forms and elliptic curves are still related to reality, because the axioms of number theory are based on reality.
"2 + 2 would still equal 4."
The concept might not arise in another universe. In this universe, the only one we know, 2+2=4 represents a physical fact.
"what’s wrong with relying upon math alone to figure out how our world works"
It's a trivial question. Competent physicists understand you need to confirm by observation.
9. Steven,
If you’re not saying that maths exists in reality beyond us, but rather as an abstraction of a physical property, then apparently I had you wrong. I personally just call maths a language and don’t tie it to my beliefs about the physical, though I can see how one might want to go that way. As long as you consider it an abstraction of reality then I guess we’re square.
13. The title of this column and the second paragraph appear to conflate theories and hypotheses. Theories can generate hypotheses, and hopefully do, but it is the hypothesis that should be falsifiable, and the question remains whether even a robustly falsified hypothesis has any impact on the validity of a theory. Scientists work in the real world, and in that real world, historically, countless hypotheses have been falsified -- or have failed tests -- yet the theories behind them were preserved, and in some cases (one thinks immediately of Pasteur and the spontaneous generation of life) the theory remains fundamental to this day.
At the same time, I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal. As Maxwell noted, in a strict Popperian test, you'd have to find an immortal human to falsify the hypothesis, and you'll wait a looooong time for that.
1. " I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal."
And yet no-one so far has made it past about 125 years old, even on a Mediterranean diet. What useful people philosophers are.
2. I don't understand how 'all humans are mortal' is a useful hypothesis. It is pretty obvious to anybody reaching adulthood that other humans are mortal, and to most that they themselves can be hurt and damaged, by accident if nothing else. We see people get old, sick and die. We see ourselves aging. I don't understand how this hypothesis is useful for proving anything. It would not even prove that all humans die on some timescale that matters. It doesn't tell us how old a human can grow to be; it doesn't tell us how long an extended life we could live with technological intervention.
A hypothesis, by definition, is a supposition made as a starting point for further investigation. Is this even a hypothesis, or only claimed to be a hypothesis?
I will say, however, that in principle it is a verifiable hypothesis; because it doesn't demand that all humans that will ever exist be mortal, and there are a finite number of humans alive today. So we could verify this hypothesis by bringing about the death of every human on Earth, and then killing ourselves; and thus know that indeed every human is mortal. Once a hypothesis is confirmed, then of course it cannot be falsified. That is true of every confirmed hypothesis; and the unfalsifiability of confirmed hypotheses is not something that worries us.
3. Dr. Castaldo: "useful to prove anything" is not a relevant criterion for being good science. That said, much of human existence entails acting on the assumption that all humans are mortal, so I think that Maxwell's tongue-in-cheek example is of a hypothesis that is extremely useful. Your comment about how the hypothesis is in principle verifiable (because there are a finite number of humans) is, forgive me, somewhat bizarre -- the classic examples of good falsifiable hypotheses, such as "all swans are white", would be equally verifiable for the same reason, yet those examples were invented to show that it is the logical form of the hypothesis that Popper and falsificationists appeal to, not the practicalities of testing. Moreover, while it could be arguable that the number of anything in the universe is finite, one issue with humans (and swans) is that the populations are indefinite in number -- as Maxwell commented, you don't know if the next baby to be born will be immortal, or the 10 millionth baby to be born.
@Steven Evans: while your observation about human longevity is true (so far), Maxwell's humorous point -- which, by the way, was a critique of Popper -- was that you cannot be absolutely certain that the next child born will be mortal, just as Popper insisted that the next swan he encountered could, just possibly, be black. Maxwell's point was about how you would establish a test of this hypothesis. In Popper's strange world of absolutes, you'd have to find an immortal human. Maxwell noted that here in the real world of actual science, no one would bother, especially since markers of mortality pile up over the lifespan.
4. @DKP: I am not the one that claimed it was a useful hypothesis. Once that claim is made, it should be provable: What is it useful for? The only thing a hypothesis can be useful for is to prove something true or false if it holds true or fails to hold true; I am interested in what that is: Otherwise it is not a useful hypothesis. In other words, it must have consequences or it is not a hypothesis at all.
Making a claim that is by its nature unprovable does not make it a hypothesis. I can't even claim every oxygen atom in the universe is capable of combining with two hydrogen atoms, in the right conditions, to form a molecule of water. I can't claim that as a hypothesis; I can't prove it true for every oxygen atom in the universe without that also being a very destructive test. UNLESS I rely on accepted models of oxygen and hydrogen atoms, and their assertion that these apply everywhere in the universe, which they also cannot prove conclusively.
Maxwell's "hypothesis" is likewise logically flawed; but if we resort to the definition of what it is to be human, than it is easily proven, because it is not a hypothesis at all but a statement of an inherent trait of being human; just like binding with hydrogen is a statement of the inherent trait of oxygen and the atom we call oxygen.
I know Maxwell's point was about how you would establish a test of this hypothesis; MY point was that Maxwell's method is not the only method, is it? If all living humans should die, then there will be no future humans, and we will have proved conclusively that all humans are mortal. In fact, in principle, my method of confirming the truth of the hypothesis is superior to Maxwell's method of falsifying it, because mine can be done in a finite amount of time (since there are a finite number of humans alive at any given time, and it takes a finite amount of time to kill each one of us). And confirmation would obviously eliminate the need for falsification.
Of course, I am into statistics and prefer the statistical approach; I imagine we (humanity, collectively throughout history) have exceeded an 8-sigma confirmation by now on the question of whether all humans are mortal; so I vote against seeking absolute confirmation by killing everyone alive.
5. @DKP
" In Popper's strange world of absolutes, you'd have to find an immortal human. "
Or kill all humans.
The point is that you can apply falsifiability in each instance - run a test that confirms the quantum behaviour of the electron. Then carry out this test 10^100000000000 times and you now have an empirical fact, which is certainly sufficient to support a business model for building a computer chip based on the quantum behaviour of the electron.
By the standards of empirical science, there will never be an immortal human, as the 2nd law will eventually get you, even if you survive being hit by a double-decker.
As a society, we would be better off giving most "philosophers" a brush and telling them to go and sweep up leaves in the park. They could still ponder immortal humans and other irrelevant, inane questions while doing something actually useful.
6. @Steven Evans: Perhaps you missed the point of Maxwell's example, which was to suggest that at least one particular philosopher was irrelevant, by satirizing his simplistic notion of falsification. As a scientist myself, and not a philosopher, I found myself in agreement with Maxwell, and 50 years later I still find historians of science to offer more insight into the multiple ways in which "science" has worked and evolved -- while philosophers still wrestle, as Maxwell satirized, with simplistic absolutes.
More seriously, your proposed test of the behavior of the electron makes the point I started with in my first comment: theories are exceedingly difficult to falsify in the way that Sabine's article here suggests; efforts at falsification focus on hypotheses.
14. There is an intriguing name (proposal) for a new book by science writer Jim Baggott (@JimBaggott): A Game of Theories. Theory-making does seem to form a kind of game, with 'falsifiability' just one of the cards (among many) to play.
And today (April 26) is Wittgenstein's (language games) birthday.
15. Very manipulative article. All the traditional attempts of theoreticians to dodge the question are there.
But to me it was even more amusing to see an attempt to bring in Popper without opposing him to Marx. But since Popper was explicitly arguing against Marx's historicism, they had to make up "Stalinist history" (what would that even be?).
16. Hi Sabine, you claim that string theory makes predictions; which prediction do you have in mind? Peter Woit often claims that string theory makes no predictions ... "zip, zero, nadda" in his words.
1. Thanks, that FAQ #1 is a little short on specifics. As a result I am still puzzled. As far as string cosmology goes I would question whether it is so flexible you can get just about anything you want out of it.
2. String cosmology is not string theory. You didn't ask for specifics.
17. Sabine said… debating non-observable consequences does not belong in scientific research. Scientists should leave such topics to philosophers or priests.
Of course you are correct, I’m wondering if you’ve also gotten the impression some scientists may even be using non-observable interpretations as a basis for their research?
18. Thank you. Your writing is clear and amusing, as usual.
I'm glad to see that you allow for some nuance when it comes to falsifiability. There is a distinction between whether or not a non-falsifiable hypothesis is "science", and whether or not the practice of a particular science requires falsifiability at every stage of its development, even over many decades. I am glad string theory was pursued. I am also glad, but only in retrospect, that I left theoretical physics after my undergraduate degree and did not waste my entire career breaking my brain doing extremely difficult math for its own sake. Others, of course, would not see this as a waste. But how much of this will be remembered?
Or to quote Felix Klein:
"When I was a student, abelian functions were, as an effect of the Jacobian tradition, considered the uncontested summit of mathematics and each of us was ambitious to make progress in this field. And now? The younger generation hardly knows abelian functions."
19. Dr. Hossenfelder,
So a model, e.g. string cosmology, is a prediction?
1. Korean War,
A model is not a prediction. You make a prediction with a model. If the prediction is falsified, that excludes the model. Of course the trouble is that if you falsify one model of string cosmology, you can be certain that someone finds a fix for it and will continue to "research" the next model of string cosmology. That's why these predictions are useless: It's not the model itself that's at fault, it's that the methodology to construct models is too flexible.
2. Dr. Hossenfelder,
Thanks for your response, I thought that was the case. If this comment just shows ignorance, please don't publish it.
If it might be of use, my question arose because Jan Reimera asked for a specific string theory prediction to refute Peter Woit's claim that none exist. After reading the FAQ, I couldn't see that it does this unless the string cosmology model is either sufficient in itself or can be assumed to reference already published predictions.
3. String theory generically predicts string excitations, which is a model-independent prediction. Alas, these are at energies too high for us to actually excite, etc etc. String cosmology is a model. String cosmology is not the same as string theory.
20. Hi, Sabine.
I trust that all
is going well.
I enjoyed your post.
All things considered,
I don't know how you...
I'll be plain,
I'm an experimental scientist.
The term ' falsification '
(for me) belongs to the realm
of theory.
When I run an experiment
the result is obvious
- and observable.
When you brought up Karl (Popper), were you making a statement on 'critical rationalism'? I hope not.
(in the quantum realm, you will find a maze)
At any rate, You struck me
with the words ' I start working on an idea
and then ...
You know me. (2 funny)
In parting, for You
I have a new moniker
for Your movement.
- as a # , tee-shirts,
it's -- DISCERN.
(a play on words)
not to mean 'Dis-respect'
In the true definition
of the word:
'to be able to tell the
difference, say, between a good idea
- and a bad one.'
Once again,
Love Your Work.
- All Love,
1. I did not "bring up Popper." How about reading what I wrote before commenting?
21. Wasn't the argument that atomism, arguably one of the most productive theories of all time, wasn't falsifiable? Of course it was ultimately confirmed, which is not quite the same thing - it just took 2000 plus years.
22. @Lawrence: Off the top of my head: Perhaps the statistical distributions are wrong, and thus the error bars are wrong. I don't know anything about how physicists have come to conclusions on distributions (or have devised their own), but I've done work on fitting about 3 dozen different statistical distributions, particularly for finding potential extreme values; and without large amounts of data it is easy to mistakenly think we have a good fit for one distribution when we know the test data was generated by another. Noise is another factor, if the data being fitted is noisy in any dimension; including time.
For example, with the generalized extreme value distribution, used in real-life engineering to predict the worst wind speeds, flood levels, or, in aviation, the extent of crack growth in parts due to aviation stressors (and thus time to failure), minor stochastic errors in the values can change things like the shape parameter in ways that wildly skew the predictions.
Even computing something like a 100-year flood level means sorting 100 samples of the worst flood per year. The worst of all would be assigned the rank index 100/101 (i/(N+1) being the expected value of the i-th order statistic on the probability axis), but that can be wrong.
The worst flood in 1000 years may have occurred in the last 100 years. There is considerable noise in both dimensions; the rank values and the measured values, even if we fit the correct distribution.
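A minimal sketch of this rank-index point, in Python with SciPy's genextreme; all parameter values here are invented purely for illustration:

import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maxima" drawn from a known GEV.
annual_maxima = genextreme.rvs(c=-0.1, loc=10.0, scale=2.0, size=100,
                               random_state=42)

# Empirical plotting positions: the i-th smallest of N samples gets
# probability i/(N+1), its expected value on the probability axis.
# These are what you would plot the ranked values against.
ranked = np.sort(annual_maxima)
N = len(ranked)
plotting_pos = np.arange(1, N + 1) / (N + 1)

# Refit a GEV to the sample; with only 100 points the fitted shape
# parameter can drift, which skews extreme quantiles badly.
shape, loc, scale = genextreme.fit(annual_maxima)
flood_100yr = genextreme.ppf(1 - 1/100, shape, loc=loc, scale=scale)
print(f"fitted shape={shape:.3f}, estimated 100-year level={flood_100yr:.2f}")

Rerunning this with different seeds shows how far the fitted shape parameter, and hence the estimated 100-year level, wanders from sample to sample.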
There is also the problem of using the wrong distribution; I believe I have seen this in medical literature. Weibull distributions can look very much like a normal curve, but they are skewed, and have a lower limit (a reverse Weibull has an upper limit). They are easily confused with Fréchet distributions. But they can give very different answers on exactly where your confidence levels (and thus error bars) are for 95%, 99%, or 99.9%.
A fourth possibility is that the assumption of what the statistical distribution should even be is in error. It may depend upon initial conditions in the universe, or there may be too much noise in the fitting, or too few samples to rule out other distributions prevailing. In general, the assumptions made in order to compute the error bars may be in error.
1. I can't comment too much on the probability and statistics. To be honest this has been from early years my least favorite area of mathematics. I know just the basic stuff and enough to get me through.
With the Hubble data this trend has been there for decades. Telescope redshift data have for decades been in the 72 to 74 km/s/Mpc range; the most recent Hubble value is 74.03±1.42 km/s/Mpc. The CMB figure is now based on the ESA Planck spacecraft data, which is consistent with the prior NASA WMAP spacecraft data, and it is very significantly lower, around 67.66±0.42 km/s/Mpc. Other data tend to follow a similar trend. Over the last 5 to 10 years this gap between the two has been growing.
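For scale: taking the two quoted values at face value and assuming independent Gaussian errors, the gap is

$$\frac{74.03 - 67.66}{\sqrt{1.42^2 + 0.42^2}} \approx \frac{6.37}{1.48} \approx 4.3\,\sigma,$$

which is exactly where any error in the assumed distributions or error bars would bite.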
I would imagine there are plenty of statistics pros who eat the subject for lunch. I question whether some sort of error has gotten through their work.
23. Each brain creates a model of the inside and the outside. Each of us calls that model reality. But it's just a model. Now we create models of parts of the model that might or might not fit the first model. It's a bit of a conundrum.
Personally I believe that it's all about information. The one who makes a theory that takes all that into account will reap the Nobel Prize. That's the next step.
24. "An hypothesis that is not falsifiable through observation is optional. You may believe in it or not."
One has no reason to think it is true as an empirical fact. Believing it in this case is just delusion (see religion).
The issue is simply honesty. People who claim there is empirical evidence for string theory, or fine-tuning of the universe, or the multiverse, or people who claim that the next gen collider at CERN is anything but a massively expensive, completely unprecedented punt are simply liars. It's easy to see when you compare with actual empirical facts, which in physics are often being confirmed quintillions of times a second in technology (quantum behaviour of electron in computer chips, time dilation in satnav, etc.)
How can someone honestly claim that universal fine-tuning is physics just like E=mc^2? They can't - they are lying. Where taxpayers' money is paying for these lies, it is criminal fraud.
25. I realise that the notion of "model" is too subtle for someone like me, fixated as I am on playing with parsing guaranteed Noether symmetries with Ward-like identities on field equations from action principles... So is the Equivalence Principle in itself predictive, in that it need not be supplemented with some constitutive equations (like a model of the susceptibility of the medium in which a Maxwell field source resides, say, or a model of a star) to describe the materiality of the inertial source?
26. @Philosopher Eric
Nice response. I think your initial remarks sparked a reaction rather than a response on my part. Your last paragraph expresses an essential problem. One's first assumption, then, ought to be that one is not alone. And, science as a community enterprise requires something along the lines of Gricean maxims.
This is completely undermined when, for the sake of a logical calculus, philosophers pretend that words are to be treated as mere parameters. Tarski explicitly rejected this methodology in his paper on the semantic conception of truth. Yet, those who invoke the distinction between semantics and syntax as some inviolable principle regularly invoke Tarski as the source of their views (one should actually look to Carnap as the source of such extreme views).
This is the kind of thing I find so disturbing where philosophy, logic, and mathematics intersect. There is a great deal of misinformation in the literature.
There is a great deal that needs "fixing". But the received paradigms are largely defensible. It is not as if they are not the product of highly intelligent practitioners.
27. The difficulty of detecting gravitons raises a related question: what counts as a detection? Saying that it must be detected in a conventional particle physics experiment is a rather ad hoc criterion. If all the knowledge we have today already implies the existence of the graviton, then that should count as it having been detected.
The same can be said about LIGO's detection of gravitational waves. The existence of gravitational waves was already implied by the detection of the decaying orbits of orbiting pulsars. Or one may argue that this was in turn a prediction of GR which had ample observational support before the observation of the orbiting pulsars.
28. Sean Carroll wrote a blogpost about this:
He is not a crackpot. Maybe you two could have a podcast / youtube-discussion about it?
1. In practice, calls to remove falsifiability are intended to support string theory, fine-tuning and the multiverse as physics. They are not physics, merely speculation, and the people claiming they are physics *are* crackpots. Remove falsifiability and just watch all the loonies swarm in with their ideas that "can't be disproved" and are "compatible with observations". There's nothing wrong with speculation but it is important that one is aware it is speculation otherwise you end up with the situation as in string theory where too much money and too many careers have been wasted on it. (Or philosophy where several thousand years have been wasted.)
29. @Steven Evans
I assure you that we are, for the most part, on the same side of these issues. Your arguments, however, are very much like those of the foundations community, who challenge dissent by demanding that a contradiction to their views be shown. In 1999, Pavicic and Megill (the latter known for the Metamath program) showed that propositional logic is not categorical and that the model faithful to the syntactic structure of the logic is not Boolean. So the contradiction demand is silly and simplistic.
You are making arguments on the basis of 'abstractions'. Where exactly do these abstractions reside in time and space? Or, as many philosophers do, are you speaking of a realm of existence beyond time and space? Indeed, Tarski's semantic conception of truth properly conveys the intentions we ascribe to correspondence theories of truth. So, if we state that some abstraction is meaningful with respect to the truth of our scientific theories, we must account for the existence of the objects denoted by our language terms.
Either you are claiming realms of existence which I shall not concede to you, or, you can show me "the number one" as an existent individual.
Most of my acquaintances do not have formal education. When they ask me to explain my interest, I remind them of just how often one hears that "mathematics is the language of science". So, in a very crude sense, what is true in science depends on the nature of truth in mathematics. I expect that you will disagree with that view. But, I do not think you will be able to demonstrate the substantive existence of the abstractions you are invoking to challenge me.
You may have problems with the very publications I mentioned because we share a similar sense of what constitutes science. But I see the kernel of the problem in the very statements you are making about the nature of mathematics.
It is not so much that I disagree with you, it is that your positions are not defensible. You need to stipulate a theory of truth. You need to stipulate which conception of truth is applied under that theory. You need to stipulate logical axioms. You need to stipulate axioms for your mathematical theory. You need to decide whether or not you are following a formalist paradigm. If not, you will have to accommodate substitutions in the calculus with a strategy to warrant substitutions. If so, you will be faced with the problem of non-categoricity.
Dr. Hossenfelder discussed this last problem in her book when considering Tegmark's suggestion that all interpretations be taken as meaningful.
It is just not as simple as you would like it to be.
1. I've no idea what the correct logical terms are, but arithmetic is a physical fact. I can do arithmetic with physical balls, add them, subtract them, show prime numbers, show what it means for sqrt(2) to be irrational, etc., etc. This maths exists physically, and it is this physical maths that is the basis of abstract maths. Physical arithmetic obeys the axioms of arithmetic and the logical steps used to prove theorems are also embodied in physical arithmetic. Of course - because arithmetic and logic are observed in the physical world, that's where the ideas come from.
Of course, philosophers can witter on at length about theoretical issues with what I have written, but they will never be able to come up with a concrete counter-example. They will nit-pick. I will re-draft what I have written. They will nit-pick some more, I will re-draft some more. And 2,000 years later we will have got nowhere, yet still it will be physically impossible to arrange a prime number of balls into a rectangle.
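A toy sketch of that last claim, in Python (the function name is mine):

def rectangles(n: int):
    """All (rows, cols) with rows * cols == n and 1 < rows <= cols."""
    return [(a, n // a) for a in range(2, int(n ** 0.5) + 1) if n % a == 0]

for n in (4, 5, 6, 12, 13):
    r = rectangles(n)
    print(n, "->", r if r else "no rectangle (prime)")

A prime number of balls admits no (rows, cols) arrangement with both sides greater than one, which is just the physical statement above.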
Again, you've got it the wrong way round. Maths comes from the physical.
Anyway, the issue of this blog post, falsifiability, is in practice an issue with people trying to suspend falsifiability to support string theory, fine-tuning and the multiverse. In more extreme cases, it is about philosophers and religious loonies claiming they can tell us about the natural world beyond what physics tells us. These people trying to suspend falsifiability are all dishonest cranks. That is why falsifiability is important, not because of any subtleties. There are straight-up cranks, even amongst trained physicists, who want to blur the line between "philosophy"/"religion" and physics and claim Jesus' daddy made the universe. Falsifiability stops these cranks getting their lies on the physical record.
30. Hi Sabine,
sorry for a late reply.
(everyone's busy)
All apologies for the misunderstanding.
1) I did read your post.
2) I know you didn't mention him by name, but
(in my mind) I don't see how one can speak of
'falsifiability' and not
'bring up' Karl Popper.
3) In the intro to your post you said " I don't know why we should even be talking about this".
I agreed.
... and then wondered why
we were.
I thought you might be making a separate statement of some kind.
At any rate,
I'm off to view your new video while I have time.
(can't wait)
Once again,
Love Your Work
All love,
31. Maths does exist in reality beyond us. Of course it does, because Maths comes from a description of reality. 5 objects can't be arranged into a rectangle whether human mathematicians exist or not.
1. Steven,
I’m not going to say that you’re wrong about that. If you want to define maths to exist beyond us given that various statements in it are true of this world (such as 5 points cannot form a rectangle, which I certainly agree with), then yes, math does indeed exist. I’m not sure that your definition for “exists” happens to be all that useful however. In that case notice that English and French also exist beyond us given that statements can be made in these human languages which are true of this world.
The term “exists” may be defined in an assortment of ways, though when people start getting platonic with our languages, I tend to notice them developing all sorts of silly notions. Max Tegmark would be a prominent example of this sort of thing.
32. Sabine Hossenfelder posted (Thursday, April 25, 2019):
Are theories also based on principles for how to obtain empirical evidence, such as, famously, Einstein's requirement that »All our space-time verifications invariably amount to a determination of space-time coincidences {... such as ...} meetings of two or more of these material points.«?
> To make predictions you always need a concrete model, and you need initial conditions.
As far as this is referring to experimentally testable predictions, this is a very remarkable (and, to me, agreeable and welcome) statement, contrasting with (wide-spread) demands that "scientific theories ought to make experimentally testable predictions", and with claims that certain theories did make experimentally testable predictions.
However: is there a principal reason for considering "[concrete] initial conditions" as separate from "a [or any] concrete model", and not as part of it?
Sabine Hossenfelder wrote (2:42 AM, April 27, 2019):
> A model is not a prediction. You make a prediction with a model.
Are concrete, experimentally falsifiable predictions part of models?
> if you falsify one model [...] someone [...] will continue to "research" the next model
I find this description perfectly agreeable and welcome; yet it also seems very remarkable because it appears to contrast with (wide-spread) demands that "scientific theories ought to be falsifiable", and claims that certain theories had been falsified.
> That's why these predictions are useless: [...]
Any predictions may still be used as rationales for economic decisions, or bets.
33. mls: Where exactly do these abstractions reside in time and space?
Originally the abstractions were embodied in mental models, made of neurons. Now they are also on paper, in textbooks, as a way to program and recreate such neural models.
Math is just recursively abstracting abstractions. When I count my goats, each finger stands for a goat. If I have a lot of goats, each hash mark stands for one finger. When I fill a "hand", I use the thumb to cross four fingers, and start another hand. Abstractions of abstractions.
Math is derived from reality, and built to model reality; but the rules of math can be extended, by analogy, beyond anything we see in reality. We can extend our two dimensional geometry to three dimensions, and then to any number of dimensions; I cluster mathematical objects in high dimensional space fairly frequently; it is a convenient way to find patterns. But I don't think anybody is proposing that reality has 143 dimensions, or that goats exist in that space.
So math can be used to describe reality, or because the abstractions can be extended beyond reality, it can also be used to describe non-reality.
If you are looking for "truth", that mixed bag is the wrong place to look. Even a simple smooth parabolic function describing a thrown object falling to earth is an abstraction. If all the world is quantized, there is no such thing: The smooth function is just an estimator of something taking quantum jumps in a step-like fashion, even though the steps are very tiny in time and space; so the progress appears to be perfectly smooth.
To find truth, we need to return to reality, and prove the mathematics we are using describe something observable. That is how we prove we are not using the parts of the mixed bag of mathematics that are abstractions extended beyond reality.
34. @ Steven Evans
In response to David Hume's "An Enquiry Concerning Human Understanding" Kant offered an account of objective knowledge grounded in the subjective experience of individuals. He distinguished between mathematics (sensible intuition) and logic (intelligible understanding). But to take this as his starting point he had to deny the actuality of space and time as absolute concepts. He took space to correspond with geometry and time to correspond with arithmetic. The relation to sensible intuition he claimed for these correspondences is expressed in the sentences,
"Time is the form of inner sense."
"Space, by all appearances, is the form of outer sense."
The qualification in the second statement reflects the fact that the information associated with what we do not consider as part of ourselves is conditioned by our sensory apparatus before it can be called a spatial manifold. Hence, external objects are only known through "appearances".
This certainly provides a framework by which mathematics can be understood in terms of descriptions related to the reality of experience. But it does not provide for a reality outside of our own. This, of course, is why I acknowledged Philosopher Eric's knowledge claim in his response to me. You seem to be assuming that an external reality substantiates the independent existence of your descriptions.
The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer.
Kant's position on geometry is one instance of misinformation in the folklore of mathematical foundations. But, that does not really affect many of the arguments used against him. Where in sensible experience, for example, can one find a line without breadth? Or, if mathematics is grounded in visualizations, what of optical illusions? These criticisms are not without merit.
Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined. Modern analytical philosophy recovers this sense of necessity by reducing mathematics to a priori stipulations presentable in formal languages with consequences obtained by rules for admissible syntactic transformations.
Any relationship with sensible intuition is eradicated. What is largely lost is the ability to account for the utility of mathematics in applications.
The issues are just not that simple. And they were alluded to by George Ellis in Dr. Hossenfelder's book.
1. @mls: "Time is the form of inner sense." / "Space, by all appearances, is the form of outer sense."
Kant sounds utterly ridiculous, and these sound like trying to force a parallelism that does not exist. These definitions have no utility I can fathom.
mls: Where in sensible experience, for example, can one find a line without breadth?
Points without size and lines without breadth are abstractions used to avoid the complications of points and lines with breadth. So our answers (say about the sums of angles) are precise and provable.
A line without breadth is the equivalent of a limit: If we reason using lines with breadth, we must give it a value, say W. Then our answer will depend on W. The geometry of lines without breadth is what we get as W approaches 0, and this produces precise answers instead of ranges that depend on W.
mls: Or, if mathematics is grounded in visualizations, what of optical illusions?
Mathematics began grounded in reality. Congenitally blind people can learn and understand mathematics without visualizations. Those are shortcuts to understanding for sighted people, not a necessity for mathematics, so optical illusions are meaningless. Thus contrary to your assertion, those criticisms are indeed without merit. Mathematics began by abstracting things in the physical world, but by logical inference it has grown beyond that in order to increase its utility.
mls: Any relationship with sensible intuition is eradicated.
Not any relationship. Mathematics can trump one's sensible intuition; that is a good thing. Our brains work by "rules of thumb," they work with neural models that are probabilistic in nature and therefore not precise. Mathematics allows precise reasoning and precise predictions; some beyond the capabilities of "intuition".
Dr. Hossenfelder recently tweeted an article on superconductivity appearing in stacked graphene sheets, with one rotated by exactly 1.1 degrees with respect to the other. This effect was dismissed by many researchers out of hand, their intuition told them the maths predicting something would be different were wrong. But it turns out, the maths were right; something (superconductivity) does emerge at this precise angle. Intuition is not precise, and correspondence with intuition is not the goal; correspondence with reality is the goal.
mls: What is largely lost is the ability to account for the utility of mathematics in applications.
No it isn't, mathematics has been evolving since the beginning to have utility and applications. I do not find it surprising that when our goal is to use mathematics to model the real world, by trial and error we find or invent the mathematics to do that, and then have successes in that endeavor.
What is hard to understand about that? It is not fundamentally different than wanting to grow crops and by trial and error figuring out a set of rules to do that.
mls: The issues are just not that simple.
I think they are pretty simple. Neural models of physical behaviors are not precise; thus intuition can be grossly mistaken. We all get fooled by good stage magicians, even good stage magicians can be fooled by good stage magicians. But the rules of mathematics can be precise, and thus precisely predictive because we designed it that way, and thus mathematics can predict things that test out to be true in cases where our "rule of thumb" intuition predicts otherwise; because intuition evolved in a domain in which logical precision was not a necessity of survival, and fast "most likely" or "safest" decisions were a survival advantage.
35. “Falsifiable” continues to be a poor term that I’m surprised so many people are happy using. Yeah, yeah, I know..Popper. It’s still a poor term. Nothing in empirical scientific inquiry is ever truly proven false (or true), only shown to be more or less likely. “Testable” is a far better word to describe that criterion for a hypothesis or a prediction. It renders a lot of the issues raised in this thread much less sticky.
36. "various statements in it are true of this world "
You keep getting it the wrong way round. The world came first. Human maths started by people counting objects in the physical world. Physical arithmetic was already there, then people observed it.
37. OK, so what I strictly meant but couldn't be bothered to write out was this: take a huge number of what appear, to observation at a certain level of precision, to be quantum objects; they combine to produce, at the natural level of observation of the senses of humans and other animals, enough discrete-yness to embody arithmetic. This discrete-yness and this physical arithmetic exist (are available for observation) for anything coming along with senses at the classical level. In this arena of classical discrete-yness, 5 discrete-y objects can't be arranged into a rectangle, for example. I am aware of my observations, so I'll take a punt that you are similarly aware of your observations; what I observe as my body and your body exist in the sense that they are available for observation to observers like ourselves, and then it makes no sense not to accept the existence of the 5 objects, in the sense that they are available for observation.
As I said, arithmetic exists in reality and human maths comes from an observation of that arithmetic. It is that simple.
"Where in sensible experience, for example, can one find a line without breadth?"
The reality of space at the human level is 3-D Euclideanesque. A room of length (roughly) 3 metres and breadth (roughly) 4 metres will have a diagonal of (roughly) 5 metres. For best results, count the atoms.
"The Christians I know use the same strategy to assure themselves of God's existence and the efficacy pf prayer."
"God" doesn't exist - it's a story. However, 5 objects really can't be arranged into a rectangle - try it.
"Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined."
I would stake my life on the validity of the proof that sqrt(2) is irrational. Undermined by whom? Dodgy philosophers who have had their papers read by a couple of other dodgy philosophers? Meanwhile, Andrew Wiles has proved Fermat's Last Theorem for infinite cases.
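For reference, the proof being staked on is a few lines: suppose $\sqrt{2} = p/q$ with $p, q$ in lowest terms; then

$$p^2 = 2q^2 \;\Rightarrow\; 2 \mid p \;\Rightarrow\; p = 2k \;\Rightarrow\; 2k^2 = q^2 \;\Rightarrow\; 2 \mid q,$$

so both $p$ and $q$ are even, contradicting lowest terms; hence $\sqrt{2}$ is irrational.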
Also known as proving from axioms as Euclid did over 2,000 years ago. And all originally based on our observations of the world.
38. Well, with Dr. Hossenfelder's permission, perhaps I might respond with a post or two that actually reflect my views rather than what one finds in the literature.
At the link,
one may find the free Boolean lattice on two generators. Its elements are labeled with the symbols typically taught in courses on propositional logic. If one really wants to argue that the claims of philosophers and their logicians are of questionable merit, this is one of the places to start.
Let's see a show of hands. Who sees the tetrahedron?
In combinatorial topology, one decomposes a tetrahedron into vertices, edges, faces, and an interior. With exception for the bottom element, the order-theoretic representation of this decomposition is order-isomorphic with the lattice above. And one need only let the bottom element denote the exterior to complete the sixteen-element set here.
Philosophers and their logicians hold that mathematics has been arithmetized. Even though the most basic representation of how their logical connectives relate to one another can be directly compared with a tetrahedron, they will insist that geometry has been eliminated from mathematics.
You can thank David Hilbert and the formalists for that. Say all that you want about Euclid, Hilbert's "Foundations of Geometry" reconstructs the Euclidean corpus without reference to motions or temporality.
Remember this the next time you want to recite some result from mathematical logic which is contrary to your beliefs about mathematics.
So, if logicians have simply put labels on a tetrahedron, one has just cause for questioning the relevance of their claims concerning the foundations of mathematics.
But, that bottom element is still bothersome because it is not typically addressed in combinatorial topology.
In the link,
one can find the 3-dimensional projection of a tesseract, although Wikipedia does not show the edges connecting the vertices to a point at infinity. When this is added, all of the elements are 4-connected as in the Boolean order. The bottom of the Boolean order would coincide with the point at infinity.
Amazing, is it not? Our logic words have a 4-dimensional character.
Let me repeat something I have maintained in blog posts here. If the theory of evolution is among our best science, then we have no more facility for knowing the truth of reality than an earthworm. I do not need Euclid's axioms to make two paper tetrahedra with vertices colored so that they cannot be superimposed with all four colors matched. One can do a lot with that to criticize received views in the foundations community.
Ignoring their arguments because you believe differently just puts you in the queue of "he said, she said" that Steven Evans has used to discredit philosophers.
39. I read a preview of one of Smolin's books on Amazon in which he proclaims the importance of Leibniz' identity of indiscernibles. Since I have read Leibniz, I would tend to agree with him. However, Leibniz also motivated the search for a logical calculus. So, the principle is more often associated with logical contexts.
Leibniz attributes the principle to St. Thomas Aquinas, to answer how God knows each soul individually. In keeping with Smolin's account, Leibniz does claim to be generalizing the principle to a geometric application. But in the debates over how Leibniz and Newton differed, the principle became associated with its logical application.
Steven Evans would like me to acknowledge the reality of arithmetic in some sense. Kant had probably been the first critic of the logical principle. He asserted that numerical difference is known through spatial intuition. In modern contexts, the analogous portrayal can be found in Strawson's book "Individuals". He uses a diagram with different shapes to explain the distinction between qualitative identity and quantitative identity. In other words, numerical difference is grounded by spatial intuition.
Since mathematicians make it a habit to work from axioms, I wrote a set of axioms intended to augment set theory by interpreting the failure of equality as topological separation.
In other words, two points in space are distinct if one is in a part of space that the other is not.
When you run around using a membership relation while thinking in terms of geometric incidence, keep in mind that this is not what a membership predicate means. One may say that the notion of a set is not yet decided, but the received view is one where geometry is deprecated because mathematics has been arithmetized. And, since numbers can be defined in logic, any relation of the membership predicate with numerical identity associated with spatial intuition has been lost.
My views on mathematics are far closer to those who study the physical sciences than not. So do not hold me accountable for a summary of what is the case in the foundations of mathematics.
You have physicists running around pretending that the mathematics is telling them truths about the universe and others using mathematics to say that they should be believed.
My point is that they are further enabled by what is going on in the foundations of mathematics.
40. @Steven Evans
" a story"
You have probably never heard of deflationary nominalism. It is one way of speaking of mathematical objects without committing to their reality:
Motivated by the fact that core mathematicians actually define their terms, I needed a logic that supported descriptions. Free logics do that, although the general discussion of free logics does not apply to my personal work. The logic I had to write for my own purposes is better compared with how free logics can be used for fictional accounts,
My logic is classical (rather than paraconsistent) and the method "works" because proofs are finite.
The standard account of formal systems relies on a completed infinity outside of the context of an axiom system. I doubt that Euclid ever had this in mind. David Hilbert turned his attention to arithmetical metamathematics with the objective of a finite consistency proof precisely because completed infinities are *NOT* sensibly demonstrable.
41. @Dr. Castaldo
I really have no reason to accept reductionist arguments in physics. If you can substantiate your claim, then do so. Words explaining words is how we get into these problems to begin with.
Having said that, a comment in another thread made some small reference to circularity. I forget the specifics right now, but I pointed out the result of a 2016 Science article about concept formation and hexagonal grid cells.
It is a beautiful circularity. Abstract concepts depend upon neural structures that exhibit hexagonal symmetry. A book on my shelf which explicitly classifies hexagons and relates them to tetrahedra. String theorists asking people to believe in six rolled up dimensions. And the need for physical theories to build the instruments and interpret the data so we can identify how hexagonal symmetries pertain to abstract concept formation.
You are a pragmatic gentleman. Thank you for your other replies as well. For what this is worth, I am certainly not looking for truth. When Frege retracted his logicism he suggested that all mathematics is geometrical. That is mostly what I have uncovered from my own deliberations. It really does not make sense to speak of truth and falsity in geometry.
42. @mls: What is "amazing" about that? I can do the same thing on paper better with bits; given 2 binary states there are 2^2 = 4 possible states. In binary we can uniquely number them, [0,1,2,3]. That is not "four dimensional" any more than 10 states by 10 states is "100" dimensional.
On your "earthworm" comparison, obviously that is wrong. We have far more facility than an earthworm for knowing the truth of reality, or earthworms wouldn't let us use them as live bait. And fish wouldn't fall for that and bite into a hook, if they could discern reality equally as well as us.
Humans understand the truth of reality well enough to manipulate chemistry on the atomic level, to build microscopic machines, to create chemical compounds and materials on a massive scale that simply do not exist in nature. Only humans can make and execute plans that require decades, or even multiple lifetimes, to complete. Where are the particle colliders built by any other non-human ape or animal?
I have no idea how you think the theory of evolution creates any equivalence between the intelligence of earth worms and that of humans. I suspect you don't understand evolution.
43. @mls
You don't address the point that maths exists in reality and came from reality. Obviously, the field of logic has something to say about maths and has a credible standard of truth and method like science and maths.
I do not need to discredit the field of philosophy as it discredits itself - there are professional philosophers who are members of the American Philosophical Association who publish "proofs of God"(!!); in the comments in this very blog a panpsychist professional philosopher couldn't answer the blog author's point that the results of the Standard Model are not compatible with panpsychism being an explanation of consciousness in the brain.
Philosophers can churn out such nonsense because the "standard" of truth in philosophy is to write a vaguely plausible-sounding natural language "proof". This opens the field to all kinds of cranks and frauds. And these frauds want to have their say about natural science, too, but fortunately the falsifiability barrier keeps them at bay.
It is not a "he said, she said" argument. I have explained why I think maths exists in reality.
|
9d93d1a1916fa8ac | Quantum field theory
From Wikipedia, the free encyclopedia
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics[1]:xi and is used to construct physical models of subatomic particles (in particle physics) and quasiparticles (in condensed matter physics).
QFT treats particles as excited states (also called quanta) of their underlying fields, which are—in a sense—more fundamental than the basic particles. Interactions between particles are described by interaction terms in the Lagrangian involving their corresponding fields. Each interaction can be visually represented by Feynman diagrams, which are formal computational tools, in the process of relativistic perturbation theory.
As a successful theoretical framework today, quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory — quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory.
Theoretical background
Magnetic field lines visualized using iron filings. When a piece of paper is sprinkled with iron filings and placed above a bar magnet, the filings align according to the direction of the magnetic field, forming arcs.
Quantum field theory is the result of the combination of classical field theory, quantum mechanics, and special relativity.[1]:xi A brief overview of these theoretical precursors is in order.
The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Newton is an "action at a distance" — its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact."[2]:4 It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields — a numerical quantity (a vector) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.[3]:18
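In modern notation (which postdates the Principia), this field assigns to each point the acceleration a test mass would undergo there:

$$\mathbf{g}(\mathbf{r}) = -\frac{GM}{r^2}\,\hat{\mathbf{r}},$$

so that the force on a mass m is simply F = m g, with no statement about how the influence is transmitted.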
Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains in use to this day.[2][4]:301[5]:2
The theory of classical electromagnetism was completed in 1862 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted.[2]:19
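In modern differential (SI) form, the equations read

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},$$

and in vacuum they combine into wave equations whose propagation speed is c = 1/sqrt(μ0 ε0).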
Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths.[6] Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.[7]:Ch.2 Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles.[6]
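Quantitatively, Planck restricted each oscillator of frequency ν to the energies

$$E_n = n h \nu, \qquad n = 0, 1, 2, \dots,$$

and Einstein's light quanta each carry the energy E = hν, where h is Planck's constant.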
In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave-particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances.[6] Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.[3]:22-23
In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.[3]:19 It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations.
Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity — it treats time as an ordinary number while promoting spatial coordinates to linear operators.[6]
Quantum electrodynamics
Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.[8]:1
Through the works of Born, Heisenberg, and Pascual Jordan in 1925-1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.[8]:1 With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.[3]:22
In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence, as well as non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.[6]:71
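The non-zero minimum energy invoked here is the standard quantum harmonic oscillator spectrum,

$$E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad E_0 = \tfrac{1}{2}\hbar\omega > 0,$$

so every mode of the quantized electromagnetic field fluctuates even in the vacuum.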
In 1928, Dirac wrote down a wave equation that described relativistic electrons — the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein-Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.[6]:71–72
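In modern notation (natural units), the equation reads

$$\left(i\gamma^\mu \partial_\mu - m\right)\psi = 0,$$

where ψ is a four-component spinor and the γ^μ are 4×4 matrices satisfying {γ^μ, γ^ν} = 2η^{μν}; its energy eigenvalues come in pairs E = ±sqrt(p² + m²), the negative branch being the problematic states just mentioned.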
The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for β decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.[3]:22-23
It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.[6]:72[3]:23 QFT naturally incorporated antiparticles in its formalism.[3]:24
Infinities and renormalization
Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields,[6] suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.[3]:25 It was not until 20 years later that a systematic approach to remove such infinities was developed.
A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Unfortunately, such achievements were not understood and recognized by the theoretical community.[6]
Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.[3]:26
In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.[6][3]:28 Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.[6]
The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the initial, so-called "bare", parameters (mass, electric charge, etc.), which have no physical meaning, by their finite measured values. To cancel the apparently infinite parameters, one has to introduce additional, infinite, "counterterms" into the Lagrangian. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory.[6]
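Schematically, for the electron mass,

$$m_{\text{bare}} = m_{\text{measured}} + \delta m,$$

where the counterterm δm is formally infinite but chosen, order by order in perturbation theory, so that every prediction expressed in terms of the measured mass comes out finite.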
By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarisation. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities".[6]
At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.[8]:2 The latter can be used to visually and intuitively organise and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.[1]:5
It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.[8]:2
Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.[3]:30
The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.[3]:30
The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.[3]:31
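The convergence issue can be made explicit: a QED observable is computed as a power series in the coupling,

$$\mathcal{A} = c_1 \alpha + c_2 \alpha^2 + c_3 \alpha^3 + \cdots, \qquad \alpha \approx \frac{1}{137},$$

so each further order is suppressed by roughly two orders of magnitude, whereas a coupling of order one gives no such suppression.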
With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations.[3]:31
Standard Model
Elementary particles of the Standard Model: six types of matter quarks, four types of gauge bosons that carry the fundamental interactions, as well as the Higgs boson, which endows elementary particles with mass.
In 1954, Yang Chen-Ning and Robert Mills generalised the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang-Mills theories), which are based on more complicated local symmetry groups.[9]:5 In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.[3]:32[10]
Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.[11]
Peter Higgs, Robert Brout, and François Englert proposed in 1964 that the gauge symmetry in Yang-Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.[9]:5-6
By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,[11][9]:6 until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.[11]
Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) [9]:11 Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.[3]:32
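At one loop, the running of the strong coupling takes the standard form

$$\alpha_s(Q^2) = \frac{12\pi}{\left(33 - 2n_f\right)\ln\left(Q^2/\Lambda^2\right)},$$

where n_f is the number of quark flavours and Λ is the QCD scale: as the interaction energy Q increases, α_s decreases, which is the asymptotic freedom just described.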
These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles.[12] The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.[8]:3 The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.[13]
Other developments
The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and collaborators. These objects are inaccessible through perturbation theory.[8]:4
Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973.[8]:7
Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,[8]:6 itself a type of two-dimensional QFT with conformal symmetry.[14] Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.[15]
Condensed matter physics[edit]
Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics.
Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.[16]
Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticles, phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.[17]
Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.[17]
For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one.
Classical fields[edit]
A classical field is a function of spatial and time coordinates.[18] Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinite degrees of freedom.[18]
Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields.
Canonical quantisation and path integrals are two common formulations of QFT.[19]:61 To motivate the fundamentals of QFT, an overview of classical field theory is in order.
The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field is

$$L = \int d^3x\,\mathcal{L}, \qquad \mathcal{L} = \frac{1}{2}\dot\phi^2 - \frac{1}{2}(\nabla\phi)^2 - \frac{1}{2}m^2\phi^2,$$

where $\dot\phi$ is the time-derivative of the field, $\nabla$ is the gradient operator, and m is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian:[1]:16

$$\frac{\partial}{\partial t}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial t)}\right] + \sum_{i=1}^{3}\frac{\partial}{\partial x^i}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial x^i)}\right] - \frac{\partial\mathcal{L}}{\partial\phi} = 0,$$

we obtain the equations of motion for the field, which describe the way it varies in time and space:

$$\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi + m^2\phi = 0.$$

This is known as the Klein–Gordon equation.[1]:17
The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:

$$\phi(\mathbf{x},t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(a_{\mathbf p}\,e^{i(\mathbf p\cdot\mathbf x - \omega_p t)} + a_{\mathbf p}^*\,e^{-i(\mathbf p\cdot\mathbf x - \omega_p t)}\right),$$

where a is a complex number (normalised by convention), * denotes complex conjugation, and ωp is the frequency of the normal mode:

$$\omega_p = \sqrt{|\mathbf p|^2 + m^2}.$$

Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ωp.[1]:21,26
Canonical quantisation[edit]
The quantisation procedure for the above classical field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator.
The displacement of a classical harmonic oscillator is described by

$$x(t) = \frac{1}{\sqrt{2\omega}}\,a\,e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\,a^*\,e^{i\omega t},$$

where a is a complex number (normalised by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, which should not be confused with the spatial label x of a field.

For a quantum harmonic oscillator, x(t) is promoted to a linear operator $\hat x(t)$:

$$\hat x(t) = \frac{1}{\sqrt{2\omega}}\,\hat a\,e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\,\hat a^\dagger\,e^{i\omega t}.$$

Complex numbers a and a* are replaced by the annihilation operator $\hat a$ and the creation operator $\hat a^\dagger$, respectively, where † denotes Hermitian conjugation. The commutation relation between the two is

$$[\hat a, \hat a^\dagger] = 1.$$

The vacuum state $|0\rangle$, which is the lowest energy state, is defined by

$$\hat a\,|0\rangle = 0.$$

Any quantum state of a single harmonic oscillator can be obtained from $|0\rangle$ by successively applying the creation operator $\hat a^\dagger$:[1]:20

$$|n\rangle = \frac{(\hat a^\dagger)^n}{\sqrt{n!}}\,|0\rangle.$$
By the same token, the aforementioned real scalar field ϕ, which corresponds to x in the single harmonic oscillator, is also promoted to an operator $\hat\phi$, while ap and ap* are replaced by the annihilation operator $\hat a_{\mathbf p}$ and the creation operator $\hat a_{\mathbf p}^\dagger$ for a particular p, respectively:

$$\hat\phi(\mathbf x,t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(\hat a_{\mathbf p}\,e^{i(\mathbf p\cdot\mathbf x - \omega_p t)} + \hat a_{\mathbf p}^\dagger\,e^{-i(\mathbf p\cdot\mathbf x - \omega_p t)}\right).$$

Their commutation relations are:[1]:21

$$[\hat a_{\mathbf p}, \hat a_{\mathbf q}^\dagger] = (2\pi)^3\,\delta^3(\mathbf p - \mathbf q), \qquad [\hat a_{\mathbf p}, \hat a_{\mathbf q}] = [\hat a_{\mathbf p}^\dagger, \hat a_{\mathbf q}^\dagger] = 0,$$

where δ is the Dirac delta function. The vacuum state $|0\rangle$ is defined by

$$\hat a_{\mathbf p}\,|0\rangle = 0 \quad\text{for all } \mathbf p.$$

Any quantum state of the field can be obtained from $|0\rangle$ by successively applying creation operators $\hat a_{\mathbf p}^\dagger$, e.g.[1]:22

$$\hat a_{\mathbf p_3}^\dagger\,\hat a_{\mathbf p_2}^\dagger\,\hat a_{\mathbf p_1}^\dagger\,|0\rangle.$$
Although the field appearing in the Lagrangian is spatially continuous, the quantum states of the field are discrete. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems.[20] The process of quantising an arbitrary number of particles instead of a single particle is often also called second quantisation.[1]:19
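To make the ladder-operator algebra concrete, here is a minimal numerical sketch (not from the cited texts; the truncation dimension and checks are illustrative choices) representing $\hat a$ and $\hat a^\dagger$ for a single oscillator as matrices in a truncated number basis:

import numpy as np

# Sketch: ladder operators of one harmonic oscillator, truncated to the
# lowest `dim` number states. The annihilation operator has matrix
# elements a[n-1, n] = sqrt(n), so that a|n> = sqrt(n)|n-1>.
dim = 8
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
a_dag = a.conj().T                             # creation operator

# [a, a†] = 1 holds exactly except at the truncation boundary.
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:-1, :-1], np.eye(dim - 1)))   # True

# Build the normalised number state |3> = (a†)^3 |0> / sqrt(3!).
state = np.zeros(dim); state[0] = 1.0          # vacuum |0>
for n in range(1, 4):
    state = a_dag @ state / np.sqrt(n)
print(np.allclose(state, np.eye(dim)[3]))      # True: the state is |3>

A quantum field amounts to one such oscillator for every momentum p, which is what makes the state space of the field a Fock space.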
The preceding procedure is a direct application of non-relativistic quantum mechanics and can be used to quantise (complex) scalar fields, Dirac fields,[1]:52 vector fields (e.g. the electromagnetic field), and even strings.[21] However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (the so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary.
The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:[1]:77

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\phi^4,$$

where μ is a spacetime index, $\partial_0 = \partial/\partial t$, $\partial_1 = \partial/\partial x^1$, and so on. The summation sign over the index μ has been omitted following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory.
Path integrals[edit]
The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state $|\phi_i\rangle$ at time t = 0 to some final state $|\phi_f\rangle$ at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then[19]:10

$$\langle\phi_f|e^{-iHT}|\phi_i\rangle = \int d\phi_1\,d\phi_2\cdots d\phi_{N-1}\;\langle\phi_f|e^{-iHT/N}|\phi_{N-1}\rangle\cdots\langle\phi_2|e^{-iHT/N}|\phi_1\rangle\,\langle\phi_1|e^{-iHT/N}|\phi_i\rangle.$$

Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:[1]:282[19]:12

$$\langle\phi_f|e^{-iHT}|\phi_i\rangle = \int\mathcal{D}\phi(t)\,\exp\left(i\int_0^T dt\,L\right),$$

where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transform. The initial and final conditions of the path integral are respectively

$$\phi(0) = \phi_i, \qquad \phi(T) = \phi_f.$$
In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand.
Two-point correlation function[edit]
Now we assume that the theory contains interactions whose Lagrangian terms are a small perturbation from the free theory.
In calculations, one often encounters such expressions as

$$\langle\Omega|\,T\{\phi(x)\phi(y)\}\,|\Omega\rangle,$$

where x and y are position four-vectors, T is the time ordering operator (namely, it orders x and y according to their time-component, later time on the left and earlier time on the right), and $|\Omega\rangle$ is the ground state (vacuum state) of the interacting theory. This expression, known as the two-point correlation function or the two-point Green's function, represents the probability amplitude for the field to propagate from y to x.[1]:82

In canonical quantisation, the two-point correlation function can be written as:[1]:87

$$\langle\Omega|\,T\{\phi(x)\phi(y)\}\,|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\langle 0|\,T\left\{\phi_I(x)\,\phi_I(y)\exp\left[-i\int_{-T}^{T}dt\,H_I(t)\right]\right\}|0\rangle}{\langle 0|\,T\left\{\exp\left[-i\int_{-T}^{T}dt\,H_I(t)\right]\right\}|0\rangle},$$

where ε is an infinitesimal number, ϕI is the field operator under the free theory, and HI is the interaction Hamiltonian term. For the ϕ4 theory, it is[1]:84

$$H_I = \int d^3x\,\frac{\lambda}{4!}\,\phi_I(x)^4.$$
Since λ is a small parameter, the exponential function exp can be expanded into a Taylor series in λ and computed term by term. This equation is useful in that it expresses the field operator and ground state in the interacting theory, which are difficult to define, in terms of their counterparts in the free theory, which are well defined.
In the path integral formulation, the two-point correlation function can be written as:[1]:284

$$\langle\Omega|\,T\{\phi(x)\phi(y)\}\,|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\int\mathcal{D}\phi\;\phi(x)\,\phi(y)\exp\left[i\int_{-T}^{T}d^4x'\,\mathcal{L}\right]}{\int\mathcal{D}\phi\;\exp\left[i\int_{-T}^{T}d^4x'\,\mathcal{L}\right]},$$

where $\mathcal{L}$ is the Lagrangian density. As in the previous paragraph, the exponential factor involving the interaction term can also be expanded as a series in λ.
According to Wick's theorem, any n-point correlation function in the free theory can be written as a sum of products of two-point correlation functions. For example,

$$\langle 0|\,T\{\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\}\,|0\rangle = \langle 0|T\{\phi(x_1)\phi(x_2)\}|0\rangle\,\langle 0|T\{\phi(x_3)\phi(x_4)\}|0\rangle + \langle 0|T\{\phi(x_1)\phi(x_3)\}|0\rangle\,\langle 0|T\{\phi(x_2)\phi(x_4)\}|0\rangle + \langle 0|T\{\phi(x_1)\phi(x_4)\}|0\rangle\,\langle 0|T\{\phi(x_2)\phi(x_3)\}|0\rangle.$$
Since correlation functions in the interacting theory can be expressed in terms of those in the free theory, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.[1]:90
Either through canonical quantisation or path integrals, one can obtain:

$$D_F(x-y) \equiv \langle 0|\,T\{\phi(x)\phi(y)\}\,|0\rangle = \int\frac{d^4p}{(2\pi)^4}\,\frac{i}{p^2 - m^2 + i\epsilon}\,e^{-ip\cdot(x-y)}.$$

This is known as the Feynman propagator for the real scalar field.[1]:31,288[19]:23
Feynman diagram[edit]
Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ1 term in the two-point correlation function in the ϕ4 theory is

$$\frac{-i\lambda}{4!}\int d^4z\;\langle 0|\,T\{\phi(x)\phi(y)\phi(z)\phi(z)\phi(z)\phi(z)\}\,|0\rangle.$$

After applying Wick's theorem, one of the terms is

$$\frac{-i\lambda}{4!}\cdot 12\int d^4z\; D_F(x-z)\,D_F(z-z)\,D_F(z-y),$$

whose corresponding Feynman diagram is

[Figure: the ϕ4 one-loop diagram, with external points x and y joined through a single internal vertex z that carries a closed loop.]

Every point corresponds to a single ϕ field factor. Points labelled with x and y are called external points, while those in the interior are called internal points or vertices (there is one in this diagram). The value of the corresponding term can be obtained from the diagram by following "Feynman rules": assign $-i\lambda\int d^4z$ to every vertex z and the Feynman propagator $D_F(x_1 - x_2)$ to every line with end points x1 and x2. The product of factors corresponding to every element in the diagram, divided by the "symmetry factor" (2 for this diagram), gives the expression for the term in the perturbation series.[1]:91-94
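As a quick consistency check (standard bookkeeping, spelled out here for illustration rather than taken from the cited texts), the Feynman rules reproduce the Wick-theorem coefficient above: one vertex contributes $-i\lambda\int d^4z$, the three lines contribute $D_F(x-z)\,D_F(z-z)\,D_F(z-y)$, and dividing by the symmetry factor 2 gives

$$\frac{-i\lambda}{2}\int d^4z\; D_F(x-z)\,D_F(z-z)\,D_F(z-y) = \frac{-i\lambda}{4!}\cdot 12\int d^4z\; D_F(x-z)\,D_F(z-z)\,D_F(z-y),$$

since 4!/12 = 2.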
In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise,

$$\langle\Omega|\,T\{\phi(x_1)\cdots\phi(x_n)\}\,|\Omega\rangle$$

is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ4 interaction theory discussed above, every vertex must have four legs.[1]:98
In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.[1]:102-115
Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.[19]:44 Lines whose end points are vertices can be thought of as the propagation of virtual particles.[1]:31
Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities.
Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularisation, a class of methods to treat divergences in QFT, with Λ being the regulator.
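As an illustration (schematic, with numerical factors suppressed), the simplest divergent loop integral in ϕ4 theory grows with the cut-off as

$$\int^{\Lambda}\frac{d^4k}{(2\pi)^4}\,\frac{i}{k^2 - m^2 + i\epsilon} \;\sim\; \Lambda^2,$$

so quantities computed from it acquire Λ-dependent pieces that the renormalisation procedure must absorb before the limit Λ → ∞ is taken.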
The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalised perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ4 theory, the field strength is first redefined:

$$\phi = Z^{1/2}\,\phi_r,$$

where ϕ is the bare field, ϕr is the renormalised field, and Z is a constant to be determined. The Lagrangian density becomes:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}m_r^2\phi_r^2 - \frac{\lambda_r}{4!}\phi_r^4 + \frac{1}{2}\delta_Z(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}\delta_m\phi_r^2 - \frac{\delta_\lambda}{4!}\phi_r^4,$$

where mr and λr are the experimentally measurable, renormalised, mass and coupling constant, respectively, and

$$\delta_Z = Z - 1, \qquad \delta_m = m^2Z - m_r^2, \qquad \delta_\lambda = \lambda Z^2 - \lambda_r$$

are constants to be determined. The first three terms are the ϕ4 Lagrangian density written in terms of the renormalised quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularisation scheme (such as the cut-off regularisation introduced above or dimensional regularization); call the regulator Λ. Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δZ, δm, and δλ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.[1]:323-326
It is only possible to eliminate all infinities to obtain a finite result in renormalisable theories, whereas in non-renormalisable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalisable QFT,[1]:719–727 while quantum gravity is non-renormalisable.[1]:798[19]:421
Renormalisation group[edit]
The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.[1]:393 The way in which each parameter changes with scale is described by its β function.[1]:417 Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.[1]:410-411
As an example, the coupling constant in QED, namely the elementary charge e, has the following β function:

$$\beta(e) \equiv \frac{de}{d\ln\Lambda} = \frac{e^3}{12\pi^2},$$
where Λ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases.[22] The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.[1]:420
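A minimal numerical sketch of this running (illustrative only: the one-loop formula above with a single charged fermion, starting from the hypothetical reference value α ≈ 1/137 at a low scale):

import numpy as np

# Integrate the one-loop RG equation de/dlnΛ = e^3 / (12 π²) upward
# over roughly 25 e-folds in the energy scale (natural units).
alpha0 = 1.0 / 137.036
e = np.sqrt(4 * np.pi * alpha0)          # e at the reference (low) scale

steps = 10_000
d_lnL = np.log(1e11) / steps             # total change in lnΛ, split into Euler steps
for _ in range(steps):
    e += (e**3 / (12 * np.pi**2)) * d_lnL

print(1.0 / (e**2 / (4 * np.pi)))        # 1/α at the high scale: ~131.7, below 137

At one loop the inverse coupling 1/α decreases linearly in lnΛ, so the observed charge grows slowly with energy, in line with the β function above.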
The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function:

$$\beta(g) = -\left(11 - \frac{2}{3}N_f\right)\frac{g^3}{16\pi^2},$$
where Nf is the number of quark flavours. In the case where Nf ≤ 16 (the Standard Model has Nf = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.[1]:531
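The one-loop equation integrates in closed form; writing $b = \big(11 - \tfrac{2}{3}N_f\big)/16\pi^2$ (a standard manipulation, shown here for illustration),

$$\frac{1}{g^2(\mu)} = \frac{1}{g^2(\mu_0)} + 2b\,\ln\frac{\mu}{\mu_0}, \qquad\text{i.e.}\qquad g^2(\mu) = \frac{g^2(\mu_0)}{1 + 2b\,g^2(\mu_0)\ln(\mu/\mu_0)},$$

which indeed tends to zero logarithmically as μ → ∞ whenever b > 0.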
Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.)[23] Examples include string theory[14] and N = 4 supersymmetric Yang–Mills theory.[24]
According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalisable effective field theory.[1]:402-403 The difference between renormalisable and non-renormalisable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.[8]:2 According to this view, non-renormalisable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.[19]:156
Other theories[edit]
The quantisation and renormalisation procedures outlined in the preceding sections are performed for the free theory and ϕ4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction.
As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field Aμ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is:

$$\mathcal{L} = \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - e\,\bar\psi\,\gamma^\mu\psi\,A_\mu,$$

where γμ are Dirac matrices, $\bar\psi = \psi^\dagger\gamma^0$, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.[1]:78
An example of a tree-level Feynman diagram in QED describes an electron and a positron annihilating into an off-shell photon, which then produces a new electron–positron pair. In such diagrams, time runs from left to right. Arrows pointing forward in time represent the propagation of positrons, while those pointing backward in time represent the propagation of electrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.
Gauge symmetry[edit]
If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant:

$$\psi(x) \to e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) - \frac{1}{e}\,\partial_\mu\alpha(x),$$

where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.[1]:482–483 Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations $e^{i\alpha(x)}$ and $e^{i\alpha'(x)}$ is yet another symmetry transformation $e^{i[\alpha(x)+\alpha'(x)]}$. For any α(x), $e^{i\alpha(x)}$ is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.[1]:496 The photon field Aμ may be referred to as the U(1) gauge boson.
U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).[1]:489 Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψi, i = 1,2,3 representing quark fields as well as eight vector fields Aa,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.[1]:547 The QCD Lagrangian density is:[1]:490-491

$$\mathcal{L} = i\,\bar\psi_i\,\gamma^\mu\,(D_\mu)_{ij}\,\psi_j - \frac{1}{4}F^a_{\mu\nu}F^{a,\mu\nu} - m\,\bar\psi_i\,\psi_i,$$

where Dμ is the gauge covariant derivative:

$$(D_\mu)_{ij} = \partial_\mu\,\delta_{ij} - i\,g\,t^a_{ij}\,A^a_\mu,$$

where g is the coupling constant, ta are the eight generators of SU(3) in the fundamental representation (3×3 matrices),

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\,f^{abc}\,A^b_\mu\,A^c_\nu,$$

and fabc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation:

$$\psi(x) \to U(x)\,\psi(x), \qquad A_\mu(x) \to U(x)\,A_\mu(x)\,U^\dagger(x) + \frac{i}{g}\,U(x)\,\partial_\mu U^\dagger(x), \qquad A_\mu \equiv t^a A^a_\mu,$$

where U(x) is an element of SU(3) at every spacetime point x:

$$U(x) = e^{\,i\,\alpha^a(x)\,t^a}.$$
The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantisation, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change.[19]:243 For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.[1]:705-707
The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.[25]
Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law.[1]:17-18[19]:73 For example, the U(1) symmetry of QED implies charge conservation.[26]
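As a concrete illustration (the standard textbook computation, quoted here for orientation), the global U(1) phase rotation ψ → e^{iα}ψ of the QED Lagrangian yields the conserved Noether current

$$j^\mu = \bar\psi\,\gamma^\mu\,\psi, \qquad \partial_\mu j^\mu = 0,$$

whose conserved charge $Q = \int d^3x\, j^0$ is, up to a factor of the electron charge, the total electric charge.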
Gauge transformations do not relate distinct quantum states. Rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field Aμ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarisation. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing Aμ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.[19]:168
To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.[1]:512-515 A more rigorous generalisation of the Faddeev–Popov procedure is given by BRST quantization.[1]:517
Spontaneous symmetry breaking[edit]
Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.[1]:347
To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density:

$$\mathcal{L} = \frac{1}{2}\left(\partial_\mu\phi^i\right)\left(\partial^\mu\phi^i\right) + \frac{1}{2}\mu^2\,\phi^i\phi^i - \frac{\lambda}{4}\left(\phi^i\phi^i\right)^2,$$

where μ and λ are real parameters. The theory admits an O(N) global symmetry:

$$\phi^i \to R^{ij}\,\phi^j, \qquad R \in O(N).$$

The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ0 satisfying

$$\phi_0^i\,\phi_0^i = \frac{\mu^2}{\lambda}.$$

Without loss of generality, let the ground state be in the N-th direction:

$$\phi_0^i = \left(0,\ldots,0,\ \frac{\mu}{\sqrt{\lambda}}\right).$$

The original N fields can be rewritten as:

$$\phi^i(x) = \left(\pi^1(x),\ldots,\pi^{N-1}(x),\ \frac{\mu}{\sqrt{\lambda}} + \sigma(x)\right),$$

and the original Lagrangian density as:

$$\mathcal{L} = \frac{1}{2}\left(\partial_\mu\pi^k\right)\left(\partial^\mu\pi^k\right) + \frac{1}{2}\left(\partial_\mu\sigma\right)\left(\partial^\mu\sigma\right) - \frac{1}{2}\left(2\mu^2\right)\sigma^2 - \sqrt{\lambda}\,\mu\,\sigma^3 - \sqrt{\lambda}\,\mu\,\pi^k\pi^k\,\sigma - \frac{\lambda}{4}\,\sigma^4 - \frac{\lambda}{2}\,\pi^k\pi^k\,\sigma^2 - \frac{\lambda}{4}\left(\pi^k\pi^k\right)^2,$$
where k = 1,...,N-1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N-1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.[1]:349-350
Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, O(N) has N(N-1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N-1) has (N-1)(N-2)/2. The number of broken symmetries is their difference, N-1, which corresponds to the N-1 massless fields πk.[1]:351
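Spelled out, the counting is simple arithmetic:

$$\dim O(N) - \dim O(N-1) = \frac{N(N-1)}{2} - \frac{(N-1)(N-2)}{2} = \frac{(N-1)\big[N - (N-2)\big]}{2} = N - 1.$$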
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarised massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.[1]:743-744
In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.[19]:199 In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through the spontaneous symmetry breaking triggered by the Higgs field, a process called the Higgs mechanism.[1]:690
Supersymmetry[edit]
All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesised the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.[1]:795[19]:443
The Standard Model obeys Poincaré symmetry, whose generators are spacetime translation Pμ and Lorentz transformation Jμν.[27]:58–60 In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators Qα, called supercharges, which themselves transform as Weyl fermions.[1]:795[19]:444 The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, QαI, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.[1]:795[19]:450 Supersymmetry can also be constructed in other dimensions,[28] most notably in (1+1) dimensions for its application in superstring theory.[29]
The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.[19]:448 Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,[19]:450 and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.[19]:444
If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.[30]
Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model — why the mass of the Higgs boson is not radiatively corrected (under renormalisation) to a very high scale such as the grand unified scale or the Planck scale — can be resolved by relating the Higgs field and its superpartner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.[1]:796-797[31]
Nevertheless, as of 2018, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.[1]:797[19]:443
Other spacetimes[edit]
The ϕ4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime.
In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases.[32] In high-energy physics, string theory is a type of (1+1)-dimensional QFT,[19]:452[14] while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.[19]:428-429
In Minkowski space, the flat metric ημν is used to raise and lower spacetime indices in the Lagrangian, e.g.

$$A_\mu A^\mu = \eta_{\mu\nu}A^\mu A^\nu, \qquad \partial_\mu\phi\,\partial^\mu\phi = \eta^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi,$$

where $\eta^{\mu\nu}$ is the inverse of $\eta_{\mu\nu}$ satisfying $\eta^{\mu\rho}\eta_{\rho\nu} = \delta^\mu_{\ \nu}$. For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used:

$$A_\mu A^\mu = g_{\mu\nu}A^\mu A^\nu, \qquad \partial_\mu\phi\,\partial^\mu\phi = g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi,$$

where $g^{\mu\nu}$ is the inverse of $g_{\mu\nu}$. For a real scalar field, the Lagrangian density in a general spacetime background is

$$\mathcal{L} = \sqrt{|g|}\left(\frac{1}{2}\,g^{\mu\nu}\,\nabla_\mu\phi\,\nabla_\nu\phi - \frac{1}{2}\,m^2\phi^2\right),$$

where g = det(gμν), and $\nabla_\mu$ denotes the covariant derivative.[33] The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background.
Topological quantum field theory[edit]
The correlation functions and physical predictions of a QFT depend on the spacetime metric gμν. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.[34]:36 QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity.[35] Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.[36]:1–5
Perturbative and non-perturbative methods[edit]
Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as 't Hooft–Polyakov monopole, domain wall, flux tube, and instanton.[8] Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory[37] and the Thirring model.[38]
Mathematical rigour[edit]
In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally not rigorous.[39]
Since the 1950s,[40] theoretical physicists and mathematicians have attempted to organise all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,[41]:2 which has led to such results as CPT theorem, spin-statistics theorem, and Goldstone's theorem.[40]
Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms.[42]
Algebraic quantum field theory is another approach to the axiomatisation of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include Wightman axioms and Haag-Kastler axioms.[41]:2-3 One way to construct theories satisfying Wightman axioms is to use Osterwalder-Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).[41]:10
Yang-Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang-Mills theories as set out by the above axioms. The full problem statement is given in the official description by Jaffe and Witten.[43]
See also[edit]
References[edit]
1. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5.
2. ^ a b c Hobson, Art (2013). "There are no particles, there are only fields". American Journal of Physics. 81 (211): 211–223. arXiv:1204.4616. Bibcode:2013AmJPh..81..211H. doi:10.1119/1.4789885.
3. ^ a b c d e f g h i j k l m n o p Weinberg, Steven (1977). "The Search for Unity: Notes for a History of Quantum Field Theory". Daedalus. 106 (4): 17–35. JSTOR 20024506.
4. ^ John L. Heilbron (14 February 2003). The Oxford Companion to the History of Modern Science. Oxford University Press. ISBN 978-0-19-974376-6.
5. ^ Joseph John Thomson (1893). Notes on Recent Researches in Electricity and Magnetism: Intended as a Sequel to Professor Clerk-Maxwell's 'Treatise on Electricity and Magnetism'. Dawsons.
6. ^ a b c d e f g h i j k l m Weisskopf, Victor (November 1981). "The development of field theory in the last 50 years". Physics Today. 34 (11): 69–85. Bibcode:1981PhT....34k..69W. doi:10.1063/1.2914365.
7. ^ Werner Heisenberg (1999). Physics and Philosophy: The Revolution in Modern Science. Prometheus Books. ISBN 978-1-57392-694-2.
8. ^ a b c d e f g h i j Shifman, M. (2012). Advanced Topics in Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-19084-8.
9. ^ a b c d 't Hooft, Gerard (2015-03-17). "The Evolution of Quantum Field Theory". The Standard Theory of Particle Physics. Advanced Series on Directions in High Energy Physics. 26. pp. 1–27. arXiv:1503.05007. Bibcode:2016stpp.conf....1T. doi:10.1142/9789814733519_0001. ISBN 978-981-4733-50-2.
10. ^ Yang, C. N.; Mills, R. L. (1954-10-01). "Conservation of Isotopic Spin and Isotopic Gauge Invariance". Physical Review. 96 (1): 191–195. Bibcode:1954PhRv...96..191Y. doi:10.1103/PhysRev.96.191.
11. ^ a b c Coleman, Sidney (1979-12-14). "The 1979 Nobel Prize in Physics". Science. 206 (4424): 1290–1292. Bibcode:1979Sci...206.1290C. doi:10.1126/science.206.4424.1290. JSTOR 1749117. PMID 17799637.
12. ^ Sutton, Christine. "Standard model". britannica.com. Encyclopædia Britannica. Retrieved 2018-08-14.
13. ^ Kibble, Tom W. B. (2014-12-12). "The Standard Model of Particle Physics". arXiv:1412.4094 [physics.hist-ph].
14. ^ a b c Polchinski, Joseph (2005). String Theory. 1. Cambridge University Press. ISBN 978-0-521-67227-6.
15. ^ Schwarz, John H. (2012-01-04). "The Early History of String Theory and Supersymmetry". arXiv:1201.0981 [physics.hist-ph].
16. ^ "Common Problems in Condensed Matter and High Energy Physics" (PDF). science.energy.gov. Office of Science, U.S. Department of Energy. 2015-02-02. Retrieved 2018-07-18.
17. ^ a b Wilczek, Frank (2016-04-19). "Particle Physics and Condensed Matter: The Saga Continues". Physica Scripta. 2016 (T168): 014003. arXiv:1604.05669. Bibcode:2016PhST..168a4003W. doi:10.1088/0031-8949/T168/1/014003.
18. ^ a b Tong 2015, Chapter 1
19. ^ a b c d e f g h i j k l m n o p q r s t Zee, A. (2010). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 978-0-691-01019-9.
20. ^ Fock, V. (1932-03-10). "Konfigurationsraum und zweite Quantelung". Zeitschrift für Physik (in German). 75 (9–10): 622–647. Bibcode:1932ZPhy...75..622F. doi:10.1007/BF01344458.
21. ^ Becker, Katrin; Becker, Melanie; Schwarz, John H. (2007). String Theory and M-Theory. Cambridge University Press. p. 36. ISBN 978-0-521-86069-7.
22. ^ Fujita, Takehisa (2008-02-01). "Physics of Renormalization Group Equation in QED". arXiv:hep-th/0606101.
23. ^ Aharony, Ofer; Gur-Ari, Guy; Klinghoffer, Nizan (2015-05-19). "The Holographic Dictionary for Beta Functions of Multi-trace Coupling Constants". Journal of High Energy Physics. 2015 (5): 31. arXiv:1501.06664. Bibcode:2015JHEP...05..031A. doi:10.1007/JHEP05(2015)031.
24. ^ Kovacs, Stefano (1999-08-26). "N = 4 supersymmetric Yang–Mills theory and the AdS/SCFT correspondence". arXiv:hep-th/9908171.
25. ^ Veltman, M. J. G. (1976). Methods in Field Theory, Proceedings of the Les Houches Summer School, Les Houches, France, 1975.
26. ^ Brading, Katherine A. (March 2002). "Which symmetry? Noether, Weyl, and conservation of electric charge". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 33 (1): 3–22. Bibcode:2002SHPMP..33....3B. doi:10.1016/S1355-2198(01)00033-8.
27. ^ Weinberg, Steven (1995). The Quantum Theory of Fields. Cambridge University Press. ISBN 978-0-521-55001-7.
28. ^ de Wit, Bernard; Louis, Jan (1998-02-18). "Supersymmetry and Dualities in various dimensions". arXiv:hep-th/9801132.
29. ^ Polchinski, Joseph (2005). String Theory. 2. Cambridge University Press. ISBN 978-0-521-67228-3.
30. ^ Nath, P.; Arnowitt, R. (1975). "Generalized Super-Gauge Symmetry as a New Framework for Unified Gauge Theories". Physics Letters B. 56 (2): 177. Bibcode:1975PhLB...56..177N. doi:10.1016/0370-2693(75)90297-x.
31. ^ Munoz, Carlos (2017-01-18). "Models of Supersymmetry for Dark Matter". EPJ Web of Conferences. 136: 01002. arXiv:1701.05259. Bibcode:2017EPJWC.13601002M. doi:10.1051/epjconf/201713601002.
32. ^ Morandi, G.; Sodano, P.; Tagliacozzo, A.; Tognetti, V. (2000). Field Theories for Low-Dimensional Condensed Matter Systems. Springer. ISBN 978-3-662-04273-1.
33. ^ Parker, Leonard E.; Toms, David J. (2009). Quantum Field Theory in Curved Spacetime. Cambridge University Press. p. 43. ISBN 978-0-521-87787-9.
34. ^ Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008-12-11). "Undergraduate Lecture Notes in Topological Quantum Field Theory". arXiv:0810.0344v5 [math-th].
35. ^ Carlip, Steven (1998). Quantum Gravity in 2+1 Dimensions. Cambridge University Press. p. 27-29. doi:10.1017/CBO9780511564192. ISBN 9780511564192.
36. ^ Carqueville, Nils; Runkel, Ingo (2017-05-16). "Introductory lectures on topological quantum field theory". arXiv:1705.05734 [math.QA].
37. ^ Di Francesco, Philippe; Mathieu, Pierre; Sénéchal, David (1997). Conformal Field Theory. Springer. ISBN 978-1-4612-7475-9.
38. ^ Thirring, W. (1958). "A Soluble Relativistic Field Theory?". Annals of Physics. 3: 91–112. Bibcode:1958AnPhy...3...91T. doi:10.1016/0003-4916(58)90015-0.
39. ^ Haag, Rudolf (1955). "On Quantum Field Theories" (PDF). Dan Mat Fys Medd. 29 (12).
40. ^ a b Buchholz, Detlev (2000). "Current Trends in Axiomatic Quantum Field Theory". Quantum Field Theory. 558: 43–64. arXiv:hep-th/9811233. Bibcode:2000LNP...558...43B.
41. ^ a b c Summers, Stephen J. (2016-03-31). "A Perspective on Constructive Quantum Field Theory". arXiv:1203.3991v2 [math-ph].
42. ^ Sati, Hisham; Schreiber, Urs (2012-01-06). "Survey of mathematical foundations of QFT and perturbative string theory". arXiv:1109.0955v2 [math-ph].
43. ^ Jaffe, Arthur; Witten, Edward. "Quantum Yang-Mills Theory" (PDF). Clay Mathematics Institute. Retrieved 2018-07-18.
Further reading[edit]
General readers
Introductory texts
Advanced texts
External links[edit]
Path integral formulation
This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. A possible downside of the approach is that unitarity of the S-matrix (related to conservation of probability: the probabilities of all physically possible outcomes must add up to one) is obscure in the formulation. The path-integral approach has been proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away.[1]
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks.
The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion.[2] This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 article.[3][4] The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.
Quantum action principle
In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit, i). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.
The Hamiltonian in classical mechanics is derived from a Lagrangian, which is a quantity that is more fundamental from the standpoint of special relativity. The Hamiltonian indicates how to march forward in time, but the notion of time is different in different reference frames. The Lagrangian is a Lorentz scalar, while the Hamiltonian is the time component of a four-vector. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.
The Hamiltonian is a function of the position and momentum at one time, and it determines the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transformation, and the condition that determines the classical equations of motion (the Euler–Lagrange equations) is that the action has an extremum.
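For reference (standard classical mechanics, stated here only to fix notation), the Legendre transformation connecting the two reads

$$p = \frac{\partial L}{\partial \dot q}, \qquad H(p, q, t) = p\,\dot q - L(q, \dot q, t),$$

and the extremal-action condition is the Euler–Lagrange equation

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0.$$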
If this is also interpreted as a matrix multiplication, the sum over all states integrates over all q(t), and so it takes the Fourier transform in q(t) to change basis to p(t). That is the action on the Hilbert space – change basis to p at time t.
Next comes

$$e^{-i\,\varepsilon\,H(p,\,q)},$$

or evolve an infinitesimal time into the future. Finally, the last factor in this interpretation is

$$e^{\,i\,p\,q(t+\varepsilon)},$$

which means change basis back to q at a later time.
…the integrand in (11) must be of the form e^{iF/h}, where F = ∫ L dt, which is just the action function, which classical mechanics requires to be stationary for small variations in all the intermediate qs. This shows the way in which equation (11) goes over into classical results when h becomes extremely small.
Dirac (1933), p. 69
Dirac further noted that one could square the time-evolution operator in the S representation, and this gives the time-evolution operator between time t and time t + 2ε. While in the H representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the S representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of q(0) and the later one with a fixed value of q(t). The result is a sum over paths with a phase, which is the quantum action. Crucially, Dirac identified in this article the deep quantum-mechanical reason for the principle of least action controlling the classical limit (see the quotation above).
Feynman's interpretation
Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.[nb 1] Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretised; as a consequence, the classical path arises naturally in the classical limit. He proposed to recover all of quantum mechanics from the following postulates:

1. The probability for an event is given by the squared modulus of a complex number called the "probability amplitude".

2. The probability amplitude is given by adding together the contributions of all paths in configuration space.

3. The contribution of a path is proportional to eiS/ħ, where S is the action given by the time integral of the Lagrangian along the path.
Path integral in quantum mechanics
Time-slicing derivation
One common approach to deriving the path integral formula is to divide the time interval into small pieces. Once this is done, the Trotter product formula tells us that the noncommutativity of the kinetic and potential energy operators can be ignored.
For a particle in a smooth potential, the path integral is approximated by zigzag paths, which in one dimension is a product of ordinary integrals. For the motion of the particle from position xa at time ta to xb at time tb, the time sequence

$$t_a = t_0 < t_1 < \cdots < t_n < t_{n+1} = t_b$$

can be divided into n + 1 segments of fixed duration $\varepsilon = (t_b - t_a)/(n+1)$.
This process is called time-slicing.
An approximation for the path integral can be computed as proportional to

$$\int_{-\infty}^{+\infty}\cdots\int_{-\infty}^{+\infty}\exp\left(\frac{i}{\hbar}\int_{t_a}^{t_b}L\big(x(t), v(t)\big)\,dt\right)\,dx_1\cdots dx_n,$$

where L(x, v) is the Lagrangian of the one-dimensional system with position variable x(t) and velocity v = ẋ(t) considered (see below), and dxj corresponds to the position at the jth time step, if the time integral is approximated by a sum of n terms.[nb 2]

In the limit n → ∞, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes ⟨xb, tb|xa, ta⟩ (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at ta in the initial state xa and at tb in the final state xb.
Actually L is the classical Lagrangian of the one-dimensional system considered,

$$L(x, \dot x) = \frac{m}{2}\,\dot x^2 - V(x),$$

and the abovementioned "zigzagging" corresponds to the appearance of the terms

$$\varepsilon\sum_{j=1}^{n+1} L\!\left(\tilde x_j,\ \frac{x_j - x_{j-1}}{\varepsilon}\right)$$

in the Riemann sum approximating the time integral, which are finally integrated over x1 to xn with the integration measure dx1...dxn, where $\tilde x_j$ is an arbitrary value of the interval corresponding to j, e.g. its center, $(x_j + x_{j-1})/2$.
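The sketch below makes time-slicing concrete numerically (an illustration, not taken from the sources: it uses the imaginary-time version of the free-particle kernel, where the integrand is a decaying Gaussian rather than an oscillatory phase, so ordinary quadrature converges):

import numpy as np

# Free particle, m = hbar = 1, imaginary time. Chain n short-time kernels
#   K_eps(x, y) = exp(-(x - y)^2 / (2 eps)) / sqrt(2 pi eps)
# on a spatial grid; each matrix product is one integral over an
# intermediate position x_j. Compare with the exact kernel at total time T.
n, T = 50, 1.0
eps = T / n
x = np.linspace(-10, 10, 801)
dx = x[1] - x[0]

K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

psi = np.zeros_like(x)
psi[len(x) // 2] = 1.0 / dx                  # delta function at x_a = 0
for _ in range(n):
    psi = K @ psi * dx                       # one time slice

exact = np.exp(-x**2 / (2 * T)) / np.sqrt(2 * np.pi * T)
print(np.max(np.abs(psi - exact)))           # tiny discretisation error

The real-time amplitude is recovered from this Euclidean kernel by analytic continuation, as discussed in the section on Wick rotation below.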
Path integral formula
In terms of the wave function in the position representation, the path integral formula reads as follows:

$$\psi(x, t) = \frac{1}{Z}\int_{x(0)=x} e^{\,iS[x,\dot x]/\hbar}\;\psi_0\big(x(t)\big)\,\mathcal{D}x,$$

where $\mathcal{D}x$ denotes integration over all paths with $x(0) = x$ and where Z is a normalization factor. Here S is the action, given by

$$S[x, \dot x] = \int L\big(x(t), \dot x(t)\big)\,dt.$$
Free particle
The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free-particle action (for simplicity let m = 1, ħ = 1)

$$S = \int \frac{\dot x^2}{2}\,dt,$$

the integral can be evaluated explicitly.

Splitting the integral into time slices:

$$K(x - y; T) = \int_{x(0)=x}^{x(T)=y} \mathcal{D}x\,\prod_t \exp\left(\frac{i}{2}\,\frac{\big(x(t+\varepsilon) - x(t)\big)^2}{\varepsilon}\right),$$

where $\mathcal{D}x$ is understood as a product of ordinary integrals over the intermediate positions, and the result is

$$K(x - y; T) \propto e^{\,i(x-y)^2/2T}.$$
The proportionality constant is not really determined by the time-slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process.
This condition, that the kernel integrate to one over the final position,

$$\int K(x - y; T)\,dy = 1,$$

normalizes the Gaussian and produces a kernel that obeys the diffusion equation:

$$\frac{\partial}{\partial t}K(x; t) = \frac{1}{2}\,\nabla^2 K(x; t).$$
For oscillatory path integrals, ones with an i in the numerator, the time slicing produces convolved Gaussians, just as before. Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment ε. This is closely related to Wick rotation. Then the same convolution argument as before gives the propagation kernel:

$$K(x - y; T) = \frac{1}{\sqrt{2\pi i T}}\;e^{\,i(x-y)^2/2T},$$

which obeys the free Schrödinger equation

$$i\,\frac{\partial K}{\partial t} = -\frac{1}{2}\,\nabla^2 K.$$
This means that any superposition of Ks will also obey the same equation, by linearity. Defining

$$\psi_t(y) = \int \psi_0(x)\,K(x - y; t)\,dx,$$

then ψt obeys the free Schrödinger equation just as K does:

$$i\,\frac{\partial \psi_t}{\partial t} = -\frac{1}{2}\,\nabla^2\psi_t.$$
Simple harmonic oscillator
The Lagrangian for the simple harmonic oscillator is

$$\mathcal{L} = \frac{1}{2}m\dot x^2 - \frac{1}{2}m\omega^2 x^2.$$

Write its trajectory x(t) as the classical trajectory plus some perturbation, x(t) = xc(t) + δx(t) and the action as S = Sc + δS. The classical trajectory can be written as

$$x_c(t) = x_i\,\frac{\sin\omega(t_f - t)}{\sin\omega(t_f - t_i)} + x_f\,\frac{\sin\omega(t - t_i)}{\sin\omega(t_f - t_i)}.$$

This trajectory yields the classical action

$$S_c = \frac{m\omega\left[\left(x_i^2 + x_f^2\right)\cos\omega(t_f - t_i) - 2\,x_i x_f\right]}{2\,\sin\omega(t_f - t_i)}.$$
Next, expand the non-classical contribution to the action δS as a Fourier series, $\delta x(t) = \sum_n a_n \sin\big(n\pi(t - t_i)/(t_f - t_i)\big)$, which gives

$$\delta S = \frac{m\,(t_f - t_i)}{4}\sum_{n=1}^{\infty} a_n^2\left(\frac{\pi^2 n^2}{(t_f - t_i)^2} - \omega^2\right).$$

This means that the propagator is

$$K(x_i, x_f; t_f - t_i) = Q(t_f - t_i)\;e^{iS_c}\prod_{n=1}^{\infty}\left(1 - \frac{\omega^2 (t_f - t_i)^2}{\pi^2 n^2}\right)^{-1/2}$$

for some normalization

$$Q(t_f - t_i) = \sqrt{\frac{m}{2\pi i\,(t_f - t_i)}},$$

which does not depend on ω (it is the free-particle normalization). Using the infinite-product representation of the sinc function,

$$\prod_{n=1}^{\infty}\left(1 - \frac{z^2}{\pi^2 n^2}\right) = \frac{\sin z}{z},$$

the propagator can be written as

$$K(x_i, x_f; t_f - t_i) = \sqrt{\frac{m\omega}{2\pi i\,\sin\omega(t_f - t_i)}}\;e^{iS_c}.$$
Let T = tf − ti. One may write this propagator in terms of energy eigenstates as

$$K(x_i, x_f; T) = \sum_n \psi_n(x_f)\,\psi_n^*(x_i)\;e^{-iE_nT}.$$

Using the identities i sin ωT = 1/2eiωT (1 − e−2iωT) and cos ωT = 1/2eiωT (1 + e−2iωT), this amounts to writing the prefactor and the exponent of eiSc as an overall factor e−iωT/2 times a power series in e−iωT. One may absorb all terms after the first e−iωT/2 into R(T), thereby obtaining

$$K(x_i, x_f; T) = e^{-i\omega T/2}\,R(T).$$

One may finally expand R(T) in powers of e−iωT: All terms in this expansion get multiplied by the e−iωT/2 factor in the front, yielding terms of the form

$$e^{-i\omega T/2}\,e^{-in\omega T} = e^{-i\left(n + \frac{1}{2}\right)\omega T}, \qquad n = 0, 1, 2, \ldots$$

Comparison to the above eigenstate expansion yields the standard energy spectrum for the simple harmonic oscillator,

$$E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega.$$
Coulomb potential
Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential at the origin. Only after replacing the time t by a path-dependent pseudo-time parameter is the singularity removed, and a time-sliced approximation then exists, which is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert.[6] The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.
The Schrödinger equation
Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of ẋ, the path integral has most weight for y close to x. In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula.) The exponential of the action is

$$e^{-i\,\varepsilon\,V(x)}\;e^{\,i\,(x-y)^2/2\varepsilon},$$

in units where m = ħ = 1.
Equations of motion
Start by considering the path integral with some fixed initial state
Now note that x(t) at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: x(t) = u(t) + ε(t) where ε(t) is a different shift at each time but ε(0) = ε(T) = 0, since the endpoints are not integrated:
The change in the integral from the shift is, to first infinitesimal order in ε:

$$i\int_0^T \left(\frac{\partial L}{\partial x}\,\varepsilon(t) + \frac{\partial L}{\partial \dot x}\,\dot\varepsilon(t)\right)dt$$

inside the exponential, which, integrating by parts in t, gives:

$$-\,i\int_0^T \left(\frac{d}{dt}\frac{\partial L}{\partial \dot x} - \frac{\partial L}{\partial x}\right)\varepsilon(t)\,dt.$$

Since the shift does not change the value of the integral and ε(t) is arbitrary, the expectation value of the Euler–Lagrange expression vanishes at every time; this is the Heisenberg equation of motion.
Stationary-phase approximation
If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive interference. This can be shown using the method of stationary phase applied to the propagator. As ħ decreases, the exponential in the integral oscillates rapidly in the complex domain for any change in the action. Thus, in the limit that ħ goes to zero, only points where the classical action does not vary contribute to the propagator.
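A finite-dimensional caricature of the argument (standard stationary-phase asymptotics, quoted for illustration): for a single integration variable,

$$\int dx\; e^{\,iS(x)/\hbar} \;\approx\; e^{\,iS(x_c)/\hbar}\,\sqrt{\frac{2\pi i\hbar}{S''(x_c)}}, \qquad S'(x_c) = 0,$$

so as ħ → 0 the integral is dominated by the neighbourhood of the stationary point xc, the analogue of the classical trajectory.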
Canonical commutation relations
Note that the distance that a random walk moves is proportional to √t, so that:

$$\Delta x \approx \sqrt{\Delta t}.$$
The quantity xẋ is ambiguous, with two possible meanings:

$$x\,\dot x \;\longrightarrow\; x(t)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon} \quad\text{or}\quad x(t+\varepsilon)\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon}.$$

In elementary calculus, the two are only different by an amount which goes to 0 as ε goes to 0. But in this case, the difference between the two is not 0:

$$\big[x(t+\varepsilon) - x(t)\big]\,\frac{x(t+\varepsilon) - x(t)}{\varepsilon} = \frac{\big(x(t+\varepsilon) - x(t)\big)^2}{\varepsilon},$$

which, by the random-walk scaling Δx ≈ √ε, remains of order one as ε → 0.
and the equations of motion for f derived from extremizing the action S corresponding to L just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purpose.
Defining the time order to be the operator order:

$$x\,\dot x - \dot x\,x = \frac{i\hbar}{m},$$

which is the canonical commutation relation [x̂, p̂] = iħ written in terms of the velocity p̂/m.
For a general statistical action, a similar argument shows that
Particle in curved space
For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied, this being a manifestation of the notorious operator ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation (a nonholonomic mapping).
Measure-theoretic factors
Sometimes (e.g., for a particle moving in curved space) the integrand in the functional integral also carries measure-theoretic factors; this factor is needed to restore unitarity.

For instance, if

$$\mu[x] = \prod_t g\big(x(t)\big),$$

then it means that each spatial slice is multiplied by the measure g. This measure cannot be expressed as a functional multiplying the Dx measure because they belong to entirely different classes.
Euclidean path integrals
It is very common in path integrals to perform a Wick rotation from real to imaginary times. In the setting of quantum field theory, the Wick rotation changes the geometry of space-time from Lorentzian to Euclidean; as a result, Wick-rotated path integrals are often called Euclidean path integrals.
Wick rotation and the Feynman–Kac formula
If we replace the time t by an imaginary time, t = −iτ, the time-evolution operator e−iĤt/ħ is replaced by e−Ĥτ/ħ. (This change is known as a Wick rotation.) If we repeat the derivation of the path-integral formula in this setting, we obtain[8]

$$\psi(x, \tau) = \frac{1}{Z}\int_{x(0)=x} e^{-S_{\mathrm{E}}[x,\dot x]/\hbar}\;\psi_0\big(x(\tau)\big)\,\mathcal{D}x,$$

where $S_{\mathrm{E}}$ is the Euclidean action, given by

$$S_{\mathrm{E}}[x, \dot x] = \int \left[\frac{m}{2}\,\dot x(\tau)^2 + V\big(x(\tau)\big)\right] d\tau.$$

Note the sign change between this and the normal action, where the potential energy term is negative. (The term Euclidean is from the context of quantum field theory, where the change from real to imaginary time changes the space-time geometry from Lorentzian to Euclidean.)

Now, the contribution of the kinetic energy to the path integral is as follows:

$$\frac{1}{Z}\int_{x(0)=x} e^{-\frac{m}{2\hbar}\int \dot x(\tau)^2\,d\tau}\;F[x]\,\mathcal{D}x,$$

where F[x] includes all the remaining dependence of the integrand on the path. This integral has a rigorous mathematical interpretation as integration against the Wiener measure, denoted $\mu_x$. The Wiener measure, constructed by Norbert Wiener, gives a rigorous foundation to Einstein's mathematical model of Brownian motion. The subscript x indicates that the measure is supported on paths with x(0) = x.

We then have a rigorous version of the Feynman path integral, known as the Feynman–Kac formula:[9]

$$\psi(x, \tau) = \int e^{-\frac{1}{\hbar}\int_0^\tau V(x(s))\,ds}\;\psi_0\big(x(\tau)\big)\,d\mu_x(x),$$

where now ψ(x, τ) satisfies the Wick-rotated version of the Schrödinger equation,

$$\hbar\,\frac{\partial}{\partial \tau}\psi(x, \tau) = -\hat H\,\psi(x, \tau).$$
Although the Wick-rotated Schrödinger equation does not have a direct physical meaning, interesting properties of the Schrödinger operator can be extracted by studying it.[10]
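As a concrete illustration, here is a minimal Monte Carlo sketch of the Feynman–Kac formula, under my own simplifying assumptions (units ħ = m = 1, Hamiltonian Ĥ = −½∂ₓ² + V with the harmonic potential V(x) = x²/2). Since the harmonic ground state ψ₀ has energy 1/2, the Brownian-path average should reproduce (e^{−TĤ}ψ₀)(0) = e^{−T/2}ψ₀(0).

# Monte Carlo Feynman-Kac sketch: (e^{-T H} psi0)(0) as a Brownian-path average.
import numpy as np

rng = np.random.default_rng(1)
V = lambda x: 0.5 * x**2                          # harmonic potential (assumption)
psi0 = lambda x: np.exp(-x**2 / 2) / np.pi**0.25  # its ground state, energy 1/2

T, steps, paths = 1.0, 200, 200000
dt = T / steps
x = np.zeros(paths)        # all paths pinned at x = 0
action = np.zeros(paths)   # accumulates int_0^T V(x(s)) ds along each path
for _ in range(steps):
    action += V(x) * dt
    x += rng.normal(0.0, np.sqrt(dt), paths)      # Wiener increments

estimate = np.mean(np.exp(-action) * psi0(x))
print(estimate, "vs exact", np.exp(-T / 2) * psi0(0.0))   # both ~0.456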
Much of the study of quantum field theories from the path-integral perspective, in both the mathematics and physics literatures, is done in the Euclidean setting, that is, after a Wick rotation. In particular, there are various results showing that if a Euclidean field theory with suitable properties can be constructed, one can then undo the Wick rotation to recover the physical, Lorentzian theory.[11] On the other hand, it is much more difficult to give a meaning to path integrals (even Euclidean path integrals) in quantum field theory than in quantum mechanics.[12]
The path integral and the partition function
is the action of the classical problem in which one investigates the path starting at time t = 0 and ending at time t = T, and $\mathcal{D}x$ denotes integration over all paths. In the classical limit, $S \gg \hbar$, the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel.[13]
The connection with statistical mechanics follows. Considering only paths which begin and end in the same configuration, perform the Wick rotation it = τ, i.e., make time imaginary, and integrate over all possible beginning-ending configurations. The Wick-rotated path integral—described in the previous subsection, with the ordinary action replaced by its "Euclidean" counterpart—now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature proportional to imaginary time, 1/T = kBτ/ħ. Strictly speaking, though, this is the partition function for a statistical field theory.
which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Erwin Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation. Note, however, that the Euclidean path integral is actually in the form of a classical statistical mechanics model.
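The correspondence can be checked numerically for the harmonic oscillator. The sketch below is my own construction (units ħ = m = ω = 1, and the time slicing is one standard choice): it evaluates the discretized Euclidean path integral over periodic paths exactly, as a Gaussian determinant, and compares the result with the known partition function Z = 1/(2 sinh(β/2)).

# Discretized Euclidean path integral over periodic paths as a partition function.
import numpy as np

def Z_path_integral(beta, N=1000, m=1.0, omega=1.0):
    eps = beta / N
    # Quadratic form of S_E = sum_i [ m (x_{i+1}-x_i)^2 / (2 eps) + eps m w^2 x_i^2 / 2 ]
    C = np.roll(np.eye(N), 1, axis=1)  # cyclic shift enforcing periodicity x_N = x_0
    A = (m / eps) * (2 * np.eye(N) - C - C.T) + eps * m * omega**2 * np.eye(N)
    sign, logdet = np.linalg.slogdet(A)
    # Gaussian integral with the (m / 2 pi eps)^{N/2} time-slicing measure:
    return np.exp(0.5 * N * np.log(m / eps) - 0.5 * logdet)

beta = 2.0
print(Z_path_integral(beta), "vs exact", 1.0 / (2.0 * np.sinh(beta / 2)))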
Quantum field theory
Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation
The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg-type operator algebra to operator product rules, which are new relations difficult to see in the old formalism.
Further, different choices of canonical variables lead to very different-seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete.
The propagator
This is called the propagator. Superposing different values of the initial position x with an arbitrary initial state ψ0(x) constructs the final state:
For a spatially homogeneous system, where K(x, y) is only a function of (x − y), the integral is a convolution: the final state is the initial state convolved with the propagator:
For a free particle of mass m, the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time, and the solution must be a normalized Gaussian:
Taking the Fourier transform in (x − y) produces another Gaussian:
and in p-space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending K(p; T) to be zero for negative times, gives Green's function, or the frequency-space propagator:
which is the reciprocal of the operator that annihilates the wavefunction in the Schrödinger equation, which wouldn't have come out right if the proportionality factor weren't constant in the p-space representation.
The infinitesimal term in the denominator is a small positive number, which guarantees that the inverse Fourier transform in E will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of E where there is no singularity. This guarantees that K propagates the particle into the future and is the reason for the subscript "F" on G. The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time.
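This causal role of the infinitesimal term can be checked numerically. The following sketch (my choice of E₀, ε and integration grid) inverse-Fourier-transforms 1/(E − E₀ + iε) and confirms that the result is essentially zero for t < 0 and −i e^{−iE₀t − εt} for t > 0.

# The +i eps prescription selects future times (illustrative numerical check).
import numpy as np

E0, eps = 1.0, 1e-2
E = np.linspace(-200.0, 200.0, 400001)
dE = E[1] - E[0]

def G(t):
    # Inverse Fourier transform in E, done by brute-force quadrature:
    return np.sum(np.exp(-1j * E * t) / (E - E0 + 1j * eps)) * dE / (2 * np.pi)

for t in [-2.0, -0.5, 0.5, 2.0]:
    expected = -1j * np.exp(-1j * E0 * t - eps * t) if t > 0 else 0.0
    print(f"t={t:+.1f}: G={G(t):.4f}  expected {expected:.4f}")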
It is also possible to reexpress the nonrelativistic time evolution in terms of propagators going toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian t is replaced by −t. In this case, the interpretation is that these are the quantities with which to convolve the final wavefunction so as to get the initial wavefunction:
Given that the two are nearly identical, the only change being the sign of E and ε, the parameter E in Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past.
For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths that travel between two points in a fixed proper time, as measured along the path (these paths describe the trajectory of a particle in space and in time):
The integral above is not trivial to interpret because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arc length of the path of an oscillating quantity, and like the nonrelativistic path integral it should be interpreted as slightly rotated into imaginary time. The function K(x − y, τ) can be evaluated when the sum is over paths in Euclidean space:
This describes a sum over all paths of length Τ of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to Τ, and each step is less likely the longer it is. By the central limit theorem, the result of many independent steps is a Gaussian of variance proportional to Τ:
where W(Τ) is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor and can be absorbed into the constant α:
This is the Schwinger representation. Taking a Fourier transform over the variable (x − y) can be done for each value of Τ separately, and because each separate Τ contribution is a Gaussian, its Fourier transform is another Gaussian with reciprocal width. So in p-space, the propagator can be reexpressed simply:
which is the Euclidean propagator for a scalar particle. Rotating p0 to be imaginary gives the usual relativistic propagator, up to a factor of i and an ambiguity, which will be clarified below:
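A one-line numerical check of the Schwinger proper-time representation (my sketch; the values of p² and m² are arbitrary): integrating e^{−Τ(p² + m²)} over proper time reproduces the Euclidean propagator 1/(p² + m²).

# Schwinger proper-time integral reproduces the Euclidean propagator.
import numpy as np

p2, m2 = 1.5, 1.0                       # arbitrary illustrative values
T = np.linspace(0.0, 50.0, 200001)      # proper-time grid
dT = T[1] - T[0]
print(np.sum(np.exp(-T * (p2 + m2))) * dT, "vs", 1.0 / (p2 + m2))  # both ~0.4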
The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies that are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle. The nonrelativistic analysis shows that with this form the antiparticle still has positive energy.
So in the relativistic case, the Feynman path-integral representation of the propagator includes paths going backwards in time, which describe antiparticles. The paths that contribute to the relativistic propagator go forwards and backwards in time, and the interpretation is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again.
Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses that are nonzero outside the light cone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function which is only nonzero in the future in a relativistically invariant theory.
Functionals of fields
Expectation values
In quantum field theory, if the action is given by the functional S of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of a polynomially bounded functional F, ⟨F⟩, is given by
The symbol Dϕ here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space-time. As stated above, the unadorned path integral in the denominator ensures proper normalization.
As a probability
Strictly speaking, the only question that can be asked in physics is: What fraction of states satisfying condition A also satisfy condition B? The answer to this is a number between 0 and 1, which can be interpreted as a conditional probability, written as P(B|A). In terms of path integration, since P(B|A) = P(AB) / P(A), this means
where the functional Oin[ϕ] is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the Big Bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalised.
Schwinger–Dyson equations
Since this formulation of quantum mechanics is analogous to the classical action principle, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case.
If the functional measure Dϕ turns out to be translationally invariant (we'll assume this for the rest of this article, although it does not hold for, say, nonlinear sigma models), and if we assume that after a Wick rotation the exponential of the action, which now becomes
$e^{-H[\phi]/\hbar}$
for some H, goes to zero faster than a reciprocal of any polynomial for large values of φ, then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation:
for any polynomially-bounded functional F. In the deWitt notation this looks like[14]
These equations are the analog of the on-shell Euler–Lagrange equations. The time ordering is taken before the time derivatives inside the S,i.
If J (called the source field) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional Z of the source fields is defined to be
Note that
Basically, if Dφ eiS[φ] is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time ordering complications here!), then φ(x1) ... φ(xn) are its moments, and Z is its Fourier transform.
If F is a functional of φ, then for an operator K, F[K] is defined to be the operator that substitutes K for φ. For example, if
and G is a functional of J, then
Then, from the properties of the functional integrals
we get the "master" Schwinger–Dyson equation:
If the functional measure is not translationally invariant, it might be possible to express it as the product M[φ] Dφ, where M is a functional and Dφ is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to Rn. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense.
In that case, we would have to replace the S in this equation by another functional
The path integrals are usually thought of as being the sum of all paths through an infinite space–time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone. This gives a more mathematically precise and physically rigorous definition of quantum field theory.
Ward–Takahashi identities
Let's also assume
which implies
Now, let's assume even further that Q is a local integral
so that
Then, we would have
The above two equations are the Ward–Takahashi identities.
The need for regulators and renormalization
The path integral in quantum-mechanical interpretation
In one interpretation of quantum mechanics, the "sum over histories" interpretation, the path integral is taken to be fundamental, and reality is viewed as a single indistinguishable "class" of paths that all share the same events. For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin[15] claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality.
Quantum gravity
Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert space model. Feynman had some success in this direction, and his work has been extended by Hawking and others.[16] Approaches that use this method include causal dynamical triangulations and spinfoam models.
Quantum tunneling
Quantum tunnelling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation, the tunneling rate (Γ) can be determined to be of the form
with the effective action Seff and pre-exponential factor A0. This form is specifically useful in a dissipative system, in which the system and surroundings must be modeled together. Using the Langevin equation to model Brownian motion, the path integral formulation can be used to determine an effective action and pre-exponential factor, to see the effect of dissipation on tunnelling.[17] From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted.
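As a small illustration of the exponent in that form (my sketch, with an arbitrary parabolic barrier and ħ = m = 1; this is the bare WKB factor, without the dissipative corrections discussed above), the effective action is twice the integral of sqrt(2m(V − E)) across the classically forbidden region:

# Bare WKB tunneling exponent for an illustrative parabolic barrier.
import numpy as np

hbar, m, E = 1.0, 1.0, 0.5
V = lambda x: 2.0 * (1.0 - x**2)        # barrier of height 2 (assumption)
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
forbidden = V(x) > E                    # classically forbidden region
S_eff = 2.0 * np.sum(np.sqrt(2.0 * m * (V(x[forbidden]) - E))) * dx
print(f"S_eff = {S_eff:.3f}, tunneling factor exp(-S_eff/hbar) = {np.exp(-S_eff / hbar):.3e}")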
See also
For a simplified, step-by-step derivation of the above relation, see Path Integrals in Quantum Theories: A Pedagogic 1st Step.
• Albeverio, S.; Høegh-Krohn, R.; Mazzucchi, S. (2008). Mathematical Theory of Feynman Path Integrals. Lecture Notes in Mathematics 523. Springer-Verlag. ISBN 9783540769569.
• Caldeira, A. O.; Leggett, A. J. (1983). "Quantum tunnelling in a dissipative system". Annals of Physics. 149 (2): 374–456. Bibcode:1983AnPhy.149..374C. doi:10.1016/0003-4916(83)90202-6.
• Cartier, P.; DeWitt-Morette, Cécile (1995). "A new perspective on Functional Integration". Journal of Mathematical Physics. 36 (5): 2237–2340. arXiv:funct-an/9602005. Bibcode:1995JMP....36.2237C. doi:10.1063/1.531039.
• Chaichian, M.; Demichev, A. P. (2001). "Introduction". Path Integrals in Physics Volume 1: Stochastic Process & Quantum Mechanics. Taylor & Francis. p. 1ff. ISBN 0-7503-0801-X.
• DeWitt-Morette, C. (1972). "Feynman's path integral: Definition without limiting procedure". Communications in Mathematical Physics. 28 (1): 47–67. Bibcode:1972CMaPh..28...47D. doi:10.1007/BF02099371. MR 0309456.
• Dirac, Paul A. M. (1933). "The Lagrangian in Quantum Mechanics" (PDF). Physikalische Zeitschrift der Sowjetunion. 3: 64–72.
• Feynman, R. P. (2005) [1942/1948]. Brown, L. M, ed. Feynman's Thesis — A New Approach to Quantum Theory. World Scientific. ISBN 978-981-256-366-8. The 1942 thesis. Also includes Dirac's 1933 paper and Feynman's 1948 publication.
• Feynman, R. P. (1948). "Space-Time Approach to Non-Relativistic Quantum Mechanics". Reviews of Modern Physics. 20 (2): 367–387. Bibcode:1948RvMP...20..367F. doi:10.1103/RevModPhys.20.367.
• Feynman, R. P.; Hibbs, A. R.; Styer, D. F. (2010). Quantum Mechanics and Path Integrals. Mineola, NY: Dover Publications. pp. 29–31. ISBN 0-486-47722-3.
• Gell-Mann, Murray (1993). "Most of the Good Stuff". In Brown, Laurie M.; Rigden, John S. Memories Of Richard Feynman. American Institute of Physics. ISBN 978-0883188705.
• Glimm, J.; Jaffe, A. (1981). Quantum Physics: A Functional Integral Point of View. New York: Springer-Verlag. ISBN 0-387-90562-6.
• Hall, Brian C. (2013). Quantum Theory for Mathematicians. Graduate Texts in Mathematics. 267. Springer. doi:10.1007/978-1-4614-7116-5. ISBN 978-1-4614-7115-8.
• Janke, W.; Pelster, Axel, eds. (2008). Path Integrals--New Trends And Perspectives. Proceedings Of The 9Th International Conference. World Scientific Publishing. ISBN 978-981-283-726-4.
• MacKenzie, Richard (2000). "Path Integral Methods and Applications". arXiv:quant-ph/0004090.
• Müller-Kirsten, Harald J. W. (2012). Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral (2nd ed.). Singapore: World Scientific.
• Rivers, R. J. (1987). Path Integrals Methods in Quantum Field Theory. Cambridge University Press. ISBN 0-521-25979-7.
• Simon, B. (1979). Functional Integration and Quantum Physics. New York: Academic Press. ISBN 0-8218-6941-8.
• Sinha, Sukanya; Sorkin, Rafael D. (1991). "A Sum-over-histories Account of an EPR(B) Experiment" (PDF). Foundations of Physics Letters. 4 (4): 303–335. Bibcode:1991FoPhL...4..303S. doi:10.1007/BF00665892.
• Van Vleck, J. H. (1928). "The correspondence principle in the statistical interpretation of quantum mechanics". Proceedings of the National Academy of Sciences of the United States of America. 14 (2): 178–188. Bibcode:1928PNAS...14..178V. doi:10.1073/pnas.14.2.178. PMC 1085402. PMID 16577107.
• Weinberg, S. (2002) [1995], Foundations, The Quantum Theory of Fields, 1, Cambridge: Cambridge University Press, ISBN 0-521-55001-7
• Zinn-Justin, J. (2004). Path Integrals in Quantum Mechanics. Oxford University Press. ISBN 0-19-856674-3. |
37c44b8e5b7a740c | As always, the best way to get some intuition about an equation is to solve it for some simple cases, so let's give that a try with different fixed potentials.
Video 1. Simulation of the time-dependent Schrodinger equation (JavaScript Animation) by Coding Physics (2019) Source.
Source code: One dimensional potentials, non-interacting particles. The code is clean, with graphics and all maths written from scratch. Source organization and comments are typical of numerical code; the anonymous author was likely a Fortran user in the past.
A potential change patch in sketch.js:
- potential: x => 2E+4*Math.pow((4*x - 1)*(4*x - 3),2),
+ potential: x => 4*Math.pow(x - 0.5, 2),
Video 2. Quantum Mechanics 5b - Schrödinger Equation II by ViaScience (2013) Source. 2D non-interacting particle in a box, description says using Scilab and points to source. Has a double slit simulation.
Video 3. Visualization of Quantum Physics (Quantum Mechanics) by udiprod (2017) Source. Closed source, but a fantastic visualization and explanation of a 1D free wave packet, including how measurement snaps position to the measured range, position and momentum space and the uncertainty principle.
We select for the general Equation "Schrodinger equation":
giving the full explicit partial differential equation:
Equation 1. Schrödinger equation for a one dimensional particle
The corresponding time-independent Schrödinger equation for this equation is:
Equation 2. time-independent Schrödinger equation for a one dimensional particle
This equation is a subcase of Equation "Schrödinger equation for a one dimensional particle" with $V(x) = \frac{1}{2} m \omega^2 x^2$.
Now, there are two ways to go about this.
The first is the stupid "here's a guess" + "hey, this family of solutions forms a complete basis"! This is exactly how we solved the problem at Section "Solving partial differential equations with the Fourier series", except that now the complete basis is made up of the Hermite functions.
The second is the much celebrated ladder operator method.
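Either way, the spectrum E_n = (n + 1/2)ħω is easy to cross-check numerically. A minimal sketch (my construction; units ħ = m = ω = 1, central finite differences on a truncated grid, both of which are assumptions):

# Diagonalize H = -1/2 d^2/dx^2 + x^2/2 on a grid; expect E_n ~ n + 1/2.
import numpy as np

N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = L / (N - 1)
# Second derivative by central finite differences:
D2 = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)
print(np.linalg.eigvalsh(H)[:5])  # ~ [0.5, 1.5, 2.5, 3.5, 4.5]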
A quantum version of the LC circuit!
TODO are there experiments, or just theoretical?
I.e.: they are both:
Not the same as Hermite polynomials.
The operators are a natural guess on the lines of "if p and x were integers".
And then we can prove the ladder properties easily.
The commutator appears in the middle of this analysis.
• flash memory uses quantum tunneling as the basis for setting and resetting bits
• alpha decay is understood as a quantum tunneling effect in the nucleus
It is the only atom that has a closed-form solution, which allows for very good predictions, and gives awesome intuition about the orbitals in general.
It is arguably the most important solution of the Schrodinger equation.
In particular, it predicts:
The explicit solution can be written in terms of spherical harmonics.
Video 1. A Better Way To Picture Atoms by minutephysics (2021) Source. Renderings based on the exact Schrödinger equation solution for the hydrogen atom that depict wave function concentration by concentration of small balls, and angular momentum by how fast the balls rotate at each point. Mentions that the approach is inspired by de Broglie-Bohm theory.
In the case of the Schrödinger equation solution for the hydrogen atom, each orbital is one eigenvector of the solution.
Remember from time-independent Schrödinger equation that the final solution is just the weighted sum of the eigenvector decomposition of the initial state, analogously to solving partial differential equations with the Fourier series.
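A minimal numerical sketch of that recipe (my construction; the grid, potential and initial state are illustrative choices, with ħ = m = 1): diagonalize the Hamiltonian once, project the initial state onto the eigenbasis, and each eigenvector then just picks up a rotating phase.

# Time evolution as a weighted sum over the eigenvector decomposition.
import numpy as np

N, L = 800, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = L / (N - 1)
D2 = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)       # harmonic potential as an example
E, U = np.linalg.eigh(H)                  # columns of U are the eigenvectors

psi0 = np.exp(-(x - 2.0) ** 2)            # displaced Gaussian initial state
psi0 = psi0 / np.linalg.norm(psi0)
c = U.T @ psi0                            # weights of the eigenvector decomposition

def psi(t):
    return U @ (np.exp(-1j * E * t) * c)  # each weight rotates with phase e^{-i E_n t}

print(np.linalg.norm(psi(3.0)))           # stays 1: the evolution is unitary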
This is the table that you should have in mind to visualize them:
Quantum numbers appear directly in the Schrödinger equation solution for the hydrogen atom.
However, it is very cool that they were actually discovered before the Schrödinger equation, and are present in the Bohr model (principal quantum number) and the Bohr-Sommerfeld model (azimuthal quantum number and magnetic quantum number) of the atom. This must be because they observed direct effects of those numbers in some experiments. TODO which experiments.
E.g. The Quantum Story by Jim Baggott (2011) page 34 mentions:
As the various lines in the spectrum were identified with different quantum jumps between different orbits, it was soon discovered that not all the possible jumps were appearing. Some lines were missing. For some reason certain jumps were forbidden. An elaborate scheme of ‘selection rules’ was established by Bohr and Sommerfeld to account for those jumps that were allowed and those that were forbidden.
This refers to the forbidden mechanism. TODO: concrete example, ideally the first one to be noticed. How can you notice this if the energy depends only on the principal quantum number?
Video 1. Quantum Numbers, Atomic Orbitals, and Electron configurations by Professor Dave Explains (2015) Source. He does not say the key words "eigenvalues of the Schrödinger equation" (which solve it), but the summary of results is good enough.
Determines energy. This comes out directly from the resolution of the Schrödinger equation for the hydrogen atom, where we have to set some arbitrary values of energy by separation of variables, just like we have to set some arbitrary numbers when solving partial differential equations with the Fourier series. We then just happen to see that only certain integer values are possible to satisfy the equations.
The direction however is not specified by this number.
To determine the quantum angular momentum, we need the magnetic quantum number, which then selects which orbital exactly we are talking about.
Fixed quantum angular momentum in a given direction.
Can range between $-l$ and $+l$.
E.g. consider gallium which is 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p1:
• the electrons in s-orbitals such as 1s, 2s, 3s and 4s have $l = 0$, and so the only value for $m_l$ is 0
• the electrons in p-orbitals such as 2p, 3p and 4p have $l = 1$, and so the possible values for $m_l$ are -1, 0 and 1
• the electrons in d-orbitals such as 3d have $l = 2$, and so the possible values for $m_l$ are -2, -1, 0, 1 and 2
The z component of the quantum angular momentum is simply $L_z = m_l \hbar$:
so e.g. again for gallium:
• s-orbitals: necessarily have 0 z angular momentum
• p-orbitals: have z angular momentum of either 0 or $\pm\hbar$
Note that this direction is arbitrary, since for a fixed azimuthal quantum number (and therefore fixed total angular momentum), we can only know one direction for sure. The z direction is normally used by convention.
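The bookkeeping above is mechanical enough to script; a tiny illustration (mine, matching the gallium examples):

# Allowed m_l values per orbital type: s -> l = 0, p -> l = 1, d -> l = 2.
for l, name in [(0, "s"), (1, "p"), (2, "d")]:
    print(f"{name}-orbitals (l={l}): m_l in {list(range(-l, l + 1))}")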
This notation is cool as it gives the spin quantum number, which is important e.g. when talking about hyperfine structure.
But it is a bit crap that the spin is not given simply as $\pm 1/2$, but rather mixes up both the azimuthal quantum number and spin. What is the reason?
TODO. Can't find it easily. Anyone?
This is closely linked to the Pauli exclusion principle.
What does a particle even mean, right? Especially in quantum field theory, where two electrons are just vibrations of a single electron field.
Another issue is that if we consider magnetism, things only make sense if we add special relativity, since Maxwell's equations require special relativity, so a non-approximate solution for this will necessarily require full quantum electrodynamics.
As mentioned at lecture 1, relativistic quantum mechanical theories like the Dirac equation and Klein-Gordon equation make no sense for a "single particle": they must imply that particles can pop in out of existence.
Just ignore the electron-electron interactions.
No closed form solution, but a good approximation that can be calculated by hand with the Hartree-Fock method, see Hartree-Fock method for the helium atom.
Video 1. Quantum Chemistry 9.2 - Helium Atom Energy Approximations by TMP Chem (2016) Source. Video gives the actual numerical value of various methods, second-order perturbation theory being very close. But it then says that the following videos will only do the Hartree-Fock method.
That is, two electrons per atomic orbital, each with a different spin.
As shown at Schrödinger equation solution for the helium atom, they do repel each other, and that affects their measurable energy.
However, this energy is still lower than going up to the next orbital. TODO numbers.
This changes however at higher orbitals, notably as approximately described by the aufbau principle.
Boring rule that says that less energetic atomic orbitals are filled first.
Much more interesting is actually determining that order, which the Madelung energy ordering rule is a reasonable approximation to.
We will sometimes just write them without the superscript, as it saves typing and the superscript is redundant.
The principal quantum number thing fully determining energy is only true for the hydrogen emission spectrum for which we can solve the Schrödinger equation explicitly.
For other atoms with more than one electron, the orbital names are just a very good approximation/perturbation, as we don't have an explicit solution. And the internal electrons do change energy levels.
Note however that due to the more complex effect of the Lamb shift from QED, there is actually a very small 2p/2s shift even in hydrogen.
Looking at the energy levels of the Schrödinger equation solution for the hydrogen atom, you would guess that for multi-electron atoms only the principal quantum number would matter, with the azimuthal quantum number getting filled randomly.
However, orbitals energies for large atoms don't increase in energy like those of hydrogen due to electron-electron interactions.
In particular, the following would not be naively expected:
• 2s fills up before 2p. From the hydrogen solution, you might guess that they would randomly go into either one as they'd have the same energy
• 4s in potassium fills up before 3d, even though it has a higher principal quantum number!
This rule is only an approximation; there exist exceptions to the Madelung energy ordering rule.
The most notable exception is the borrowing of 3d-orbital electrons to 4s as in chromium, leading to a 3d5 4s1 configuration instead of the 3d4 4s2 we would have with the rule. TODO: how is that observed experimentally?
This notation is so confusing! People often don't manage to explain the intuition behind it, why this is a useful notation. When you see Indian university entry exam level memorization classes about this, it makes you want to cry.
The key reason why term symbols matter is Hund's rules, which allow us to predict with some accuracy which of those states has more energy than the other. One source puts it well: electron configuration notation is not specific enough, as each such notation, e.g. 1s2 2s2 2p2, contains several options of spins and z angular momentum. And those affect energy.
This is why those symbols are often used when talking about energy differences: they specify more precisely which levels you are talking about.
Basically, each term symbol appears to represent a group of possible electron configurations with a given quantum angular momentum.
We first fix the energy level by saying at which orbital each electron can be (hyperfine structure is ignored). It doesn't even have to be the ground state: we can make some electrons excited at will.
The best thing to learn this is likely to draw out all the possible configurations explicitly, and then understand what is the term symbol for each possible configuration, see e.g. term symbols for carbon ground state.
It is also confusing how uppercase letters S, P and D are used, when they do not refer to orbitals s, p and d, but rather to states which have the same angular momentum as individual electrons in those states.
It is also very confusing how extremely close it looks to spectroscopic notation!
The form of the term symbol is:
The $2S+1$ prefix can be understood directly as the degeneracy: how many configurations we have in that state.
Video 1. Atomic Term Symbols by TMP Chem (2015) Source.
Video 2. Atomic Term Symbols by T. Daniel Crawford (2016) Source.
Allow us to determine with good approximation in a multi-electron atom which electron configurations have more energy. It is a bit like the Aufbau principle, but at a finer resolution.
Note that this is not trivial since there is no explicit solution to the Schrödinger equation for multi-electron atoms like there is for hydrogen.
For example, consider carbon which has electron configuration 1s2 2s2 2p2.
If we were to populate the 3 p-orbitals with two electrons with spins either up or down, which has more energy? E.g. of the following two:
m_l -1 0 1
u_ u_ __
u_ __ u_
__ ud __
Higher spin multiplicity means lower energy. I.e.: you want to keep all spins pointing in the same direction.
This example is covered for example at Video 1. "Term Symbols Example 1 by TMP Chem (2015)".
Carbon has electronic structure 1s2 2s2 2p2.
For term symbols we only care about unfilled layers, because in every filled layer the total z angular momentum is 0, as the electrons necessarily cancel each other out:
• the magnetic quantum number varies from -l to +l, giving z angular momenta from $-l\hbar$ to $+l\hbar$, and so each one cancels another out
• the spin quantum number is either plus or minus half, and so each pair of electrons cancels each other out
So in this case, we only care about the 2 electrons in 2p2. Let's list out all possible ways in which the 2p2 electrons can be.
There are 3 p orbitals, with three different magnetic quantum numbers, each representing a different possible z quantum angular momentum.
We are going to distribute 2 electrons with 2 different spins across them. All the possible distributions that don't violate the Pauli exclusion principle are:
m_l +1 0 -1 m_L m_S
u_ u_ __ 1 1
u_ __ u_ 0 1
__ u_ u_ -1 1
d_ d_ __ 1 -1
d_ __ d_ 0 -1
__ d_ d_ -1 -1
u_ d_ __ 1 0
d_ u_ __ 1 0
u_ __ d_ 0 0
d_ __ u_ 0 0
__ u_ d_ -1 0
__ d_ u_ -1 0
ud __ __ 2 0
__ ud __ 0 0
__ __ ud -2 0
• m_l is the magnetic quantum number of each electron. Remember that this gives an orbital (non-spin) z quantum angular momentum of $m_l \hbar$ to each such electron
• m_L with a capital L is the sum of the $m_l$ of each electron
• m_S with a capital S is the sum of the spin angular momentum of each electron
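The table can be reproduced mechanically; here is a small sketch (my code) that enumerates every Pauli-allowed assignment of the two electrons to the six one-electron states (m_l, m_s) and tabulates m_L and m_S:

# Enumerate the 15 Pauli-allowed microstates of 2p^2 with their m_L and m_S.
from itertools import combinations

one_electron = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]
microstates = list(combinations(one_electron, 2))  # no repeated (m_l, m_s) pair
print(len(microstates), "microstates")             # 6 choose 2 = 15, as above
for (ml1, ms1), (ml2, ms2) in microstates:
    print(f"m_L = {ml1 + ml2:+d}  m_S = {ms1 + ms2:+.0f}")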
For example, on the first line:
m_l +1 0 -1 m_L m_S
u_ u_ __ 1 1
we have:
• one electron with $m_l = 1$, and so z angular momentum $\hbar$
• one electron with $m_l = 0$, and so z angular momentum 0
and so the sum of them has z angular momentum $\hbar$. So the value of $m_L$ is 1; we just omit the $\hbar$.
TODO: now I don't understand the logic behind the next steps... I understand how to mechanically do them, but what do they mean? Can you determine the term symbol for individual microstates at all? Or do you have to group them to get the answer? Since there are multiple choices in some steps, it appears that you can't assign a specific term symbol to an individual microstate. And it has something to do with the Slater determinant. The previous lecture mentions it, more precisely about carbon: it mentions that $^3D$ is not allowed because it would imply $m_L = 2$ and $m_S = 1$, which would be a state uu __ __ that violates the Pauli exclusion principle, and so was not listed on our list of 15 states.
He then goes for $^1D$ and mentions:
• S = 0, so $m_S$ can only be 0
• L = 2 (D), so $m_L$ ranges over -2, -1, 0, 1, 2
and so that corresponds to states on our list:
ud __ __ 2 0
u_ d_ __ 1 0
u_ __ d_ 0 0
__ u_ d_ -1 0
__ __ ud -2 0
Note that for some values we had two choices, so we just pick any one of them and tick them off from the table, which now looks like:
+1 0 -1 m_L m_S
u_ u_ __ 1 1
u_ __ u_ 0 1
__ u_ u_ -1 1
d_ d_ __ 1 -1
d_ __ d_ 0 -1
__ d_ d_ -1 -1
d_ u_ __ 1 0
d_ __ u_ 0 0
__ d_ u_ -1 0
__ ud __ 0 0
Then for $^3P$ the choices are:
• S = 1, so $m_S$ is either -1, 0 or 1
• L = 1 (P), so $m_L$ ranges over -1, 0, 1
so we have 9 possibilities for both together. We again verify that 9 such states are left matching those criteria, and tick them off, and so on.
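The whole tick-off procedure can be written as a small algorithm (my sketch; it encodes the steps above: take the largest remaining |m_L|, then the largest m_S at that m_L, and remove one microstate per (m_L, m_S) combination of the resulting term):

# "Tick off" algorithm extracting the term symbols of 2p^2: expect 1D, 3P, 1S.
from itertools import combinations

one_electron = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]
micro = [(a[0] + b[0], a[1] + b[1]) for a, b in combinations(one_electron, 2)]

letters = "SPDFG"
while micro:
    L = int(max(abs(mL) for mL, mS in micro))
    S = max(mS for mL, mS in micro if abs(mL) == L)
    for mL in range(-L, L + 1):           # remove one state per (m_L, m_S) pair
        for k in range(int(2 * S) + 1):
            micro.remove((mL, S - k))
    print(f"^{int(2 * S + 1)}{letters[L]} with {(2 * L + 1) * int(2 * S + 1)} states")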
For the $m_S = 1$ states, we have two electrons with spin up. The spin angular momentum of each electron is $\hbar/2$, and so given that we have two, the total is $\hbar$, so again we omit the $\hbar$ and $m_S$ is 1.
Video 1. Term Symbols Example 1 by TMP Chem (2015) Source. Carbon atom.
Can we make any ab initio predictions about it all?
Video 1. Quantum Chemistry 10.1 - Hydrogen Molecule Hamiltonian by TMP Chem (2016) Source. Continued in the following videos, uses the Born–Oppenheimer approximation. Does not give predictions vs experiment?
Isomers were quite confusing for early chemists, before atomic theory was widely accepted, and people were thinking mostly in terms of proportions of equations, related: Section "Isomers suggest that atoms exist (1874)".
Exist because double bonds don't rotate freely. They have different properties of course, unlike enantiomers.
Mirror images.
Key example: D and L amino acids. Enantiomers have identical physico-chemical properties, but their biological roles can be very different, because an enzyme might only be able to act on one of them.
TODO definition. Appears to be isomers
Molecules that are the same if you just look at "what atom is linked to what atom"; they are only different if you consider the relative spatial positions of atoms.
Discrete quantum system model that can model both spin in the Stern-Gerlach experiment or photon polarization in polarizer.
Also known in quantum computing as a qubit :-)
A more concrete and easier to understand version of it is the more photon-specific Poincaré sphere; have a look at that one first. |
b823db5a0e377140 | A Look into Arizona Ballot Forensics
At Gateway Pundit is an article explaining the techniques for validating ballots used in the 2020 elections: Maricopa County Auditor Bob Hughes Shares How They Are Using High Tech Forensic Digital Cameras and OCR to Validate Ballots. Excerpts in italics with my bolds.
Any massive vote fraud in Maricopa County Arizona is going to be identified. The Democrats must be terrified.
One of the auditors working for the Arizona Senate, Bob Hughes, discussed the audit and the reasons why the Democrats are absolutely frightened of the Arizona audit and it being performed in other states:
The other thing that I think is interesting is they keep saying, ‘They don’t know what they are doing’. ‘They’re idiots’. ‘These people are ridiculous’. ‘They have no idea what they are doing’. ‘They’ve never done this before’.
This is the first time in the history of the United States, number one, that it’s ever been done, but more important it’s the first time it’s ever been needed. And it was done.
I can tell you that I could go over all the process and you’d all understand, but when a ballot gets created, think about this. It’s like your bill being paid from SRP. They go out and get your voter identification number and they find out what precinct you live in, what city, what county, where your school districts are. All that information has to be accessed to create the proper ballot exactly for you. Because you have to vote for the right candidates in a city mayoral election, council elections, JP elections, the legislative district, the congressional district. [In fact, in 2020 there were 667 different versions of Maricopa County ballots.]
Think of those as maps that overlay the Maricopa County area and it creates all these little sections, and all these people get a very different ballot. So if somebody did what we were told they might have done, which is gone out and just duplicate a bunch of ballots, or put the same ballot in many times, or any of these kinds of things, I knew there was a way to find that out.
And so what we did is we, the cameras are not only cameras. They’re digital cameras. Digital cameras that are forensic. They’re actually police forensic cameras. They’re very, very high speed, high definition digital cameras. They make a scanned ballot.
So we scan that ballot. We then use optical character recognition (OCR). We’re looking at what’s in place on that ballot. Based on who that ballot is. How many should be? Can there be this many?…
What I can tell you is you now will have the most authentic count of every legal authentic ballot you could possibly have.
See Hughes’ speech in the video here.
AZ Ballot Audit
Why Technocrats Deliver Catastrophes
Mark E. Jeftovic writes insightfully on the ways technology backfires when applied by bureaucrats in his article Why the Technocratic Mindset Produces Only Misery and Failure. H/T Tyler Durden at zerohedge. Excerpts in italics with my bolds.
Technocrats have the most fundamental aspect of reality backwards
Saw this article come across my news alert for "Transhumanism". In it Dr. David Eagleman talks about how not only can we augment human senses with fantastic new abilities (like to "see" heat and electromagnetic patterns), but how we'll even be able to build machines that think too.
There is a line in his thinking that one can glean from the article: on one side of the line are enhancements and augmentations to the human experience which are startling and amazing and which will transform our societies: even more radical life extension will be in the cards quite soon (for those who can afford it).
Where Eagleman crosses into technocratic thinking is when he veers into the idea of being able to build thinking machines. The logic is that because we’ll be able to increasingly bioengineer our own living bodies, it means we should also be able to bioengineer a mind into machines using the same principles.
I think this is wrong and it’s the same theoretical mistake that leads directly to technocratically inspired catastrophes.
Yes, we continue to build on technological advancements, but we also commit a lot of unforced errors that inflict incalculable misery on humanity. These errors may manifest as policy blunders, economic crises and worse. Most recently, for example, we seem to have gotten ourselves into a global pandemic because a bunch of technocrats funded some gain-of-function experiments in hopes of preempting the next pandemic. Do you see the dynamic here?
Over the years a lot of thinkers have pointed out that technocratic policy tracks, devised by centralized groups of experts within an elite managerial class, often bring about the very conditions they were impaneled to obviate.
• Raising minimum wages increases unemployment.
• Holding interest rates to zero creates economic instability and increases wealth inequality.
• Forcing green energy initiatives creates systems with lower energy efficiency and higher carbon footprints.
• Banning guns increases gun violence.
• Censoring “hate” speech fosters more hatred and polarization.
It’s almost as if the managerial class has no awareness of second-order effects. When they inexorably come to pass they are often blamed on the very people who were counselling against the initial policy in the first place.
Thus, financial meltdowns are blamed on runaway free markets and capitalism gone wild. Global warming (if it truly plays out along prognosticated lines) is blamed on industries who are most rapidly transitioning toward greener energy anyway (like Bitcoin mining).
Climate change is another theme that exemplifies the technocratic dynamic: As a society we’re going to transition off of fossil fuels no matter what anybody thinks about the environment because we’re already past peak oil, and peak demand will probably flatline around 100M bpd and start coming down from there in a secular downtrend, for a variety of reasons (prolonged economic malaise and the ascent of green energy).
Yet the most viable pathway toward transitioning away from fossil fuels, nuclear (and in this I include Thorium), is currently relegated as problematic by technocrats and ideologues.
It all seems backwards and for a long time I’ve been positing a fundamental root cause of this backwardness. The premise is: We have the mind/matter equation completely backwards in the way we think about how the world works.
Conventional thought is that what we experience as consciousness is something that emanates from the brain. Like steam from a kettle. This is also the core assumption of AI. If we build something that resembles a brain, it’ll think. It’s a kind of Frankenstein approach that Eagleman alludes to in his article.
That won’t work and AI will never be achieved as long as the mechanistic, material reductionist worldview persists. Yet, technocrats put a lot of faith in AI, and they think models derived from AI are or will be superior to anything we can figure out on our own because they were outputted by machines with a bigger/faster/hardware brains.
It is completely… wrong.
I think that what we experience as matter are energy patterns that emanate from an underlying, and conscious sub-strata of reality. This is basic quantum theory. Quantum theory can be problematic because it opens the door to all kinds of New Age Woo Woo, which may not even be entirely wrong at its core, but is prone to deeply flawed implementations (like anything, I guess).
People, and probably most living things, have a sense, an intuitive awareness of this sub-strata of reality. Our mythology and sacred texts are probably the stories of sometimes being more attuned to it and sometimes less so. The late British writer Colin Wilson wrote at length on the consciousness of the Egyptians of the upper kingdom, possibly over 7500 years BC. Their consciousness and language was pictorial, not linear. It may even be possible (my extrapolation, not his) that the demarcation between the conscious awareness of individuals was blurred somewhat.
So what happened?
Into this awareness came religions. Organized structures that would begin to dictate the basis on which members of society were to comprehend and approach this Great Sub-Carrier. Priesthoods evolved – the first monopolies. Religions. Hierarchies. Rulers. Subjects.
One of the earliest forms of social deviance was heresy: approaching the Divine Sub-Carrier from a direction outside the religious structure. Can’t have that.
This dynamic is as old as humanity. It could even be argued that historical progress is the story of the public coming to realize that the monopoly thought structure they were in was flawed or obsolete and then society moving on to the next one. The elites of the day would endeavour to halt the progression or when that failed, co-opt whatever came next.
Then new elites would erect a new orthodoxy that placed them directly in the nexus of what was unknowable and what the rabble thought they needed to know in order to perform their primary function of ….servitude.
Today the great sub-carrier is best described by science, not religion. But again, the priesthood is saying that all knowledge of the sub-carrier should come through them. That’s Scientism. That’s Technocracy. Management by Experts.
The last two years of life on earth are a foretaste of a full blown technocracy. Follow The Science™, plebes.
Only our elites can fathom how to approach and extract knowledge from The Great Externality, but this time they’ve made things even worse because they have it exactly backwards. They think the Great Externality doesn’t even exist. It’s for flakes and Bible bangers. The technocratic priesthood holds that material reality is near completely understood and that our minds are side effects of chemical reactions in our brains.
They hold that if only we can crunch enough Big Data and calculate out all the models we’ll be, like God (who doesn’t exist), able to fix everything and eliminate all bad outcomes, for everybody, everywhere. We may even be able to eliminate death, and we could upload our consciousness (which is an illusion) into the cloud and live forever.
Because of this backwardation, we will always be careening from one catastrophe to the next, and most of them will be of our own making. We collectively suffer from an illusion that we are in control.
But we are not in control. We’re a pattern. A dance. A cycle. Waveforms. Vibrations. What we as humans do specifically well, which is our superpower and has led to our technological advancement which could conceivably continue on a trajectory that makes humanity an interstellar phenomenon, is adapt.
What technocrats can’t understand, or admit is that we can’t control what is going to happen. Either on an individual scale of people thinking in ways they’re not supposed to think, or geological, cultural, geopolitical or cosmic scales. We can’t get interest rates right, we can’t get everybody to agree on whether it’s “Gif” or “jif” and somehow we’re going to change the trajectory of the climate? Achieve immortality? Crank out a Singularity?
That is highly unlikely and in trying to preempt theoretical bad outcomes we typically bring about horrible actual outcomes.
The lab leak from the Wuhan Institute of Virology, if it occurred and it is looking increasingly likely that it did, was the result of gain-of-function studies on bat coronaviruses. They didn’t do it as a bioweapon. It’s not a global conspiracy to institute a Great Reset (all that talk is opportunism more than planning).
They were trying to figure out how to plan for a future global pandemic that may catch humanity off guard and cause incalculable damage. What did they accomplish? They unleashed a global pandemic that caught humanity off guard and caused incalculable damage. Soon to be compounded by global, de-facto compulsory inoculations with experimental vaccines that have a distinctly politicized impetus behind them.
That same dynamic applied to economics (it's where the .COM crash and Global Financial Crisis came from), to social policy (the Woke movement), and to climate is all the same technocratic mindset that doesn't understand the order of reality (mind, then matter) but, even worse, thinks it knows it.
We’re stuck with that for awhile because the technocratic mindset is incapable of introspection or entertaining the possibility of being wrong about anything. The only move it knows is to double-down on failure.
The antidote to all this is massive decentralization on a global scale, which has the added benefit that decentralization by definition, is not something that gets decided from the top (it never is). It just happens, even in spite of the people in the centre of power who may feel something about their gravitas melting away.
That’s what has started to happen. A global opt-out. The Great Reject. As sure as the Reformation gave way to the Enlightenment despite the protestations of the Church, we’re headed into a world of networks and the sunset of nations. All the while the propagandists of the old order shrieking that in this direction lies certain doom.
The Enlightenment arose from an increase in the level of abstraction, structurally the universe changed from the Ptolemaic worldview (the world as the centre of all existence) to the Heliocentric solar system.
Now we’re experiencing a similar shift away from static top-down hierarchical structures as the natural shape of civilization and toward shifting, impermanent, overlapping networks.
Footnote: Another Example of Technocratic Adventurism
From American Thinker The Grave Perils of Genetic Editing. Excerpts in italics with my bolds.
A company called Oxitec, based in the U.K., is piloting a program using gene-/information-modified mosquitos to eliminate the invasive female Aedes aegypti mosquitoes in the Florida Keys. The mosquitoes potentially spread diseases such as Dengue fever and Zika.
Dr. Nathan Rose, head regulator of Oxitec, said mosquito-borne diseases are likely to worsen as a result of climate change. According to the CDC, in a ten-year span between 2010 and 2020, there were 71 cases of Dengue fever transmitted in Florida. In essence, the experiment is being conducted for fear of climate change causing a drastic increase in incidence of Dengue fever. In the Fox article, Rose states that Oxitec will first experiment in Florida, collect data, then “go to the U.S. regulatory agencies to actually get a commercial registration to be able to release these mosquitoes more broadly within the United States.”
Don’t think the Florida Keys just opened their arms with a great big bear hug to this experiment. No, there were pushback and questions. In fact, Oxitec had been pushing this experiment to Key Haven and Key West for years, only to be rejected. Many other places have also declined this experiment. When it was conducted in Brazil, it initially seemed to work, but in the end, the mutated mosquitos transferred mutations to the general public. Thankfully, gene drive was not used in the Brazil experiment, for this type of gene manipulation cannot be reversed and can wipe out a species over time.
Evidently, Oxitec has created a second-generation "friendly mosquito" technology, where new male mosquitoes are programmed to kill only female mosquitoes, with males surviving and passing on the modified genes to male offspring for generations. Yes, they are programmed to kill. Oxitec CEO Grey Frandsen announced in 2020 that Oxitec looked forward to working with the Florida Keys community to "demonstrate the effectiveness of our safe, sustainable technology in light of the growing challenges controlling this disease-spreading mosquito."
Let’s hope the Florida mosquitoes experiment is truly a necessity and not some type of climate-change fear-mongering “sustainable” technology based on speculation.
Don’t Assume Global Warming Blunts Economic Growth
In recent years, a strand of economic literature has argued that warming not only negatively affects the level of economic activity, but also the rate of income growth. Photo by Bloomberg.
Ross McKitrick explains in his Financial Post article Why climate change won’t hurt growth. Excerpts in italics with my bolds.
There is no robust evidence that even the worst-case warming scenarios would cause overall economic losses
It has long been observed that global poverty tends to be concentrated in hot, tropical regions. But persistent poverty in African and South American countries has political and historical roots, especially their embrace of Soviet-backed communism in the 20th century. In places where economic reforms were adopted, like South Asia, growth took off and they quickly converged with the West, despite having tropical climates. So the connection to climate may be coincidental.
But in recent years, a strand of economic literature has argued that warming not only negatively affects the level of economic activity, but also the rate of income growth. This matters because when conducting an analysis over a 100-year time span, small changes in the growth rate can compound over a century and result in large total changes.
A 2012 study led by Melissa Dell of Harvard University presented evidence that warming had insignificant effects on income growth in rich countries, but in poor countries the effect was negative and statistically significant. Another team used this result in a policy model to argue that the “social cost of carbon” was at least 10 times higher than previously thought.
This was followed up by several studies led by economists Marshall Burke of Stanford and Solomon Hsiang of Berkeley, who reported evidence that warming had significant negative effects on wealthy and poor countries alike. Suddenly a picture emerged that warming is much more harmful than we thought, so it should be full steam ahead on aggressive climate policy. Global policymakers have embraced this belief, in part at the urging of the United Nations Intergovernmental Panel on Climate Change’s (IPCC) 2018 special report on global warming of 1.5 C, which highlighted this research.
But other research tells a different story. One of the challenges in climate economics is that climate data are collected on a grid cell basis (organized in latitude-longitude boxes), while economic data is collected at the national level. To match them up, Dell’s group averaged the climate data up to the national level. There are different ways of doing the averaging, however, and the results are sensitive to the chosen method.
Other teams have begun trying to build economic data sets at the local and regional level so the averaging step can be omitted. One group from Northern Arizona University used grid cell-level economic data from around the world and found, like Dell, that warming temperatures have no effect on growth in rich countries, but they found a positive effect in poor countries up to an average temperature of about 17.5 C, which is above the sample average temperature of 14.4 C.
Then a team from Germany developed a regional economic database that lets them account for what economists call “country fixed effects,” namely, unobservable historical and institutional factors specific to each country that are unrelated to, in this case, the climate variables.
When they apply this method, the climate effects on growth and output vanish for rich and poor countries alike.
More recently, a group led by Richard Newell of Resources for the Future raised the issue that the econometric modelling can be done many different ways. Given the same data set, there are lots of decisions to make, such as how many lagged effects to include, whether to use linear or nonlinear equations and whether to use time trends. Altogether, they counted 800 different ways the same data could be analyzed.
In order to determine whether the results depend on the choice of models, they obtained the data set used by the Burke team and used the same country-level averaging method employed by Dell's team. Then they ran a meta-analysis estimating all the possible models and evaluating how well each one fit the data, in order to identify the best-performing models to reach their conclusions.
Dozens of different models all fit the data about equally well, and they could not rule out that the best ones do not include any role for temperature in economic growth. There was some evidence that warming is good for growth up to 13.4 C, but the positive and negative effects were not statistically significant.
Across the entire range of temperatures in the sample there was no significant influence of climate on either output or growth.
Under the highest-warming scenario, the Burke team had projected a 49 per cent global GDP loss from climate change by 2100, but Newell found the model variant that fits their data best implied a slight global GDP gain. The best growth models as a group project an effect on GDP by 2100 ranging from -84 per cent to +359 per cent, with the central estimates very close to zero. In other words, the effects are too imprecise to say much of anything for certain.
Now we come up against the challenge that policymakers seem to find it easier to deal with gloomy certainty than optimistic uncertainty. In the blink of an eye, a handful of studies in a new research area became the canonical truth, on the basis of which governments swung into a much more aggressive climate policy stance.
But as time has advanced, new data sets, and even reanalyses of the old data sets, have called those results into question and have shown that temperature (and precipitation) changes likely have insignificant effects on GDP and growth, and that the effects are as likely to be positive as negative. This does not mean there aren’t specific regions and specific industries facing potential losses, especially if the countries don’t adapt. But for the world as a whole, there is no robust evidence that even the worst-case warming scenarios would cause overall economic losses.
It now falls to advisory groups like the IPCC to tell this to world leaders, before they enact any more disastrous climate policies that will do all the harm (and more) that the evidence says climate change itself will not do.
Footnote: There are also economists pushing the notion of direct costs from global warming/climate change due to supposed increasing health and prosperity impacts from extreme weather. This is contrary to IPCC-approved studies by economist William Nordhaus. See IPCC Freakonomics.
Georgia Ballot Review Case Going Forward
Tropics Lead Ocean Temps Return to Mean
The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:
• A major El Nino was the dominant climate feature in recent years.
HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3. More on what distinguishes HadSST3 from other SST products at the end.
The Current Context
The year-end report below showed 2020 rapidly cooling in all regions. The anomalies have continued to drop sharply, well below the mean for the period since 1995. This Global Cooling was also evident in the UAH Land and Ocean air temperatures (see March 2021 Ocean Chill Deepens).
The chart below shows SST monthly anomalies as reported in HadSST3, starting in 2015 through May 2021. After three straight Spring 2020 months of cooling led by the tropics and SH, NH spiked in the summer, along with smaller bumps elsewhere. Then temps everywhere dropped over the following six months, hitting bottom in February 2021, when all regions were well below the Global Mean since 2015, matching the cold of 2018 and lower than January 2015. Now the spring is bringing more temperate waters and a return to the mean anomaly since 2015.
A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.
Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year. A fourth NH bump was lower and peaked in September 2018. As noted above, a fifth peak in August 2019 and a sixth August 2020 exceeded the four previous upward bumps in NH.
In 2019 all regions had been converging to reach nearly the same value in April. Then NH rose exceptionally by almost 0.5C over the four summer months, in August 2019 exceeding previous summer peaks in NH since 2015. In the 4 succeeding months, that warm NH pulse reversed sharply. Then again NH temps warmed to a 2020 summer peak, matching 2019. This has now been reversed with all regions pulling the Global anomaly downward sharply, tempered by warming in March and April, and with May a return to the global mean anomaly since 2015.
And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one. Note the May warming was strongest in the Tropics, though the anomaly is quite cool compared to 2016.
A longer view of SSTs
Now again a different pattern appears. The Tropics cool sharply to January 2011, then rise steadily for four years to January 2015, at which point the most recent major El Nino takes off. But this time, in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer, pulling up the Global average. In fact, these NH peaks appear every July starting in 2003, growing stronger to produce three massive highs in 2014, 2015 and 2016. NH July 2017 was only slightly lower, and a fifth NH peak still lower in September 2018.
The highest summer NH peaks came in 2019 and 2020, only this time the Tropics and SH are offsetting rather than adding to the warming. (Note: these are high anomalies on top of the highest absolute temps in the NH.) Since 2014 SH has played a moderating role, offsetting the NH warming pulses. After September 2020 temps dropped until February 2021, and now all regions are rising to bring the global anomaly above the mean since 1995.
What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH. The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before. After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.
But the peaks coming nearly every summer in HadSST require a different picture. Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
AMO Aug and Dec 2021

The AMO Index is from Kaplan SST v2, the unaltered and non-detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows August warming from 1992 up to 1998, with a series of matching years since, including 2020. Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last two decades.
AMO decade 052021

This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2020 began slightly warm, then set records for three months, then dropped below 2016 and 2017, peaked in August, and ended below 2016. Now in 2021, AMO is tracking the coldest years.
The oceans are driving the warming this century. SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.” The ocean surfaces are releasing a lot of energy, warming the air, but eventually this will have a cooling effect. The decline after 1937 was rapid by comparison, so one wonders: how long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again, ENSO, which has weakened, will probably determine the outcome.
Footnote: Why Rely on HadSST3
HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e., infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to the Met Office, the following is their procedure.
HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.
In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.
Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
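For concreteness, here is a minimal sketch in Python of the anomaly procedure just described, using made-up arrays in place of real HadSST3 gridcell data. Real processing also involves bias adjustments, area weighting by latitude, and uncertainty estimates that are omitted here.

import numpy as np

# sst[month, lat, lon]: monthly mean SST per ocean grid cell; NaN marks cells
# with insufficient sampling that month (left missing, not infilled).
rng = np.random.default_rng(1)
sst = rng.uniform(0.0, 30.0, (12, 36, 72))
sst[rng.random((12, 36, 72)) < 0.2] = np.nan    # simulate missing cells

# baseline[month, lat, lon]: 1961-1990 average for the same cell and month.
baseline = rng.uniform(0.0, 30.0, (12, 36, 72))

anomaly = sst - baseline                        # per-cell monthly anomalies

# Average only the cells that have data; nanmean simply skips missing cells.
global_anomaly = np.nanmean(anomaly, axis=(1, 2))

# Regional averages use only the cells in that band, e.g. the Tropics (20S-20N).
lats = np.linspace(-87.5, 87.5, 36)             # cell-centre latitudes
tropics = (lats >= -20) & (lats <= 20)
tropics_anomaly = np.nanmean(anomaly[:, tropics, :], axis=(1, 2))

print(np.round(global_anomaly, 2))
print(np.round(tropics_anomaly, 2))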
USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean
Exposing Net-Zero Doublethink
Bjorn Lomborg exposes the doublethink rhetoric around the “Net-Zero” carbon emissions notion in his Financial Post article, Enough with the net-zero doublethink. Excerpts in italics with my bolds and images.
When John Kerry and many other politicians insist that climate policies mean no sacrifice, they are clearly dissembling.
Our current climate conversation embodies two blatantly contradictory claims. On one side, experts warn that promised climate policies will be economically crippling. In a new report, the International Energy Agency (IEA) states that achieving net-zero in 2050 will likely be “the greatest challenge humankind has ever faced.” That is a high bar, surpassing the Second World War, the black plague and COVID.
On the other side, hand-waving politicians sell net-zero climate schemes as a near-utopia that every nation will rush to embrace. As U.S. climate envoy John Kerry told world leaders gathered at President Biden’s climate summit in April: “No one is being asked for a sacrifice.”
Both claims can’t be true. Yet, they are often espoused by the same climate campaigners in different parts of their publicity cycle. The tough talk aims to shake us into action, and the promise of rainbows hides the political peril when the bills come due.
George Orwell called this willingness to espouse contradictory claims doublethink. It is politically expedient and gets climate-alarmed politicians reelected. But if we want to fix climate change, we need honesty. Currently promised climate policies will be incredibly expensive. While they will deliver some benefits, their costs will be much higher.
Yes, climate change is real and man-made, and we should be smart in fixing it. But we don’t, because climate impacts are often vastly exaggerated, leaving us panicked. The UN Climate Panel estimates that if we do nothing, climate damages in 2100 will be equivalent to 2.6 per cent of global GDP. That is a problem but not the end of the world.
Because climate news only reports the worst outcomes, most people think the damage will be much greater. Remember how we were repeatedly told 2020’s Atlantic hurricane season was the worst ever? The reporting ignored that almost everywhere else, hurricane intensity was feeble, making 2020 one of the globally weakest in satellite history. And even within the Atlantic, 2020 ranked thirteenth.
When John Kerry and many other politicians insist that climate policies mean no sacrifice, they are clearly dissembling. In the UN Climate Panel’s overview, all climate policies have real costs. Why else would we need recurrent climate summits to arm-twist unwilling politicians to ever-greater promises?
The IEA’s new net-zero report contains plenty of concrete examples of sacrifices. By 2050, we will have to live with much lower energy consumption than today. Despite being richer, the average global person will be allowed less energy than today’s average poor. We will all be allowed less energy than the average Albanian used in the 1980s. We will also have to accept shivering in winter at 19°C and sweltering in summer at 26°C, lower highway speeds and fewer people being allowed to fly.
But climate policy sacrifices could still make sense if their costs were lower than the achieved climate benefits. If we could avoid the 2.6 per cent climate damage for, say, one per cent sacrifice, that would be a good outcome. This is common sense and the core logic of the world’s only climate economist to win the Nobel Prize (2018 laureate William Nordhaus of Yale). Smart climate policy costs little and reduces climate damages a lot.
Unfortunately, our current doublethink delivers the reverse outcome. One new peer-reviewed study finds that reaching net-zero just after 2060 — much later than most politicians promise — will cost us more than four per cent of GDP by 2040, or about $5 trillion annually. And this assumes globally coordinated carbon taxes. Otherwise, costs will more than double. Paying eight per cent or more to avoid part of 2.6 per cent damages half a century later is just bad economics.
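The arithmetic behind that judgment fits in a few lines. A back-of-envelope sketch in Python, using the percentages cited above (the framing and labels are mine):

# Figures as cited in the paragraph above.
damages_avoided = 0.026   # UN Climate Panel: 2.6% of GDP damages in 2100 if we do nothing
costs = {
    "coordinated carbon taxes": 0.04,   # more than 4% of GDP annually by 2040
    "uncoordinated policies": 0.08,     # costs more than double without coordination
}
for policy, cost in costs.items():
    # Even avoiding ALL the damages would not cover these costs, let alone part of them.
    print(f"{policy}: pay {cost:.0%} of GDP to avoid at most {damages_avoided:.1%} of GDP")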
It is also implausible politics. Just for China, the cost of going net-zero exceeds seven to 14 per cent of its GDP. Instead, China uses green rhetoric to placate westerners but aims for development with 247 new coal-fired power plants. China now emits more greenhouse gases than the entire rich world. Most other poorer countries are hoping to follow China’s rapid ascendance. At a recent climate conference, where dozens of high-level delegates dutifully lauded net-zero, India went off-script. As other participants squirmed, power minister Raj Kumar Singh inconveniently blurted out the truth: net-zero “is just pie-in-the-sky.” He added that developing countries will want to use more and more fossil fuels and “you can’t stop them.”
If we push on with our climate doublethink, rich people will likely continue to wring their hands and aim for net-zero, even at considerable costs to their own societies. But three-quarters of future emissions come from poorer countries pursuing what they regard as the more important development priorities of avoiding poverty, hunger and disease.
Like most great challenges humanity has faced, we solve them not by pushing for endless sacrifices but through innovation. COVID is fixed with vaccines, not unending lockdowns. To tackle climate, we need to ramp up our investments in green energy innovation. Increasing green energy currently requires massive subsidies, but if we could innovate its future price down to below that of fossil fuels, everyone would switch. Innovation is the most sustainable climate solution. It is dramatically cheaper than current policies and demands fewer sacrifices while delivering benefits for most of the world’s population.
Bjorn Lomborg, president of the Copenhagen Consensus, is a visiting fellow at the Hoover Institution, Stanford University. His latest book is “False Alarm: How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet.”
US Heat and Drought Advisory June
Climatists are raising alarms about rising temperatures and water shortages as evidence of impending doom (it’s summer, and that time of year again), so some contextual information is in order.
First, a comparison of recent US June forecasts for temperatures.
NOAA US temp 2019 2021
And then for the same years, precipitation forecasts.
NOAA US rain 2019 2021
Finally, a reminder of how unrelated CO2 is to all of this.
What Solstice Teaches Us About Climate Change
From Previous Post When Is It Warming?
On June 21, 2015 E.M. Smith made an intriguing comment on the occasion of Summer Solstice (NH) and Winter Solstice (SH):
“This is the time when the sun stops the apparent drift in the sky toward one pole, reverses, and heads toward the other. For about 2 more months, temperatures lag this change of trend. That is the total heat storage capacity of the planet. Heat is not stored beyond that point and there can not be any persistent warming as long as winter brings a return to cold.
I’d actually assert that there are only two measurements needed to show the existence or absence of global warming. Highs in the hottest month must get hotter and lows in the coldest month must get warmer. BOTH must happen, and no other months matter as they are just transitional.
I’m also pretty sure that the comparison of dates of peaks between locations could also be interesting. If one hemisphere is having a drift to, say, longer springs while the other is having longer falls, that’s more orbital mechanics than CO2 driven and ought to be reflected in different temperature trends / rates of drift.” Source: Summer Solstice is here at chiefio
Monthly Temps NH and SH
Notice that the global temperature tracks with the seasons of the NH. The reason for this is simple. The NH has twice as much land as the Southern Hemisphere (SH). Oceans do not change temperatures as much as land does. So every year when there is almost a 4 °C swing in the temperature of the Earth, it follows the seasons of the NH. This is especially interesting because the Earth gets the most energy from the sun in January presently. That is because of the orbit of the Earth. The perihelion is when the Earth is closest to the sun and that currently takes place in January.
Observations and Analysis:
At the time my curiosity was piqued by Chiefio’s comment, so I went looking for data to analyze to test his proposition. As it happens, Berkeley Earth provides data tables for monthly Tmax and Tmin by hemisphere (NH and SH), from land station records. Setting aside any concerns about adjustments or infilling, I did the analysis taking the BEST data tables at face value. Since land surface temperatures are more variable than sea surface temps, it seems like a reasonable dataset to analyze for the mentioned patterns. In the analysis below, “all years” refers to data for the years 1877 through 2013.
Tmax Records
NH and SH long-term trends are the same, 0.07 C/decade, and in both there was cooling before 1979 and above-average warming since. However, since 1950 NH warmed more strongly, mostly prior to 1998, while SH has warmed strongly since 1998. (Trends below are in C/yr.)
Trends       NH Tmax   SH Tmax
All years     0.007     0.007
1998-2013     0.018     0.030
1979-1998     0.029     0.017
1950-1979    -0.003    -0.003
1950-2013     0.020     0.014
Summer Comparisons:
NH summer months are June, July, August, (6-8) and SH summer is December, January, February (12-2). The trends for each of those months were computed and the annual trends subtracted to show if summer months were warming more than the rest of the year (Trends below are in C/yr.).
Summer Trends: Month less Annual
             NH Tmax   NH Tmax   NH Tmax   SH Tmax   SH Tmax   SH Tmax
             Jun       Jul       Aug       Dec       Jan       Feb
All years    -0.002    -0.004    -0.004     0.000     0.003     0.002
1998-2013     0.026     0.002     0.006     0.022     0.004    -0.029
1979-1998     0.003    -0.004    -0.003    -0.014    -0.029     0.001
1950-1979    -0.002    -0.002    -0.005     0.004     0.005    -0.005
1950-2013    -0.002    -0.003    -0.002    -0.002    -0.002    -0.002
NH summer months are cooler than average overall and since 1950. Warming does appear since 1998 with a large anomaly in June and also warming in August. SH shows no strong pattern of Tmax warming in summer months. A hot December trend since 1998 is offset by a cold February. Overall SH summers are just above average, and since 1950 have been slightly cooler.
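Here is a minimal sketch in Python of the computation behind the tables above: fit a linear trend to each calendar month’s series, then subtract the trend of the annual series. The arrays below are random stand-ins, not the actual Berkeley Earth tables.

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1877, 2014)
# tmax[year, month]: monthly Tmax anomalies for one hemisphere (stand-in data).
tmax = rng.normal(0.0, 0.5, (len(years), 12))

def slope(series, yrs):
    # Least-squares trend in C per year.
    return np.polyfit(yrs, series, 1)[0]

annual_trend = slope(tmax.mean(axis=1), years)
for month_index, name in [(5, "Jun"), (6, "Jul"), (7, "Aug")]:
    monthly_trend = slope(tmax[:, month_index], years)
    # Positive values mean the month warmed faster than the year as a whole.
    print(f"{name}: {monthly_trend - annual_trend:+.3f} C/yr relative to annual")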
Tmin Records
Both NH and SH show Tmin rising 0.12 C/decade, warming much more strongly than Tmax. SH shows that average warming persisting throughout the record, slightly higher prior to 1979. NH Tmin is more variable, showing a large jump in 1979-1998 at a rate of 0.25 C/decade. (Trends below are in C/yr.)
Trends       NH Tmin   SH Tmin
All years     0.012     0.012
1998-2013     0.010     0.010
1979-1998     0.025     0.011
1950-1979     0.006     0.014
1950-2013     0.022     0.014
Winter Comparisons:
SH winter months are June, July, August, (6-8) and NH winter is December, January, February (12-2). The trends for each of those months were computed and the annual trends subtracted to show if winter months were warming more than the rest of the year (Trends below are in C/yr.).
Winter Trends: Month less Annual
             NH Tmin   NH Tmin   NH Tmin   SH Tmin   SH Tmin   SH Tmin
             Dec       Jan       Feb       Jun       Jul       Aug
All years     0.007     0.008     0.007     0.005     0.003     0.004
1998-2013    -0.045    -0.035    -0.076    -0.043    -0.024    -0.019
1979-1998    -0.018    -0.005     0.024     0.034     0.008    -0.008
1950-1979     0.008     0.005     0.007     0.008     0.012     0.013
1950-2013     0.001     0.007     0.008    -0.001    -0.002     0.002
NH winter Tmin warming is stronger than the SH Tmin trends, but shows quite strong cooling since 1998. An anomalously warm February is the exception in the period 1979-1998. Both NH and SH show higher Tmin warming in winter months, with some irregularities. Most of the SH Tmin warming came before 1979, with strong cooling since 1998. June warmed anomalously in the period 1979 to 1998.
Tmin did trend higher in winter months but not consistently. Mostly winter Tmin warmed 1950 to 1979, and was much cooler than other months since 1998.
Tmax has not warmed in summer more than in other months, with the exception of two anomalous months since 1998: NH June and SH December.
I find no convincing pattern of summer Tmax warming carrying over into winter Tmin warming. In other words, summers are not adding warming more than other seasons. There is no support for concerns over summer heat waves increasing as a pattern.
It is interesting to note that the plateau in temperatures since the 1998 El Nino is matched by winter months cooler than average during that period, leading to my discovering the real reason for lack of warming recently.
The Real Reason for the Pause in Global Warming?
These data suggest warming trends are coming from less cold overnight temperatures as measured at land weather stations. Since stations exposed to urban heat sources typically show higher minimums overnight and in winter months, this pattern is likely an artifact of human settlement activity rather than CO2 from fossil fuels.
Thus the Pause (more correctly the Plateau) in global warming is caused by the end-of-century completion of urbanization around most surface stations. With no additional warming from additional urban heat sources, temperatures have remained flat for more than 15 years.
Data is here:
Happy Summer Solstice
White Nights
White Nights Festival, St. Petersburg
Coincidence, or Connected Dot?
John Green writes at American Thinker, Sometimes a Coincidence isn’t a Coincidence. Excerpts in italics with my bolds and images.
Coincidences are interesting things. They’re considered remarkable because their combined occurrence seems improbable. But sometimes, improbable occurrences really happen. Lightning really has struck the same location twice — on rare occasions.
In the past year and a half, we have witnessed a remarkable string of apparent coincidences.
Dr. Fauci sponsored “gain of function” research at the Wuhan Institute of Virology. Put simply, this work increases a virus’s ability to cause disease. It makes a virus more dangerous. Coincidentally, we’re now learning that COVID-19 originated from the Wuhan Institute of Virology.
The COVID-19 virus spread throughout the world in the early months of 2020. Coincidentally, this was at the same time that Donald Trump was ratcheting up sanctions against China and rallying worldwide support.
The pandemic resulting from COVID-19 was used as the rationale for fundamental changes to our election processes. These changes facilitated the most questionable election outcome in U.S. history. 51% of the population now believes that fraud affected the election outcome – and that number is growing. Coincidentally, the election of 2020 neutralized China’s biggest threat – President Donald J. Trump.
The beneficiary of the compromised election of 2020 is Joe Biden. Coincidentally, old Joe has deep and troubling financial connections to China. His son Hunter accompanied him to China when Joe was the vice president and subsequently made millions of dollars from Chinese-sponsored business ventures. Emails from Hunter’s abandoned laptop indicate that Joe was the recipient of a sizable portion of those proceeds.
In the past week, we learned that the Defense Intelligence Agency (DIA) has a high-level defector from China — whom they’re not sharing with the FBI or CIA. This defector is providing evidence that COVID-19 was not only created in the Wuhan lab but may have been deliberately leaked by the Chinese. This revelation coincidentally came at the same time the FBI was working to discredit scientists claiming the virus was created in a lab.
Representative Matt Gaetz aggressively questioned FBI Director Christopher Wray about the FBI’s behavior relative to COVID-19 scientific whistleblowers. Shortly after this questioning, the press began a series of stories insinuating that Gaetz had inappropriate relationships with underage girls — though no evidence has been presented yet. But I’m sure it’s just a coincidence.
Coincidentally, this is all happening at a time when China is making substantial investments in American property and businesses. After its behavior during the last year, is there any doubt that the NBA is beholden to China? The news media has run cover for China as well, claiming that any attempt to tie them to the pandemic is racism. There are also land purchases. China bought 180,000 acres (280 square miles) in Texas! They say they’re building a wind farm, but the property has a 5,000-foot runway which they’re expanding, and it’s adjacent to a busy U.S. military base. I’m sure the location is just coincidental.
It seems that an unbelievable number of happenstance occurrences have all benefited China. Is it possible that these events are not coincidences at all, but are rather engineered outcomes in support of a higher objective? If so, it raises a number of questions.
Are the FBI and CIA hopelessly compromised? Is it possible that the organizations which supported a coup attempt against an elected President can’t be trusted with national security? They’re certainly no longer the premier law enforcement and intelligence agencies they claim to be. They have too many failures to be a “premier” anything – except maybe a clown show. Are they incompetent, corrupt, or have they been infiltrated? It probably doesn’t matter since incompetence or corruption invites infiltration.
Where does the support for Antifa and BLM originate? They’re both doing their part to destabilize America. BLM is led by self-professed Marxists – making them useful idiots. Antifa seems to believe in nothing but anarchy – making them useful thugs. Whenever members of either group are arrested, there’s plenty of money to bail them out – from somewhere.
How beholden to China is the news and entertainment industry? I notice that those taking a knee for our National Anthem haven’t uttered a word of criticism against China’s use of slavery. News organizations called Trump a “racist” for characterizing COVID as the Chinese virus – even though naming viruses by their point of origin is common practice.
Does China have any inappropriate influence over Joe Biden? We know his family has received millions of dollars from China and there is evidence he has shared in that bounty. Is our President vulnerable to blackmail?
Have we been under attack from China and didn’t know it because our intelligence and political leadership swore to defend the United States, but really had other priorities?
Clearly, we don’t know the answers to these questions. But if China decides to act on its expansionist ambitions, our intelligence community is unlikely to provide any warning. Likewise, our current political leadership is unlikely to take any meaningful action.
But maybe this is all just crazy conspiracy thinking. Perhaps everything we’ve experienced since early last year is just an astronomically unlikely confluence of random events. But isn’t it interesting that these events have left America disengaged at the very time China is expanding its global influence? One final question: If China wanted to neutralize America, could they have done it any better by some other means?
Politicize Science at Your Peril
Anna I. Krylov (Department of Chemistry, University of Southern California) writes at the American Chemical Society, The Peril of Politicizing Science. Excerpts in italics with my bolds and some added images.
I came of age during a relatively mellow period of the Soviet rule, post-Stalin. Still, the ideology permeated all aspects of life, and survival required strict adherence to the party line and enthusiastic displays of ideologically proper behavior. Not joining a young communist organization (Komsomol) would be career suicide—nonmembers were barred from higher education. Openly practicing religion could lead to more grim consequences, up to imprisonment. So could reading the wrong book (Orwell, Solzhenitsyn, etc.). Even a poetry book that was not on the state-approved list could get one in trouble.
Mere compliance was not sufficient—the ideology committees were constantly on the lookout for individuals whose support of the regime was not sufficiently enthusiastic. It was not uncommon to get disciplined for being too quiet during mandatory political assemblies (politinformation or komsomolskoe sobranie) or for showing up late to mandatory mass-celebrations (such as the May or November demonstrations). Once I got a notice for promoting an imperialistic agenda by showing up in jeans for an informal school event. A friend’s dossier was permanently blemished—making him ineligible for Ph.D. programs—for not fully participating in a trip required of university students: an act of “voluntary” help to comrades in collective farms (Figure 2).
Figure 2. Fourth-year chemistry students from Moscow State University (the author is on the right) enjoying a short break in the potato fields during mandatory farm labor, ca. 1987. The sticks were used as aids for separating potatoes from the mud.
Science was not spared from this strict ideological control.(6) Western influences were considered to be dangerous. Textbooks and scientific papers tirelessly emphasized the priority and pre-eminence of Russian and Soviet science. Entire disciplines were declared ideologically impure, reactionary, and hostile to the cause of working-class dominance and the World Revolution. Notable examples of “bourgeois pseudo-science” included genetics and cybernetics. Quantum mechanics and general relativity were also criticized for insufficient alignment with dialectic materialism.
Most relevant to chemistry was the antiresonance campaign (1949–1951).(7) The theory of resonating structures, which brought Linus Pauling the Nobel prize in 1954, was deemed to be bourgeois pseudoscience. Scientists who attempted to defend the merits of the theory and its utility for understanding chemical structures were accused of “cosmopolitism” (Western sympathy) and servility to Western bourgeois science. Some lost jobs. . . This is a recurring motif in all political campaigns within science in Soviet Russia, Nazi Germany, and McCarthy’s America—those who are “on the right side” of the issue can jump a few rungs and take the place of those who were canceled. By the time I studied quantum chemistry at Moscow State University, resonance theory had been rehabilitated. Yet, the history of the campaign and the injustices it entailed were not discussed in the open—the Party did not welcome conversations about its past mistakes. I remember hearing parts of the story, narrated under someone’s breath at a party after copious amounts of alcohol had loosened a tongue.
Fast forward to 2021—another century. The Cold War is a distant memory and the country shown on my birth certificate and school and university diplomas, the USSR, is no longer on the map. But I find myself experiencing its legacy some thousands of miles to the west, as if I am living in an Orwellian twilight zone. I witness ever-increasing attempts to subject science and education to ideological control and censorship. Just as in Soviet times, the censorship is being justified by the greater good. Whereas in 1950, the greater good was advancing the World Revolution (in the USSR; in the USA the greater good meant fighting Communism), in 2021 the greater good is “Social Justice” (the capitalization is important: “Social Justice” is a specific ideology, with goals that have little in common with what lower-case “social justice” means in plain English).(10−12) As in the USSR, the censorship is enthusiastically imposed also from the bottom, by members of the scientific community, whose motives vary from naive idealism to cynical power-grabbing.
Just as during the time of the Great Terror,(5,13) dangerous conspiracies and plots against the World Revolution were seen everywhere, from illustrations in children’s books to hairstyles and fashions; today we are told that racism, patriarchy, misogyny, and other reprehensible ideas are encoded in scientific terms, names of equations, and in plain English words. We are told that in order to build a better world and to address societal inequalities, we need to purge our literature of the names of people whose personal records are not up to the high standards of the self-anointed bearers of the new truth, the Elect.(11) We are told that we need to rewrite our syllabi and change the way we teach and speak.(14,15)
As an example of political censorship and cancel culture, consider a recent viewpoint(16) discussing the centuries-old tradition of attaching names to scientific concepts and discoveries (Archimedes’ Principle, Newton’s Laws of Motion, the Schrödinger equation, the Curie Law, etc.). The authors call for vigilance in naming discoveries and assert that “basing the name with inclusive priorities may provide a path to a richer, deeper, and more robust understanding of the science and its advancement.” Really? On what empirical grounds is this based?
History teaches us the opposite: the outcomes of the merit-based science of liberal, pluralistic societies are vastly superior to those of the ideologically controlled science of the USSR and other totalitarian regimes.
Conversations about the history of science and the complexity of its social and ethical aspects can enrich our lives and should be a welcome addition to science curricula. The history of science can teach us to appreciate the complexity of the world and humanity. It can also help us to navigate urgent contemporary issues.(25) Censorship and cancellation will not make us smarter, will not lead to better science, and will not help the next generation of scientists to make better choices.
Today’s censorship does not stop at purging the scientific vocabulary of the names of scientists who “crossed the line” or fail the ideological litmus tests of the Elect.(11) In some schools,(33,34) physics classes no longer teach “Newton’s Laws”, but “the three fundamental laws of physics”. Why was Newton canceled? Because he was white, and the new ideology(10,12,15) calls for “decentering whiteness” and “decolonizing” the curriculum. A comment in Nature(35) calls for replacing the accepted technical term “quantum supremacy” by “quantum advantage”. The authors regard the English word “supremacy” as “violent” and equate its usage with promoting racism and colonialism. They also warn us about “damage” inflicted by using such terms as “conquest”. I assume “divide-and-conquer” will have to go too. Remarkably, this Soviet-style ghost-chasing gains traction. In partnership with their Diversity, Equity, and Inclusion taskforce, the Information and Technology Services Department of the University of Michigan set out to purge the language within the university and without (by imposing restrictions on university vendors) from such hurtful and racist terms as “picnic”, “brown bag lunch”, “black-and-white thinking”, “master password”, “dummy variable”, “disabled system”, “grandfathered account”, “strawman argument”, and “long time no see”.(36) “The list is not exhaustive and will continue to grow”, warns the memo. Indeed, new words are canceled every day—I just learned that the word “normal” will no longer be used on Dove soap packaging because “it makes most people feel excluded”(37)
Do words have life and power of their own? Can they really cause injury? Do they carry hidden messages? The ideology claims so and encourages us all to be on the constant lookout for offenses. If you are not sure when you should be offended—check out the list of microaggressions—a quick Google search can deliver plenty of official documents from serious institutions that, with a few exceptions, sound like a sketch for the next Borat movie.(38) If nothing fits the bill, you can always find malice in the sounds of a foreign language. At the University of Southern California, a professor was recently suspended because students claimed to have been offended by the sounds of Chinese words used to illustrate the concept of filler words in a communications class.(39,40)
Why did I devote a considerable amount of my time to writing this essay? . . . The answer is simple: our future is at stake. As a community, we face an important choice. We can succumb to extreme left ideology and spend the rest of our lives ghost-chasing and witch-hunting, rewriting history, politicizing science, redefining elements of language, and turning STEM (science, technology, engineering, and mathematics) education into a farce.(41−44) Or we can uphold a key principle of democratic society—the free and uncensored exchange of ideas—and continue our core mission, the pursuit of truth, focusing attention on solving real, important problems of humankind.
Who Wants Pi?
Today is Pi Day, and not the fruity, creamy or custardy kind with the sweet filling and tender crust. Nope, it’s the math kind of pi, the 3.14159… I don’t know the rest because I never memorized past the 9!
The Greek symbol π, or Pi, represents that elusive number that goes on forever, the quotient found by dividing the circumference of a circle (the distance around it) by its diameter (the distance across its middle). Pi Day, March 14 (3/14), was founded in 1988 by Larry Shaw, a curator at San Francisco’s Exploratorium Museum of Science, Art, and Human Perception. Mr. Shaw passed away last fall, but his special day will likely live on as long as the numbers after the decimal point in pi.
By Matman from Lublin – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14598097
Pi is primarily known for its usefulness in determining a circle’s circumference and area, and the volumes of cylinders, cones, and spheres, but pi has much, much more to offer than your average decimal. In my early statistics classes, I vaguely remember using pi to figure out distributions, which in turn were used to figure out probabilities, or the likelihood of an occurrence. Knowing the likelihood of a future event is useful information to have. For example, I would like to know the odds of one of my sons calling me this month, or the chances that I’ll get around to mowing the lawn this year or finishing this column by the deadline. Pi can help with that.
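For the curious, here is where pi hides in those probability calculations: the bell-curve formula from statistics class carries it right in the denominator. A toy version in Python:

import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Height of the bell curve at x; note the pi inside the square root.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(normal_pdf(0.0))   # peak of the standard bell curve: 1/sqrt(2*pi), about 0.3989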
Pi is also featured in the Fourier transform, a formula for converting signals to frequency spectrums. It’s used in cell phone and medical imaging technology, for analyzing DNA sequences, and lots of other places. You should Google or YouTube it for a thorough explanation, or if you have a teenager, ask them. I learned about it in an electronic music class back in the 1970s when I was trying to build a Theremin, a predecessor to modern digital instruments. I never got it to work right, and that was the end of my math (and musical) career.
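If you would rather skip the video lectures, here is a tiny taste of the idea, assuming nothing more than numpy: a Fourier transform picking the frequency out of an invented signal.

import numpy as np

rate = 100                            # samples per second
t = np.arange(0, 1, 1 / rate)         # one second of time points
signal = np.sin(2 * np.pi * 5 * t)    # a pure 5 Hz tone (pi again!)

spectrum = np.abs(np.fft.rfft(signal))           # strength of each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / rate)   # the frequencies themselves
print(f"Dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")   # 5.0 Hz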
Pi is also part of the Schrödinger equation, a staple of quantum mechanics. This is the same cat-in-the-box Schrödinger from physics class. Remember that guy? His formula, a differential wave equation, recently enjoyed a galactic comeback across science journals and social media. Apparently, the Schrödinger equation, typically relegated to the lowly study of waves, particles, and “wavicles” at the atomic and subatomic level, explains the warps in all those astronomical spinning space disks we’ve been wondering about for so many years. OK, not all of us have been wondering about the warps. I wasn’t even aware of them until last week. But hey, way to go, pi! We couldn’t have done it without you. By the way, if you want an idea of what a warped disk looks like, get out your “original copy” of Led Zeppelin II, throw it on the turntable, and give it a spin.
Knowing what pi means makes people feel smart, even if it’s the only thing they remember from math class. Most of us finish up geometry and trigonometry, dabble a bit in calculus, and then — unless we start a math-centric major like science, engineering or finance — we move on and forget all about math. But we shouldn’t forget math or take it for granted. And we should never, ever think a mathematical discovery has nothing to do with us.
Math explains a lot whether we care about it or not. When new uses are discovered for numbers — like explaining the lumps and bumps in space disks — we should pay attention and thank our lucky stars there are people who remember and use math all the time. Those kinds of discoveries have a way of trickling down to innovative technologies that affect our daily lives, typically for the better.
I’m not one of those math people, but I salute them, and I never take math for granted. So, on this day, March 14, raise a glass of whatever you’re drinking — to pi! And while you’re staring down at the bottom of your glass, remember: thanks to the power of pi, determining the value of its circumference and the volume of whatever you’re drinking — if you’re inclined to do the math — is possible!
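And for anyone actually inclined to do the math, it is only a few lines; the glass dimensions here are invented:

import math

radius_cm, height_cm = 4.0, 12.0                    # an invented glass
rim_circumference = 2 * math.pi * radius_cm         # distance around the rim
volume_ml = math.pi * radius_cm ** 2 * height_cm    # cylinder volume; cm^3 = ml
print(f"Rim circumference: {rim_circumference:.1f} cm")
print(f"Glass volume: {volume_ml:.0f} ml")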
This blog first appeared as a column in the March 14, 2018, Woodmen Edition of the Gazette Community News.
@misc{5400, abstract = {We consider partially observable Markov decision processes (POMDPs) with ω-regular conditions specified as parity objectives. The class of ω-regular languages extends regular languages to infinite strings and provides a robust specification language to express all properties used in verification, and parity objectives are canonical forms to express ω-regular conditions. The qualitative analysis problem given a POMDP and a parity objective asks whether there is a strategy to ensure that the objective is satisfied with probability 1 (resp. positive probability). While the qualitative analysis problems are known to be undecidable even for very special cases of parity objectives, we establish decidability (with optimal complexity) of the qualitative analysis problems for POMDPs with all parity objectives under finite-memory strategies. We establish asymptotically optimal (exponential) memory bounds and EXPTIME-completeness of the qualitative analysis problems under finite-memory strategies for POMDPs with parity objectives.}, author = {Chatterjee, Krishnendu and Chmelik, Martin and Tracol, Mathieu}, issn = {2664-1690}, pages = {41}, publisher = {IST Austria}, title = {{What is decidable about partially observable Markov decision processes with ω-regular objectives}}, doi = {10.15479/AT:IST-2013-109-v1-1}, year = {2013}, } @techreport{5401, abstract = {This document is created as a part of the project “Repository for Research Data at IST Austria”. It summarises the actual initiatives, projects and standards related to the project. It supports the preparation of standards and specifications for the project, which should be considered and followed to ensure interoperability and visibility of the uploaded data.}, author = {Porsche, Jana}, publisher = {IST Austria}, title = {{Initiatives and projects related to RD}}, year = {2013}, } @misc{5402, abstract = {Linearizability requires that the outcome of calls by competing threads to a concurrent data structure is the same as some sequential execution where each thread has exclusive access to the data structure. In an ordered data structure, such as a queue or a stack, linearizability is ensured by requiring threads commit in the order dictated by the sequential semantics of the data structure; e.g., in a concurrent queue implementation a dequeue can only remove the oldest element. In this paper, we investigate the impact of this strict ordering, by comparing what linearizability allows to what existing implementations do. We first give an operational definition for linearizability which allows us to build the most general linearizable implementation as a transition system for any given sequential specification. We then use this operational definition to categorize linearizable implementations based on whether they are bound or free. In a bound implementation, whenever all threads observe the same logical state, the updates to the logical state and the temporal order of commits coincide. All existing queue implementations we know of are bound. We then proceed to present, to the best of our knowledge, the first ever free queue implementation.
Our experiments show that free implementations have the potential for better performance by suffering less from contention.}, author = {Henzinger, Thomas A and Sezgin, Ali}, issn = {2664-1690}, pages = {16}, publisher = {IST Austria}, title = {{How free is your linearizable concurrent data structure?}}, doi = {10.15479/AT:IST-2013-123-v1-1}, year = {2013}, } @misc{5403, abstract = {We consider concurrent games played by two-players on a finite state graph, where in every round the players simultaneously choose a move, and the current state along with the joint moves determine the successor state. We study the most fundamental objective for concurrent games, namely, mean-payoff or limit-average objective, where a reward is associated to every transition, and the goal of player 1 is to maximize the long-run average of the rewards, and the objective of player 2 is strictly the opposite (i.e., the games are zero-sum). The path constraint for player 1 could be qualitative, i.e., the mean-payoff is the maximal reward, or arbitrarily close to it; or quantitative, i.e., a given threshold between the minimal and maximal reward. We consider the computation of the almost-sure (resp. positive) winning sets, where player 1 can ensure that the path constraint is satisfied with probability 1 (resp. positive probability). Almost-sure winning with qualitative constraint exactly corresponds to the question whether there exists a strategy to ensure that the payoff is the maximal reward of the game. Our main results for qualitative path constraints are as follows: (1) we establish qualitative determinacy results that show for every state either player 1 has a strategy to ensure almost-sure (resp. positive) winning against all player-2 strategies or player 2 has a spoiling strategy to falsify almost-sure (resp. positive) winning against all player-1 strategies; (2) we present optimal strategy complexity results that precisely characterize the classes of strategies required for almost-sure and positive winning for both players; and (3) we present quadratic time algorithms to compute the almost-sure and the positive winning sets, matching the best known bound of the algorithms for much simpler problems (such as reachability objectives). For quantitative constraints we show that a polynomial time solution for the almost-sure or the positive winning set would imply a solution to a long-standing open problem (of solving the value problem of mean-payoff games) that is not known to be in polynomial time.}, author = {Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus}, issn = {2664-1690}, pages = {33}, publisher = {IST Austria}, title = {{Qualitative analysis of concurrent mean-payoff games}}, doi = {10.15479/AT:IST-2013-126-v1-1}, year = {2013}, } @misc{5404, abstract = {We study finite-state two-player (zero-sum) concurrent mean-payoff games played on a graph. We focus on the important sub-class of ergodic games where all states are visited infinitely often with probability 1. The algorithmic study of ergodic games was initiated in a seminal work of Hoffman and Karp in 1966, but all basic complexity questions have remained unresolved. 
Our main results for ergodic games are as follows: We establish (1) an optimal exponential bound on the patience of stationary strategies (where patience of a distribution is the inverse of the smallest positive probability and represents a complexity measure of a stationary strategy); (2) the approximation problem lies in FNP; (3) the approximation problem is at least as hard as the decision problem for simple stochastic games (for which NP and coNP is the long-standing best known bound). We show that the exact value can be expressed in the existential theory of the reals, and also establish square-root sum hardness for a related class of games.}, author = {Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus}, issn = {2664-1690}, pages = {29}, publisher = {IST Austria}, title = {{The complexity of ergodic games}}, doi = {10.15479/AT:IST-2013-127-v1-1}, year = {2013}, } @misc{5405, abstract = {The theory of graph games is the foundation for modeling and synthesizing reactive processes. In the synthesis of stochastic processes, we use 2-1/2-player games where some transitions of the game graph are controlled by two adversarial players, the System and the Environment, and the other transitions are determined probabilistically. We consider 2-1/2-player games where the objective of the System is the conjunction of a qualitative objective (specified as a parity condition) and a quantitative objective (specified as a mean-payoff condition). We establish that the problem of deciding whether the System can ensure that the probability to satisfy the mean-payoff parity objective is at least a given threshold is in NP ∩ coNP, matching the best known bound in the special case of 2-player games (where all transitions are deterministic) with only parity objectives, or with only mean-payoff objectives. We present an algorithm running in time O(d · n^{2d}·MeanGame) to compute the set of almost-sure winning states from which the objective can be ensured with probability 1, where n is the number of states of the game, d the number of priorities of the parity objective, and MeanGame is the complexity to compute the set of almost-sure winning states in 2-1/2-player mean-payoff games. Our results are useful in the synthesis of stochastic reactive systems with both functional requirement (given as a qualitative objective) and performance requirement (given as a quantitative objective).}, author = {Chatterjee, Krishnendu and Doyen, Laurent and Gimbert, Hugo and Oualhadj, Youssouf}, issn = {2664-1690}, pages = {22}, publisher = {IST Austria}, title = {{Perfect-information stochastic mean-payoff parity games}}, doi = {10.15479/AT:IST-2013-128-v1-1}, year = {2013}, } @misc{5406, abstract = {We consider the distributed synthesis problem for temporal logic specifications. Traditionally, the problem has been studied for LTL, and the previous results show that the problem is decidable iff there is no information fork in the architecture. We consider the problem for fragments of LTL and our main results are as follows: (1) We show that the problem is undecidable for architectures with information forks even for the fragment of LTL with temporal operators restricted to next and eventually. (2) For specifications restricted to globally along with non-nested next operators, we establish decidability (in EXPSPACE) for star architectures where the processes receive disjoint inputs, whereas we establish undecidability for architectures containing an information fork-meet structure.
(3) Finally, we consider LTL without the next operator, and establish decidability (NEXPTIME-complete) for all architectures for a fragment that consists of a set of safety assumptions, and a set of guarantees where each guarantee is a safety, reachability, or liveness condition.}, author = {Chatterjee, Krishnendu and Henzinger, Thomas A and Otop, Jan and Pavlogiannis, Andreas}, issn = {2664-1690}, pages = {11}, publisher = {IST Austria}, title = {{Distributed synthesis for LTL Fragments}}, doi = {10.15479/AT:IST-2013-130-v1-1}, year = {2013}, } @techreport{5407, abstract = {This document is created as a part of the project “Repository for Research Data at IST Austria”. It summarises the mandatory features, which need to be fulfilled to provide an institutional repository as a platform and also a service to the scientists at the institute. It also includes optional features, which would be of strong benefit for the scientists and would increase the usage of the repository, and hence the visibility of research at IST Austria.}, author = {Porsche, Jana}, publisher = {IST Austria}, title = {{Technical requirements and features}}, year = {2013}, } @misc{5408, abstract = {We consider two-player partial-observation stochastic games where player 1 has partial observation and player 2 has perfect observation. The winning conditions we study are omega-regular conditions specified as parity objectives. The qualitative analysis problem given a partial-observation stochastic game and a parity objective asks whether there is a strategy to ensure that the objective is satisfied with probability 1 (resp. positive probability). While the qualitative analysis problems are known to be undecidable even for very special cases of parity objectives, they were shown to be decidable in 2EXPTIME under finite-memory strategies. We improve the complexity and show that the qualitative analysis problems for partial-observation stochastic parity games under finite-memory strategies are EXPTIME-complete; and also establish optimal (exponential) memory bounds for finite-memory strategies required for qualitative analysis. }, author = {Chatterjee, Krishnendu and Doyen, Laurent and Nain, Sumit and Vardi, Moshe}, issn = {2664-1690}, pages = {17}, publisher = {IST Austria}, title = {{The complexity of partial-observation stochastic parity games with finite-memory strategies}}, doi = {10.15479/AT:IST-2013-141-v1-1}, year = {2013}, } @misc{5409, abstract = {The edit distance between two (untimed) traces is the minimum cost of a sequence of edit operations (insertion, deletion, or substitution) needed to transform one trace to the other. Edit distances have been extensively studied in the untimed setting, and form the basis for approximate matching of sequences in different domains such as coding theory, parsing, and speech recognition. In this paper, we lift the study of edit distances from untimed languages to the timed setting. We define an edit distance between timed words which incorporates both the edit distance between the untimed words and the absolute difference in timestamps. Our edit distance between two timed words is computable in polynomial time. Further, we show that the edit distance between a timed word and a timed language generated by a timed automaton, defined as the edit distance between the word and the closest word in the language, is PSPACE-complete.
While computing the edit distance between two timed automata is undecidable, we show that the approximate version, where we decide if the edit distance between two timed automata is either less than a given parameter or more than delta away from the parameter, for delta>0, can be solved in exponential space and is EXPSPACE-hard. Our definitions and techniques can be generalized to the setting of hybrid systems, and we show analogous decidability results for rectangular automata.}, author = {Chatterjee, Krishnendu and Ibsen-Jensen, Rasmus and Majumdar, Rupak}, issn = {2664-1690}, pages = {12}, publisher = {IST Austria}, title = {{Edit distance for timed automata}}, doi = {10.15479/AT:IST-2013-144-v1-1}, year = {2013}, } @misc{5410, abstract = {Board games, like Tic-Tac-Toe and CONNECT-4, play an important role not only in development of mathematical and logical skills, but also in emotional and social development. In this paper, we address the problem of generating targeted starting positions for such games. This can facilitate new approaches for bringing novice players to mastery, and also leads to discovery of interesting game variants. Our approach generates starting states of varying hardness levels for player 1 in a two-player board game, given rules of the board game, the desired number of steps required for player 1 to win, and the expertise levels of the two players. Our approach leverages symbolic methods and iterative simulation to efficiently search the extremely large state space. We present experimental results that include discovery of states of varying hardness levels for several simple grid-based board games. Also, the presence of such states for standard game variants like Tic-Tac-Toe on board size 4x4 opens up new games to be played that have not been played for ages since the default start state is heavily biased. }, author = {Ahmed, Umair and Chatterjee, Krishnendu and Gulwani, Sumit}, issn = {2664-1690}, pages = {13}, publisher = {IST Austria}, title = {{Automatic generation of alternative starting positions for traditional board games}}, doi = {10.15479/AT:IST-2013-146-v1-1}, year = {2013}, } @inbook{5747, author = {Dragoi, Cezara and Gupta, Ashutosh and Henzinger, Thomas A}, booktitle = {Computer Aided Verification}, isbn = {9783642397981}, issn = {0302-9743}, location = {Saint Petersburg, Russia}, pages = {174--190}, publisher = {Springer Berlin Heidelberg}, title = {{Automatic Linearizability Proofs of Concurrent Objects with Cooperating Updates}}, doi = {10.1007/978-3-642-39799-8_11}, volume = {8044}, year = {2013}, } @article{10895, abstract = {Due to their sessile lifestyles, plants need to deal with the limitations and stresses imposed by the changing environment. Plants cope with these by a remarkable developmental flexibility, which is embedded in their strategy to survive. Plants can adjust their size, shape and number of organs, bend according to gravity and light, and regenerate tissues that were damaged, utilizing a coordinating, intercellular signal, the plant hormone, auxin. Another versatile signal is the cation, Ca2+, which is a crucial second messenger for many rapid cellular processes during responses to a wide range of endogenous and environmental signals, such as hormones, light, drought stress and others. Auxin is a good candidate for one of these Ca2+-activating signals. However, the role of auxin-induced Ca2+ signaling is poorly understood. 
Here, we will provide an overview of possible developmental and physiological roles, as well as mechanisms underlying the interconnection of Ca2+ and auxin signaling.}, author = {Vanneste, Steffen and Friml, Jiří}, issn = {2223-7747}, journal = {Plants}, keywords = {Plant Science, Ecology, Ecology, Evolution, Behavior and Systematics}, number = {4}, pages = {650--675}, publisher = {MDPI}, title = {{Calcium: The missing link in auxin action}}, doi = {10.3390/plants2040650}, volume = {2}, year = {2013}, } @misc{6440, abstract = {In order to guarantee that each method of a data structure updates the logical state exactly once, almost all non-blocking implementations employ Compare-And-Swap (CAS) based synchronization. For FIFO queue implementations this translates into concurrent enqueue or dequeue methods competing among themselves to update the same variable, the tail or the head, respectively, leading to high contention and poor scalability. Recent non-blocking queue implementations try to alleviate high contention by increasing the number of contention points, all the while using CAS-based synchronization. Furthermore, obtaining a wait-free implementation with competition is achieved by additional synchronization which leads to further degradation of performance. In this paper we formalize the notion of competitiveness of a synchronizing statement, which can be used as a measure for the scalability of concurrent implementations. We present a new queue implementation, the Speculative Pairing (SP) queue, which, as we show, decreases competitiveness by using Fetch-And-Increment (FAI) instead of CAS. We prove that the SP queue is linearizable and lock-free. We also show that replacing CAS with FAI leads to wait-freedom for dequeue methods without an adverse effect on performance. In fact, our experiments suggest that the SP queue can perform and scale better than the state-of-the-art queue implementations.}, author = {Henzinger, Thomas A and Payer, Hannes and Sezgin, Ali}, issn = {2664-1690}, pages = {23}, publisher = {IST Austria}, title = {{Replacing competition with cooperation to achieve scalable lock-free FIFO queues}}, doi = {10.15479/AT:IST-2013-124-v1-1}, year = {2013}, } @inproceedings{1374, abstract = {We study two-player zero-sum games over infinite-state graphs equipped with ωB and finitary conditions. Our first contribution is about the strategy complexity, i.e., the memory required for winning strategies: we prove that over general infinite-state graphs, memoryless strategies are sufficient for finitary Büchi, and finite memory suffices for finitary parity games. We then study pushdown games with boundedness conditions, with two contributions. First, we prove a collapse result for pushdown games with ωB-conditions, implying the decidability of solving these games. Second, we consider pushdown games with finitary parity along with stack boundedness conditions, and show that solving these games is EXPTIME-complete.}, author = {Chatterjee, Krishnendu and Fijalkow, Nathanaël}, booktitle = {22nd EACSL Annual Conference on Computer Science Logic}, location = {Torino, Italy}, pages = {181 -- 196}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Infinite-state games with finitary conditions}}, doi = {10.4230/LIPIcs.CSL.2013.181}, volume = {23}, year = {2013}, } @inproceedings{1376, abstract = {We consider the distributed synthesis problem for temporal logic specifications.
Traditionally, the problem has been studied for LTL, and the previous results show that the problem is decidable iff there is no information fork in the architecture. We consider the problem for fragments of LTL and our main results are as follows: (1) We show that the problem is undecidable for architectures with information forks even for the fragment of LTL with temporal operators restricted to next and eventually. (2) For specifications restricted to globally along with non-nested next operators, we establish decidability (in EXPSPACE) for star architectures where the processes receive disjoint inputs, whereas we establish undecidability for architectures containing an information fork-meet structure. (3) Finally, we consider LTL without the next operator, and establish decidability (NEXPTIME-complete) for all architectures for a fragment that consists of a set of safety assumptions, and a set of guarantees where each guarantee is a safety, reachability, or liveness condition.}, author = {Chatterjee, Krishnendu and Henzinger, Thomas A and Otop, Jan and Pavlogiannis, Andreas}, booktitle = {13th International Conference on Formal Methods in Computer-Aided Design}, location = {Portland, OR, United States}, pages = {18 -- 25}, publisher = {IEEE}, title = {{Distributed synthesis for LTL fragments}}, doi = {10.1109/FMCAD.2013.6679386}, year = {2013}, } @inproceedings{1385, abstract = {It is often difficult to correctly implement a Boolean controller for a complex system, especially when concurrency is involved. Yet, it may be easy to formally specify a controller. For instance, for a pipelined processor it suffices to state that the visible behavior of the pipelined system should be identical to a non-pipelined reference system (Burch-Dill paradigm). We present a novel procedure to efficiently synthesize multiple Boolean control signals from a specification given as a quantified first-order formula (with a specific quantifier structure). Our approach uses uninterpreted functions to abstract details of the design. We construct an unsatisfiable SMT formula from the given specification. Then, from just one proof of unsatisfiability, we use a variant of Craig interpolation to compute multiple coordinated interpolants that implement the Boolean control signals. Our method avoids iterative learning and back-substitution of the control functions. We applied our approach to synthesize a controller for a simple two-stage pipelined processor, and present first experimental results.}, author = {Hofferek, Georg and Gupta, Ashutosh and Könighofer, Bettina and Jiang, Jie and Bloem, Roderick}, booktitle = {2013 Formal Methods in Computer-Aided Design}, location = {Portland, OR, United States}, pages = {77 -- 84}, publisher = {IEEE}, title = {{Synthesizing multiple boolean functions using interpolation on a single proof}}, doi = {10.1109/FMCAD.2013.6679394}, year = {2013}, } @inproceedings{1387, abstract = {Choices made by nondeterministic word automata depend on both the past (the prefix of the word read so far) and the future (the suffix yet to be read). In several applications, most notably synthesis, the future is diverse or unknown, leading to algorithms that are based on deterministic automata. Hoping to retain some of the advantages of nondeterministic automata, researchers have studied restricted classes of nondeterministic automata. 
Three such classes are nondeterministic automata that are good for trees (GFT; i.e., ones that can be expanded to tree automata accepting the derived tree languages, so that their choices should satisfy diverse futures), good for games (GFG; i.e., ones whose choices depend only on the past), and determinizable by pruning (DBP; i.e., ones that embody equivalent deterministic automata). The theoretical properties and relative merits of the different classes are still open, and it is even unclear whether they really differ from deterministic automata. In particular, while DBP ⊆ GFG ⊆ GFT, it is not known whether every GFT automaton is GFG and whether every GFG automaton is DBP. Also open is the possible succinctness of GFG and GFT automata compared to deterministic automata. We study these problems for ω-regular automata with all common acceptance conditions. We show that GFT = GFG ⊃ DBP, and describe a determinization construction for GFG automata.}, author = {Boker, Udi and Kuperberg, Denis and Kupferman, Orna and Skrzypczak, Michał}, location = {Riga, Latvia}, number = {PART 2}, pages = {89 -- 100}, publisher = {Springer}, title = {{Nondeterminism in the presence of a diverse or unknown future}}, doi = {10.1007/978-3-642-39212-2_11}, volume = {7966}, year = {2013}, } @phdthesis{1405, abstract = {Motivated by the analysis of highly dynamic message-passing systems, i.e., systems with unbounded thread creation, mobility, etc., we present a framework for the analysis of depth-bounded systems. Depth-bounded systems are one of the most expressive known fragments of the π-calculus for which interesting verification problems are still decidable. Even though they are infinite-state systems, depth-bounded systems are well-structured and can thus be analyzed algorithmically. We give an interpretation of depth-bounded systems as graph-rewriting systems. This gives more flexibility and makes it easier to apply depth-bounded systems to other types of systems, such as shared-memory concurrency. First, we develop an adequate domain of limits for depth-bounded systems, a prerequisite for the effective representation of downward-closed sets. Downward-closed sets are needed by forward saturation-based algorithms to represent potentially infinite sets of states. Then, we present an abstract interpretation framework to compute the covering set of well-structured transition systems. Because, in general, the covering set is not computable, our abstraction over-approximates the actual covering set. Our abstraction captures the essence of acceleration-based algorithms while giving up enough precision to ensure convergence. We have implemented the analysis in the PICASSO tool and show that it is accurate in practice. Finally, we build further analyses, such as termination, using the covering set as a starting point.}, author = {Zufferey, Damien}, pages = {134}, publisher = {IST Austria}, title = {{Analysis of dynamic message passing programs}}, doi = {10.15479/at:ista:1405}, year = {2013}, } @phdthesis{1406, abstract = {Epithelial spreading is a critical part of various developmental and wound repair processes. Here we use zebrafish epiboly as a model system to study the cellular and molecular mechanisms underlying the spreading of epithelial sheets. During zebrafish epiboly the enveloping cell layer (EVL), a simple squamous epithelium, spreads over the embryo to eventually cover the entire yolk cell by the end of gastrulation.
The EVL leading edge is anchored through tight junctions to the yolk syncytial layer (YSL), where, directly adjacent to the EVL margin, a contractile actomyosin ring is formed that is thought to drive EVL epiboly. The prevalent view in the field was that the contractile ring exerts a pulling force on the EVL margin, which pulls the EVL towards the vegetal pole. However, how this force is generated and how it affects EVL morphology remain elusive. Moreover, the cellular mechanisms that mediate the increase in EVL surface area while maintaining tissue integrity and function are still unclear. Here we show that the YSL actomyosin ring pulls on the EVL margin by two distinct force-generating mechanisms. One mechanism is based on contraction of the ring around its circumference, as previously proposed. The second mechanism is based on actomyosin retrograde flows, generating force through resistance against the substrate. The latter can function at any epiboly stage, even in situations where the contraction-based mechanism is unproductive. Additionally, we demonstrate that during epiboly the EVL is subjected to anisotropic tension, which guides the orientation of EVL cell division along the main axis (animal-vegetal) of tension. The influence of tension on cell-division orientation involves cell elongation and requires myosin-2 activity for proper spindle alignment. Strikingly, we reveal that tension-oriented cell divisions release anisotropic tension within the EVL and that in the absence of such divisions, EVL cells undergo ectopic fusions. We conclude that forces applied to the EVL by the action of the YSL actomyosin ring generate a tension anisotropy in the EVL that orients cell divisions, which in turn limit the increase in tissue tension, thereby facilitating tissue spreading.}, author = {Campinho, Pedro}, pages = {123}, publisher = {IST Austria}, title = {{Mechanics of zebrafish epiboly: Tension-oriented cell divisions limit anisotropic tissue tension in epithelial spreading}}, year = {2013}, } @article{10396, abstract = {Stimfit is a free cross-platform software package for viewing and analyzing electrophysiological data. It supports most standard file types for cellular neurophysiology and other biomedical formats. Its analysis algorithms have been used and validated in several experimental laboratories. Its embedded Python scripting interface makes Stimfit highly extensible and customizable.}, author = {Schlögl, Alois and Jonas, Peter M and Schmidt-Hieber, C. and Guzman, S. J.}, issn = {1862-278X}, journal = {Biomedical Engineering / Biomedizinische Technik}, keywords = {biomedical engineering, data analysis, free software}, location = {Graz, Austria}, number = {SI-1-Track-G}, publisher = {De Gruyter}, title = {{Stimfit: A fast visualization and analysis environment for cellular neurophysiology}}, doi = {10.1515/bmt-2013-4181}, volume = {58}, year = {2013}, } @inproceedings{2000, abstract = {In this work we present a flexible tool for tumor progression, which simulates the evolutionary dynamics of cancer. The tool implements a multi-type branching process where the key parameters are the fitness landscape, the mutation rate, and the average time of cell division. The fitness of a cancer cell depends on the mutations it has accumulated.
The input to our tool can be any fitness landscape, mutation rate, and cell division time, and the tool produces the growth dynamics and all relevant statistics.}, author = {Reiter, Johannes and Božić, Ivana and Chatterjee, Krishnendu and Nowak, Martin}, booktitle = {Proceedings of 25th Int. Conf. on Computer Aided Verification}, location = {St. Petersburg, Russia}, pages = {101 -- 106}, publisher = {Springer}, title = {{TTP: Tool for tumor progression}}, doi = {10.1007/978-3-642-39799-8_6}, volume = {8044}, year = {2013}, } @article{2009, abstract = {Traditional statistical methods for confidentiality protection of statistical databases do not scale well to deal with GWAS databases, especially in terms of guarantees regarding protection from linkage to external information. The more recent concept of differential privacy, introduced by the cryptographic community, is an approach which provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, we propose new methods to release aggregate GWAS data without compromising an individual’s privacy. We present methods for releasing differentially private minor allele frequencies, chi-square statistics and p-values. We compare these approaches on simulated data and on a GWAS study of canine hair length involving 685 dogs. We also propose a privacy-preserving method for finding genome-wide associations based on a differentially-private approach to penalized logistic regression.}, author = {Uhler, Caroline and Slavkovic, Aleksandra and Fienberg, Stephen}, journal = {Journal of Privacy and Confidentiality}, number = {1}, pages = {137 -- 166}, publisher = {Carnegie Mellon University}, title = {{Privacy-preserving data sharing for genome-wide association studies}}, doi = {10.29012/jpc.v5i1.629}, volume = {5}, year = {2013}, } @article{2010, abstract = {Many algorithms for inferring causality rely heavily on the faithfulness assumption. The main justification for imposing this assumption is that the set of unfaithful distributions has Lebesgue measure zero, since it can be seen as a collection of hypersurfaces in a hypercube. However, due to sampling error the faithfulness condition alone is not sufficient for statistical estimation, and strong-faithfulness has been proposed and assumed to achieve uniform or high-dimensional consistency. In contrast to the plain faithfulness assumption, the set of distributions that is not strong-faithful has nonzero Lebesgue measure and, in fact, can be surprisingly large, as we show in this paper. We study the strong-faithfulness condition from a geometric and combinatorial point of view and give upper and lower bounds on the Lebesgue measure of strong-faithful distributions for various classes of directed acyclic graphs.
Our results imply fundamental limitations for the PC-algorithm and potentially also for other algorithms based on partial correlation testing in the Gaussian case.}, author = {Uhler, Caroline and Raskutti, Garvesh and Bühlmann, Peter and Yu, Bin}, journal = {The Annals of Statistics}, number = {2}, pages = {436 -- 463}, publisher = {Institute of Mathematical Statistics}, title = {{Geometry of the faithfulness assumption in causal inference}}, doi = {10.1214/12-AOS1080}, volume = {41}, year = {2013}, } @inproceedings{2181, abstract = {There is a trade-off between performance and correctness in implementing concurrent data structures. Better performance may be achieved at the expense of relaxing correctness, by redefining the semantics of data structures. We address such a redefinition of data structure semantics and present a systematic and formal framework for obtaining new data structures by quantitatively relaxing existing ones. We view a data structure as a sequential specification S containing all "legal" sequences over an alphabet of method calls. Relaxing the data structure corresponds to defining a distance from any sequence over the alphabet to the sequential specification: the k-relaxed sequential specification contains all sequences over the alphabet within distance k from the original specification. In contrast to other existing work, our relaxations are semantic (distance in terms of data structure states). As an instantiation of our framework, we present two simple yet generic relaxation schemes, called out-of-order and stuttering relaxation, along with several ways of computing distances. We show that the out-of-order relaxation, when further instantiated to stacks, queues, and priority queues, amounts to tolerating bounded out-of-order behavior, which cannot be captured by a purely syntactic relaxation (distance in terms of sequence manipulation, e.g. edit distance). We give concurrent implementations of relaxed data structures and demonstrate that bounded relaxations provide the means for trading correctness for performance in a controlled way. The relaxations are monotonic which further highlights the trade-off: increasing k increases the number of permitted sequences, which as we demonstrate can lead to better performance. Finally, since a relaxed stack or queue also implements a pool, we actually have new concurrent pool implementations that outperform the state-of-the-art ones.}, author = {Henzinger, Thomas A and Kirsch, Christoph and Payer, Hannes and Sezgin, Ali and Sokolova, Ana}, booktitle = {Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming language}, isbn = {978-1-4503-1832-7}, location = {Rome, Italy}, pages = {317 -- 328}, publisher = {ACM}, title = {{Quantitative relaxation of concurrent data structures}}, doi = {10.1145/2429069.2429109}, year = {2013}, } @inproceedings{2182, abstract = {We propose a general framework for abstraction with respect to quantitative properties, such as worst-case execution time, or power consumption. Our framework provides a systematic way for counter-example guided abstraction refinement for quantitative properties. The salient aspect of the framework is that it allows anytime verification, that is, verification algorithms that can be stopped at any time (for example, due to exhaustion of memory), and report approximations that improve monotonically when the algorithms are given more time. 
We instantiate the framework with a number of quantitative abstractions and refinement schemes, which differ in terms of how much quantitative information they keep from the original system. We introduce both state-based and trace-based quantitative abstractions, and we describe conditions that define classes of quantitative properties for which the abstractions provide over-approximations. We give algorithms for evaluating the quantitative properties on the abstract systems. We present algorithms for counter-example-based refinements for quantitative properties for both state-based and segment-based abstractions. We perform a case study on the worst-case execution time of executables to evaluate the anytime verification aspect and the quantitative abstractions we proposed.}, author = {Cerny, Pavol and Henzinger, Thomas A and Radhakrishna, Arjun}, booktitle = {Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming language}, location = {Rome, Italy}, pages = {115 -- 128}, publisher = {ACM}, title = {{Quantitative abstraction refinement}}, doi = {10.1145/2429069.2429085}, year = {2013}, } @inproceedings{2209, abstract = {A straight skeleton is a well-known geometric structure, and several algorithms exist to construct the straight skeleton for a given polygon or planar straight-line graph. In this paper, we ask the reverse question: Given the straight skeleton (in the form of a planar straight-line graph, with some rays to infinity), can we reconstruct a planar straight-line graph for which this was the straight skeleton? We show how to reduce this problem to the problem of finding a line that intersects a set of convex polygons. We can find these convex polygons and all such lines in $O(n \log n)$ time in the Real RAM computer model, where $n$ denotes the number of edges of the input graph. We also explain how our approach can be used for recognizing Voronoi diagrams of points, thereby completing a partial solution provided by Ash and Bolker in 1985.}, author = {Biedl, Therese and Held, Martin and Huber, Stefan}, location = {St. Petersburg, Russia}, pages = {37 -- 46}, publisher = {IEEE}, title = {{Recognizing straight skeletons and Voronoi diagrams and reconstructing their input}}, doi = {10.1109/ISVD.2013.11}, year = {2013}, } @inproceedings{2210, abstract = {A straight skeleton is a well-known geometric structure, and several algorithms exist to construct the straight skeleton for a given polygon. In this paper, we ask the reverse question: Given the straight skeleton (in the form of a tree with a drawing in the plane, but with the exact position of the leaves unspecified), can we reconstruct the polygon? We show that in most cases there exists at most one polygon; in the remaining case there is an infinite number of polygons determined by one angle that can range in an interval. We can find this (set of) polygon(s) in linear time in the Real RAM computer model.}, author = {Biedl, Therese and Held, Martin and Huber, Stefan}, booktitle = {29th European Workshop on Computational Geometry}, location = {Braunschweig, Germany}, pages = {95 -- 98}, publisher = {TU Braunschweig}, title = {{Reconstructing polygons from embedded straight skeletons}}, year = {2013}, } @inproceedings{2237, abstract = {We describe new extensions of the Vampire theorem prover for computing tree interpolants. These extensions generalize Craig interpolation in Vampire, and can also be used to derive sequence interpolants.
We evaluated our implementation on a large number of examples over the theory of linear integer arithmetic and integer-indexed arrays, with and without quantifiers. Our experiments show that, compared to other methods, some examples could only be solved by our implementation.}, author = {Blanc, Régis and Gupta, Ashutosh and Kovács, Laura and Kragl, Bernhard}, location = {Stellenbosch, South Africa}, pages = {173 -- 181}, publisher = {Springer}, title = {{Tree interpolation in Vampire}}, doi = {10.1007/978-3-642-45221-5_13}, volume = {8312}, year = {2013}, } @inproceedings{2238, abstract = {We study the problem of achieving a given value in Markov decision processes (MDPs) with several independent discounted reward objectives. We consider a generalised version of discounted reward objectives, in which the amount of discounting depends on the states visited and on the objective. This definition extends the usual definition of discounted reward, and allows us to capture systems in which the values of different commodities diminish at different and variable rates. We establish results for two prominent subclasses of the problem, namely state-discount models where the discount factors are only dependent on the state of the MDP (and independent of the objective), and reward-discount models where they are only dependent on the objective (but not on the state of the MDP). For the state-discount models we use a straightforward reduction to expected total reward and show that the problem of whether a value is achievable can be solved in polynomial time. For the reward-discount model we show that memory and randomisation of the strategies are required, but nevertheless that the problem is decidable and it is sufficient to consider strategies which, after a certain number of steps, behave in a memoryless way. For the general case, we show that when restricted to graphs (i.e. MDPs with no randomisation), pure strategies and discount factors of the form 1/n where n is an integer, the problem is in PSPACE and finite memory suffices for achieving a given value. We also show that when the discount factors are not of the form 1/n, the memory required by a strategy can be infinite.}, author = {Chatterjee, Krishnendu and Forejt, Vojtěch and Wojtczak, Dominik}, location = {Stellenbosch, South Africa}, pages = {228 -- 242}, publisher = {Springer}, title = {{Multi-objective discounted reward verification in graphs and MDPs}}, doi = {10.1007/978-3-642-45221-5_17}, volume = {8312}, year = {2013}, } @inproceedings{2243, abstract = {We show that modal logic over universally first-order definable classes of transitive frames is decidable. More precisely, let K be an arbitrary class of transitive Kripke frames definable by a universal first-order sentence. We show that the global and finite global satisfiability problems of modal logic over K are decidable in NP, regardless of the choice of K. We also show that the local satisfiability and the finite local satisfiability problems of modal logic over K are decidable in NEXPTIME.}, author = {Michaliszyn, Jakub and Otop, Jan}, location = {Torino, Italy}, pages = {563 -- 577}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Elementary modal logics over transitive structures}}, doi = {10.4230/LIPIcs.CSL.2013.563}, volume = {23}, year = {2013}, } @inproceedings{2244, abstract = {We consider two systems (α_1,...,α_m) and (β_1,...,β_n) of curves drawn on a compact two-dimensional surface ℳ with boundary.
Each α_i and each β_j is either an arc meeting the boundary of ℳ at its two endpoints, or a closed curve. The α_i are pairwise disjoint except for possibly sharing endpoints, and similarly for the β_j. We want to "untangle" the β_j from the α_i by a self-homeomorphism of ℳ; more precisely, we seek a homeomorphism φ: ℳ → ℳ fixing the boundary of ℳ pointwise such that the total number of crossings of the α_i with the φ(β_j) is as small as possible. This problem is motivated by an application in the algorithmic theory of embeddings and 3-manifolds. We prove that if ℳ is planar, i.e., a sphere with h ≥ 0 boundary components ("holes"), then O(mn) crossings can be achieved (independently of h), which is asymptotically tight, as an easy lower bound shows. In general, for an arbitrary (orientable or nonorientable) surface ℳ with h holes and of (orientable or nonorientable) genus g ≥ 0, we obtain an O((m + n)^4) upper bound, again independent of h and g.}, author = {Matoušek, Jiří and Sedgwick, Eric and Tancer, Martin and Wagner, Uli}, location = {Bordeaux, France}, pages = {472 -- 483}, publisher = {Springer}, title = {{Untangling two systems of noncrossing curves}}, doi = {10.1007/978-3-319-03841-4_41}, volume = {8242}, year = {2013}, } @article{2247, abstract = {Cooperative behavior, where one individual incurs a cost to help another, is a widespread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one- and two-state automata. We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy which we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently Forgiver attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance, which it rules, is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium, dominated by Forgiver. Our results show that although forgiving might incur a short-term loss it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.}, author = {Zagorsky, Benjamin and Reiter, Johannes and Chatterjee, Krishnendu and Nowak, Martin}, journal = {PLoS One}, number = {12}, publisher = {Public Library of Science}, title = {{Forgiver triumphs in alternating prisoner's dilemma}}, doi = {10.1371/journal.pone.0080814}, volume = {8}, year = {2013}, } @article{2256, abstract = {Linked (Open) Data - bibliographic data on the Semantic Web. Report of the Working Group on Linked Data to the plenary assembly of the Austrian Library Network (translation of the title). Linked Data stands for a certain approach to publishing data on the Web. The underlying idea is to harmonise heterogeneous data sources of different origin in order to improve their accessibility and interoperability, effectively making them queryable as a big distributed database.
This report summarises relevant developments in Europe as well as the Linked Data Working Group’s strategic and technical considerations regarding the publishing of the Austrian Library Network’s (OBV’s) bibliographic datasets. It concludes with the mutual agreement that the implementation of Linked Data principles within the OBV can only be taken into consideration if accompanied by a discussion about the provision of the datasets under a free license.}, author = {Danowski, Patrick and Goldfarb, Doron and Schaffner, Verena and Seidler, Wolfram}, journal = {VÖB Mitteilungen}, number = {3/4}, pages = {559 -- 587}, publisher = {Verein Österreichischer Bibliothekarinnen und Bibliothekare}, title = {{Linked (Open) Data - Bibliographische Daten im Semantic Web}}, volume = {66}, year = {2013}, } @inproceedings{2258, abstract = {In a digital signature scheme with message recovery, rather than transmitting the message m and its signature σ, a single enhanced signature τ is transmitted. The verifier is able to recover m from τ and at the same time verify its authenticity. The two most important parameters of such a scheme are its security and overhead |τ| − |m|. A simple argument shows that for any scheme with “n bits security” |τ| − |m| ≥ n, i.e., the overhead is lower bounded by the security parameter n. Currently, the best known constructions in the random oracle model are far from this lower bound, requiring an overhead of n + log q_h, where q_h is the number of queries to the random oracle. In this paper we give a construction which basically matches the n-bit lower bound. We propose a simple digital signature scheme with n + o(log q_h) bits of overhead, where q_h denotes the number of random oracle queries. Our construction works in two steps. First, we propose a signature scheme with message recovery having optimal overhead in a new ideal model, the random invertible function model. Second, we show that a four-round Feistel network with random oracles as round functions is tightly “public-indifferentiable” from a random invertible function. At the core of our indifferentiability proof is an almost tight upper bound for the expected number of edges of the densest “small” subgraph of a random Cayley graph, which may be of independent interest.}, author = {Kiltz, Eike and Pietrzak, Krzysztof Z and Szegedy, Mario}, location = {Santa Barbara, CA, United States}, pages = {571 -- 588}, publisher = {Springer}, title = {{Digital signatures with minimal overhead from indifferentiable random invertible functions}}, doi = {10.1007/978-3-642-40041-4_31}, volume = {8042}, year = {2013}, } @inproceedings{2259, abstract = {The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at EUROCRYPT ’12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.
As a tool in the reduction, we show that there is a “lossy mode” for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. from ICS ’10, which implicitly showed the existence of a “lossy mode” for LWE. By refining this technique, we also improve on the parameters of that work to only requiring a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio. }, author = {Alwen, Joel F and Krenn, Stephan and Pietrzak, Krzysztof Z and Wichs, Daniel}, location = {Santa Barbara, CA, United States}, number = {1}, pages = {57 -- 74}, publisher = {Springer}, title = {{Learning with rounding, revisited: New reduction properties and applications}}, doi = {10.1007/978-3-642-40041-4_4}, volume = {8042}, year = {2013}, } @inproceedings{2260, abstract = {Direct Anonymous Attestation (DAA) is one of the most complex cryptographic protocols deployed in practice. It allows an embedded secure processor known as a Trusted Platform Module (TPM) to attest to the configuration of its host computer without violating the owner’s privacy. DAA has been standardized by the Trusted Computing Group and ISO/IEC. The security of the DAA standard and all existing schemes is analyzed in the random-oracle model. We provide the first constructions of DAA in the standard model, that is, without relying on random oracles. Our constructions use new building blocks, including the first efficient signatures of knowledge in the standard model, which have many applications beyond DAA. }, author = {Bernhard, David and Fuchsbauer, Georg and Ghadafi, Essam}, location = {Banff, AB, Canada}, pages = {518 -- 533}, publisher = {Springer}, title = {{Efficient signatures of knowledge and DAA in the standard model}}, doi = {10.1007/978-3-642-38980-1_33}, volume = {7954}, year = {2013}, } @article{2264, abstract = {Faithful progression through the cell cycle is crucial to the maintenance and developmental potential of stem cells. Here, we demonstrate that neural stem cells (NSCs) and intermediate neural progenitor cells (NPCs) employ a zinc-finger transcription factor specificity protein 2 (Sp2) as a cell cycle regulator in two temporally and spatially distinct progenitor domains. Differential conditional deletion of Sp2 in early embryonic cerebral cortical progenitors, and perinatal olfactory bulb progenitors disrupted transitions through G1, G2 and M phases, whereas DNA synthesis appeared intact. Cell-autonomous function of Sp2 was identified by deletion of Sp2 using mosaic analysis with double markers, which clearly established that conditional Sp2-null NSCs and NPCs are M phase arrested in vivo. Importantly, conditional deletion of Sp2 led to a decline in the generation of NPCs and neurons in the developing and postnatal brains. 
Our findings implicate Sp2-dependent mechanisms as novel regulators of cell cycle progression, the absence of which disrupts neurogenesis in the embryonic and postnatal brain.}, author = {Liang, Huixuan and Xiao, Guanxi and Yin, Haifeng and Hippenmeyer, Simon and Horowitz, Jonathan and Ghashghaei, Troy}, journal = {Development}, number = {3}, pages = {552 -- 561}, publisher = {Company of Biologists}, title = {{Neural development is dependent on the function of specificity protein 2 in cell cycle progression}}, doi = {10.1242/dev.085621}, volume = {140}, year = {2013}, } @inproceedings{2270, abstract = {Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation (Deng and Papadimitriou 1994), which maintains knowledge about synergies between agents in the form of an edge-weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant factor approximations for planar, minor-free, and bounded-degree graphs.}, author = {Bachrach, Yoram and Kohli, Pushmeet and Kolmogorov, Vladimir and Zadimoghaddam, Morteza}, location = {Bellevue, WA, United States}, pages = {81--87}, publisher = {AAAI Press}, title = {{Optimal Coalition Structures in Cooperative Graph Games}}, year = {2013}, } @inproceedings{2272, abstract = {We consider Conditional Random Fields (CRFs) with pattern-based potentials defined on a chain. In this model the energy of a string (labeling) x_1...x_n is the sum of terms over intervals [i,j] where each term is non-zero only if the substring x_i...x_j equals a prespecified pattern α. Such CRFs can be naturally applied to many sequence tagging problems. We present efficient algorithms for the three standard inference tasks in a CRF, namely computing (i) the partition function, (ii) marginals, and (iii) the MAP. Their complexities are respectively O(nL), O(nL·ℓ_max) and O(nL·min{|D|, log(ℓ_max+1)}), where L is the combined length of input patterns, ℓ_max is the maximum length of a pattern, and D is the input alphabet. This improves on the previous algorithms of (Ye et al., 2009) whose complexities are respectively O(nL|D|), O(n|Γ|L^2·ℓ_max^2) and O(nL|D|), where |Γ| is the number of input patterns. In addition, we give an efficient algorithm for sampling. Finally, we consider the case of non-positive weights. (Komodakis & Paragios, 2009) gave an O(nL) algorithm for computing the MAP. We present a modification that has the same worst-case complexity but can beat it in the best case.
}, author = {Takhanov, Rustem and Kolmogorov, Vladimir}, booktitle = {ICML'13: Proceedings of the 30th International Conference on Machine Learning}, location = {Atlanta, GA, United States}, number = {3}, pages = {145 -- 153}, publisher = {International Machine Learning Society}, title = {{Inference algorithms for pattern-based CRFs on sequence data}}, volume = {28}, year = {2013}, } @techreport{2273, abstract = {We propose a new family of message passing techniques for MAP estimation in graphical models which we call Sequential Reweighted Message Passing (SRMP). Special cases include well-known techniques such as Min-Sum Diffusion (MSD) and a faster Sequential Tree-Reweighted Message Passing (TRW-S). Importantly, our derivation is simpler than the original derivation of TRW-S, and does not involve a decomposition into trees. This allows easy generalizations. We present such a generalization for the case of higher-order graphical models, and test it on several real-world problems with promising results.}, author = {Kolmogorov, Vladimir}, publisher = {IST Austria}, title = {{Reweighted message passing revisited}}, year = {2013}, } @techreport{2274, abstract = {Proofs of work (PoW) have been suggested by Dwork and Naor (Crypto'92) as protection for a shared resource. The basic idea is to ask the service requestor to dedicate some non-trivial amount of computational work to every request. The original applications included prevention of spam and protection against denial-of-service attacks. More recently, PoWs have been used to prevent double spending in the Bitcoin digital currency system. In this work, we put forward an alternative concept for PoWs -- so-called proofs of space (PoS), where a service requestor must dedicate a significant amount of disk space as opposed to computation. We construct secure PoS schemes in the random oracle model, using graphs with high "pebbling complexity" and Merkle hash-trees.}, author = {Dziembowski, Stefan and Faust, Sebastian and Kolmogorov, Vladimir and Pietrzak, Krzysztof Z}, publisher = {IST Austria}, title = {{Proofs of Space}}, year = {2013}, } @inproceedings{2276, abstract = {The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [19, 20]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of “labeled” pixels can be significant in some applications, e.g., 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows speeding up the subsequent alpha expansion for the unlabeled part, or can be used as is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for Tree Metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions.}, author = {Gridchyn, Igor and Kolmogorov, Vladimir}, location = {Sydney, Australia}, pages = {2320 -- 2327}, publisher = {IEEE}, title = {{Potts model, parametric maxflow and k-submodular functions}}, doi = {10.1109/ICCV.2013.288}, year = {2013}, } @article{2278, abstract = {It is firmly established that interactions between neurons and glia are fundamental across species for the correct establishment of a functional brain.
Here, we found that the glia of the Drosophila larval brain play an essential non-autonomous role during the development of the optic lobe. The optic lobe develops from neuroepithelial cells that proliferate by dividing symmetrically until they switch to asymmetric/differentiative divisions that generate neuroblasts. The proneural gene lethal of scute (l'sc) is transiently activated by the epidermal growth factor receptor (EGFR)-Ras signal transduction pathway at the leading edge of a proneural wave that sweeps from medial to lateral neuroepithelium, promoting this switch. This process is tightly regulated by the tissue-autonomous function within the neuroepithelium of multiple signaling pathways, including EGFR-Ras and Notch. This study shows that the Notch ligand Serrate (Ser) is expressed in the glia, and it forms a complex in vivo with Notch and Canoe, which colocalize at the adherens junctions of neuroepithelial cells. This complex is crucial for interactions between glia and neuroepithelial cells during optic lobe development. Ser is tissue-autonomously required in the glia, where it activates Notch to regulate its proliferation, and non-autonomously in the neuroepithelium, where Ser induces Notch signaling to avoid the premature activation of the EGFR-Ras pathway and hence of L'sc. Interestingly, different Notch activity reporters showed very different expression patterns in the glia and in the neuroepithelium, suggesting the existence of tissue-specific factors that promote the expression of particular Notch target genes and/or a reporter response dependent on different thresholds of Notch signaling.}, author = {Pérez Gómez, Raquel and Slovakova, Jana and Rives Quinto, Noemí and Krejčí, Alena and Carmena, Ana}, journal = {Journal of Cell Science}, number = {21}, pages = {4873 -- 4884}, publisher = {Company of Biologists}, title = {{A serrate-notch-canoe complex mediates essential interactions between glia and neuroepithelial cells during Drosophila optic lobe development}}, doi = {10.1242/jcs.125617}, volume = {126}, year = {2013}, } @inproceedings{2279, abstract = {We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For a single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP ∩ coNP, and is at least as hard as solving mean-payoff games.
For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window.}, author = {Chatterjee, Krishnendu and Doyen, Laurent and Randour, Mickael and Raskin, Jean}, location = {Hanoi, Vietnam}, pages = {118 -- 132}, publisher = {Springer}, title = {{Looking at mean-payoff and total-payoff through windows}}, doi = {10.1007/978-3-319-02444-8_10}, volume = {8172}, year = {2013}, } @article{2280, abstract = {The problem of packing ellipsoids of different sizes and shapes into an ellipsoidal container so as to minimize a measure of overlap between ellipsoids is considered. A bilevel optimization formulation is given, together with an algorithm for the general case and a simpler algorithm for the special case in which all ellipsoids are in fact spheres. Convergence results are proved and computational experience is described and illustrated. The motivating application, chromosome organization in the human cell nucleus, is discussed briefly, and some illustrative results are presented.}, author = {Uhler, Caroline and Wright, Stephen}, journal = {SIAM Review}, number = {4}, pages = {671 -- 706}, publisher = {Society for Industrial and Applied Mathematics}, title = {{Packing ellipsoids with overlap}}, doi = {10.1137/120872309}, volume = {55}, year = {2013}, } @article{2282, abstract = {Epithelial spreading is a common and fundamental aspect of various developmental and disease-related processes such as epithelial closure and wound healing. A key challenge for epithelial tissues undergoing spreading is to increase their surface area without disrupting epithelial integrity. Here we show that orienting cell divisions by tension constitutes an efficient mechanism by which the enveloping cell layer (EVL) releases anisotropic tension while undergoing spreading during zebrafish epiboly. The control of EVL cell-division orientation by tension involves cell elongation and requires myosin II activity to align the mitotic spindle with the main tension axis. We also found that in the absence of tension-oriented cell divisions and in the presence of increased tissue tension, EVL cells undergo ectopic fusions, suggesting that the reduction of tension anisotropy by oriented cell divisions is required to prevent EVL cells from fusing. We conclude that cell-division orientation by tension constitutes a key mechanism for limiting tension anisotropy and thus promoting tissue spreading during EVL epiboly.}, author = {Campinho, Pedro and Behrndt, Martin and Ranft, Jonas and Risler, Thomas and Minc, Nicolas and Heisenberg, Carl-Philipp J}, journal = {Nature Cell Biology}, pages = {1405 -- 1414}, publisher = {Nature Publishing Group}, title = {{Tension-oriented cell divisions limit anisotropic tissue tension in epithelial spreading during zebrafish epiboly}}, doi = {10.1038/ncb2869}, volume = {15}, year = {2013}, } @article{2283, abstract = {Pathogens exert a strong selection pressure on organisms to evolve effective immune defences. In addition to individual immunity, social organisms can act cooperatively to produce collective defences. In many ant species, queens have the option to found a colony alone or in groups with other, often unrelated, conspecifics. These associations are transient, usually lasting only as long as each queen benefits from the presence of others. In fact, once the first workers emerge, queens fight to the death for dominance.
One potential advantage of co-founding may be that queens benefit from collective disease defences, such as mutual grooming, that act against common soil pathogens. We test this hypothesis by exposing single and co-founding queens to a fungal parasite, in order to assess whether queens in co-founding associations have improved survival. Surprisingly, co-foundresses exposed to the entomopathogenic fungus Metarhizium did not engage in cooperative disease defences, and consequently, we find no direct benefit of multiple queens on survival. However, an indirect benefit was observed, with parasite-exposed queens producing more brood when they co-founded, than when they were alone. We suggest this is due to a trade-off between reproduction and immunity. Additionally, we report an extraordinary ability of the queens to tolerate an infection for long periods after parasite exposure. Our study suggests that there are no social immunity benefits for co-founding ant queens, but that in parasite-rich environments, the presence of additional queens may nevertheless improve the chances of colony founding success.}, author = {Pull, Christopher and Hughes, William and Brown, Markus}, journal = {Naturwissenschaften}, number = {12}, pages = {1125 -- 1136}, publisher = {Springer}, title = {{Tolerating an infection: an indirect benefit of co-founding queen associations in the ant Lasius niger }}, doi = {10.1007/s00114-013-1115-5}, volume = {100}, year = {2013}, } @article{2286, abstract = {The spatiotemporal control of cell divisions is a key factor in epithelial morphogenesis and patterning. Mao et al (2013) now describe how differential rates of proliferation within the Drosophila wing disc epithelium give rise to anisotropic tissue tension in peripheral/proximal regions of the disc. Such global tissue tension anisotropy in turn determines the orientation of cell divisions by controlling epithelial cell elongation.}, author = {Campinho, Pedro and Heisenberg, Carl-Philipp J}, journal = {EMBO Journal}, number = {21}, pages = {2783 -- 2784}, publisher = {Wiley-Blackwell}, title = {{The force and effect of cell proliferation}}, doi = {10.1038/emboj.2013.225}, volume = {32}, year = {2013}, } @article{2287, abstract = {Negative frequency-dependent selection should result in equal sex ratios in large populations of dioecious flowering plants, but deviations from equality are commonly reported. A variety of ecological and genetic factors can explain biased sex ratios, although the mechanisms involved are not well understood. Most dioecious species are long-lived and/or clonal complicating efforts to identify stages during the life cycle when biases develop. We investigated the demographic correlates of sex-ratio variation in two chromosome races of Rumex hastatulus, an annual, wind-pollinated colonizer of open habitats from the southern USA. We examined sex ratios in 46 populations and evaluated the hypothesis that the proximity of males in the local mating environment, through its influence on gametophytic selection, is the primary cause of female-biased sex ratios. Female-biased sex ratios characterized most populations of R. hastatulus (mean sex ratio = 0.62), with significant female bias in 89% of populations. Large, high-density populations had the highest proportion of females, whereas smaller, low-density populations had sex ratios closer to equality. Progeny sex ratios were more female biased when males were in closer proximity to females, a result consistent with the gametophytic selection hypothesis. 
Our results suggest that interactions between demographic and genetic factors are probably the main cause of female-biased sex ratios in R. hastatulus. The annual life cycle of this species may limit the scope for selection against males and may account for the weaker degree of bias in comparison with perennial Rumex species.}, author = {Pickup, Melinda and Barrett, Spencer}, journal = {Ecology and Evolution}, number = {3}, pages = {629 -- 639}, publisher = {Wiley-Blackwell}, title = {{The influence of demography and local mating environment on sex ratios in a wind-pollinated dioecious plant}}, doi = {10.1002/ece3.465}, volume = {3}, year = {2013}, } @proceedings{2288, abstract = {This book constitutes the proceedings of the 11th International Conference on Computational Methods in Systems Biology, CMSB 2013, held in Klosterneuburg, Austria, in September 2013. The 15 regular papers included in this volume were carefully reviewed and selected from 27 submissions. They deal with computational models for all levels, from molecular and cellular, to organs and entire organisms.}, editor = {Gupta, Ashutosh and Henzinger, Thomas A}, isbn = {978-3-642-40707-9}, location = {Klosterneuburg, Austria}, publisher = {Springer}, title = {{Computational Methods in Systems Biology}}, doi = {10.1007/978-3-642-40708-6}, volume = {8130}, year = {2013}, } @article{2289, abstract = {Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.}, author = {Henzinger, Thomas A}, journal = {Computer Science Research and Development}, number = {4}, pages = {331 -- 344}, publisher = {Springer}, title = {{Quantitative reactive modeling and verification}}, doi = {10.1007/s00450-013-0251-7}, volume = {28}, year = {2013}, } @article{2290, abstract = {The plant hormone indole-acetic acid (auxin) is essential for many aspects of plant development. Auxin-mediated growth regulation typically involves the establishment of an auxin concentration gradient mediated by polarly localized auxin transporters. The localization of auxin carriers and their amount at the plasma membrane are controlled by membrane trafficking processes such as secretion, endocytosis, and recycling. 
In contrast to endocytosis or recycling, how the secretory pathway mediates the localization of auxin carriers is not well understood. In this study we have used the differential cell elongation process during apical hook development to elucidate the mechanisms underlying the post-Golgi trafficking of auxin carriers in Arabidopsis. We show that differential cell elongation during apical hook development is defective in the Arabidopsis mutant echidna (ech). ECH protein is required for the trans-Golgi network (TGN)-mediated trafficking of the auxin influx carrier AUX1 to the plasma membrane. In contrast, the ech mutation only marginally perturbs the trafficking of the highly related auxin influx carrier LIKE-AUX1-3 or the auxin efflux carrier PIN-FORMED-3, both also involved in hook development. Electron tomography reveals that the trafficking defects in the ech mutant are associated with the perturbation of secretory vesicle genesis from the TGN. Our results identify differential mechanisms for the post-Golgi trafficking of de novo-synthesized auxin carriers to the plasma membrane from the TGN and reveal how trafficking of auxin influx carriers mediates the control of differential cell elongation in apical hook development.}, author = {Boutté, Yohann and Jonsson, Kristoffer and Mcfarlane, Heather and Johnson, Errin and Gendre, Delphine and Swarup, Ranjan and Friml, Jirí and Samuels, Lacey and Robert, Stéphanie and Bhalerao, Rishikesh}, journal = {PNAS}, number = {40}, pages = {16259 -- 16264}, publisher = {National Academy of Sciences}, title = {{ECHIDNA-mediated post-Golgi trafficking of auxin carriers for differential cell elongation}}, doi = {10.1073/pnas.1309057110}, volume = {110}, year = {2013}, } @inproceedings{2291, abstract = {Cryptographic access control promises to offer easily distributed trust and broader applicability, while reducing reliance on low-level online monitors. Traditional implementations of cryptographic access control rely on simple cryptographic primitives whereas recent endeavors employ primitives with richer functionality and security guarantees. Worryingly, few of the existing cryptographic access-control schemes come with precise guarantees, the gap between the policy specification and the implementation being analyzed only informally, if at all. In this paper we begin addressing this shortcoming. Unlike prior work that targeted ad-hoc policy specification, we look at the well-established Role-Based Access Control (RBAC) model, as used in a typical file system. In short, we provide a precise syntax for a computational version of RBAC, offer rigorous definitions for cryptographic policy enforcement of a large class of RBAC security policies, and demonstrate that an implementation based on attribute-based encryption meets our security notions. We view our main contribution as being at the conceptual level. Although we work with RBAC for concreteness, our general methodology could guide future research for uses of cryptography in other access-control models.}, author = {Ferrara, Anna and Fuchsbauer, Georg and Warinschi, Bogdan}, location = {New Orleans, LA, United States}, pages = {115 -- 129}, publisher = {IEEE}, title = {{Cryptographically enforced RBAC}}, doi = {10.1109/CSF.2013.15}, year = {2013}, } @proceedings{2292, abstract = {This book constitutes the thoroughly refereed conference proceedings of the 38th International Symposium on Mathematical Foundations of Computer Science, MFCS 2013, held in Klosterneuburg, Austria, in August 2013. 
The 67 revised full papers presented together with six invited talks were carefully selected from 191 submissions. Topics covered include algorithmic game theory, algorithmic learning theory, algorithms and data structures, automata, formal languages, bioinformatics, complexity, computational geometry, computer-assisted reasoning, concurrency theory, databases and knowledge-based systems, foundations of computing, logic in computer science, models of computation, semantics and verification of programs, and theoretical issues in artificial intelligence.}, editor = {Chatterjee, Krishnendu and Sgall, Jiri}, isbn = {978-3-642-40312-5}, location = {Klosterneuburg, Austria}, pages = {VI -- 854}, publisher = {Springer}, title = {{Mathematical Foundations of Computer Science 2013}}, doi = {10.1007/978-3-642-40313-2}, volume = {8087}, year = {2013}, } @inproceedings{2293, abstract = {Many computer vision problems have an asymmetric distribution of information between training and test time. In this work, we study the case where we are given additional information about the training data, which however will not be available at test time. This situation is called learning using privileged information (LUPI). We introduce two maximum-margin techniques that are able to make use of this additional source of information, and we show that the framework is applicable to several scenarios that have been studied in computer vision before. Experiments with attributes, bounding boxes, image tags and rationales as additional information in object classification show promising results.}, author = {Sharmanska, Viktoriia and Quadrianto, Novi and Lampert, Christoph}, location = {Sydney, Australia}, pages = {825 -- 832}, publisher = {IEEE}, title = {{Learning to rank using privileged information}}, doi = {10.1109/ICCV.2013.107}, year = {2013}, } @inproceedings{2294, abstract = {In this work we propose a system for automatic classification of Drosophila embryos into developmental stages. While the system is designed to solve an actual problem in biological research, we believe that the principle underlying it is interesting not only for biologists, but also for researchers in computer vision. The main idea is to combine two orthogonal sources of information: one is a classifier trained on strongly invariant features, which makes it applicable to images of very different conditions, but also leads to rather noisy predictions. The other is a label propagation step based on a more powerful similarity measure that however is only consistent within specific subsets of the data at a time. In our biological setup, the information sources are the shape and the staining patterns of embryo images. We show experimentally that while neither of the methods can be used by itself to achieve satisfactory results, their combination achieves prediction quality comparable to human performance.}, author = {Kazmar, Tomas and Kvon, Evgeny and Stark, Alexander and Lampert, Christoph}, location = {Sydney, Australia}, publisher = {IEEE}, title = {{Drosophila Embryo Stage Annotation using Label Propagation}}, doi = {10.1109/ICCV.2013.139}, year = {2013}, } @inproceedings{2295, abstract = {We consider partially observable Markov decision processes (POMDPs) with ω-regular conditions specified as parity objectives. The qualitative analysis problem given a POMDP and a parity objective asks whether there is a strategy to ensure that the objective is satisfied with probability 1 (resp. positive probability). 
While the qualitative analysis problems are known to be undecidable even for very special cases of parity objectives, we establish decidability (with optimal EXPTIME-complete complexity) of the qualitative analysis problems for POMDPs with all parity objectives under finite-memory strategies. We also establish asymptotically optimal (exponential) memory bounds.}, author = {Chatterjee, Krishnendu and Chmelik, Martin and Tracol, Mathieu}, location = {Torino, Italy}, pages = {165 -- 180}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{What is decidable about partially observable Markov decision processes with omega-regular objectives}}, doi = {10.4230/LIPIcs.CSL.2013.165}, volume = {23}, year = {2013}, } @article{2297, abstract = {We present an overview of mathematical results on the low temperature properties of dilute quantum gases, which have been obtained in the past few years. The presentation includes a discussion of Bose-Einstein condensation, the excitation spectrum for trapped gases and its relation to superfluidity, as well as the appearance of quantized vortices in rotating systems. All these properties are intensely being studied in current experiments on cold atomic gases. We will give a description of the mathematics involved in understanding these phenomena, starting from the underlying many-body Schrödinger equation.}, author = {Seiringer, Robert}, journal = {Japanese Journal of Mathematics}, number = {2}, pages = {185 -- 232}, publisher = {Springer}, title = {{Hot topics in cold gases: A mathematical physics perspective}}, doi = {10.1007/s11537-013-1264-5}, volume = {8}, year = {2013}, } @inproceedings{2298, abstract = {We present a shape analysis for programs that manipulate overlaid data structures which share sets of objects. The abstract domain contains Separation Logic formulas that (1) combine a per-object separating conjunction with a per-field separating conjunction and (2) constrain a set of variables interpreted as sets of objects. The definition of the abstract domain operators is based on a notion of homomorphism between formulas, viewed as graphs, used recently to define optimal decision procedures for fragments of the Separation Logic. Based on a Frame Rule that supports the two versions of the separating conjunction, the analysis is able to reason in a modular manner about non-overlaid data structures and then, compose information only at a few program points, e.g., procedure returns. We have implemented this analysis in a prototype tool and applied it on several interesting case studies that manipulate overlaid and nested linked lists. }, author = {Dragoi, Cezara and Enea, Constantin and Sighireanu, Mihaela}, location = {Seattle, WA, United States}, pages = {150 -- 171}, publisher = {Springer}, title = {{Local shape analysis for overlaid data structures}}, doi = {10.1007/978-3-642-38856-9_10}, volume = {7935}, year = {2013}, } @article{2299, abstract = {The standard hardware design flow involves: (a) design of an integrated circuit using a hardware description language, (b) extensive functional and formal verification, and (c) logical synthesis. However, the above-mentioned processes consume significant effort and time. An alternative approach is to use a formal specification language as a high-level hardware description language and synthesize hardware from formal specifications. Our work is a case study of the synthesis of the widely and industrially used AMBA AHB protocol from formal specifications. Bloem et al. 
presented the first formal specifications for the AMBA AHB Arbiter and synthesized the AHB Arbiter circuit. However, in the first formal specification some important assumptions were missing. Our contributions are as follows: (a) We present detailed formal specifications for the AHB Arbiter incorporating the missing details, and obtain significant improvements in the synthesis results (both with respect to the number of gates in the synthesized circuit and with respect to the time taken to synthesize the circuit), and (b) we present formal specifications to generate compact circuits for the remaining two main components of AMBA AHB, namely, AHB Master and AHB Slave. Thus, with a systematic description, we are able to automatically and completely synthesize an important and widely used industrial protocol.}, author = {Godhal, Yashdeep and Chatterjee, Krishnendu and Henzinger, Thomas A}, journal = {International Journal on Software Tools for Technology Transfer}, number = {5-6}, pages = {585 -- 601}, publisher = {Springer}, title = {{Synthesis of AMBA AHB from formal specification: A case study}}, doi = {10.1007/s10009-011-0207-9}, volume = {15}, year = {2013}, } @article{2300, abstract = {We consider Ising models in two and three dimensions with nearest neighbor ferromagnetic interactions and long-range, power law decaying, antiferromagnetic interactions. If the strength of the ferromagnetic coupling J is larger than a critical value Jc, then the ground state is homogeneous and ferromagnetic. As the critical value is approached from smaller values of J, it is believed that the ground state consists of a periodic array of stripes (d=2) or slabs (d=3), all of the same size and alternating magnetization. Here we prove rigorously that the ground state energy per site converges to that of the optimal periodic striped or slabbed state, in the limit that J tends to the ferromagnetic transition point. While this theorem does not prove rigorously that the ground state is precisely striped or slabbed, it does prove that in any suitably large box the ground state is striped or slabbed with high probability.}, author = {Giuliani, Alessandro and Lieb, Elliott and Seiringer, Robert}, journal = {Physical Review B}, number = {6}, publisher = {American Physical Society}, title = {{Realization of stripes and slabs in two and three dimensions}}, doi = {10.1103/PhysRevB.88.064401}, volume = {88}, year = {2013}, } @inproceedings{2301, abstract = {We describe the design and implementation of P, a domain-specific language to write asynchronous event-driven code. P allows the programmer to specify the system as a collection of interacting state machines, which communicate with each other using events. P unifies modeling and programming into one activity for the programmer. Not only can a P program be compiled into executable code, but it can also be tested using model checking techniques. P allows the programmer to specify the environment, used to "close" the system during testing, as nondeterministic ghost machines. Ghost machines are erased during compilation to executable code; a type system ensures that the erasure is semantics preserving. The P language is designed so that a P program can be checked for responsiveness: the ability to handle every event in a timely manner. By default, a machine needs to handle every event that arrives in every state. But handling every event in every state is impractical. 
The language provides a notion of deferred events where the programmer can annotate when she wants to delay processing an event. The default safety checker looks for the presence of unhandled events. The language also provides default liveness checks that an event cannot be deferred forever. P was used to implement and verify the core of the USB device driver stack that ships with Microsoft Windows 8. The resulting driver is more reliable and performs better than its prior incarnation (which did not use P); we have more confidence in the robustness of its design due to the language abstractions and verification provided by P.}, author = {Desai, Ankush and Gupta, Vivek and Jackson, Ethan and Qadeer, Shaz and Rajamani, Sriram and Zufferey, Damien}, booktitle = {Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation}, location = {Seattle, WA, United States}, pages = {321 -- 331}, publisher = {ACM}, title = {{P: Safe asynchronous event-driven programming}}, doi = {10.1145/2491956.2462184}, year = {2013}, } @article{2303, abstract = {MADM (Mosaic Analysis with Double Markers) technology offers a genetic approach in mice to visualize and concomitantly manipulate genetically defined cells at clonal level and single cell resolution. MADM employs Cre recombinase/loxP-dependent interchromosomal mitotic recombination to reconstitute two split marker genes—green GFP and red tdTomato—and can label sparse clones of homozygous mutant cells in one color and wild-type cells in the other color in an otherwise unlabeled background. At present, major MADM applications include lineage tracing, single cell labeling, conditional knockouts in small populations of cells and induction of uniparental chromosome disomy to assess effects of genomic imprinting. MADM can be applied universally in the mouse with the sole limitation being the specificity of the promoter controlling Cre recombinase expression. Here I review recent developments and extensions of the MADM technique and give an overview of the major discoveries and progress enabled by the implementation of the novel genetic MADM tools.}, author = {Hippenmeyer, Simon}, journal = {Frontiers in Biology}, number = {6}, pages = {557 -- 568}, publisher = {Springer}, title = {{Dissection of gene function at clonal level using mosaic analysis with double markers}}, doi = {10.1007/s11515-013-1279-6}, volume = {8}, year = {2013}, } @article{2304, abstract = {This extended abstract is concerned with the irregularities of distribution of one-dimensional permuted van der Corput sequences that are generated from linear permutations. We show how to obtain upper bounds for the discrepancy and diaphony of these sequences, by relating them to Kronecker sequences and applying earlier results of Faure and Niederreiter.}, author = {Pausinger, Florian}, journal = {Electronic Notes in Discrete Mathematics}, pages = {43 -- 50}, publisher = {Elsevier}, title = {{Van der Corput sequences and linear permutations}}, doi = {10.1016/j.endm.2013.07.008}, volume = {43}, year = {2013}, } @inproceedings{2305, abstract = {We study the complexity of central controller synthesis problems for finite-state Markov decision processes, where the objective is to optimize both the expected mean-payoff performance of the system and its stability. 
We argue that the basic theoretical notion of expressing the stability in terms of the variance of the mean-payoff (called global variance in our paper) is not always sufficient, since it ignores possible instabilities on respective runs. For this reason we propose alternative definitions of stability, which we call local and hybrid variance, and which express how rewards on each run deviate from the run's own mean-payoff and from the expected mean-payoff, respectively. We show that a strategy ensuring both the expected mean-payoff and the variance below given bounds requires randomization and memory, under all the above semantics of variance. We then look at the problem of determining whether there is such a strategy. For the global variance, we show that the problem is in PSPACE, and that the answer can be approximated in pseudo-polynomial time. For the hybrid variance, the analogous decision problem is in NP, and a polynomial-time approximating algorithm also exists. For local variance, we show that the decision problem is in NP. Since the overall performance can be traded for stability (and vice versa), we also present algorithms for approximating the associated Pareto curve in all the three cases. Finally, we study a special case of the decision problems, where we require a given expected mean-payoff together with zero variance. Here we show that the problems can all be solved in polynomial time.}, author = {Brázdil, Tomáš and Chatterjee, Krishnendu and Forejt, Vojtěch and Kučera, Antonín}, booktitle = {28th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS)}, location = {New Orleans, LA, United States}, pages = {331 -- 340}, publisher = {IEEE}, title = {{Trading performance for stability in Markov decision processes}}, doi = {10.1109/LICS.2013.39}, year = {2013}, } @book{2306, abstract = {This book is both an introduction to the topics of Linked Data, Open Data, and Open Linked Data and a treatment of their concrete relevance to libraries. To this end, concrete application projects are described. The volume addresses both library practitioners and people in library management who are not yet familiar with the topic.}, author = {Danowski, Patrick and Pohl, Adrian}, isbn = {978-3-11-027634-3}, issn = {2191-3587}, publisher = {De Gruyter}, title = {{(Open) Linked Data in Bibliotheken}}, doi = {10.1515/9783110278736}, volume = {50}, year = {2013}, } @inproceedings{2327, abstract = {We define the model-measuring problem: given a model M and specification φ, what is the maximal distance ρ such that all models M′ within distance ρ from M satisfy (or violate) φ. The model-measuring problem presupposes a distance function on models. We concentrate on automatic distance functions, which are defined by weighted automata. The model-measuring problem subsumes several generalizations of the classical model-checking problem, in particular, quantitative model-checking problems that measure the degree of satisfaction of a specification, and robustness problems that measure how much a model can be perturbed without violating the specification. We show that for automatic distance functions, and ω-regular linear-time and branching-time specifications, the model-measuring problem can be solved. We use automata-theoretic model-checking methods for model measuring, replacing the emptiness question for standard word and tree automata by the optimal-weight question for the weighted versions of these automata. 
We consider weighted automata that accumulate weights by maximizing, summing, discounting, and limit averaging. We give several examples of using the model-measuring problem to compute various notions of robustness and quantitative satisfaction for temporal specifications.}, author = {Henzinger, Thomas A and Otop, Jan}, location = {Buenos Aires, Argentina}, pages = {273 -- 287}, publisher = {Springer}, title = {{From model checking to model measuring}}, doi = {10.1007/978-3-642-40184-8_20}, volume = {8052}, year = {2013}, } @inproceedings{2328, abstract = {Linearizability of concurrent data structures is usually proved by monolithic simulation arguments relying on identifying the so-called linearization points. Regrettably, such proofs, whether manual or automatic, are often complicated and scale poorly to advanced non-blocking concurrency patterns, such as helping and optimistic updates. In response, we propose a more modular way of checking linearizability of concurrent queue algorithms that does not involve identifying linearization points. We reduce the task of proving linearizability with respect to the queue specification to establishing four basic properties, each of which can be proved independently by simpler arguments. As a demonstration of our approach, we verify the Herlihy and Wing queue, an algorithm that is challenging to verify by a simulation proof.}, author = {Henzinger, Thomas A and Sezgin, Ali and Vafeiadis, Viktor}, location = {Buenos Aires, Argentina}, pages = {242 -- 256}, publisher = {Springer}, title = {{Aspect-oriented linearizability proofs}}, doi = {10.1007/978-3-642-40184-8_18}, volume = {8052}, year = {2013}, } @inproceedings{2329, abstract = {Two-player games on graphs are central in many problems in formal verification and program analysis such as synthesis and verification of open systems. In this work, we consider both finite-state game graphs, and recursive game graphs (or pushdown game graphs) that model the control flow of sequential programs with recursion. The objectives we study are multidimensional mean-payoff objectives, where the goal of player 1 is to ensure that the mean-payoff is non-negative in all dimensions. In pushdown games two types of strategies are relevant: (1) global strategies, that depend on the entire global history; and (2) modular strategies, that have only local memory and thus do not depend on the context of invocation. Our main contributions are as follows: (1) We show that finite-state multidimensional mean-payoff games can be solved in polynomial time if the number of dimensions and the maximal absolute value of the weights are fixed; whereas if the number of dimensions is arbitrary, then the problem is known to be coNP-complete. (2) We show that pushdown graphs with multidimensional mean-payoff objectives can be solved in polynomial time. For both (1) and (2) our algorithms are based on the hyperplane separation technique. (3) For pushdown games under global strategies, the problems for both one-dimensional and multidimensional mean-payoff objectives are known to be undecidable, and we show that under modular strategies the multidimensional problem is also undecidable; under modular strategies the one-dimensional problem is NP-complete. 
We show that if the number of modules, the number of exits, and the maximal absolute value of the weights are fixed, then pushdown games under modular strategies with one-dimensional mean-payoff objectives can be solved in polynomial time, and if either the number of exits or the number of modules is unbounded, then the problem is NP-hard. (4) Finally we show that a fixed parameter tractable algorithm for finite-state multidimensional mean-payoff games or pushdown games under modular strategies with one-dimensional mean-payoff objectives would imply the fixed parameter tractability of parity games.}, author = {Chatterjee, Krishnendu and Velner, Yaron}, location = {Buenos Aires, Argentina}, pages = {500 -- 515}, publisher = {Springer}, title = {{Hyperplane separation technique for multidimensional mean-payoff games}}, doi = {10.1007/978-3-642-40184-8_35}, volume = {8052}, year = {2013}, } @article{2410, abstract = {Here, we describe a novel virulent bacteriophage that infects Bacillus weihenstephanensis, isolated from soil in Austria. It is the first phage to be discovered that infects this species. Here, we present the complete genome sequence of this podovirus.}, author = {Fernandes Redondo, Rodrigo A and Kupczok, Anne and Stift, Gertraud and Bollback, Jonathan P}, journal = {Genome Announcements}, number = {3}, publisher = {American Society for Microbiology}, title = {{Complete genome sequence of the novel phage MG-B1 infecting Bacillus weihenstephanensis}}, doi = {10.1128/genomeA.00216-13}, volume = {1}, year = {2013}, } @article{2412, abstract = {Background: The CRISPR/Cas system is known to act as an adaptive and heritable immune system in Eubacteria and Archaea. Immunity is encoded in an array of spacer sequences. Each spacer can provide specific immunity to invasive elements that carry the same or a similar sequence. Even in closely related strains, spacer content is very dynamic and evolves quickly. Standard models of nucleotide evolution cannot be applied to quantify its rate of change since processes other than single nucleotide changes determine its evolution. Methods: We present probabilistic models that are specific for spacer content evolution. They account for the different processes of insertion and deletion. Insertions can be constrained to occur on one end only or are allowed to occur throughout the array. One deletion event can affect one spacer or a whole fragment of adjacent spacers. Parameters of the underlying models are estimated for a pair of arrays by maximum likelihood using explicit ancestor enumeration. Results: Simulations show that parameters are well estimated on average under the models presented here. There is a bias in the rate estimation when including fragment deletions. The models also estimate times between pairs of strains. But with increasing time, spacer overlap goes to zero, and thus there is an upper bound on the distance that can be estimated. Spacer content similarities are displayed in a distance-based phylogeny using the estimated times. We use the presented models to analyze different Yersinia pestis data sets and find that the results among them are largely congruent. The models also capture the variation in diversity of spacers among the data sets. 
A comparison of spacer-based phylogenies and Cas gene phylogenies shows that they resolve very different time scales for this data set. Conclusions: The simulations and data analyses show that the presented models are useful for quantifying spacer content evolution and for displaying spacer content similarities of closely related strains in a phylogeny. This allows for comparisons of different CRISPR arrays or for comparisons between CRISPR arrays and nucleotide substitution rates.}, author = {Kupczok, Anne and Bollback, Jonathan P}, journal = {BMC Evolutionary Biology}, number = {1}, pages = {54 -- 54}, publisher = {BioMed Central}, title = {{Probabilistic models for CRISPR spacer content evolution}}, doi = {10.1186/1471-2148-13-54}, volume = {13}, year = {2013}, } @inbook{2413, abstract = {Progress in understanding the global brain dynamics has remained slow to date in large part because of the highly multiscale nature of brain activity. Indeed, normal brain dynamics is characterized by complex interactions between multiple levels: from the microscopic scale of single neurons to the mesoscopic level of local groups of neurons, and finally to the macroscopic level of the whole brain. Among the most difficult tasks are those of identifying which scales are significant for a given particular function and describing how the scales affect each other. It is important to realize that the scales of time and space are linked together, or even intertwined, and that causal inference is far more ambiguous between than within levels. We approach this problem from the perspective of our recent work on simultaneous recording from micro- and macroelectrodes in the human brain. We propose a physiological description of these multilevel interactions, based on phase–amplitude coupling of neuronal oscillations that operate at multiple frequencies and on different spatial scales. Specifically, the amplitude of the oscillations on a particular spatial scale is modulated by phasic variations in neuronal excitability induced by lower frequency oscillations that emerge on a larger spatial scale. Following this general principle, it is possible to scale up or scale down the multiscale brain dynamics. It is expected that large-scale network oscillations in the low-frequency range, mediating downward effects, may play an important role in attention and consciousness.}, author = {Valderrama, Mario and Botella Soler, Vicente and Le Van Quyen, Michel}, booktitle = {Multiscale Analysis and Nonlinear Dynamics: From Genes to the Brain}, editor = {Pesenson, Misha Meyer}, isbn = {9783527411986}, publisher = {Wiley-VCH}, title = {{Neuronal oscillations scale up and scale down the brain dynamics}}, doi = {10.1002/9783527671632.ch08}, year = {2013}, } @inproceedings{2444, abstract = {We consider two core algorithmic problems for probabilistic verification: the maximal end-component decomposition and the almost-sure reachability set computation for Markov decision processes (MDPs). For MDPs with treewidth k, we present two improved static algorithms for both the problems that run in time O(n·k^2.38·2^k) and O(m·log n·k), respectively, where n is the number of states and m is the number of edges, significantly improving the previously known O(n·k·√(n·k)) bound for low treewidth. 
We also present decremental algorithms for both problems for MDPs with constant treewidth that run in amortized logarithmic time, which is a huge improvement over the previously known algorithms that require amortized linear time.}, author = {Chatterjee, Krishnendu and Łącki, Jakub}, location = {St. Petersburg, Russia}, pages = {543 -- 558}, publisher = {Springer}, title = {{Faster algorithms for Markov decision processes with low treewidth}}, doi = {10.1007/978-3-642-39799-8_36}, volume = {8044}, year = {2013}, } @inproceedings{2445, abstract = {We develop program synthesis techniques that can help programmers fix concurrency-related bugs. We make two new contributions to synthesis for concurrency, the first improving the efficiency of the synthesized code, and the second improving the efficiency of the synthesis procedure itself. The first contribution is to have the synthesis procedure explore a variety of (sequential) semantics-preserving program transformations. Classically, only one such transformation has been considered, namely, the insertion of synchronization primitives (such as locks). Based on common manual bug-fixing techniques used by Linux device-driver developers, we explore additional, more efficient transformations, such as the reordering of independent instructions. The second contribution is to speed up the counterexample-guided removal of concurrency bugs within the synthesis procedure by considering partial-order traces (instead of linear traces) as counterexamples. A partial-order error trace represents a set of linear (interleaved) traces of a concurrent program all of which lead to the same error. By eliminating a partial-order error trace, we eliminate in a single iteration of the synthesis procedure all linearizations of the partial-order trace. We evaluated our techniques on several simplified examples of real concurrency bugs that occurred in Linux device drivers.}, author = {Cerny, Pavol and Henzinger, Thomas A and Radhakrishna, Arjun and Ryzhyk, Leonid and Tarrach, Thorsten}, location = {St. Petersburg, Russia}, pages = {951 -- 967}, publisher = {Springer}, title = {{Efficient synthesis for concurrency by semantics-preserving transformations}}, doi = {10.1007/978-3-642-39799-8_68}, volume = {8044}, year = {2013}, } @inproceedings{2446, abstract = {The model-checking problem for probabilistic systems crucially relies on the translation of LTL to deterministic Rabin automata (DRW). Our recent Safraless translation [KE12, GKE12] for the LTL(F,G) fragment produces smaller automata as compared to the traditional approach. In this work, instead of DRW we consider deterministic automata with acceptance condition given as disjunction of generalized Rabin pairs (DGRW). The Safraless translation of LTL(F,G) formulas to DGRW results in smaller automata as compared to DRW. We present algorithms for probabilistic model-checking as well as game solving for DGRW conditions. Our new algorithms lead to improvement both in terms of theoretical bounds as well as practical evaluation. We compare PRISM with and without our new translation, and show that the new translation leads to significant improvements.}, author = {Chatterjee, Krishnendu and Gaiser, Andreas and Kretinsky, Jan}, location = {St. 
Petersburg, Russia}, pages = {559 -- 575}, publisher = {Springer}, title = {{Automata with generalized Rabin pairs for probabilistic model checking and LTL synthesis}}, doi = {10.1007/978-3-642-39799-8_37}, volume = {8044}, year = {2013}, } @inproceedings{2447, abstract = {Separation logic (SL) has gained widespread popularity because of its ability to succinctly express complex invariants of a program’s heap configurations. Several specialized provers have been developed for decidable SL fragments. However, these provers cannot be easily extended or combined with solvers for other theories that are important in program verification, e.g., linear arithmetic. In this paper, we present a reduction of decidable SL fragments to a decidable first-order theory that fits well into the satisfiability modulo theories (SMT) framework. We show how to use this reduction to automate satisfiability, entailment, frame inference, and abduction problems for separation logic using SMT solvers. Our approach provides a simple method of integrating separation logic into existing verification tools that provide SMT backends, and an elegant way of combining SL fragments with other decidable first-order theories. We implemented this approach in a verification tool and applied it to heap-manipulating programs whose verification involves reasoning in theory combinations.}, author = {Piskac, Ruzica and Wies, Thomas and Zufferey, Damien}, location = {St. Petersburg, Russia}, pages = {773 -- 789}, publisher = {Springer}, title = {{Automating separation logic using SMT}}, doi = {10.1007/978-3-642-39799-8_54}, volume = {8044}, year = {2013}, } @article{2448, abstract = {Cell-to-cell directional flow of the phytohormone auxin is primarily established by polar localization of the PIN auxin transporters, a process tightly regulated at multiple levels by auxin itself. We recently reported that, in the context of strong auxin flows, activity of the vacuolar ZIFL1.1 transporter is required for fine-tuning of polar auxin transport rates in the Arabidopsis root. In particular, ZIFL1.1 function protects plasma-membrane stability of the PIN2 carrier in epidermal root tip cells under conditions normally triggering PIN2 degradation. Here, we show that ZIFL1.1 activity at the root tip also promotes PIN1 plasma-membrane abundance in central cylinder cells, thus supporting the notion that ZIFL1.1 acts as a general positive modulator of polar auxin transport in roots.}, author = {Remy, Estelle and Baster, Pawel and Friml, Jirí and Duque, Paula}, journal = {Plant Signaling & Behavior}, number = {10}, publisher = {Landes Bioscience}, title = {{ZIFL1.1 transporter modulates polar auxin transport by stabilizing membrane abundance of multiple PINs in Arabidopsis root tip}}, doi = {10.4161/psb.25688}, volume = {8}, year = {2013}, } @article{2449, abstract = {Intracellular protein routing is mediated by vesicular transport, which is tightly regulated in eukaryotes. Protein and lipid homeostasis depends on coordinated delivery of de novo synthesized or recycled cargoes to the plasma membrane by exocytosis and their subsequent removal by rerouting them for recycling or degradation. Here, we report the characterization of the protein affected trafficking 3 (pat3) mutant that we identified by an epifluorescence-based forward genetic screen for mutants defective in the subcellular distribution of the Arabidopsis auxin transporter PIN1–GFP. 
While pat3 displays largely normal plant morphology and development in nutrient-rich conditions, it shows strong ectopic intracellular accumulations of different plasma membrane cargoes in structures that resemble prevacuolar compartments (PVC) with an aberrant morphology. Genetic mapping revealed that pat3 is defective in vacuolar protein sorting 35A (VPS35A), a putative subunit of the retromer complex that mediates retrograde trafficking between the PVC and trans-Golgi network. Similarly, a mutant defective in another retromer subunit, vps29, shows comparable subcellular defects in PVC morphology and protein accumulation. Thus, our data provide evidence that the retromer components VPS35A and VPS29 are essential for normal PVC morphology and normal trafficking of plasma membrane proteins in plants. In addition, we show that, out of the three VPS35 retromer subunits present in the Arabidopsis thaliana genome, the VPS35 homolog A plays a prevailing role in trafficking to the lytic vacuole, presenting another level of complexity in the retromer-dependent vacuolar sorting.}, author = {Nodzyński, Tomasz and Feraru, Mugurel and Hirsch, Sibylle and De Rycke, Riet and Niculaes, Claudiu and Van Leene, Jelle and De Jaeger, Geert and Vanneste, Steffen and Friml, Jirí}, journal = {Molecular Plant}, number = {6}, pages = {1849 -- 1862}, publisher = {Cell Press}, title = {{Retromer subunits VPS35A and VPS29 mediate prevacuolar compartment (PVC) function in Arabidopsis}}, doi = {10.1093/mp/sst044}, volume = {6}, year = {2013}, } @article{2466, abstract = {We introduce a new method for efficiently simulating liquid with extreme amounts of spatial adaptivity. Our method combines several key components to drastically speed up the simulation of large-scale fluid phenomena: We leverage an alternative Eulerian tetrahedral mesh discretization to significantly reduce the complexity of the pressure solve while increasing the robustness with respect to element quality and removing the possibility of locking. Next, we enable subtle free-surface phenomena by deriving novel second-order boundary conditions consistent with our discretization. We couple this discretization with a spatially adaptive Fluid-Implicit Particle (FLIP) method, enabling efficient, robust, minimally-dissipative simulations that can undergo sharp changes in spatial resolution while minimizing artifacts. Along the way, we provide a new method for generating a smooth and detailed surface from a set of particles with variable sizes. Finally, we explore several new sizing functions for determining spatially adaptive simulation resolutions, and we show how to couple them to our simulator. We combine each of these elements to produce a simulation algorithm that is capable of creating animations at high maximum resolutions while avoiding common pitfalls like inaccurate boundary conditions and inefficient computation.}, author = {Ando, Ryoichi and Thuerey, Nils and Wojtan, Christopher J}, journal = {ACM Transactions on Graphics}, number = {4}, publisher = {ACM}, title = {{Highly adaptive liquid simulations on tetrahedral meshes}}, doi = {10.1145/2461912.2461982}, volume = {32}, year = {2013}, } @article{2467, abstract = {This paper presents a method for computing topology changes for triangle meshes in an interactive geometric modeling environment. Most triangle meshes in practice do not exhibit desirable geometric properties, so we develop a solution that is independent of standard assumptions and robust to geometric errors. 
Specifically, we provide the first method for topology change applicable to arbitrary non-solid, non-manifold, non-closed, self-intersecting surfaces. We prove that this new method for topology change produces the expected conventional results when applied to solid (closed, manifold, non-self-intersecting) surfaces---that is, we prove a backwards-compatibility property relative to prior work. Beyond solid surfaces, we present empirical evidence that our method remains tolerant to a variety of surface aberrations through the incorporation of a novel error correction scheme. Finally, we demonstrate how topology change applied to non-solid objects enables wholly new and useful behaviors.}, author = {Bernstein, Gilbert and Wojtan, Christopher J}, journal = {ACM Transactions on Graphics}, number = {4}, publisher = {ACM}, title = {{Putting holes in holey geometry: Topology change for arbitrary surfaces}}, doi = {10.1145/2461912.2462027}, volume = {32}, year = {2013}, } @article{2468, abstract = {Our work concerns the combination of an Eulerian liquid simulation with a high-resolution surface tracker (e.g. the level set method or a Lagrangian triangle mesh). The naive application of a high-resolution surface tracker to a low-resolution velocity field can produce many visually disturbing physical and topological artifacts that limit their use in practice. We address these problems by defining an error function which compares the current state of the surface tracker to the set of physically valid surface states. By reducing this error with a gradient descent technique, we introduce a novel physics-based surface fairing method. Similarly, by treating this error function as a potential energy, we derive a new surface correction force that mimics the vortex sheet equations. We demonstrate our results with both level set and mesh-based surface trackers.}, author = {Bojsen-Hansen, Morten and Wojtan, Christopher J}, journal = {ACM Transactions on Graphics}, number = {4}, publisher = {ACM}, title = {{Liquid surface tracking with error compensation}}, doi = {10.1145/2461912.2461991}, volume = {32}, year = {2013}, } @article{2469, abstract = {Cadherins are transmembrane proteins that mediate cell–cell adhesion in animals. By regulating contact formation and stability, cadherins play a crucial role in tissue morphogenesis and homeostasis. Here, we review the three major functions of cadherins in cell–cell contact formation and stability. Two of those functions lead to a decrease in interfacial tension at the forming cell–cell contact, thereby promoting contact expansion — first, by providing adhesion tension that lowers interfacial tension at the cell–cell contact, and second, by signaling to the actomyosin cytoskeleton in order to reduce cortex tension and thus interfacial tension at the contact. The third function of cadherins in cell–cell contact formation is to stabilize the contact by resisting mechanical forces that pull on the contact.}, author = {Maître, Jean-Léon and Heisenberg, Carl-Philipp J}, journal = {Current Biology}, number = {14}, pages = {R626 -- R633}, publisher = {Cell Press}, title = {{Three functions of cadherins in cell adhesion}}, doi = {10.1016/j.cub.2013.06.019}, volume = {23}, year = {2013}, } @article{2470, abstract = {Background: Auxin binding protein 1 (ABP1) is a putative auxin receptor and its function is indispensable for plant growth and development. 
ABP1 has been shown to be involved in auxin-dependent regulation of cell division and expansion, in plasma-membrane-related processes such as changes in transmembrane potential, and in the regulation of clathrin-dependent endocytosis. However, the ABP1-regulated downstream pathway remains elusive. Methodology/Principal Findings: Using auxin transport assays and quantitative analysis of cellular morphology, we show that ABP1 regulates auxin efflux from tobacco BY-2 cells. The overexpression of ABP1 can counterbalance increased auxin efflux and auxin starvation phenotypes caused by the overexpression of the PIN auxin efflux carrier. The relevant mechanism involves ABP1-controlled vesicle trafficking processes, including positive regulation of endocytosis of PIN auxin efflux carriers, as indicated by fluorescence recovery after photobleaching (FRAP) and pharmacological manipulations. Conclusions/Significance: The findings indicate the involvement of ABP1 in the control of the rate of auxin transport across the plasma membrane, emphasizing the role of ABP1 in the regulation of PIN activity at the plasma membrane, and highlighting the relevance of ABP1 for the formation of developmentally important, PIN-dependent auxin gradients.}, author = {Čovanová, Milada and Sauer, Michael and Rychtář, Jan and Friml, Jirí and Petrášek, Jan and Zažímalová, Eva}, journal = {PLoS One}, number = {7}, publisher = {Public Library of Science}, title = {{Overexpression of the auxin binding PROTEIN1 modulates PIN-dependent auxin transport in tobacco cells}}, doi = {10.1371/journal.pone.0070050}, volume = {8}, year = {2013}, } @article{2471, abstract = {The impact of disulfide bonds on protein stability goes beyond simple equilibrium thermodynamics effects associated with the conformational entropy of the unfolded state. Indeed, disulfide crosslinks may play a role in the prevention of dysfunctional association and strongly affect the rates of irreversible enzyme inactivation, highly relevant in biotechnological applications. While these kinetic-stability effects remain poorly understood, by analogy with proposed mechanisms for processes of protein aggregation and fibrillogenesis, we propose that they may be determined by the properties of sparsely-populated, partially-unfolded intermediates. Here we report the successful design, on the basis of high temperature molecular-dynamics simulations, of six thermodynamically and kinetically stabilized variants of phytase from Citrobacter braakii (a biotechnologically important enzyme) with one, two or three engineered disulfides. Activity measurements and 3D crystal structure determination demonstrate that the engineered crosslinks do not cause dramatic alterations in the native structure. The inactivation kinetics for all the variants displays a strongly non-Arrhenius temperature dependence, with the time-scale for the irreversible denaturation process reaching a minimum at a given temperature within the range of the denaturation transition. We show this striking feature to be a signature of a key role played by a partially unfolded, intermediate state/ensemble. Energetic and mutational analyses confirm that the intermediate is highly unfolded (akin to a proposed critical intermediate in the misfolding of the prion protein), a result that explains the observed kinetic stabilization. 
Our results provide a rationale for the kinetic-stability consequences of disulfide-crosslink engineering and an experimental methodology to arrive at energetic/structural descriptions of the sparsely populated and elusive intermediates that play key roles in irreversible protein denaturation.}, author = {Sanchez Romero, Inmaculada and Ariza, Antonio and Wilson, Keith and Skjøt, Michael and Vind, Jesper and De Maria, Leonardo and Skov, Lars and Sánchez Ruiz, Jose}, journal = {PLoS One}, number = {7}, publisher = {Public Library of Science}, title = {{Mechanism of protein kinetic stabilization by engineered disulfide crosslinks}}, doi = {10.1371/journal.pone.0070013}, volume = {8}, year = {2013}, } @article{2472, abstract = {Plant-specific PIN-formed (PIN) efflux transporters for the plant hormone auxin are required for tissue-specific directional auxin transport and cellular auxin homeostasis. The Arabidopsis PIN protein family has been shown to play important roles in developmental processes such as embryogenesis, organogenesis, vascular tissue differentiation, root meristem patterning and tropic growth. Here we analyzed the roles of the less characterised Arabidopsis PIN6 auxin transporter. PIN6 is auxin-inducible and is expressed during multiple auxin-regulated developmental processes. Loss of pin6 function interfered with primary root growth and lateral root development. Misexpression of PIN6 affected auxin transport and interfered with auxin homeostasis in other growth processes such as shoot apical dominance, lateral root primordia development, adventitious root formation, root hair outgrowth and root waving. These changes in auxin-regulated growth correlated with a reduction in total auxin transport as well as with an altered activity of the DR5-GUS auxin response reporter. Overall, the data indicate that PIN6 regulates auxin homeostasis during plant development.}, author = {Cazzonelli, Christopher and Vanstraelen, Marleen and Simon, Sibu and Yin, Kuide and Carron Arthur, Ashley and Nisar, Nazia and Tarle, Gauri and Cuttriss, Abby and Searle, Iain and Benková, Eva and Mathesius, Ulrike and Masle, Josette and Friml, Jirí and Pogson, Barry}, journal = {PLoS One}, number = {7}, publisher = {Public Library of Science}, title = {{Role of the Arabidopsis PIN6 auxin transporter in auxin homeostasis and auxin-mediated development}}, doi = {10.1371/journal.pone.0070069}, volume = {8}, year = {2013}, } @article{2473, abstract = {When a mutation with selective advantage s spreads through a panmictic population, it may cause two lineages at a linked locus to coalesce; the probability of coalescence is exp(−2rT), where T∼log(2Ns)/s is the time to fixation, N is the number of haploid individuals, and r is the recombination rate. Population structure delays fixation, and so weakens the effect of a selective sweep. However, favourable alleles spread through a spatially continuous population behind a narrow wavefront; ancestral lineages are confined at the tip of this front, and so coalesce rapidly. In extremely dense populations, coalescence is dominated by rare fluctuations ahead of the front. However, we show that for moderate densities, a simple quasi-deterministic approximation applies: the rate of coalescence within the front is λ∼2g(η)/(ρℓ), where ρ is the population density and ℓ is the characteristic scale of the wavefront; g(η) depends only on the strength of random drift, η. 
The net effect of a sweep on coalescence also depends crucially on whether two lineages are ever both within the wavefront at the same time: even in the extreme case when coalescence within the front is instantaneous, the net rate of coalescence may be lower than in a single panmictic population. Sweeps can also have a substantial impact on the rate of gene flow. A single lineage will jump to a new location when it is hit by a sweep, with mean square displacement ; this can be substantial if the species’ range, L, is large, even if the species-wide rate of sweeps per map length, Λ/R, is small. This effect is half as strong in two dimensions. In contrast, the rate of coalescence between lineages, at random locations in space and on the genetic map, is proportional to (c/L)(Λ/R), where c is the wavespeed: thus, on average, one-dimensional structure is likely to reduce coalescence due to sweeps, relative to panmixis. In two dimensions, genes must move along the front before they can coalesce; this process is rapid, being dominated by rare fluctuations. This leads to a dramatically higher rate of coalescence within the wavefront than if lineages simply diffused along the front. Nevertheless, the net rate of coalescence due to a sweep through a two-dimensional population is likely to be lower than it would be with panmixis.}, author = {Barton, Nicholas H and Etheridge, Alison and Kelleher, Jerome and Véber, Amandine}, journal = {Theoretical Population Biology}, number = {8}, pages = {75 -- 89}, publisher = {Elsevier}, title = {{Genetic hitch-hiking in spatially extended populations}}, doi = {10.1016/j.tpb.2012.12.001}, volume = {87}, year = {2013}, } @article{2516, abstract = {We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently: the world contains tens of thousands of different object classes and for only a few of them image collections have been formed and suitably annotated. To tackle the problem we introduce attribute-based classification: objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be pre-learned independently, e.g. from existing image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper we also introduce a new dataset, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more datasets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.}, author = {Lampert, Christoph and Nickisch, Hannes and Harmeling, Stefan}, journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence}, number = {3}, pages = {453 -- 465}, publisher = {IEEE}, title = {{Attribute-based classification for zero-shot learning of object categories}}, doi = {10.1109/TPAMI.2013.140}, volume = {36}, year = {2013}, } @inproceedings{2517, abstract = {Traditional formal methods are based on a Boolean satisfaction notion: a reactive system satisfies, or not, a given specification. 
We generalize formal methods to also address the quality of systems. As an adequate specification formalism we introduce the linear temporal logic LTL[F]. The satisfaction value of an LTL[F] formula is a number between 0 and 1, describing the quality of the satisfaction. The logic generalizes traditional LTL by augmenting it with a (parameterized) set F of arbitrary functions over the interval [0,1]. For example, F may contain the maximum or minimum between the satisfaction values of subformulas, their product, and their average. The classical decision problems in formal methods, such as satisfiability, model checking, and synthesis, are generalized to search and optimization problems in the quantitative setting. For example, model checking asks for the quality in which a specification is satisfied, and synthesis returns a system satisfying the specification with the highest quality. Reasoning about quality gives rise to other natural questions, like the distance between specifications. We formalize these basic questions and study them for LTL[F]. By extending the automata-theoretic approach for LTL to a setting that takes quality into account, we are able to solve the above problems and show that reasoning about LTL[F] has roughly the same complexity as reasoning about traditional LTL.}, author = {Almagor, Shaull and Boker, Udi and Kupferman, Orna}, location = {Riga, Latvia}, number = {Part 2}, pages = {15 -- 27}, publisher = {Springer}, title = {{Formalizing and reasoning about quality}}, doi = {10.1007/978-3-642-39212-2_3}, volume = {7966}, year = {2013}, } @inproceedings{2518, abstract = {A class of valued constraint satisfaction problems (VCSPs) is characterised by a valued constraint language, a fixed set of cost functions on a finite domain. An instance of the problem is specified by a sum of cost functions from the language with the goal to minimise the sum. We study which classes of finite-valued languages can be solved exactly by the basic linear programming relaxation (BLP). Thapper and Živný showed [20] that if BLP solves the language then the language admits a binary commutative fractional polymorphism. We prove that the converse is also true. This leads to a necessary and a sufficient condition which can be checked in polynomial time for a given language. In contrast, the previous necessary and sufficient condition due to [20] involved infinitely many inequalities. More recently, Thapper and Živný [21] showed (using, in particular, a technique introduced in this paper) that core languages that do not satisfy our condition are NP-hard. Taken together, these results imply that a finite-valued language either can be solved using Linear Programming or is NP-hard.}, author = {Kolmogorov, Vladimir}, location = {Riga, Latvia}, number = {1}, pages = {625 -- 636}, publisher = {Springer}, title = {{The power of linear programming for finite-valued CSPs: A constructive characterization}}, doi = {10.1007/978-3-642-39206-1_53}, volume = {7965}, year = {2013}, } @inproceedings{2520, abstract = {We propose a probabilistic model to infer supervised latent variables in the Hamming space from observed data. Our model allows simultaneous inference of the number of binary latent variables and their values. The latent variables preserve neighbourhood structure of the data in the sense that objects in the same semantic concept have similar latent values, and objects in different concepts have dissimilar latent values. 
We formulate the supervised infinite latent variable problem based on an intuitive principle of pulling objects together if they are of the same type, and pushing them apart if they are not. We then combine this principle with a flexible Indian Buffet Process prior on the latent variables. We show that the inferred supervised latent variables can be directly used to perform a nearest neighbour search for the purpose of retrieval. We introduce a new application of dynamically extending hash codes, and show how to effectively couple the structure of the hash codes with continuously growing structure of the neighbourhood preserving infinite latent feature space.}, author = {Quadrianto, Novi and Sharmanska, Viktoriia and Knowles, David and Ghahramani, Zoubin}, booktitle = {Proceedings of the 29th conference uncertainty in Artificial Intelligence}, isbn = {9780974903996}, location = {Bellevue, WA, United States}, pages = {527 -- 536}, publisher = {AUAI Press}, title = {{The supervised IBP: Neighbourhood preserving infinite latent feature models}}, year = {2013}, } @article{2698, abstract = {We consider non-interacting particles subject to a fixed external potential V and a self-generated magnetic field B. The total energy includes the field energy β∫B2 and we minimize over all particle states and magnetic fields. In the case of spin-1/2 particles this minimization leads to the coupled Maxwell-Pauli system. The parameter β tunes the coupling strength between the field and the particles and it effectively determines the strength of the field. We investigate the stability and the semiclassical asymptotics, h→0, of the total ground state energy E(β,h,V). The relevant parameter measuring the field strength in the semiclassical limit is κ=βh. We are not able to give the exact leading order semiclassical asymptotics uniformly in κ or even for fixed κ. We do however give upper and lower bounds on E with almost matching dependence on κ. In the simultaneous limit h→0 and κ→∞ we show that the standard non-magnetic Weyl asymptotics holds. The same result also holds for the spinless case, i.e. where the Pauli operator is replaced by the Schrödinger operator.}, author = {Erdös, László and Fournais, Søren and Solovej, Jan}, journal = {Journal of the European Mathematical Society}, number = {6}, pages = {2093 -- 2113}, publisher = {European Mathematical Society}, title = {{Stability and semiclassics in self-generated fields}}, doi = {10.4171/JEMS/416}, volume = {15}, year = {2013}, } @inproceedings{2718, abstract = {Even though both population and quantitative genetics, and evolutionary computation, deal with the same questions, they have developed largely independently of each other. I review key results from each field, emphasising those that apply independently of the (usually unknown) relation between genotype and phenotype. The infinitesimal model provides a simple framework for predicting the response of complex traits to selection, which in biology has proved remarkably successful. This allows one to choose the schedule of population sizes and selection intensities that will maximise the response to selection, given that the total number of individuals realised, C = ∑t Nt, is constrained. 
This argument shows that for an additive trait (i.e., determined by the sum of effects of the genes), the optimum population size and the maximum possible response (i.e., the total change in trait mean) are both proportional to √C.}, author = {Barton, Nicholas H and Paixao, Tiago}, booktitle = {Proceedings of the 15th annual conference on Genetic and evolutionary computation}, location = {Amsterdam, Netherlands}, pages = {1573 -- 1580}, publisher = {ACM}, title = {{Can quantitative and population genetics help us understand evolutionary computation?}}, doi = {10.1145/2463372.2463568}, year = {2013}, } @inproceedings{2719, abstract = {Prediction of the evolutionary process is a long standing problem both in the theory of evolutionary biology and evolutionary computation (EC). It has long been realized that heritable variation is crucial to both the response to selection and the success of genetic algorithms. However, not all variation contributes in the same way to the response. Quantitative genetics has developed a large body of work trying to estimate and understand how different components of the variance in fitness in the population contribute to the response to selection. We illustrate how to apply some concepts of quantitative genetics to the analysis of genetic algorithms. In particular, we derive estimates for the short term prediction of the response to selection and we use variance decomposition to gain insight on local aspects of the landscape. Finally, we propose a new population based genetic algorithm that uses these methods to improve its operation.}, author = {Paixao, Tiago and Barton, Nicholas H}, booktitle = {Proceedings of the 15th annual conference on Genetic and evolutionary computation}, location = {Amsterdam, Netherlands}, pages = {845 -- 852}, publisher = {ACM}, title = {{A variance decomposition approach to the analysis of genetic algorithms}}, doi = {10.1145/2463372.2463470}, year = {2013}, } @article{2720, abstract = {Knowledge of the rate and fitness effects of mutations is essential for understanding the process of evolution. Mutations are inherently difficult to study because they are rare and are frequently eliminated by natural selection. In the ciliate Tetrahymena thermophila, mutations can accumulate in the germline genome without being exposed to selection. We have conducted a mutation accumulation (MA) experiment in this species. Assuming that all mutations are deleterious and have the same effect, we estimate that the deleterious mutation rate per haploid germline genome per generation is U = 0.0047 (95% credible interval: 0.0015, 0.0125), and that germline mutations decrease fitness by s = 11% when expressed in a homozygous state (95% CI: 4.4%, 27%). We also estimate that deleterious mutations are partially recessive on average (h = 0.26; 95% CI: –0.022, 0.62) and that the rate of lethal mutations is <10% of the deleterious mutation rate. Comparisons between the observed evolutionary responses in the germline and somatic genomes and the results from individual-based simulations of MA suggest that the two genomes have similar mutational parameters. 
These are the first estimates of the deleterious mutation rate and fitness effects from the eukaryotic supergroup Chromalveolata and are within the range of those of other eukaryotes.}, author = {Long, Hongan and Paixao, Tiago and Azevedo, Ricardo and Zufall, Rebecca}, journal = {Genetics}, number = {2}, pages = {527--540}, publisher = {Genetics Society of America}, title = {{Accumulation of spontaneous mutations in the ciliate Tetrahymena thermophila}}, doi = {10.1534/genetics.113.153536}, volume = {195}, year = {2013}, } @article{2782, abstract = {We consider random n×n matrices of the form (XX*+YY*)^{-1/2}YY*(XX*+YY*)^{-1/2}, where X and Y have independent entries with zero mean and variance one. These matrices are the natural generalization of the Gaussian case, which are known as MANOVA matrices and which have joint eigenvalue density given by the third classical ensemble, the Jacobi ensemble. We show that, away from the spectral edge, the eigenvalue density converges to the limiting density of the Jacobi ensemble even on the shortest possible scales of order 1/n (up to log n factors). This result is the analogue of the local Wigner semicircle law and the local Marchenko-Pastur law for general MANOVA matrices.}, author = {Erdös, László and Farrell, Brendan}, journal = {Journal of Statistical Physics}, number = {6}, pages = {1003 -- 1032}, publisher = {Springer}, title = {{Local eigenvalue density for general MANOVA matrices}}, doi = {10.1007/s10955-013-0807-8}, volume = {152}, year = {2013}, } @article{9459, abstract = {Nucleosome remodelers of the DDM1/Lsh family are required for DNA methylation of transposable elements, but the reason for this is unknown. How DDM1 interacts with other methylation pathways, such as small-RNA-directed DNA methylation (RdDM), which is thought to mediate plant asymmetric methylation through DRM enzymes, is also unclear. Here, we show that most asymmetric methylation is facilitated by DDM1 and mediated by the methyltransferase CMT2 separately from RdDM. We find that heterochromatic sequences preferentially require DDM1 for DNA methylation and that this preference depends on linker histone H1. RdDM is instead inhibited by heterochromatin and absolutely requires the nucleosome remodeler DRD1. Together, DDM1 and RdDM mediate nearly all transposon methylation and collaborate to repress transposition and regulate the methylation and expression of genes. Our results indicate that DDM1 provides DNA methyltransferases access to H1-containing heterochromatin to allow stable silencing of transposable elements in cooperation with the RdDM pathway.}, author = {Zemach, Assaf and Kim, M. Yvonne and Hsieh, Ping-Hung and Coleman-Derr, Devin and Eshed-Williams, Leor and Thao, Ka and Harmer, Stacey L. and Zilberman, Daniel}, issn = {1097-4172}, journal = {Cell}, number = {1}, pages = {193--205}, publisher = {Elsevier}, title = {{The Arabidopsis nucleosome remodeler DDM1 allows DNA methyltransferases to access H1-containing heterochromatin}}, doi = {10.1016/j.cell.2013.02.033}, volume = {153}, year = {2013}, } @article{9481, abstract = {Arabidopsis thaliana endosperm, a transient tissue that nourishes the embryo, exhibits extensive localized DNA demethylation on maternally inherited chromosomes. Demethylation mediates parent-of-origin–specific (imprinted) gene expression but is apparently unnecessary for the extensive accumulation of maternally biased small RNA (sRNA) molecules detected in seeds. 
Endosperm DNA in the distantly related monocots rice and maize is likewise locally hypomethylated, but whether this hypomethylation is generally parent-of-origin specific is unknown. Imprinted expression of sRNA also remains uninvestigated in monocot seeds. Here, we report high-coverage sequencing of the Kitaake rice cultivar that enabled us to show that localized hypomethylation in rice endosperm occurs solely on the maternal genome, preferring regions of high DNA accessibility. Maternally expressed imprinted genes are enriched for hypomethylation at putative promoter regions and transcriptional termini and paternally expressed genes at promoters and gene bodies, mirroring our recent results in A. thaliana. However, unlike in A. thaliana, rice endosperm sRNA populations are dominated by specific strong sRNA-producing loci, and imprinted 24-nt sRNAs are expressed from both parental genomes and correlate with hypomethylation. Overlaps between imprinted sRNA loci and imprinted genes expressed from opposite alleles suggest that sRNAs may regulate genomic imprinting. Whereas sRNAs in seedling tissues primarily originate from small class II (cut-and-paste) transposable elements, those in endosperm are more uniformly derived, including sequences from other transposon classes, as well as genic and intergenic regions. Our data indicate that the endosperm exhibits a unique pattern of sRNA expression and suggest that localized hypomethylation of maternal endosperm DNA is conserved in flowering plants.}, author = {Rodrigues, Jessica A. and Ruan, Randy and Nishimura, Toshiro and Sharma, Manoj K. and Sharma, Rita and Ronald, Pamela C and Fischer, Robert L. and Zilberman, Daniel}, issn = {1091-6490}, journal = {Proceedings of the National Academy of Sciences}, keywords = {Multidisciplinary}, number = {19}, pages = {7934--7939}, publisher = {National Academy of Sciences}, title = {{Imprinted expression of genes and small RNA is associated with localized hypomethylation of the maternal genome in rice endosperm}}, doi = {10.1073/pnas.1306164110}, volume = {110}, year = {2013}, } @article{9520, abstract = {Plants undergo alternation of generation in which reproductive cells develop in the plant body ("sporophytic generation") and then differentiate into a multicellular gamete-forming "gametophytic generation." Different populations of helper cells assist in this transgenerational journey, with somatic tissues supporting early development and single nurse cells supporting gametogenesis. New data reveal a two-way relationship between early reproductive cells and their helpers involving complex epigenetic and signaling networks determining cell number and fate. Later, the egg cell plays a central role in specifying accessory cells, whereas in both gametophytes, companion cells contribute non-cell-autonomously to the epigenetic landscape of the gamete genomes.}, author = {Feng, Xiaoqi and Zilberman, Daniel and Dickinson, Hugh}, issn = {1878-1551}, journal = {Developmental Cell}, number = {3}, pages = {215--225}, publisher = {Elsevier}, title = {{A conversation across generations: Soma-germ cell crosstalk in plants}}, doi = {10.1016/j.devcel.2013.01.014}, volume = {24}, year = {2013}, } @misc{9749, abstract = {Cooperative behavior, where one individual incurs a cost to help another, is a wide spread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one and two-state automata. 
We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy which we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently Forgiver attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance, which it rules, is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium, dominated by Forgiver. Our results show that although forgiving might incur a short-term loss it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.}, author = {Zagorsky, Benjamin and Reiter, Johannes and Chatterjee, Krishnendu and Nowak, Martin}, publisher = {Public Library of Science}, title = {{Forgiver triumphs in alternating prisoner's dilemma }}, doi = {10.1371/journal.pone.0080814.s001}, year = {2013}, } |
3c91d00773f309d0 | The amazing engineering of Fractons
A fracton is a collective quantized vibration on a substrate with a fractal structure.[1][2]
Fractons are the fractal analog of phonons. Phonons are the result of applying translational symmetry to the potential in a Schrödinger equation. Fractal self-similarity can be thought of as a symmetry somewhat comparable to translational symmetry. Translational symmetry is symmetry under displacement or change of position, and fractal self-similarity is symmetry under change of scale. The quantum mechanical solutions to such a problem in general lead to a continuum of states with different frequencies. In other words, a fracton band is comparable to a phonon band. The vibrational modes are restricted to part of the substrate and are thus not fully delocalized, unlike phonon vibrational modes. Instead, there is a hierarchy of vibrational modes that encompass smaller and smaller parts of the substrate. Source Wiki
Theorists are in a frenzy over “fractons,” bizarre, but potentially useful, hypothetical particles that can only move in combination with one another.
The theoretical possibility of fractons surprised physicists in 2011. Recently, these strange states of matter have been leading physicists toward new theoretical frameworks that could help them tackle some of the grittiest problems in fundamental physics.
Fractons are quasiparticles — particle-like entities that emerge out of complicated interactions between many elementary particles inside a material. But fractons are bizarre even compared to other exotic quasiparticles, because they are totally immobile or able to move only in a limited way. There’s nothing in their environment that stops fractons from moving; rather it’s an inherent property of theirs. It means fractons’ microscopic structure influences their behavior over long distances.
Partial Particles
In 2011, Jeongwan Haah, then a graduate student at Caltech, was searching for unusual phases of matter that were so stable they could be used to secure quantum memory, even at room temperature. Using a computer algorithm, he turned up a new theoretical phase that came to be called the Haah code. The phase quickly caught the attention of other physicists because of the strangely immovable quasiparticles that make it up.
They seemed, individually, like mere fractions of particles, only able to move in combination. Soon, more theoretical phases were found with similar characteristics, and so in 2015 Haah, along with Sagar Vijay and Liang Fu, coined the term "fractons" for the strange partial quasiparticles. (An earlier but overlooked paper by Claudio Chamon is now credited with the original discovery of fracton behavior. Its abstract reads: "This Letter presents solvable examples of quantum many-body Hamiltonians of systems that are unable to reach their ground states as the environment temperature is lowered to absolute zero. These examples, three-dimensional generalizations of quantum Hamiltonians proposed for topological quantum computing, (1) have no quenched disorder, (2) have solely local interactions, (3) have an exactly solvable spectrum, (4) have topologically ordered ground states, and (5) have slow dynamical relaxation rates akin to those of strong structural glasses.")
The resultant movement in such a phase is that of a particle-antiparticle pair moving sideways in a straight line. In this world (an example of a fracton phase), a single particle's movement is restricted, but a pair can move easily.
The Haah code takes the phenomenon to the extreme: Particles can only move when new particles are summoned in never-ending repeating patterns called fractals. Say you have four particles arranged in a square, but when you zoom in to each corner you find another square of four particles that are close together. Zoom in on a corner again and you find another square, and so on. For such a structure to materialize in the vacuum requires so much energy that it’s impossible to move this type of fracton. This allows very stable qubits — the bits of quantum computing — to be stored in the system, as the environment can’t disrupt the qubits’ delicate state.
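The "squares within squares" picture can be sketched in code. The toy script below is a hypothetical illustration only; the actual Haah code is defined on a three-dimensional lattice of qubits, and this merely generates the recursive corner pattern described above:

```python
# Toy illustration (not the Haah code itself): build a recursive
# "squares within squares" point pattern by replacing each corner of a
# square with a smaller square, over and over.

def corner_squares(cx, cy, side, depth, shrink=4.0):
    """Return fractal points: each corner of the square centered at
    (cx, cy) spawns a smaller square, recursively."""
    if depth == 0:
        return [(cx, cy)]
    pts = []
    h = side / 2.0
    for sx in (-1, 1):
        for sy in (-1, 1):
            pts += corner_squares(cx + sx * h, cy + sy * h,
                                  side / shrink, depth - 1, shrink)
    return pts

pattern = corner_squares(0.0, 0.0, 1.0, depth=3)
print(len(pattern))  # 4**3 = 64 points; zooming in on any corner repeats the motif
```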
The immovability of fractons makes it very challenging to describe them as a smooth continuum from far away. Because particles can usually move freely, if you wait long enough they’ll jostle into a state of equilibrium, defined by bulk properties such as temperature or pressure. Particles’ initial locations cease to matter. But fractons are stuck at specific points or can only move in combination along certain lines or planes. Describing this motion requires keeping track of fractons’ distinct locations, and so the phases cannot shake off their microscopic character or submit to the usual continuum description.
“Without a continuous description, how do we define these states of matter?”
Fractons have yet to be made in the lab, but that will probably change. Certain crystals with immovable defects have been shown to be mathematically similar to fractons. And the theoretical fracton landscape has unfurled beyond what anyone anticipated, with new models popping up every month.
“Probably in the near future someone will take one of these proposals and say, ‘OK, let’s do some heroic experiment with cold atoms and exactly realize one of these fracton models,’” said Brian Skinner, a condensed matter physicist at Ohio State University who has devised fracton models.
Even without their experimental realization, the mere theoretical possibility of fractons rang alarm bells for Seiberg, a leading expert in quantum field theory, the theoretical framework in which almost all physical phenomena are currently described.
Quantum field theory depicts discrete particles as excitations in continuous fields that stretch across space and time. It’s the most successful physical theory ever discovered, and it encompasses the Standard Model of particle physics — the impressively accurate equation governing all known elementary particles.
“Fractons do not fit into this framework. So my take is that the framework is incomplete,” said Seiberg.
There are other good reasons for thinking that quantum field theory is incomplete — for one thing, it so far fails to account for the force of gravity. If they can figure out how to describe fractons in the quantum field theory framework, Seiberg and other theorists foresee new clues toward a viable quantum gravity theory.
“Fractons’ discreteness is potentially dangerous, as it can ruin the whole structure that we already have,” said Seiberg. “But either you say it’s a problem, or you say it’s an opportunity.”
He and his colleagues are developing novel quantum field theories that try to encompass the weirdness of fractons by allowing some discrete behavior on top of a bedrock of continuous space-time. (From the abstract of one such paper: "We discuss nonstandard continuum quantum field theories in 2+1 dimensions. They exhibit exotic global symmetries, a subtle spectrum of charged excitations, and dualities similar to dualities of systems in 1+1 dimensions. These continuum models represent the low-energy limits of certain known lattice systems. One key aspect of these continuum field theories is the important role played by discontinuous field configurations. In two companion papers, we will present 3+1-dimensional versions of these systems. In particular, we will discuss continuum quantum field theories of some models of fractons.")
“Quantum field theory is a very delicate structure, so we would like to change the rules as little as possible,” he said. “We are walking on very thin ice, hoping to get to the other side.”
1. Alexander, S.; Laermans, C.; Orbach, R.; Rosenberg, H. M. (15 October 1983). "Fracton interpretation of vibrational properties of cross-linked polymers, glasses, and irradiated quartz". Physical Review B. 28 (8): 4615–4619. Bibcode:1983PhRvB..28.4615A. doi:10.1103/physrevb.28.4615.
2. Srivastava, G. P. (1990). The Physics of Phonons. CRC Press. pp. 328–329. ISBN 9780852741535.
1. Sounds a lot like phase prime metrics and meander flower garden of garden concept applied to smallest quanta we can in particle physics.
Also sounds like alpha and beta tubulin dimers creating topologies that lock their vibrations into a stable time crystal of information. Would need to read the original papers referenced on the "fracton" terminology, but sounds like they are proposing these fracton stable topologies are a method to create artificial stable topological quantum qubit technology with standard particle physics math models. Seems like a simplified lower complexity version of organic microtubule or the helical mw CNT synthetic nano brain systems.
|
031e6ca934be07cc | Ground state
1D ground state has no nodes
In 1D the ground state of the Schrödinger equation has no nodes. This can be proved by considering the average energy in a state with a node at $x = 0$, i.e. $\psi(0) = 0$. The average energy in this state is

$\langle \psi | H | \psi \rangle = \int dx \left( \frac{\hbar^2}{2m} |\psi'(x)|^2 + V(x)\, |\psi(x)|^2 \right),$

where $V(x)$ is the potential. Now consider a small interval around $x = 0$, i.e. $x \in [-\varepsilon, \varepsilon]$. Take a new wavefunction $\tilde{\psi}(x)$ defined as $\tilde{\psi}(x) = \psi(x)$ for $x < -\varepsilon$, $\tilde{\psi}(x) = -\psi(x)$ for $x > \varepsilon$, and constant for $x \in [-\varepsilon, \varepsilon]$. If $\varepsilon$ is small enough then this is always possible to do so that $\tilde{\psi}$ is continuous. Assuming $\psi(x) \approx -cx$ around $x = 0$, we can write the new function as

$\tilde{\psi}(x) = N \begin{cases} |\psi(x)|, & |x| > \varepsilon \\ c\varepsilon, & |x| \leq \varepsilon \end{cases}$

where $N = \left( 1 + \tfrac{4}{3} \varepsilon^3 c^2 \right)^{-1/2}$ is the norm. Note that the kinetic energy density satisfies $\frac{\hbar^2}{2m} |\tilde{\psi}'(x)|^2 \leq \frac{\hbar^2}{2m} |\psi'(x)|^2$ everywhere because of the normalization ($N \leq 1$, and $\tilde{\psi}$ is constant inside the interval). Now consider the potential energy. For definiteness let us choose $V(x) \geq 0$. Then it is clear that outside the interval $[-\varepsilon, \varepsilon]$ the potential energy density is smaller for $\tilde{\psi}$, because $|\tilde{\psi}(x)| \leq |\psi(x)|$ there. On the other hand, in the interval we have

$\tilde{E}_{\mathrm{pot}}^{\varepsilon} = \int_{-\varepsilon}^{\varepsilon} dx\, V(x)\, |\tilde{\psi}(x)|^2 \simeq 2 \varepsilon^3 c^2 V(0) + \cdots,$

which is correct to this order of $\varepsilon$; the dots indicate higher order corrections. On the other hand, the potential energy in the state $\psi$ is

$E_{\mathrm{pot}}^{\varepsilon} = \int_{-\varepsilon}^{\varepsilon} dx\, V(x)\, |\psi(x)|^2 \simeq \tfrac{2}{3} \varepsilon^3 c^2 V(0) + \cdots,$

which is of the same order in $\varepsilon$. Therefore the potential energy is unchanged to leading order in $\varepsilon$ by deforming the state $\psi$ with a node into the state $\tilde{\psi}$ without a node, while the kinetic energy decreases by an amount of order $\varepsilon$, since the kinetic energy density inside the interval is removed entirely. Repeating this construction removes all nodes and thereby reduces the energy, which implies that the ground state wavefunction cannot have a node. This completes the proof.
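The no-node property is easy to verify numerically. Below is a minimal sketch, assuming an arbitrary double-well potential and units with ħ = m = 1 (both assumptions are illustrative, not taken from the text above):

```python
import numpy as np

# Sketch: finite-difference check that the 1D ground state has no nodes.
# Units with hbar = m = 1; the double-well potential is an arbitrary
# illustrative choice.
n, L = 1000, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = (x**2 - 4.0)**2 / 8.0                 # double-well potential

# H = -(1/2) d^2/dx^2 + V(x), discretized on the grid
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))

energies, states = np.linalg.eigh(H)
for k in range(3):
    psi = states[:, k]
    sign_changes = int(np.sum(psi[:-1] * psi[1:] < 0))
    print(f"state {k}: E = {energies[k]:.4f}, nodes = {sign_changes}")
# Expected: the ground state (k = 0) shows 0 sign changes,
# and the k-th excited state shows k.
```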
• The exact definition of one second of time since 1997 has been the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at a temperature of 0 K.[1]
|
d49ffb1e0ddb78a0 | Molecular Geometry
Geometry of the water molecule with values for O-H bond length and for H-O-H bond angle between two bonds
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity.[1][2][3] The angles between bonds that an atom forms depend only weakly on the rest of the molecule, i.e. they can be understood as approximately local and hence transferable properties.
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecule geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances,[4][5][6] dihedral angles,[7][8] angles, and connectivity. Molecular geometries are best determined at low temperature because at higher temperatures the molecular structure is averaged over more accessible geometries (see next section). Larger molecules often exist in multiple stable geometries (conformational isomerism) that are close in energy on the potential energy surface. Geometries can also be computed by ab initio quantum chemistry methods to high accuracy. The molecular geometry can be different as a solid, in solution, and as a gas.
The influence of thermal excitation
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions, translation and rotation, hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian of finite width (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they still oscillate around the recognizable geometry of the molecule.
To get a feeling for the probability that a vibration of a molecule may be thermally excited, we inspect the Boltzmann factor $\beta = \exp(-\Delta E / kT)$, where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are:
• β = 0.0890 for ΔE = 500 cm⁻¹
• β = 0.0080 for ΔE = 1000 cm⁻¹
• β = 0.0007 for ΔE = 1500 cm⁻¹.
(The reciprocal centimeter is an energy unit commonly used in infrared spectroscopy; 1 cm⁻¹ corresponds to 1.23984 × 10⁻⁴ eV.) When an excitation energy is 500 cm⁻¹, then about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest vibrational excitation energy in water is the bending mode (about 1600 cm⁻¹). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
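These factors can be reproduced with a few lines of Python (a sketch; the value of kT at 298 K in reciprocal centimeters, about 207 cm⁻¹, is the only input beyond the energies quoted above):

```python
import math

# Boltzmann factors beta = exp(-dE / kT) for the vibrational energies
# quoted above, at T = 298 K. kT at 298 K is about 207 cm^-1.
kT_cm = 207.1                       # thermal energy in reciprocal centimeters
for dE in (500, 1000, 1500):        # excitation energies in cm^-1
    beta = math.exp(-dE / kT_cm)
    print(f"dE = {dE:4d} cm^-1 -> beta = {beta:.4f}")
# Approximately reproduces the quoted values: 0.089, 0.008, 0.0007.
```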
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster, which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm-1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
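These three internal coordinates translate directly into code. The sketch below computes them from Cartesian positions; the four coordinates used at the end are hypothetical, chosen only to exercise the functions:

```python
import numpy as np

def bond_length(a, b):
    """Distance between two nuclei."""
    return np.linalg.norm(b - a)

def bond_angle(a, b, c):
    """Angle at atom b, in degrees, between bonds b-a and b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def torsion_angle(a, b, c, d):
    """Dihedral a-b-c-d in degrees: angle between planes (a,b,c) and (b,c,d)."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Hypothetical positions, for illustration only
a, b, c, d = map(np.array, ([1.0, 1.0, 0.0], [0.0, 0.0, 0.0],
                            [1.5, 0.0, 0.0], [2.0, 1.0, 1.0]))
print(bond_length(a, b), bond_angle(a, b, c), torsion_angle(a, b, c, d))
```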
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4) expressed by the following determinant:

$\begin{vmatrix} \cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} & \cos\theta_{14} \\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} & \cos\theta_{24} \\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} & \cos\theta_{34} \\ \cos\theta_{41} & \cos\theta_{42} & \cos\theta_{43} & \cos\theta_{44} \end{vmatrix} = 0$

This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (Note that the angles θ11, θ22, θ33, and θ44 are always zero and that this relationship can be modified for a different number of peripheral atoms by expanding/contracting the square matrix.)
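A quick numerical check of this constraint (a sketch using the four ideal tetrahedral directions, as in methane; the determinant vanishes because the Gram matrix of four vectors in three dimensions has rank at most three):

```python
import numpy as np

# The four ideal tetrahedral bond directions from a central atom.
dirs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

cos_theta = dirs @ dirs.T          # cos(theta_ij); diagonal is cos(0) = 1
print(np.degrees(np.arccos(cos_theta[0, 1])))  # 109.47..., the tetrahedral angle
print(np.linalg.det(cos_theta))                # ~0, within floating-point error
```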
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry.
Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties:
• A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure).
• Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order.
• Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol.
• Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions.
• Protein folding concerns the complex geometries and different isomers that proteins can take.
Types of molecular structure
A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include:
• Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape.
• Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride.
• Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs.
• Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with the VSEPR (valence-shell electron pair repulsion theory), the bond angles between the electron bonds are arccos(-1/3) = 109.47°. For example, methane (CH4) is a tetrahedral molecule.
• Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule.
• Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair - bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value.[9] For example, ammonia (NH3).
VSEPR table
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
| Atoms bonded to central atom | Lone pairs | Electron domains (steric number) | Shape | Ideal bond angle (example's bond angle) | Example |
| --- | --- | --- | --- | --- | --- |
| 2 | 0 | 2 | linear | 180° | CO2 |
| 3 | 0 | 3 | trigonal planar | 120° | BF3 |
| 2 | 1 | 3 | bent | 120° (119°) | SO2 |
| 4 | 0 | 4 | tetrahedral | 109.5° | CH4 |
| 3 | 1 | 4 | trigonal pyramidal | 109.5° (106.8°)[10] | NH3 |
| 2 | 2 | 4 | bent | 109.5° (104.48°)[11][12] | H2O |
| 5 | 0 | 5 | trigonal bipyramidal | 90°, 120° | PCl5 |
| 4 | 1 | 5 | seesaw | ax–ax 180° (173.1°), eq–eq 120° (101.6°), ax–eq 90° | SF4 |
| 3 | 2 | 5 | T-shaped | 90° (87.5°), 180° (175°) | ClF3 |
| 2 | 3 | 5 | linear | 180° | XeF2 |
| 6 | 0 | 6 | octahedral | 90°, 180° | SF6 |
| 5 | 1 | 6 | square pyramidal | 90° (84.8°) | BrF5 |
| 4 | 2 | 6 | square planar | 90°, 180° | XeF4 |
| 7 | 0 | 7 | pentagonal bipyramidal | 90°, 72°, 180° | IF7 |
| 6 | 1 | 7 | pentagonal pyramidal | 72°, 90°, 144° | — |
| 5 | 2 | 7 | pentagonal planar | 72°, 144° | — |
| 8 | 0 | 8 | square antiprismatic | | — |
| 9 | 0 | 9 | tricapped trigonal prismatic | | — |
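The shape column is a pure lookup on (atoms bonded, lone pairs); a minimal sketch, with the dictionary transcribing part of the table above:

```python
# Minimal VSEPR shape lookup keyed on (bonded atoms, lone pairs),
# transcribed from the table above (steric numbers 2 through 6).
VSEPR_SHAPES = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (4, 1): "seesaw",
    (3, 2): "T-shaped",
    (2, 3): "linear",
    (6, 0): "octahedral",
    (5, 1): "square pyramidal",
    (4, 2): "square planar",
}

def shape(bonded, lone_pairs):
    return VSEPR_SHAPES.get((bonded, lone_pairs), "not tabulated here")

print(shape(2, 2))  # H2O -> bent
print(shape(6, 0))  # SF6 -> octahedral
```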
3D representations
• Line or stick - atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex.
• Ball and stick - atomic nuclei are represented by spheres (balls) and the bonds as sticks.
• Cartoon - a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe).
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them.
3. Chremos, Alexandros; Douglas, Jack F. (2015). "When does a branched polymer become a particle?". J. Chem. Phys. 143 (11): 111104. Bibcode:2015JChPh.143k1104C. doi:10.1063/1.4931483. PMID 26395679.
4. FRET description. Archived 2008-09-18 at the Wayback Machine.
5. Hillisch, A.; Lorenz, M.; Diekmann, S. (2001). "Recent advances in FRET: distance determination in protein-DNA complexes". Current Opinion in Structural Biology. 11 (2): 201–207. doi:10.1016/S0959-440X(00)00190-1. PMID 11297928.
6. FRET imaging introduction. Archived 2008-10-14 at the Wayback Machine.
7. Obtaining dihedral angles from 3J coupling constants. Archived 2008-12-07 at the Wayback Machine.
8. Another Javascript-like NMR coupling constant to dihedral. Archived 2005-12-28 at the Wayback Machine.
9. Miessler, G. L.; Tarr, D. A. (1999). Inorganic Chemistry (2nd ed.). Prentice-Hall. pp. 57–58.
10. Haynes, William M., ed. (2013). CRC Handbook of Chemistry and Physics (94th ed.). CRC Press. pp. 9–26. ISBN 9781466571143.
11. Hoy, A. R.; Bunker, P. R. (1979). "A precise solution of the rotation bending Schrödinger equation for a triatomic molecule with application to the water molecule". Journal of Molecular Spectroscopy. 74 (1): 1–8. Bibcode:1979JMoSp..74....1H. doi:10.1016/0022-2852(79)90019-5.
12. "CCCBDB Experimental bond angles, page 2". Archived from the original on 2014-09-03.
|
07f3bd3790099b68 | Wave–particle duality
Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles exhibit a wave nature (and vice versa).[2] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[3]
Although the use of the wave-particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics.
Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity.[4] Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.[5]
Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields which exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.[6][7]
Brief history of wave and particle viewpoints
Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[8] At the beginning of the 11th Century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics; describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's 1801 discovery of double-slit interference.[9][10] The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.[11]
Thomas Young's sketch of two-slit diffraction of waves, 1803
James Clerk Maxwell discovered that he could apply his previously discovered equations for electromagnetism, with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed, or at least it seemed to.
While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. Antoine Lavoisier deduced the law of conservation of mass and categorized many new chemical elements and compounds; and Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to propose that elements were indivisible subcomponents; Amedeo Avogadro discovered diatomic gases and completed the basic atomic theory, allowing the correct molecular formulae of most known compounds, as well as the correct weights of atoms, to be deduced and categorized in a consistent manner. Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.
[Animations accompanying this section show: wave-particle duality in a double-slit experiment and the effect of an observer; particle impacts making visible the interference pattern of waves; a quantum particle represented by a wave packet; and the interference of a quantum particle with itself.]
Turn of the 20th century and the paradigm shift
Particles of electricity
Radiation quantization
In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. Einstein later proposed that electromagnetic radiation itself is quantized, not the energy of radiating atoms.
Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had been long known. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.
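The divergence is easy to see numerically. The sketch below compares the Rayleigh–Jeans and Planck spectral radiance formulas at an assumed temperature of 5000 K (the temperature and wavelengths are illustrative choices, not from the text):

```python
import numpy as np

# Spectral radiance per wavelength: Rayleigh-Jeans vs Planck at T = 5000 K.
# Illustrates the ultraviolet catastrophe: the classical law diverges as
# the wavelength shrinks, while Planck's law stays finite.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 5000.0

def rayleigh_jeans(lam):
    return 2.0 * c * k * T / lam**4

def planck(lam):
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

for lam_nm in (10_000, 1_000, 100, 10):
    lam = lam_nm * 1e-9
    print(f"{lam_nm:6d} nm: RJ = {rayleigh_jeans(lam):.3e}, "
          f"Planck = {planck(lam):.3e}  W sr^-1 m^-3")
# The two laws agree at 10,000 nm, but Rayleigh-Jeans grows without
# bound toward short wavelengths while Planck's law falls to ~0.
```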
Photoelectric effect illuminated
While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black body radiation would yield a fully continuous and 'wave-like' electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since the electron's existence had been theorized eight years previously, the phenomenon had been studied with the electron model in mind in physics laboratories worldwide.
In 1902 Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy, there are merely more of them. The more light there is, the more electrons are ejected. To get high energy electrons, by contrast, one must illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[12]
Einstein's explanation of the photoelectric effect
In Einstein's account, light is composed of quanta (photons) whose energy is proportional to frequency, $E = hf$, where h is Planck's constant (6.626 × 10−34 J seconds). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[14]
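A small numerical illustration of this threshold behavior (the 2.3 eV work function, roughly that of sodium, is an assumption for illustration; it does not come from the text above):

```python
# Photon energy E = h*f = h*c/lambda for blue vs red light, compared
# with an assumed metal work function of 2.3 eV (roughly sodium).
h = 6.626e-34          # J s
c = 2.998e8            # m/s
eV = 1.602e-19         # J per electronvolt
work_function = 2.3 * eV

for name, lam_nm in (("blue", 450), ("red", 700)):
    E = h * c / (lam_nm * 1e-9)
    surplus = (E - work_function) / eV
    verdict = ("ejects an electron" if E > work_function
               else "no ejection, at any intensity")
    print(f"{name} ({lam_nm} nm): E = {E / eV:.2f} eV -> {verdict} "
          f"(max kinetic energy {max(surplus, 0):.2f} eV)")
```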
De Broglie's wavelength
In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[15][16] not just light, has a wave-like nature; he related wavelength (denoted as λ) and momentum (denoted as p):

$\lambda = \frac{h}{p}$
This is a generalization of Einstein's equation above, since the momentum of a photon is given by $p = \frac{E}{c}$ and the wavelength (in a vacuum) by $\lambda = \frac{c}{f}$, where c is the speed of light in vacuum.
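The relation makes clear why wave behavior is undetectable for everyday objects, as noted earlier (a sketch; the masses and speeds are illustrative assumptions):

```python
# de Broglie wavelength lambda = h / p for a slow electron and for a
# macroscopic object (masses and speeds assumed here for illustration).
h = 6.626e-34                    # J s
m_e = 9.109e-31                  # electron mass, kg

for label, m, v in (("electron at 1e6 m/s", m_e, 1e6),
                    ("0.145 kg baseball at 40 m/s", 0.145, 40.0)):
    lam = h / (m * v)            # non-relativistic momentum p = m v
    print(f"{label}: lambda = {lam:.3e} m")
# electron: ~7e-10 m (atomic scale); baseball: ~1e-34 m, far too small
# to detect -- consistent with the remark about macroscopic particles.
```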
Heisenberg's uncertainty principle
In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:

$\Delta x\, \Delta p \geq \frac{\hbar}{2}$

where Δ here indicates standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
ħ is the reduced Planck constant (Planck's constant divided by 2π).
Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. The thought is now, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.
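An order-of-magnitude feel for the principle (a sketch; the ångström-scale confinement length is an assumed example):

```python
# Minimum momentum spread implied by Delta_x * Delta_p >= hbar / 2
# for a particle confined to a region of (assumed) atomic size.
hbar = 1.055e-34                 # J s
m_e = 9.109e-31                  # electron mass, kg

dx = 1e-10                       # confinement length, ~1 angstrom (assumption)
dp_min = hbar / (2 * dx)         # minimum standard deviation of momentum
dv_min = dp_min / m_e            # corresponding velocity spread for an electron
print(f"dp >= {dp_min:.3e} kg m/s  ->  dv >= {dv_min:.3e} m/s for an electron")
```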
de Broglie–Bohm theory
Couder experiments,[17] "materializing" the pilot wave model.
In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[18] the wave-particle duality vanishes: the theory explains the wave behaviour as a scattering with wave appearance, because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored",[19] wrote J. S. Bell.
The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[20] demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.[17]
Wave behavior of large objects
Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[21] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.
A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer.[22] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.
In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[23] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments could be extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns could be recorded in real time and with single molecule sensitivity.[24][25]
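As a consistency check on the fullerene numbers quoted above (mass about 720 u, de Broglie wavelength about 2.5 pm), the implied beam speed works out to a few hundred meters per second:

```python
# Back-of-the-envelope check on the C60 figures quoted above.
h = 6.626e-34                    # J s
u = 1.661e-27                    # kg per atomic mass unit

m = 720 * u                      # C60 mass
lam = 2.5e-12                    # de Broglie wavelength, 2.5 pm
v = h / (m * lam)                # from lambda = h / (m v)
print(f"implied C60 beam speed: {v:.0f} m/s")   # ~220 m/s
```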
In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin,[26] a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot Lau interferometer.[27][28] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[26] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[29][30] In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer.[31] In 2013, the interference of molecules beyond 10,000 u was demonstrated.[32]
Recently, Couder, Fort, et al. showed[34] that macroscopic oil droplets on a vibrating surface can be used as a model of wave-particle duality. A localized droplet creates periodic waves around itself, and interaction with them leads to quantum-like phenomena: interference in the double-slit experiment,[35] unpredictable tunneling[36] (depending in a complicated way on a practically hidden state of the field), orbit quantization[37] (the particle has to 'find a resonance' with the field perturbations it creates: after one orbit, its internal phase has to return to the initial state) and the Zeeman effect.[38]
Treatment in modern quantum mechanics
Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation; their wave behavior follows from a different wave equation.
Following the development of quantum field theory the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines are for some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the limits of the wave-particle duality the quantum field theory gives the same results.
There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived.
Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle, in terms of the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.
Top: If wavelength λ is unknown, so are momentum p, wave-vector k and energy E (de Broglie relations). As the particle becomes more localized in position space, Δx becomes smaller at the cost of a larger Δpx.
Bottom: If λ is known, so are p, k, and E. As the particle becomes more localized in momentum space, Δp becomes smaller at the cost of a larger Δx.
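The Fourier-pair relationship in these captions is easy to check numerically. The sketch below (an illustration of my own, assuming NumPy, with ħ = 1 and arbitrary grid parameters) transforms Gaussian packets of different widths and confirms that squeezing Δx inflates Δp, with the product pinned at the Gaussian minimum of 1/2:

```python
import numpy as np

# Position and momentum wavefunctions are Fourier transforms of each other,
# so localizing one broadens the other. Units: hbar = 1.
N, L = 4096, 400.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp_grid = p[1] - p[0]

def spreads(sigma):
    psi_x = np.exp(-x**2 / (4 * sigma**2))
    psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)
    psi_p = np.fft.fftshift(np.fft.fft(psi_x))
    psi_p /= np.sqrt(np.sum(np.abs(psi_p) ** 2) * dp_grid)
    dx_ = np.sqrt(np.sum(x**2 * np.abs(psi_x) ** 2) * dx)
    dp_ = np.sqrt(np.sum(p**2 * np.abs(psi_p) ** 2) * dp_grid)
    return dx_, dp_, dx_ * dp_

for sigma in (0.5, 1.0, 4.0):
    print("sigma=%.1f  dx=%.3f  dp=%.3f  dx*dp=%.3f" % ((sigma,) + spreads(sigma)))
# Every product comes out at ~0.5: the lower bound dx*dp >= hbar/2 with hbar = 1.
```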
Alternative views
Both-particle-and-wave view
At least one physicist regards wave–particle duality as not being an incomprehensible mystery; L.E. Ballentine takes this view in Quantum Mechanics, A Modern Development (p. 4).
The Afshar experiment[40] (2007) may suggest that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, disputed by other scientists.[41][42][43][44]
Wave-only view
Carver Mead, an American scientist and professor at Caltech, proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[45]
The three-wave hypothesis of R. Horodecki relates the particle to its wave.[48][49] The hypothesis implies that a massive particle is an intrinsically spatially and temporally extended wave phenomenon governed by a nonlinear law.
Particle-only view
Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane,[50] and developed by others including Alfred Landé.[51] Duane explained the diffraction of X-rays by a crystal solely in terms of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal.[52]
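Duane's rule can be restated compactly: a structure that is periodic with spacing d can exchange transverse momentum only in whole units of h/d. A small worked check (my own illustration, with arbitrary wavelength and grating values) shows that this particle-only bookkeeping lands the photons at exactly the angles the wave picture predicts, sin θ = nλ/d:

```python
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
lam = 500e-9         # photon wavelength -- arbitrary example value (m)
d = 2e-6             # grating period -- arbitrary example value (m)

p = h / lam          # photon momentum, p = h / lambda
for n in range(1, 4):
    dp = n * h / d                   # Duane: momentum transfer quantized in h/d
    sin_theta = dp / p               # deflection from the transverse momentum kick
    theta = np.degrees(np.arcsin(sin_theta))
    print("n=%d: sin(theta)=%.3f, n*lambda/d=%.3f, theta=%.1f deg"
          % (n, sin_theta, n * lam / d, theta))
# The two columns agree: quantized momentum transfer reproduces the grating equation.
```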
Neither-wave-nor-particle view
It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington[53] coined the name "wavicle" to describe such objects, although the term is not in regular use today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[54]
"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."
Relational approach to wave–particle duality
Relational quantum mechanics has been developed as a point of view that regards the event of particle detection as having established a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg’s uncertainty principle is consequently avoided; hence there is no wave-particle duality.[55]
Photographs are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.[56]
Notes and references
4. ^ Kumar, Manjit (2011). Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality (Reprint ed.). W. W. Norton & Company. pp. 242, 375–376. ISBN 978-0393339888.
6. ^ Camilleri, K. (2009). Heisenberg and the Interpretation of Quantum Mechanics: the Physicist as Philosopher, Cambridge University Press, Cambridge UK, ISBN 978-0-521-88484-6.
7. ^ Preparata, G. (2002). An Introduction to a Realistic Quantum Physics, World Scientific, River Edge NJ, ISBN 978-981-238-176-7.
9. ^ Young, Thomas (1804). "Bakerian Lecture: Experiments and calculations relative to physical optics". Philosophical Transactions of the Royal Society. 94: 1–16. Bibcode:1804RSPT...94....1Y. doi:10.1098/rstl.1804.0001.
10. ^ Thomas Young: The Double Slit Experiment
13. ^ "Observing the quantum behavior of light in an undergraduate laboratory". American Journal of Physics. 72: 1210. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397.
14. ^ Zhang, Q (1996). "Intensity dependence of the photoelectric effect induced by a circularly polarized laser beam". Physics Letters A. 216 (1–5): 125–128. Bibcode:1996PhLA..216..125Z. doi:10.1016/0375-9601(96)00259-9.
17. ^ a b See this Science Channel production (Season II, Episode VI "How Does The Universe Work?"), presented by Morgan Freeman.
18. ^ Bohmian Mechanics, Stanford Encyclopedia of Philosophy.
20. ^ Y. Couder, A. Boudaoud, S. Protière, Julien Moukhtar, E. Fort: Walking droplets: a form of wave-particle duality at macroscopic level?, doi:10.1051/epn/2010101.
21. ^ Estermann, I.; Stern O. (1930). "Beugung von Molekularstrahlen". Zeitschrift für Physik. 61 (1–2): 95–125. Bibcode:1930ZPhy...61...95E. doi:10.1007/BF01340293.
24. ^ Juffmann, Thomas; et al. (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. Retrieved 27 March 2012.
26. ^ a b Hackermüller, Lucia; Stefan Uttenthaler; Klaus Hornberger; Elisabeth Reiger; Björn Brezger; Anton Zeilinger; Markus Arndt (2003). "The wave nature of biomolecules and fluorofullerenes". Phys. Rev. Lett. 91 (9): 090408. arXiv:quant-ph/0309016 . Bibcode:2003PhRvL..91i0408H. doi:10.1103/PhysRevLett.91.090408. PMID 14525169.
27. ^ Clauser, John F.; S. Li (1994). "Talbot von Lau interferometry with cold slow potassium atoms". Phys. Rev. A. 49 (4): R2213–17. Bibcode:1994PhRvA..49.2213C. doi:10.1103/PhysRevA.49.R2213. PMID 9910609.
28. ^ Brezger, Björn; Lucia Hackermüller; Stefan Uttenthaler; Julia Petschinka; Markus Arndt; Anton Zeilinger (2002). "Matter-wave interferometer for large molecules". Phys. Rev. Lett. 88 (10): 100404. arXiv:quant-ph/0202158 . Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334. Archived from the original on 2016-05-21.
29. ^ Hornberger, Klaus; Stefan Uttenthaler; Björn Brezger; Lucia Hackermüller; Markus Arndt; Anton Zeilinger (2003). "Observation of Collisional Decoherence in Interferometry". Phys. Rev. Lett. 90 (16): 160401. arXiv:quant-ph/0303093 . Bibcode:2003PhRvL..90p0401H. doi:10.1103/PhysRevLett.90.160401. PMID 12731960. Archived from the original on 2016-05-21.
30. ^ Hackermüller, Lucia; Klaus Hornberger; Björn Brezger; Anton Zeilinger; Markus Arndt (2004). "Decoherence of matter waves by thermal emission of radiation". Nature. 427 (6976): 711–714. arXiv:quant-ph/0402146 . Bibcode:2004Natur.427..711H. doi:10.1038/nature02276. PMID 14973478.
31. ^ Gerlich, Stefan; et al. (2011). "Quantum interference of large organic molecules". Nature Communications. 2 (263). Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521 . PMID 21468015.
32. ^ Eibenberger, S.; Gerlich, S.; Arndt, M.; Mayor, M.; Tüxen, J. (2013). "Matter–wave interference of particles selected from a molecular library with masses exceeding 10 000 amu". Physical Chemistry Chemical Physics. 15 (35): 14696–14700. arXiv:1310.8343 . Bibcode:2013PCCP...1514696E. doi:10.1039/c3cp51500a. PMID 23900710.
34. ^ YouTube video: Yves Couder Explains Wave/Particle Duality via Silicon Droplets
37. ^ Fort, E.; Eddi, A.; Boudaoud, A.; Moukhtar, J.; Couder, Y. (2010). "Path-memory induced quantization of classical orbits". PNAS. 107 (41): 17515–17520. arXiv:1307.6051 . Bibcode:2010PNAS..10717515F. doi:10.1073/pnas.1007386107.
38. ^ Level Splitting at Macroscopic Scale
39. ^ (Buchanan pp. 29–31)
40. ^ Afshar, S.S.; et al. (2007). "Paradox in Wave Particle Duality". Found. Phys. 37: 295. arXiv:quant-ph/0702188 . Bibcode:2007FoPh...37..295A. doi:10.1007/s10701-006-9102-8.
41. ^ Kastner, R (2005). "Why the Afshar experiment does not refute complementarity". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 36 (4): 649–658. arXiv:quant-ph/0502021 . Bibcode:2005SHPMP..36..649K. doi:10.1016/j.shpsb.2005.04.006 – via Elsevier Science Direct.
42. ^ Steuernagel, Ole (2007-08-03). "Afshar's Experiment Does Not Show a Violation of Complementarity". Foundations of Physics. 37 (9): 1370–1385. arXiv:quant-ph/0512123 . Bibcode:2007FoPh...37.1370S. doi:10.1007/s10701-007-9153-5. ISSN 0015-9018.
43. ^ Jacques, V.; Lai, N. D.; Dréau, A.; Zheng, D.; Chauvat, D.; Treussart, F.; Grangier, P.; Roch, J.-F. (2008-01-01). "Illustration of quantum complementarity using single photons interfering on a grating". New Journal of Physics. 10 (12): 123009. arXiv:0807.5079 . Bibcode:2008NJPh...10l3009J. doi:10.1088/1367-2630/10/12/123009. ISSN 1367-2630.
44. ^ Georgiev, Danko (2012-01-26). "Quantum Histories and Quantum Complementarity". ISRN Mathematical Physics. 2012: 1–37. doi:10.5402/2012/327278.
46. ^ Paul Arthur Schilpp, ed, Albert Einstein: Philosopher-Scientist, Open Court (1949), ISBN 0-87548-133-7, p 51.
48. ^ Horodecki, R. (1981). "De broglie wave and its dual wave". Phys. Lett. A. 87 (3): 95–97. Bibcode:1981PhLA...87...95H. doi:10.1016/0375-9601(81)90571-5.
49. ^ Horodecki, R. (1983). "Superluminal singular dual wave". Lett. Nuovo Cimento. 38: 509–511.
50. ^ Duane, W. (1923). The transfer in quanta of radiation momentum to matter, Proc. Natl. Acad. Sci. 9(5): 158–164.
53. ^ Eddington, Arthur Stanley (1928). The Nature of the Physical World. Cambridge, UK: MacMillan. p. 201.
54. ^ Penrose, Roger (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage. p. 521, §21.10. ISBN 978-0-679-77631-4.
56. ^ "Press release: The first ever photograph of light as both a particle and wave". Ecole Polytechnique Federale de Lausanne. 2 March 2015.
Interpreting the Quantum World I: Measurement & Non-Locality
In previous posts Aron introduced us to the strange, yet compelling world of quantum mechanics and its radical departures from our everyday experience. We saw that the classical world we grew up with, where matter is composed of solid particles governed by strictly deterministic equations of state and motion, is in fact somewhat “fuzzy.” The atoms, molecules, and subatomic particles in the brightly colored illustrations and stick models of our childhood chemistry sets and schoolbooks are actually probabilistic fields that somehow acquire the properties we find in them when they’re observed. Even a particle’s location is not well-defined until we see it here, and not there. Furthermore, because they are ultimately fields, they behave in ways the little hard “marbles” of classical systems cannot, leading to all sorts of paradoxes. Physicists, philosophers, and theologians alike have spent nearly a century trying to understand these paradoxes. In this series of posts, we’re going to explore what they tell us about the universe, and our place in it.
To quickly recap earlier posts, in quantum mechanics (QM) the fundamental building block of matter is a complex-valued wave function \Psi whose squared amplitude is a real-valued number that gives the probability density of observing a particle/s in any given state. \Psi is most commonly given as a function of the locations of its constituent particles, \Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ), or their momenta, \Psi\left ( \vec{p_{1}}, \vec{p_{2}}... \vec{p_{n}} \right ) (but not both, which as we will see, is important), but will also include any of the system’s other variables we wish to characterize (e.g. spin states). The range of possible configurations these variables span is known as the system’s Hilbert space. As the system evolves, its wave function wanders through this space exploring its myriad probabilistic possibilities. The time evolution of its journey is derived from its total energy in a manner directly analogous to the Hamiltonian formalism of classical mechanics, resulting in the well-known time-dependent Schrödinger equation. Because \left | \Psi \right |^{2} is a probability density, its integral over all of the system’s degrees of freedom must equal 1. This irreducibly probabilistic aspect of the wave function is known as the Born Rule (after Max Born who first proposed it), and the mathematical framework that preserves it in QM is known as unitarity. [Fun fact: Pop singer Olivia Newton-John is Born’s granddaughter!]
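To see the Born Rule and unitarity at work, here is a minimal sketch (a toy example of my own, assuming NumPy and SciPy; the five-state system and time step are arbitrary): a random Hermitian Hamiltonian generates a unitary evolution, and the Born-rule probabilities always sum to 1 before and after.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A small discrete system: 5 basis states, random Hermitian Hamiltonian (hbar = 1)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2            # Hermitian, so exp(-iHt) is unitary

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)          # normalize: total probability is 1

psi_t = expm(-1j * H * 0.7) @ psi   # Schrodinger evolution for t = 0.7

probs = np.abs(psi_t) ** 2          # Born rule in this basis
print("total probability after evolution:", probs.sum())  # 1.0 (unitarity)
```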
Notice that \Psi is a single complex-valued wave function of the collective states of all its constituent particles. This makes for some radical departures from classical physics. Unlike a system of little hard marbles, it can interfere with itself—not unlike the way the countless harmonics in sound waves give us melodies, harmonies, and the rich tonalities of Miles Davis’ muted trumpet or Jimi Hendrix’s Stratocaster. The history of the universe is a grand symphony—the music of the spheres! Its harmonies also lead to entangled states, in which one part may not be uniquely distinguishable from another. So, it will not generally be true that the wave function of the whole system factors into a product of the individual particle wave functions,
\Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ) \neq \Psi\left ( \vec{r_{1}} \right )\Psi\left ( \vec{r_{2}} \right )... \Psi\left ( \vec{r_{n}} \right )
until the symphony progresses to a point where individual particle histories decohere enough to be distinguished from each other—melodies instead of harmonies.
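That failure to factor can be checked directly for a toy two-particle system. In the sketch below (my own illustration, assuming NumPy), the singular values of the two-qubit amplitude matrix—its Schmidt coefficients—reveal whether a state is a product (one nonzero coefficient) or entangled (more than one):

```python
import numpy as np

def schmidt(state):
    # Schmidt coefficients of a two-qubit state vector; a single nonzero
    # coefficient means the state factors into a product of one-particle states.
    c = state.reshape(2, 2)                    # amplitudes c_ij for |i>|j>
    s = np.linalg.svd(c, compute_uv=False)
    return np.sum(s > 1e-12), np.round(s, 3)

product = np.kron([1, 0], [1, 1]) / np.sqrt(2)        # |0>(|0>+|1>)/sqrt(2)
entangled = np.array([0, 1, -1, 0]) / np.sqrt(2)      # (|01>-|10>)/sqrt(2)

for name, st in (("product", product), ("entangled", entangled)):
    rank, s = schmidt(st)
    print("%s state: Schmidt rank %d, coefficients %s" % (name, rank, s))
# rank 1 -> factorable; rank 2 -> no factorization exists: the parts aren't distinct.
```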
Another consequence of this wave-like behavior is that the position and momentum representations can be converted into each other with a mathematical operation known as a Fourier transform. As a result, the Hilbert space may be specified in terms of position or momentum, but not both, which leads to the famous Heisenberg Uncertainty Principle,
\Delta x\Delta p \geqslant \hbar/2
where \hbar is the reduced Planck constant. It’s important to note that this uncertainty is not epistemic—it’s an unavoidable consequence of wave-like behavior. When I was first taught the Uncertainty Principle in my undergraduate Chemistry series, it was derived by modeling particles as tiny pool ball “wave packets” whose locations couldn’t be observed by bouncing a tiny cue-ball photon off them without batting them into left field with a momentum we couldn’t simultaneously see. As it happens, this approach does work, and is perhaps easier for novice physics and chemistry students to wrap their heads around. But unfortunately, it paints a completely wrong-headed picture of the underlying reality. We can pin down the exact location of a particle, but in so doing we aren’t simply batting it away—we’re destroying whatever information about momentum it originally had, rendering it completely ambiguous, and vice versa (in the quantum realm paired variables that are related to each other like this are said to be canonical). The symphony is, to some extent, irreducibly fuzzy!
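The destruction of momentum information by a position measurement can be seen in a toy computation (my own sketch, assuming NumPy, with ħ = 1 and arbitrary parameters): projecting a broad packet onto a narrow position window leaves the momentum distribution vastly wider than before.

```python
import numpy as np

N, L = 4096, 400.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def dp(psi):
    # momentum spread of a position-space wavefunction
    w = np.abs(np.fft.fft(psi)) ** 2
    w /= w.sum()
    mean = np.sum(p * w)
    return np.sqrt(np.sum((p - mean) ** 2 * w))

psi = np.exp(-x**2 / (4 * 5.0**2) + 1j * 2.0 * x)    # broad packet, sigma = 5
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
print("momentum spread before: %.3f" % dp(psi))      # ~0.1

# "Measure" position: project onto a window of width 0.5 around x = 0
collapsed = np.where(np.abs(x) < 0.25, psi, 0)
collapsed /= np.sqrt(np.sum(np.abs(collapsed) ** 2) * dx)
print("momentum spread after : %.3f" % dp(collapsed))  # vastly larger
```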
So… the unfolding story of the universe is a grand symphony of probability amplitudes exploring their Hilbert space worlds along deterministic paths, often in entangled states where some of their parts aren’t entirely distinct from each other, and acquiring whatever properties we find them to have only when they’re measured, many of which cannot simultaneously have exact values even in principle. Strange stuff to say the least! But the story doesn’t end there. Before we can decipher what it all means (or, I should say, get as close to doing so as we ever will) there are two more subtleties to this bizarre quantum world we still need to unpack… measurement and non-locality.
The first thing we need to wrap our heads around is observation, or in quantum parlance, measurement. In classical systems matter inherently possesses the properties that it does, and we discover what those properties are when we observe them. My sparkling water objectively exists in a red glass located about one foot to the right of my keyboard, and I learned this by looking at it (and roughly measuring the distance with my thumb and fingers). In the quantum realm things are messier. My glass of water is really a bundle of probabilistic particle states that in some sense acquired its redness, location, and other properties by the very act of my looking at it and touching it. That’s not to say that it doesn’t exist when I’m not doing that, only that its existence and nature aren’t entirely independent of me.
How does this work? In quantum formalism, the act of observing a system is described by mathematical objects known as operators. You can think of an operator as a tool that changes one function into another one in a specific way—like say, “take the derivative and multiply by ten.” The act of measuring some property A (like, say, the weight or color of my water glass) will apply an associated operator \hat A to its initial wave function state \Psi_{i} and change it to some final state \Psi_{f},
\hat A \Psi_{i} = \Psi_{f}
For every such operator, there will be one or more states \Psi_{i} could be in at the time of this measurement for which \hat A would end up changing its magnitude but not its direction,
\hat A \Psi_{1} = a_{1}\Psi_{1}, \quad \hat A \Psi_{2} = a_{2}\Psi_{2}, \quad \ldots, \quad \hat A \Psi_{n} = a_{n}\Psi_{n}
These states are called eigenvectors, and the constants a_{n} associated with them are the values of A we would measure if \Psi is in any of these states when we observe it. Together, they define a coordinate system associated with A in the Hilbert space that \Psi can be specified in at any given moment in its history. If \Psi_{i} is not in one of these states when we measure A, doing so will force it into one of them. That is,
\hat A \Psi_{i} \rightarrow \Psi_{n}
and a_{n} will be the value we end up with. The projection of \Psi_{i} onto each of the n axes gives the probability amplitude for that outcome; its squared magnitude is the probability that measuring A will put the system into that state, with the associated eigenvalue being what we measure,
P(a_{n}) = \left | \Psi_{i} \cdot \Psi_{n} \right |^{2}
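The whole recipe is easy to mirror in code. In this sketch (a toy of my own, assuming NumPy; the 3×3 operator and initial state are random), diagonalizing a Hermitian operator gives the eigenvalues a_n and eigenvectors, and the squared projections of the initial state onto those eigenvectors give the outcome probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy observable: random 3x3 Hermitian operator
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_hat = (B + B.conj().T) / 2

a, vecs = np.linalg.eigh(A_hat)     # eigenvalues a_n, eigenvectors Psi_n (columns)

psi_i = rng.normal(size=3) + 1j * rng.normal(size=3)
psi_i /= np.linalg.norm(psi_i)      # normalized initial state

P = np.abs(vecs.conj().T @ psi_i) ** 2   # P(a_n) = |<Psi_n|Psi_i>|^2
for a_n, p_n in zip(a, P):
    print("outcome a = % .3f with probability %.3f" % (a_n, p_n))
print("probabilities sum to", P.sum())   # 1.0
```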
So… per the Schrödinger equation, our wave function skips along its merry, deterministic way through a Hilbert space of unitary probabilistic states. Following a convention used by Penrose (2016), let’s designate this part of the universe’s evolution as \hat U. All progresses nicely, until we decide to measure something—location, momentum, spin state, etc. When we do, our wave function abruptly (some would even be tempted to say magically) jumps to a different track and spits out whatever value we observe, after which \hat U starts over again in the new track.
This event—let’s call it \hat M—has nothing whatsoever to do with the wave function itself. The tracks it jumps to are determined by whatever properties we observe, and the outcomes of these jumps are irreducibly indeterminate. We cannot say ahead of time which track we’ll end up on even in principle. The best we can do is state that some property A has such and such probability of knocking \Psi into this or that state and returning its associated value. When this happens, the wave function is said to have “collapsed.” [Collapsed is in quotes here for a reason… as we shall see, not all interpretations of quantum mechanics accept that this is what actually happens!]
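The Û/M̂ two-step can be caricatured in a few lines (a toy sketch of my own, assuming NumPy and SciPy; the two-level Hamiltonian, observable, and time step are arbitrary): evolve deterministically, then jump to a random eigenstate with Born-rule weights, and repeat.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

H = np.array([[0.0, 1.0], [1.0, 0.0]])     # toy two-level Hamiltonian (hbar = 1)
Sz = np.array([[1.0, 0.0], [0.0, -1.0]])   # observable: spin about z
vals, vecs = np.linalg.eigh(Sz)

psi = np.array([1.0 + 0j, 0.0])            # start in spin-up
for step in range(5):
    psi = expm(-1j * H * 0.4) @ psi        # U-hat: deterministic evolution
    P = np.abs(vecs.conj().T @ psi) ** 2   # Born-rule weights for each track
    n = rng.choice(2, p=P / P.sum())       # M-hat: irreducibly random jump
    psi = vecs[:, n].astype(complex)       # "collapse" onto the chosen eigenvector
    print("step %d: measured %+.0f (probabilities were %s)"
          % (step, vals[n], np.round(P, 3)))
```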
It’s often said that quantum mechanics only applies to the subatomic world, but on the macroscopic scale of our experience classical behavior reigns. For the most part this is true. But… as we’ve seen, \Psi is a wave function, and waves are spread out in space. Subatomic particles are only tiny when we observe them to be located somewhere. So, if \hat M involves a discrete collapse, it happens everywhere at once, even over distances that according to special relativity cannot communicate with each other—what some have referred to as “spooky action at a distance.” This isn’t mere speculation, nor a problem with our methods—it can be observed.
Consider two electrons in a paired state with zero total spin. Such states (which are known as singlets) may be bound or unbound, but once formed they will conserve whatever spin state they originated with. In this case, since the electron cannot have zero spin, the paired electrons would have to preserve antiparallel spins that cancel each other. If one were observed to have a spin of, say, +1/2 about a given axis, the other would necessarily have a spin of -1/2. Suppose we prepared such a state unbound, and sent the two electrons off in opposite direction. As we’ve seen, until the spin state of one of them is observed, neither will individually be in any particular spin state. The wave function will be an entangled state of two possible outcomes, +/- and -/+ about any axis. Once we observe one of them and find it in, say, a “spin-up” state (+1/2 about a vertical axis), the wave function will have collapsed to a state in which the other must be “spin-down” (-1/2), and that will be what we find if it’s observed a split second later as shown below.
But what would happen if the two measurements were made over a distance too large for a light signal to travel from the first observation point to the second one during the time delay between the two measurements? Special relativity tells us that no signal can travel faster than the speed of light, so how would the second electron know that it was supposed to be in a spin-down state? Light travels 11.8 inches in one nanosecond, so it’s well within existing microcircuit technology to test this, and it has been done on many occasions. The result…? The second electron is always found in a spin state opposite that of the first. Somehow, our second electron knows what happened to its partner… instantaneously!
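A toy Monte Carlo makes the anticorrelation vivid (my own sketch, assuming NumPy; the measurement axis is an arbitrary choice): Alice’s measurement collapses the joint state, after which Bob’s outcome about the same axis is certain to be opposite.

```python
import numpy as np

rng = np.random.default_rng(3)

def spin(theta):
    # spin observable about an axis tilted by theta from z, in the x-z plane
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01>-|10>)/sqrt(2)
I2 = np.eye(2)
vals, vecs = np.linalg.eigh(spin(0.9))      # the axis angle 0.9 rad is arbitrary

for trial in range(5):
    # Alice's projective measurement on particle 1
    amps = [np.kron(np.outer(vecs[:, n], vecs[:, n]), I2) @ singlet for n in (0, 1)]
    probs = np.array([np.linalg.norm(a) ** 2 for a in amps])
    n = rng.choice(2, p=probs / probs.sum())
    psi = amps[n] / np.linalg.norm(amps[n])  # collapsed joint state
    # Bob's outcome probabilities about the same axis on particle 2
    bob = [np.linalg.norm(np.kron(I2, np.outer(vecs[:, m], vecs[:, m])) @ psi) ** 2
           for m in (0, 1)]
    print("trial %d: Alice %+.0f, Bob P(-)=%.2f P(+)=%.2f"
          % (trial, vals[n], bob[0], bob[1]))
# Bob is always certain to get the opposite result, whatever axis was chosen.
```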
This instantaneous correlation raises some issues. Traditional QM asserts that the wave function gives us a complete description of a system’s physical reality, and the properties we observe it to have are instantiated when we see them. At this point we might ask ourselves two questions:
1) How do we really know that prior to our observing it, the wave function truly is in an entangled state of two as-yet unrealized outcomes? What if it’s just probabilistic scaffolding we use to cover our lack of understanding of some deeper determinism not captured by our current QM formalism?
2) What if the unobserved electron shown above actually had a spin-up property that we simply hadn’t learned about yet, and would’ve had it whether it was ever observed or not (a stance known as counterfactual definiteness)? How do we know that one or more “hidden” variables of some sort hadn’t been involved in our singlet’s creation, and sent the two electrons off with spin state box lunches ready for us to open without violating special relativity (a stance known as local realism)?
Together, these comprise what’s known as local realism, or what Physicist John Bell referred to as the “Bertlmann’s socks” view (after Reinhold Bertlmann, a colleague of his at CERN). Bertlmann was known for never wearing matching pairs of socks to work, so it was all but guaranteed that if one could observe one of his socks, the other would be found to be differently colored. But unlike our collapsed electron singlet state, this was because Bertlmann had set that state up ahead of time when he got dressed… a “hidden variable” one wouldn’t be privy to unless they shared a flat with him. His socks would already have been mismatched when we discovered them to be, so no “spooky action at a distance” would be needed to create that difference when we first saw them.
In 1964 Bell proposed a way to test this against the entangled states of QM. Spin state can only be observed in one axis at a time. Our experiment can look for +/- states about any axis, but not together. If an observer “Alice” finds one of the electrons in a spin-up state, the second electron will be in a spin-down state. What would happen if another observer “Bob” then measured its spin state about an axis at, say, a 45-deg. angle to vertical as shown below?
The projection of the spin-down wave function on the eigenvector coordinate system of Bob’s measurement will translate into probabilities of observing + or – states in that plane. Bell produced a set of inequalities bearing his name which showed that if the electrons in our singlet state had in fact been dressed in different colored socks from the start, experiments like this will yield outcomes that differ statistically from those predicted by traditional QM (a numeric check of this appears after the list below). This too has been tested many times, and the results have consistently favored the predictions of QM, leaving us with three options:
a) Local realism is not valid in QM. Particles do not inherently possess properties prior to our observing them, and indeterminacy and/or some degree of “spooky action at a distance” cannot be fully exorcised from \hat M.
b) Our understanding of QM is incomplete. Particles do possess properties (e.g. spin, location, or momentum) whether we observe them or not (i.e. – counterfactuals about measurement outcomes exist), but our understanding of \hat U and \hat M doesn’t fully reflect the local realism that determines them.
c) QM is complete, and the universe is both deterministic and locally real without the need for hidden variables, but counterfactual definiteness is an ill-formed concept (as in the "Many Worlds Interpretation" for instance).
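Here is the promised numeric check (a minimal sketch of my own, assuming NumPy): the singlet correlation E(a,b) computed straight from the state equals -cos(a-b), and at the standard settings the CHSH combination reaches 2√2, past the bound of 2 that any locally real, Bertlmann’s-socks model must respect.

```python
import numpy as np

def spin(t):     # spin observable about an axis at angle t in the x-z plane
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def E(a, b):     # correlation <sigma_a (x) sigma_b> in the singlet state
    return float(singlet @ np.kron(spin(a), spin(b)) @ singlet)

# Standard CHSH settings; local hidden variables require |S| <= 2
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("E(a,b) = %.3f (equals -cos(a-b) = %.3f)" % (E(a, b), -np.cos(a - b)))
print("|S| = %.3f, versus the local-realist bound 2" % abs(S))   # 2.828...
```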
Nature seems to be telling us that we can’t have our classical cake and eat it. There’s only room on the bus for one of these alternatives. Several possible loopholes have been suggested to exist in Bell’s inequalities through which underlying locally real mechanics might slip. These have led to ever more sophisticated experiments to close them, which continue to this day. So far, the traditional QM framework has survived every attempt to up the ante, painting Bertlmann’s socks into an ever-shrinking corner. In 1966, Bell, and independently in 1967, Simon Kochen and Ernst Specker, proved what has since come to be known as the Kochen-Specker Theorem, which tightens the noose around hidden variables even further. What they showed was that regardless of non-locality, hidden variables cannot account for indeterminacy in QM unless they’re contextual. Essentially, this all but dooms counterfactual definiteness in \hat M. There are ways around this (as there always are if one is willing to go far enough to make a point about something). The possibility of “modal” interpretations of QM has been floated, as has the notion of a “subquantum” realm where all of this is worked out. But these are becoming increasingly convoluted, and poised for Occam’s ever-present razor. As of this writing, hidden variables theories aren’t quite dead yet, but they are in a medically induced coma.
In case things aren’t weird enough for you yet, note that a wave function collapse over spacelike distances raises the specter of the relativity of simultaneity. Per special relativity, over such distances the Lorentz boost blurs the distinction between past and future. In situations like these it’s unclear whether the wave function was collapsed by the first observation or the second one, because which one is in the future of the other is a matter of which inertial reference frame one is viewing the experiment from. Considering that you and I are many-body wave functions, anything that affects us now, like say, stubbing a toe, collapses our wave function everywhere at once. As such, strange as it may sound, in a very real sense it can be said that a short while ago your head experienced a change because you stubbed your toe now, not back then. And… it will experience a change shortly because you did as well. Which of these statements is correct depends only on the frame of reference from which the toe-stubbing event is viewed. It’s important to note that this has nothing to do with the propagation of information along our nerves—it’s a consequence of the fact that as “living wave functions”, our bodies are non-locally spread out across space-time to an extent that slightly blurs the meaning of “now”. Of course, the elapsed times associated with the size of our bodies are too small to be detected, but the basic principle remains.
Putting it all together
Whew… that was a lot of unpacking! And the world makes even less sense now than it did when we started. Einstein once said that he wanted to know God’s thoughts, the rest were just details. Well it seems the mind of God is more inscrutable than we ever imagined! But now we have the tools we need to begin exploring some of the ways His thoughts have been written into the fabric of creation. Our mission, should we choose to accept it, is to address the following:
1) What is this thing we call a wave function? Is it ontologically real, or just mathematical scaffolding we use to make sense of things we don’t yet understand?
2) What really happens when a deterministic, well-behaved \hat U symphony runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?
3) If counterfactual definiteness is an ill-formed concept and every part of the wave function is equally real, why do our observations always leave us with only one experienced outcome? Why don’t we experience entangled realities, or even multiple realities?
In the next installment in this series we’ll delve into a few of the answers that have been proposed so far. The best is yet to come, so stay tuned!
Penrose, R. (2016). Fashion, Faith, and Fantasy in the New Physics of the Universe. Princeton University Press. ISBN 0691178534; ASIN B01AMPQTRU. Accessed June 11, 2017.
This entry was posted in Metaphysics, Physics.
9 Responses to Interpreting the Quantum World I: Measurement & Non-Locality
1. TY says:
Fine article on quantum weirdness -- the apparently weird (unexplainable?) but actual behaviour of sub-atomic particles. What we with our limited minds and "brains" cannot fathom is perfectly logical to the greatest MIND. Nobel Laureate in Physics Eugene Wigner made a similar point in his article, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
So we have the quantitative tools to explore God's thoughts with respect to the natural world but I wonder if science will ever get us closer to "understanding" or "knowing" God.
Love to hear your thoughts and thanks to Aron for filling the gap with your guest posts.
2. Mactoul says:
Physicists can't possibly mean what they say.
CS Lewis.
This presentation of quantum mechanics illustrates the incredible sloppiness encouraged in physics regarding interpretation of quantum mechanics, beginning with the very claim that the wavefunction psi—defined as the function whose squared amplitude gives the probability density of observing a particle—is the fundamental building block of matter.
Thus, the definition itself invokes a particle whose wavefunction is psi. Thus, the psi describes the particle but it is not itself the particle. The particles may be the building blocks of matter, but the statement that psi is the building block of matter is simply nonsensical. Psi is just numbers, and no amount of numbers could generate a single particle of however small a mass.
Then, it is said of measurement that "This event—let’s call it M̂—has nothing whatsoever to do with the wave function itself."
Clearly, then, the claim that psi is the building block of matter is further undercut. We have other things--measurement events, presumably instruments that measure, conscious observers, etc.
The fundamental philosophical error is the mistaken claim that physics can pronounce on the ontology--on what the building blocks are. In actuality, all physics is capable of doing is to relate a material configuration at one time to a material configuration at another time. Grand pronouncements as to ontology are outside the province of physics, and it smacks of the hubris of scientism to pretend otherwise.
A key mistake made in discussions of the uncertainty principle is the fallacy of assuming that if a property cannot be measured exactly, then it does not exist exactly. That there is a minimum precision in the measurement of corresponding variables is straightforward, but the conclusion that there is a fundamental ontological defect in nature and an absence of microscopic causality makes a mountain out of a molehill.
3. Philip Wainwright says:
I found this review of Penrose's book helpful:
4. Scott Church says:
Thanks @Ty! I've always loved that phrase of Wigner's. To me, apart from our saving experience of Him, nothing bespeaks the mind of God more clearly than the unreasonable effectiveness of mathematics. Having laid the groundwork here, we'll be getting to some of the interpretations themselves in my next.
@Philip Wainwright, as a matter of fact, it was that very post of Woit's that first introduced me to the book, and prompted me to put it on my Christmas list last fall!
@Mactoul, I'm not sure what you think is "incredibly [sloppy]" about QM. It is among the most rigorously well-formulated & successful scientific theories in history. Quantum electrodynamics for instance, which is built on the foundations laid here, predicts the magnetic dipole moment of the electron to no less than 12 decimal places--one part in a trillion. If that's your idea of "sloppy" then I have no words for you. But if you think you can do better, including a more precise and useful definition of \psi, by all means, do so... and go collect your Nobel and Templeton prizes.
As for "building blocks of matter," that's a figure of speech. When physicists refer to particles and fields it's implicit that those terms are to some extent being used loosely. Wave-like and particle-like properties can be observed separately in different measurements, but clearly, the reality we're studying is something larger than both, and some degree of analogy and/or metaphor is necessary to even speak of them. This is no place to be picking the fly shit out of the pepper. Surely, if I asked you to lend me a hand with something, that wouldn't involve a hacksaw and a bloody mess, would it? ;-)
That said, you are right to point out that physics cannot speak directly to matters of bedrock ontology and epistemology. But, as I noted, this isn't physics territory alone. QM raises philosophical and theological questions as well, and our interpretations of it invite us to explore them. While the hubris of scientism certainly lurks there and many have succumbed to it, that needn't be the case. God hasn't asked us to refrain from ultimate questions, or from seeking knowledge of His thoughts. He only expects us to remember our limits with reverence, and live with the mysteries that will inevitably lie beyond our grasp. We're to take off our sandals and remember that we're standing on holy ground (Exodus 3:5).
As for the ontic and epistemic issues in all this, I'll be getting to some of that in my next, when we review a few of the ideas that have been floated to date. With luck, that'll be sometime this weekend.
5. Johannes says:
From the settled fact that local realism is not valid, you must give up either realism or locality, which is done in each of the main "one world" interpretations of QM: the Copenhagen Interpretation gives up realism (or in your terms, "counterfactual definiteness") and preserves locality, whereas Bohmian Mechanics preserves realism and gives up locality.
According to CI, the wave function is epistemic and there is physical indeterminism.
According to BM, the wave function is ontic and there is physical determinism.
My hypothesis is that both are formally correct, and that CI describes reality from the viewpoint of observers with limited observational and computational capabilities, whereas BM describes reality from the viewpoint of an Observer with unlimited observational and computational capabilities.
6. Mactoul says:
Scott Church,
That the popular interpretations of QM are philosophically sloppy has been generally noted by people trained in the humanist tradition. See CS Lewis, Stanley Jaki (who held a PhD in physics as well).
The 12-decimal precision cannot exempt the theory from giving a clear and unambiguous answer to Schroedinger's Cat paradox.
It is the particles in QM that have wave-like aspects. The psi merely describes the system. To say that psi is the building block of the universe confuses the theories of physics with the physical reality that the theory seeks to describe. It is like saying that a Lagrangian is the building block of the classical world.
The traditional way of introducing QM is with reference to particular microscopic systems, not by assuming that QM encompasses all reality. The QM formalism is set up to describe a microscopic system which is interacting with some measurement apparatus. One may generalize from that provided it is justified.
7. Aron Wall says:
Is "Physicists can't possibly mean what they say." supposed to be a QUOTE by St. Lewis? If so, please provide a citation.
I love the writings of St. Lewis, but he is a terrible choice of authority for the interpretation of QM, since (as he was the first to acknowledge) he didn't understand the relevant math or science and was merely reacting as a layman to what they seemed to be saying.
8. Mactoul says:
Those who like myself have had a philosophical rather than a scientific education find it almost impossible to believe that the scientists really mean what they seem to be saying. I cannot help thinking they mean no more than that the movements of individual units are permanently incalculable to us, not that they are in themselves random and lawless. And even if they mean the latter, a layman can hardly feel any certainty that some scientific development may not tomorrow abolish this whole idea of a lawless subnature.
Miracles, chap 3.
9. Mactoul says:
"he is a terrible choice of authority for the interpretation of QM, since he didn't understand the relevant math or science "
The dispute is not over maths but over words. Is the term "wavefunction of the universe" justified? Borrowing the authority of precise experiments on the microscopic world, how is the extrapolation made to the entire universe? The universe, moreover, contains observers and measuring instruments, which in experimentally established QM are external to the QM system.
Leave a Reply
Double slit experiment
Discussion in 'Physics & Math' started by Xmo1, Nov 15, 2016.
1. Xmo1 Registered Senior Member
I think the interpretation of the double slit experiment is wrong. If you aim your photon or electron at the center between the two slits it will go someplace (roll toward a slit, let's say). In doing so it will leave part of itself (maybe just a charge) as a trail, and when it goes over the edge it will again leave part of itself (a bump) glued to the edge. The next time it squiggles along the same trail, it will hit that bump and bounce, but still fall through a slit without touching the inner edges of the slit. After doing that a few times it will hit the inner and opposite wall of the slit, and again bounce, maybe toward some outer edge of the receptor.
It hits the wall (the receptor), not as a wave, but still as a particle. That is, the individual photons or electrons are hitting the wall. Otherwise, if a wave were hitting the receptor - the whole wave structure would be hitting the wall at nearly the same time. You would detect the whole wave hitting at once rather than the particle. Your slits would need to be thinner (front to back) than the particle to minimize the bouncing effect. I'll bet if you weighed the particle at start, and then at the wall you would find that it was lighter at the wall receptor due to leaving the trail and bump on the slit material.
3. Confused2 Registered Senior Member
Here is some information about the double slit experiment:
Notice particularly that the peaks (brightest bits) are found (approximately) at distances from the centre line equal to whole-number multiples of wavelength * (distance from slits to screen)/(distance between slits).
Note that the material, and the thickness of the material the slits are cut from, have no effect - it can reasonably be assumed that any photons that don't actually pass through the slits are either reflected or absorbed and play no part in the result.
Photons can't lose mass because they don't have any mass to lose. What they could do is lose energy which would cause them to change frequency. It is observed that the photons (in this experiment) don't change frequency, therefore they have not lost any energy. You can probably imagine that as a wave spreads out it loses energy - this doesn't happen with light (photons) - the energy of each photon stays the same but there are fewer of them as you get further from the source.
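A quick numeric check of that fringe rule (made-up wavelength and geometry, assuming NumPy) tabulates where the bright bands land:

```python
import numpy as np

# Double-slit bright fringes: y_m = m * wavelength * L / d  (small angles).
lam = 633e-9    # wavelength -- example value for a He-Ne laser (m)
Lscr = 1.0      # slit-to-screen distance -- example value (m)
d = 0.25e-3     # slit separation -- example value (m)

for m in range(4):
    y = m * lam * Lscr / d
    print("order m=%d: bright fringe at y = %.2f mm" % (m, y * 1e3))
# Evenly spaced fringes, separated by lam*L/d ~ 2.5 mm: the wave signature.
```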
5. Xmo1 Registered Senior Member
If you are firing a stream of anything, the stream is going to act as a fluid, in accordance with conservation laws (axioms of fluid dynamics). To be accurate the particles must be fired singly. If you then get an interference pattern, then my statement is that the interference is generated by the interaction between the particle and the slit screen. Another respondent to my inquiry said that 'the laser is fired at both slits.' If so, then my statement is that the experiment is flawed. If you pay attention to the slit screen and the slits, and do the experiment properly, you won't get an interference pattern. Hence, no (probability) wave. I have no evidence to support my statements.
I have a problem when the physics community makes irrational 'discoveries.' I'm skeptical and ignorant, so I encourage people to educate me. 25 dimensions is about 20 too many for me, and multiverse is poof-candy.
7. origin
That is a big issue. There is a massive amount of evidence that DOES support the interpretation that you dismiss!
To change the mind of scientists you need to have compelling evidence that your concept BETTER fits observations than the current interpretation. Since you have admitted you do not have the evidence, your idea is just an idea that cannot go anywhere.
Another issue is that you must make up new physics to support your idea - photon 'trails'. There is no evidence of these trails and I see no way they could exist without violating the conservation of energy.
8. exchemist Valued Senior Member
The photons are sent individually in this experiment - that's the whole point. They are sent one by one, but an interference pattern nevertheless appears, over time, as the discrete points of arrival of each photon build up into a pattern. And that pattern is what you would get if waves were diffracting through the slits. So the "probability wave" of a single photon appears to be passing through both slits and interfering with itself, even though the photon can then only be detected at one precise spot.
You are right that the interference is created by interaction between the slits and the photon. Of course it is: no slits, no pattern. But the pattern observed is consistent with that interaction being wavelike diffraction and interference. If you think the experiment must be flawed, all we can tell you is that it has been repeated many times and it gives consistent results, results which are exactly in accordance with the predictions of quantum mechanics, counterintuitive though they certainly are. So you have to understand that, your personal scepticism notwithstanding, people who know what they are talking about are pretty convinced by it.
As a chemist, I am fairly used to thinking of electrons as wavelike, because that is how we model chemical bonding. The way I picture the dual slit experiment is that the photons are waves but, being quantum entities they can only interact - e.g. with a detector - in whole units, i.e. in quanta. I don't pretend this is remotely rigorous, but it helps me to visualise it. At the atomic scale, there seems little doubt that matter behaves in ways that are remote from everyday human experience. As a scientist one has to accept what the observations are telling you, once you have had them corroborated reproducibly. And if that does not feel "normal", tough, it seems to be the way the universe works, so get used to it. In fact this is what makes physical science interesting to many of us.
9. Write4U Valued Senior Member
I understand that photons propagate as a *probability wave function*.
Question: If this is correct, why should we expect the photons to strike the exact same spot each time?
Is that theoretically even possible?
IMO, while each photon behaves as a probabilistic wave, I can visualize the photons striking the plate (behind the slits) at different probabilistic targets, but as they also behave as a wave function, the result will still show a wavelike interference pattern (the striped lines).
10. Xmo1 Registered Senior Member
I'm not trying to change the mind of anyone. I'm putting forth an idea of my own. If you are shooting center mass between the two slits, the element you are shooting will tend to spread out, to bounce, or to roll. There will be no 'magical' behavior that splits the element so that it can behave as a wave. You say that you are shooting a single quantized particle. If not, you might just as well be shooting a stream of water rather than a 'bullet.' The pressure going through the slits will naturally produce interference patterns.
Basically, I'm saying that the experiment should exclude the possibility that a stream is being shot at the screen, and also that the element itself is not wide enough to cover both slits. If that single element (a photon or electron or buckyball or whatever) then produces a wave pattern on the receiving screen all at once and around the same time then I would say yes there is evidence of the particle acting as a wave. So far, I haven't seen an experiment that uses these constraints. Rather, they use lasers and other devices that produce a stream. So yes, as I said a stream of any type acts according to the laws of conservation, which produce interference patterns.
I might be seeing a dispersion of energy - like the photon going through a coke bottle. That is just saying that the particle is energetic, not that the particle itself is energy. It is the coke bottle that is interacting with the energy. If it were not there - there would be no reflections. Maybe the energy itself has a wave function that produces the probability of position of the particle. That's like saying that somewhere on the spider web there may be a spider. It doesn't mean that the spider and the web are the same thing.
There is an explanation here that explains that a single photon (itself) has a (big ? mark here for me) wave function that produces a diffraction pattern, but it is invisible until you shoot more photons (information is cumulative?). So that's about where I'm at. I'm still skeptical and ignorant.
11. Xmo1 Registered Senior Member
What came first - the experiment or quantum mechanics? I'm far from understanding physics, but I do understand some of it. I've made a response (to origin) below (or above - don't know where it's going to land) that continues my thought. BTW, a while ago this experiment was said to indicate that light acted as both a particle and a wave. Now probability has entered the explanation. Seems the explanation as to what the experiment reveals has evolved.
12. exchemist Valued Senior Member
QM came first all right, by about 70 years. It is only very recently, with the development of very sensitive detectors, that it has become possible to detect the arrival of individual photons. The experiment was initially just a "thought experiment", intended to illustrate how counterintuitive some predictions of QM would be. It was quite a sensation when it was finally done for real - and agreed exactly with prediction!
Actually, on the probability issue, I think you touch on quite a subtle point. As far as matter "waves" * are concerned, for example the wavelike properties of an electron, the "wave" is a wave of a sort of square root of a probability density. (Mathematically, the probability of finding the electron in a given volume is given by multiplying the wavefunction by its complex conjugate - the equivalent of squaring it - and then integrating over the volume of space of interest.) With light however, the wave is a true wave in the electromagnetic field. Nevertheless, the behaviour of this wave too, in the case of individual photons, corresponds to a probability of detecting the photon at a point in space. It appears to me that this probability property that both these types of wave represent, in spite of their different nature, is something that is often rather glossed over. So thanks for highlighting it.
Probability is in fact fundamental to the quantum mechanical model of light and matter. When one learns quantum theory, the concept of "probability density", as represented by the square modulus of a wave function, is one of the first ideas one encounters. I suspect that if probability was not mentioned in earlier explanations of the double slit experiment that you have come across, it would be because it was assumed that the reader would already be aware of this.
* I put waves in quotation marks because the general form of Schroedinger's equation is strictly speaking a diffusion equation rather than a true wave equation, as it has (if I recall this correctly) only a 1st derivative of time rather than a 2nd derivative. However for stationary states, such as the orbitals of an electron in an atom, there is no time dependence, so the equation becomes that of a standing wave.
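To make the squared-wavefunction recipe above concrete, here is a minimal sketch (my own illustration, assuming NumPy; the particle-in-a-box ground state is just a convenient example with a real-valued wavefunction): integrating psi*psi over a region gives the probability of finding the particle there.

```python
import numpy as np

# P(region) = integral of psi* psi over the region.
# Example: particle-in-a-box ground state, psi = sqrt(2/L) sin(pi x / L).
L = 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]
psi = np.sqrt(2 / L) * np.sin(np.pi * x / L)
rho = psi * psi                      # probability density (psi is real here)

print("total probability:", np.sum(rho) * dx)        # 1.0
mid = (x > 0.25) & (x < 0.75)
print("P(middle half)   :", np.sum(rho[mid]) * dx)   # ~0.818
# The particle is most likely to be found near the centre of the box.
```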
No, it won't.
If you feel better making up an explanation because you do not understand the scientific explanation that is fine, it doesn't really matter one way or another.
14. rpenner Fully Wired Registered Senior Member
Fact: Photons carry momentum and energy. (Einstein's 1905 description of the photoelectric effect pretty much defines what a photon is.)
Fact: Photons can be localized like particles. That's why low-light images are grainy. The earliest single photon detectors were the crystals in photographic emulsions: the process of "developing" a negative turns each crystal activated by light into a dark blob of silver — the bigger the crystal, the bigger the target for each photon, the "faster" the film is to capture the scene given a certain light level, and the grainier the resulting image.
Fact: Photons have a characteristic wavelength like waves. One of the ways we can manipulate light is to use interference, filtering or dispersion to treat different wavelengths of light differently. Monochromatic light is characterized not just by not being subject to any further fractionation by color but by having a single characteristic wavelength.
Fact: Photons interfere to produce dark places as well as light ones, like waves. This is the phenomenon behind iridescence, the colors of a soap film and reflections off a CD, holograms and both single-slit and double-slit interference patterns.
Fact: The momentum each photon carries is proportional to its energy, which is proportional to its frequency, which is inversely proportional to its wavelength.
Fact: Except for the simple proportionality, all these facts apply to electrons, protons, etc.
Momentum (p) and Energy (E) and mass (m) and velocity (v) are related for a free particle (in special relativity) by:
\(E^2 = \left( mc^2 \right) ^2 + \left( c \vec{p} \right)^2 \\ E \vec{v} = c^2 \vec{p} \)
Wavelength (λ), angular wavenumber (k), frequency (f), angular frequency (ω), phase velocity (V) and group velocity (v) for any plane wave are related by:
\(\lambda = 2 \pi \left| \vec{k} \right| ^{-1} \\ \omega = 2 \pi f \\ V = \omega \left| \vec{k} \right| ^{-1} = \lambda f \\ v = \partial \omega / \partial \left| \vec{k} \right| = \partial f / \partial (\lambda ^{-1})\)
Relating the two is Planck's constant (h):
\(\hbar = \frac{h}{2 \pi} \\ E = h f = \hbar \omega \\ \vec{p} = \hbar \vec{k} \\ \left| \vec{p} \right| = h / \lambda = \hbar \left| \vec{k} \right| \)
Which means mass controls how a quantum particle's wavelength and frequency are related:
\((h f)^2 = \left( mc^2 \right) ^2 + \left( c h / \lambda \right)^2 \Rightarrow f = \frac{c}{h \lambda} \sqrt{ h^2 + m^2 c^2 \lambda^2 } \)
Thus the phase velocity is:
\( V = f \lambda = c \sqrt{1 + \left( \frac{ m c \lambda}{h} \right)^2 } = c \sqrt{1 + \frac{ (m c)^2 }{ \vec{p}^2} } \geq c \)
while the group velocity is:
\( v = \frac{\partial f }{ \partial (\lambda ^{-1}) } = \frac{ c }{ \sqrt { 1 + \frac{m^2 c^2 }{ h^2 \lambda ^{-2} } } } = \frac{ c }{ \sqrt{1 + \frac{ (m c)^2 }{ \vec{p}^2} } } = c^2/V \leq c\)
And this group velocity is the rate at which signals and energy propagate, which means it is the same as the particle velocity (which is why I used the same symbol).
So for a massless particle, like a photon, we get: \(m = 0 \quad \Rightarrow \quad V=v = c, \quad E = h f = h c / \lambda = c \left| \vec{p} \right| \).
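Those relations are easy to spot-check numerically (a sketch of my own, assuming NumPy; the electron and the sample wavelengths are arbitrary example inputs): for a massive particle the phase velocity V exceeds c, the group velocity v stays below it, and V·v = c² throughout.

```python
import numpy as np

h = 6.62607015e-34            # Planck constant (J s)
c = 2.99792458e8              # speed of light (m/s)
m = 9.1093837015e-31          # electron mass (kg)

for lam in (1e-9, 1e-10, 1e-11):                 # sample de Broglie wavelengths (m)
    f = (c / (h * lam)) * np.sqrt(h**2 + (m * c * lam) ** 2)  # from the E^2 relation
    V = f * lam                                  # phase velocity, >= c
    v = c**2 / V                                 # group velocity, <= c
    print("lambda=%.0e m: V=%.3e m/s, v=%.3e m/s, V*v/c^2=%.6f"
          % (lam, V, v, V * v / c**2))
# v matches the familiar particle speed h/(m*lambda) in the non-relativistic limit.
```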
Have you ever done this experiment? The separation between the slits is tiny, so one may aim the source at the slits, but since the beam is of finite width, it hits both slits.
Nothing works like that. The trail a snail leaves behind is made up of substances. But photons and electrons are the smallest parts of their respective phenomena.
Nothing works like that. Quantum particles don't have ambitions, goals, or make multiple attempts to reach a goal. The total intensity of light reaching the screen indicates the light not going through the slits on the first encounter is just plain blocked.
As illustrated above, quantum particles don't flip back and forth between two different behaviors. They have a single behavior which means they travel like waves and interact like point particles as classically imagined.
No. We detect particles because quantum particles always interact like point particles and the intensity is low enough to pay attention to individual events. When the intensity is high, we can't distinguish the individual events and so we only perceive the pattern of dark and light bands as in holograms.
We can do the experiment with a photon counter so we record individual photon strikes as we move the detector across the screen and get different patterns for the one-slit and two-slit setups.
Here they observe individual electrons:
Have you ever done this experiment? The thickness of the object with the slits is immaterial to the results.
All photons have the same mass, zero.
Particle physics, which is more fundamental than fluid dynamics, says you are making unjustified claims.
You get the identical pattern at all intensities, which renders both your fluid dynamics claims and "bouncing" ideas as debunked.
To support your statements with evidence means you care about reality and science. A scientific person who cares about truth and humanity would not pooh-pooh the work of others as "irrational" without demonstrating superior knowledge about the behavior of reality. A skeptical person would first be skeptical about their own conjectures. A worthy person desiring education would not form and express baseless opinions on the subject of their ignorance.
Teaching you would be a service your government already spent funds on, so you should expect to have to pay out of pocket from now on.
You lack a basis to take a position on either topic.
15. Farsight Valued Senior Member
• Parts of this are miseducation. Farsight is forbidden to post in the main science forums prior to January 15, 2017.
There's more than one I'm afraid. Some people will claim there's some many-worlds multiverse involved, which I think is pseudoscience nonsense. It is not scientific, it is not falsifiable, I reject it. Others will claim that the electron is some point-particle, and some probability field is at work that surpasseth all human understanding. I reject that too. Because it's quantum field theory, not quantum point-particle theory. The photon has an E=hf wave nature. We make electrons and positrons out of photons in pair production. The de Broglie hypothesis concerns the wave nature of matter, not the point-particle nature of matter. We can diffract electrons. We can even refract them, as per Ehrenberg and Siday's 1949 paper The Refractive Index in Electron Optics and the Principles of Dynamics. And note that after noticing a point made by Hermann Weyl, Erwin Schrödinger gave us the time-independent Schrödinger equation, which "predicts that wave functions can form standing waves".
No. It's a wave. A standing wave thing. Standing wave, standing field. The electron's field is what it is. Only when it isn't quite standing, the electron is moving. Through both slits.
It hits it as a wave, because it is a wave. The electron is not some point-particle that has a field, it is field. But the receptor/detector involves an interaction, and this interaction leaves a dot on the screen, so you might think the electron was a dot. Don't. Instead take a tip from the optical Fourier transform. See Steven Lehar's web page:
A convex lens performs an optical Fourier transform in real time. But you don't think the incident light was some point particle. You know that the light-lens interaction localized the light into something pointlike. Apply this logic to the double slit experiment. When you detect the electron at the screen, you convert it into something pointlike, and you see the interference pattern because the electron went through both slits and interfered with itself. When you detect the electron at one of the slits, you convert it into something pointlike, so it goes through one slit only. It then spreads out like the wave that it is. Then when you detect the electron at the screen, you convert it into something pointlike again. But this time you don't see the interference pattern because the electron went through one slit only. No many-worlds multiverse is required, and no magic either.
16. exchemist Valued Senior Member
Oh no.
That's the end of this thread, just when it was getting interesting.
17. Xmo1 Registered Senior Member
You have shown me, at least, that I must understand the thing first before I can disagree with it. I am going to understand this experiment, which for me includes putting the narrative explanations and the math together until it jells. Thank you rpenner.
18. exchemist Valued Senior Member
Good plan.
(But do not take any notice of Farsight. He has a tendency to make his own physics up.)
19. Xmo1 Registered Senior Member
I'm matching up the math with the narrative a bit at a time, so many thanks for the descriptions and links. I wonder how long it's going to take for my little bowl of gray matter to integrate the information into something meaningful. Honestly, I must use my calendar to study this. Using the optics and Fourier transforms struck a nerve. So thanks.
20. Xmo1 Registered Senior Member
Looks like I was going down the same path. I found Farsight's information useful when pointing to the field of optics for further study. I'm going to look there. Also thanks origin, exchemist, and others for your lessons that put me on a better track to understanding these points of physics. Interesting to note how 'excited' people get when talking about it.
21. rpenner Fully Wired Registered Senior Member
Farsight is now banned from posting in the main science forums, including in reply to this thread, for repeated miseducation.
He has denied, without support, that the observed behavior of photons and electrons is, within experimental limits, like the point particles of quantum field theory. Yet he ignores that phenomena related to the particle-like nature of light are discussed in this very thread. This makes his post resemble a cut-and-paste collage of his earlier posts, for which he was last warned in July.
Because he has long advocated the baseless idea that it makes sense to assume the electron is composed of light (despite the mismatch of electric charge, weak interaction charge, angular momentum and phenomenology, and its complete failure to generalize to other related phenomena), all ambiguous language in his mentions of pair production is assumed to be another attempt to promote this worthless abandonment of actually useful electromagnetic physical theories.
The optical Fourier transform is a theorem of the operation of idealized thin lenses which convert between idealized plane waves and spherical wave fronts on the image plane. Thus, it's not a model for wave-particle duality as it does not localize in the indeterminate manner of photons hitting a screen. The math is straightforward to those with enough calculus to do Fourier transforms.
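As an illustration of that last remark, a minimal 1D sketch in Python (idealized apertures, arbitrary units; the slit sizes and spacing are made-up values) computes the focal-plane intensity as the Fourier transform of the aperture:

    import numpy as np

    # Idealized thin-lens / Fraunhofer picture: the focal-plane amplitude is
    # the Fourier transform of the aperture transmission. Toy 1D version,
    # arbitrary units; slit sizes and spacing are made up for illustration.
    N = 4096
    xs = np.linspace(-0.5, 0.5, N)                 # aperture coordinate

    def slit(center, width):
        return (np.abs(xs - center) < width / 2).astype(float)

    def focal_intensity(aperture):
        return np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2

    I1 = focal_intensity(slit(0.0, 0.01))                       # sinc^2-like
    I2 = focal_intensity(slit(-0.03, 0.01) + slit(0.03, 0.01))  # cos^2 fringes
    print("peak ratio two/one slit:", I2.max() / I1.max())      # ~4: amplitudes add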
22. exchemist Valued Senior Member
Thank you rpenner, this will certainly help the serious science parts of the forum to become more credible to all readers.
23. exchemist Valued Senior Member
Well it is rather exciting, I think. I found this counterintuitive behaviour (and its unexpected links to other things, such as the parallels between atomic structure and musical harmonics) fascinating at university, 40 years ago, and I still do today. Even if I can't do the maths any more. The way the universe works seems to be a story of unending surprises and cross-connections.
I have solved the Schrödinger equation for a triangular well potential and the solution comes in terms of Airy functions. Now I am facing the following problems: What are the normalization constants of the Airy functions?
What are the asymptotic forms of the Airy functions?
How does one find the matrix elements of the Airy functions?
If anybody knows the answer please tell me as soon as possible.
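For the normalization part, here is a minimal numerical sketch in Python with SciPy, assuming the standard triangular well V(x) = Fx for x > 0 with a hard wall at x = 0 and hbar = m = 1; it relies on the identity that the integral of Ai(t)^2 from a_n to infinity equals Ai'(a_n)^2:

    import numpy as np
    from scipy.special import airy, ai_zeros

    # Triangular well V(x) = F*x for x > 0, hard wall at x = 0, hbar = m = 1.
    # Eigenfunctions: psi_n(x) = N_n * Ai(x/l + a_n), where a_n is the n-th
    # (negative) zero of Ai and l = (1/(2F))**(1/3). The identity
    # int_{a_n}^inf Ai(t)^2 dt = Ai'(a_n)^2 gives N_n = 1/(sqrt(l)*|Ai'(a_n)|).
    F = 1.0
    l = (1.0 / (2.0 * F)) ** (1.0 / 3.0)

    n_levels = 5
    a_n, _, _, aip_at_an = ai_zeros(n_levels)    # zeros of Ai, and Ai' there
    E_n = -a_n * (F ** 2 / 2.0) ** (1.0 / 3.0)   # E_n = -a_n (F^2/2)^(1/3)
    N_n = 1.0 / (np.sqrt(l) * np.abs(aip_at_an))

    def psi(n, x):
        Ai, _, _, _ = airy(x / l + a_n[n])
        return N_n[n] * Ai

    # Sanity check: the numerical norm of the ground state should be ~1.
    x = np.linspace(0.0, 30.0, 30001)
    dx = x[1] - x[0]
    print("E_n =", np.round(E_n, 4))
    print("norm of psi_0:", np.sum(psi(0, x) ** 2) * dx)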
closed as off-topic by j.c., Andrey Rekalo, Ramiro de la Vega, Cam McLeman, Carlo Beenakker Sep 24 '13 at 17:07
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – j.c., Andrey Rekalo, Ramiro de la Vega, Cam McLeman, Carlo Beenakker
This sounds like homework. You want to try over at Math or Physics Stack Exchange, or Wikipedia. – Ray Yang Feb 7 '13 at 9:06
3 Answers
You can go to Olver's book, Asymptotics and Special Functions, 1974, and find your answers.
I recommend looking at pages 213–215 in the first volume of Hörmander's ALPDO, Springer Grundlehren 256. In my opinion this is the shortest and most elementary introduction to Airy functions.
The Airy function can be expressed in terms of a modified Bessel function of the 2nd kind; this amounts to Exercise 20, Ch. IV of Andrews, Askey and Roy's red book on special functions (for which the authors refer the reader to Watson's 1944 treatise on Bessel functions), and an asymptotic formula for modified Bessel functions of the 1st and 2nd kind is given on p. 223 (ed. 1999).
Monday, March 8, 2010
Rogue Waves
Today I ran across an interesting essay on our changing understanding of scurvy. As often happens when you learn the history better, the simple narratives turn out to be wrong. You get strange sequences where, as science progressed, it discovered a good cure for scurvy, then lost the cure, then proved that its own understanding was wrong, then wound up unable to offer any protection from the disease, and only eventually, by accident, learned the real cause. The essay asks how much else science has wrong.
This will be a shorter version of a cautionary tale about science getting things wrong. I thought of it because of a hilarious comedy routine I saw today. (If you should stop reading here, do yourself a favor and watch that for 2 minutes. I guarantee laughter.) That routine is based on a major 1991 oil spill. There is no proof, but one possibility for the cause of that accident was a rogue wave. (Rogue waves are also called freak waves.) If so then, comedy notwithstanding, the ship owners could in no way be blamed for the ship falling apart, because the best science of the day said that such waves were impossible.
Here is some background on that. The details of ocean waves are very complex. However, if you look at the ratio between the height of a wave and the average height of the waves around it, you get something very close to a Rayleigh distribution, which is what a Gaussian random model would predict. And indeed, if you were patient enough to sit somewhere in the ocean and record waves for a month, the odds are good that you'd find a nice fit with theory. There was a lot of evidence in support of this theory. It was accepted science.
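If you want to see what that Rayleigh model predicts, here is a quick Monte Carlo sketch in Python. The exceedance convention P(H > h) = exp(-2 (h/Hs)^2) and the "rogue means more than twice the significant wave height" threshold are common conventions, assumed here for illustration:

    import numpy as np

    # Monte Carlo of the Rayleigh wave-height model. A common convention:
    # P(H > h) = exp(-2 (h/Hs)^2) with Hs the significant wave height, and
    # "rogue" meaning H > 2*Hs. Both conventions are assumptions here.
    rng = np.random.default_rng(42)
    Hs = 30.0                                  # significant wave height, feet
    heights = rng.rayleigh(scale=Hs / 2.0, size=10_000_000)

    p_mc = np.mean(heights > 2.0 * Hs)
    p_theory = np.exp(-8.0)                    # exp(-2 * 2^2) ~ 3.4e-4
    print(f"Monte Carlo P(H > 2 Hs): {p_mc:.2e}")
    print(f"Rayleigh theory:         {p_theory:.2e}")
    # About 1 wave in 3000 exceeds 2*Hs under this model, while waves several
    # times Hs are essentially impossible -- which is why observed monsters
    # broke the theory.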
There were stories of bigger waves. Much bigger waves. There were strange disasters. But science discounted them all until New Years Day, 1995. That is when the Draupner platform recorded a wave that should only happen once in 10,000 years. Then, in case there was any doubt that something odd was going on, later that year the RMS Queen Elizabeth 2 encountered another "impossible" wave.
Remember what I said about a month of data providing a good fit to theory? Well Julian Wolfram carried out the same experiment for 4 years. He found that the model fit observations for all but 24 waves. About once every other month there was a wave that was bigger than theory predicted. A lot bigger. If you got one that was 3x the sea height in a 5 foot sea, that was weird but not a problem. If it happened in a 30 foot sea, you had a monster previously thought to be impossible. One that would hit with many times the force that any ship was built to withstand. A wall of water that could easily sink ships.
Once the possibility was discovered, it was not hard to look through records of shipwrecks and damage to see that it had happened. When this was done it was quickly discovered that huge waves appeared to be much more common in areas where wind and wave travel opposite to an ocean current. This data had been littering insurance records and shipyards for decades. But until scientists saw direct proof that such large waves existed, it was discounted.
Unfortunately there were soon reports of rogue waves, such as those that struck the Bremen and the Caledonian Star, that didn't fit this simple theory. Then satellite observations over 3 weeks found about a dozen deadly giants in the open ocean. There was proof that rogue waves could happen anywhere.
Now the question of how rogue waves can form is an active research topic. Multiple possibilities are known, ranging from wave reflection and focusing to the nonlinear Schrödinger equation. While we know a lot more about them, we know we don't know the whole story. But now we know that we must design ships to handle this.
This leads to the question of how bad a 90 foot rogue wave is. Well it turns out that typical storm waves exert about 6 tonnes of pressure per square meter. Ships were designed to handle 15 tonnes of pressure per square meter without damage, and perhaps twice that with denting, etc. But due to their size and shape, rogue waves can hit with about 100 tonnes of pressure per square meter. Are you surprised that a major oil tanker could see its front fall off?
If you want to see what one looks like, see this video.
Reflections on Quantum Backflow
Yesterday afternoon I attended a very interesting physics seminar by the splendidly-named Gandalf Lechner of the School of Mathematics here at Cardiff University. The topic was one I’d never thought about before, called quantum backflow. I went to the talk because I was intrigued by the abstract which had been circulated previously by email, the first part of which reads:
Suppose you are standing at a bus stop in the hope of catching a bus, but are unsure if the bus has passed the stop already. In that situation, common sense tells you that the longer you have to wait, the more likely it is that the bus has not passed the stop already. While this common sense intuition is perfectly accurate if you are waiting for a classical bus, waiting for a quantum bus is quite different: For a quantum bus, the probability of finding it to your left on measuring its position may increase with time, although the bus is moving from left to right with certainty. This peculiar quantum effect is known as backflow.
To be a little more precise about this, imagine you are standing at the origin (x=0). In the classical version of the situation you know that the bus is moving with some constant definite (but unknown) positive velocity v. In other words you know that it is moving from left to right, but you don't know with what speed v, at what time t0, or from what position (x0<0) it set out. A little thought (perhaps with the aid of some toy examples where you assign a probability distribution to v, t0 and x0) will convince you that the resulting probability distribution for the bus's position moves from left to right with time, in such a way that the probability of the bus still being to the left of the observer, L(t), represented by the proportion of the overall distribution that lies at x<0, generally decreases with time. A toy simulation of the classical case is sketched below. Note that this is not what it says in the second sentence of the abstract; no doubt a deliberate mistake was put in to test the reader!
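Here is that quick toy simulation in Python; the distributions for v, t0 and x0 are arbitrary illustrative choices:

    import numpy as np

    # Toy check of the classical claim: draw random departure times t0, start
    # positions x0 < 0 and positive speeds v, then watch the probability that
    # the bus is still left of the observer (x < 0) fall with time. The
    # distributions below are arbitrary illustrative choices.
    rng = np.random.default_rng(1)
    n = 1_000_000
    v = rng.uniform(0.1, 2.0, n)               # speed, strictly positive
    t0 = rng.uniform(0.0, 5.0, n)              # departure time
    x0 = -rng.exponential(3.0, n)              # starting position, x0 < 0

    for t in (0.0, 2.0, 4.0, 8.0, 16.0):
        pos = x0 + v * np.clip(t - t0, 0.0, None)    # no motion before t0
        print(f"t = {t:5.1f}   L(t) = P(bus at x < 0) = {np.mean(pos < 0):.3f}")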
If we then stretch our imagination and suppose that the bus is described not by classical mechanics but by quantum mechanics, then things change a bit. If we insist that it is travelling from left to right, then the momentum-space representation of the wave function must be cut off for p<0 (corresponding to negative velocities). Assume that the bus is a "free particle" described by the relevant Schrödinger equation. One can then calculate the evolution of the position-space wave function. Remember that these two representations of the wave function are just related by a Fourier transform. Solving the Schrödinger equation for the time evolution of the spatial wave function (with appropriately-chosen initial conditions) allows one to calculate how the probability of finding the particle at a given value of x evolves with time. In contrast to the classical case, it is possible for the corresponding L(t) not to decrease monotonically with time.
To put all this another way, the probability current in the classical case is always directed from left to right, but in the quantum case that isn’t necessarily true. One can see how this happens by thinking about what the wave function actually looks like: an imposed cutoff in momentum can imply a spatial wave function that is rather wiggly which means the probability distribution is wiggly too, but the detailed shape changes with time. As these wiggles pass the origin the area under the probability distribution to the left of the observer can go up as well as down. The particle may be going from left to right, but the associated probability flux can behave in a more complicated fashion, sometimes going in the opposite direction.
Another way of thinking about it is that the particle velocity corresponds to the phase velocity of the wave function, but the probability flux is controlled by the group velocity. A numerical sketch of the quantum case follows.
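For the adventurous, here is a rough numerical sketch in Python. The two-Gaussian positive-momentum state is a common toy choice, and whether backflow actually shows up (and by how much) depends on the parameters, which may need tuning:

    import numpy as np

    # Free evolution (hbar = m = 1) of a state built only from positive
    # momenta: psi(x,t) = (2 pi)^(-1/2) * int dp phi(p) exp(i p x - i p^2 t/2).
    # We track L(t) = P(x < 0, t); for suitable superpositions L(t) can
    # transiently increase even though every momentum component is positive.
    p = np.linspace(1e-4, 10.0, 2000)            # positive momenta only
    dp = p[1] - p[0]
    phi = np.exp(-(p - 0.8) ** 2 / 0.1) + 1.8 * np.exp(-(p - 2.4) ** 2 / 0.1)
    phi = phi / np.sqrt(np.sum(np.abs(phi) ** 2) * dp)   # normalize

    x = np.linspace(-40.0, 40.0, 1201)
    dx = x[1] - x[0]

    def prob_left(t):
        phase = np.exp(1j * (np.outer(x, p) - 0.5 * p ** 2 * t))
        psi = phase @ phi * dp / np.sqrt(2.0 * np.pi)
        return np.sum(np.abs(psi[x < 0.0]) ** 2) * dx

    L = np.array([prob_left(t) for t in np.linspace(-4.0, 4.0, 41)])
    print("max increase of L between samples:", np.diff(L).max())  # > 0 => backflow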
For a more technical discussion of this phenomenon see this review article. The exact nature of the effect is dependent on the precise form of the initial conditions chosen and there are some quantum systems for which no backflow happens at all. The effect has never been detected experimentally, but a recent paper has suggested that it might be measured. Here is the abstract:
7 Responses to “Reflections on Quantum Backflow”
1. Steve Warren Says:
Well you’ve lost me. In Chipping Norton if you are watching a bus go from left to right you are on the wrong side of the road, and the probability of catching is zero, or in your language p(catchbus)=0. And it’s a long time to wait for the next one. Maybe in your fancy cities in Wales your buses go the other way, I don’t know. But if you or Gandalf ever get run over by a bus I can see what the problem was.
2. Our bus was canceled.
3. While many might recognize Gandalf as a character in Tolkien’s Lord of the Rings, it is actually a normal (if at the moment somewhat old-fashioned) German name. I used to have a girlfriend (also with a delightful if at the moment old-fashioned German name) who has a brother named Gandalf. He actually has a long, white beard. (He certainly wasn’t named after the character, despite his later appearance; he was born only shortly after Tolkien had written his book, and at that time I doubt that anyone knew that book in any language in the small village where he had been born.)
4. Are these wiggles tiny with respect to the overall probability function or are you saying there are conceivable cases where they can be on a comparable scale?
• telescoper Says:
If you read the papers I linked to, you’ll find that the effect is pretty small – a few per cent at most – but is apparently measurable.
Open Access
• L. Villegas-Lelovsky1,2,
• M. D. Teodoro1,3,
• V. Lopez-Richard1,
• C. Calseverino1,4,
• A. Malachias4,
• E. Marega Jr3,5,
• B. L. Liang3,7,
• Yu. I. Mazur3,
• G. E. Marques1,
• C. Trallero-Giner6 and
• G. J. Salamo3
Nanoscale Res Lett 2010, 6:56
DOI: 10.1007/s11671-010-9786-8
Received: 6 July 2010
Accepted: 9 September 2010
Published: 1 October 2010
Keywords: Molecular beam epitaxy; Self-assembled quantum dots; Inter-dot coupling; Anisotropic effects; Linearly polarized photoluminescence emission; Grazing-incidence X-ray diffraction (synchrotron); Optoelectronics
Recent attention has been given to the study of coupled quantum dot (QD) arrays for their potential application in quantum information processing [1–3]. The self-assembling process and its control become essential concerns in the search for new proposals of optoelectronic and quantum computing devices. Also, the spinor states in quasi-zero-dimensional systems and their electronics have become features of renewed interest [4–7]. High uniformity of size and shape and distribution control of dot arrays are required in many application proposals, such as detectors, low-threshold lasers and photonic crystals. The lack of control over the self-assembly process of formation of these QDs leads to inhomogeneous broadening in size and/or shape that may degrade the quality of a device application. Therefore, the need for probing size, shape and effective inter-dot coupling has become an important area of research in recent years [8–12].
The anisotropy observed in linearly polarized PL emissions from self-assembled QDs has been studied in recent years, and several works have detected some correlation with the anisotropic shape of the QD array [13–16]. There is also agreement about the complexity of valence-band effects in QDs as a relevant issue when dealing with the optical response from transitions between these completely localized states [7, 17, 18].
In the present work, we address mechanisms for testing simultaneously the one-dimensional (1D) lateral ordering of dots, the inter-dot coupling and the 2D anisotropy of self-assembled QDs, using grazing-incidence X-ray diffraction (GID) and polarized photoluminescence (PL) emission under different excitation powers. This work has been motivated by the plausibility of controlled self-assembled growth of 1D dot arrays (QD chains) [19] and their potential use for testing important quantum effects such as correlation of information and optical coupling between dots, where aspects associated with inter-dot coupling and QD shape, size and distribution deserve special attention. The interplay of shape and strain fields with the inter-dot correlation, revealed in the GID measurements and PL-emission spectra from QD arrays, is also discussed. Two sets of samples are investigated: one shows chain-like 1D correlation between neighboring dots and the other exhibits a mostly random island distribution. Two different QD shape models are used in order to calculate and test the dependence of the polarized optical emission spectra on spatial dot correlation and local geometry. The experimental confirmation included in this work highlights and supports the importance of probing correlated distribution in QD arrays for characterizing and improving growth-controlled processes.
Theoretical Model
A multi-band k · p model, based on the standard Kohn–Luttinger Hamiltonian [20] for holes and a parabolic Hamiltonian for electrons, was developed to probe the electronic structure of dots grown along the [100] direction. Due to strong valence-band admixture, such a procedure provides straightforward information on the relaxation of the inter-band optical transition selection rules, with lower computational effort than, for example, a tight-binding calculation [13, 14]. The built-in strain field distribution, which leads to the formation of self-assembled QD arrays, has been considered within the Bir–Pikus deformation potential model [21]. Uniform strain tensors are assumed, a model that neglects effects caused by variations at the QD interfaces [22, 23]. This approximation works reasonably well for the study of ground-state properties of medium (~150 Å) and large (>250 Å) size dots.
The double quantum dot structures under investigation are schematically illustrated in Figure 1. According to realistic dimensions, the dots are assumed to have semi-cylindrical shape with radius ρ, laterally separated by an inter-dot barrier of thickness d. Since the main focus is the tunneling along the lateral direction [01̄1], the confining potential is defined as V(ρ, z) = V(ρ) + V(z), where the infinite barrier model has been used at the top (left) and bottom (right) interfaces, as represented in Figure 1b (Figure 1c), whereas the finite barrier model has been adopted at the internal interfaces, as represented in Figure 1c, in order to account for inter-dot coupling effects.
Figure 1
(a) Schematic modeling of QD size and inter-dot coupling used in this study of self-assembled dots formed along the indicated crystalline directions. (b) Confinement model for randomly distributed dots in the (100) plane. (c) Confinement model for testing anisotropic size and plausible inter-dot electronic coupling.
For the GaInAs alloys in consideration, at the center of the Brillouin zone the split-off band is energetically well separated from the topmost valence subbands. In the limit of a decoupled split-off band, the four-band Hamiltonian provides a good description of the low-lying hole states by considering the coupling between the heavy hole (hh) (J = 3/2, j_z = ±3/2) and the light hole (lh) (J = 3/2, j_z = ±1/2). In the effective-mass approximation, when spanned in this basis, the kinetic energy of the hole is described by the 4 × 4 Kohn–Luttinger Hamiltonian
$$
\mathcal{H}_{KL} = \frac{\hbar^2}{2m_0}
\begin{pmatrix}
D_{hh} & A_- & \mathcal{B}_- & 0 \\
A_-^{\dagger} & D_{lh} & 0 & \mathcal{B}_- \\
\mathcal{B}_-^{\dagger} & 0 & D_{hh} & A_+ \\
0 & \mathcal{B}_-^{\dagger} & A_+^{\dagger} & D_{lh}
\end{pmatrix},
$$
$$
D_{hh} = \frac{\gamma_1+\gamma_2}{2}\,\{\hat k_+,\hat k_-\} + (\gamma_1 - 2\gamma_2)\,\hat k_z^2, \qquad
D_{lh} = \frac{\gamma_1-\gamma_2}{2}\,\{\hat k_+,\hat k_-\} + (\gamma_1 + 2\gamma_2)\,\hat k_z^2,
$$
$$
A_\pm = \sqrt{3}\,\gamma_3\,\hat k_\pm \hat k_z, \qquad
\mathcal{B}_\pm = \frac{\sqrt{3}}{2}\,(\gamma_2+\gamma_3)\,\hat k_\pm^2,
$$
with the Luttinger parameters $\gamma_i$ (i = 1, 2, 3) and the momentum operators $\hat k_\pm = \hat k_x \pm i \hat k_y$, $\hat{\mathbf{k}} = -i\nabla$.
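As a plane-wave illustration of this Hamiltonian, the sketch below (Python; GaAs Luttinger parameters assumed purely for illustration) builds the 4 × 4 matrix as reconstructed above, with the operators replaced by numbers, and diagonalizes it:

    import numpy as np

    # Plane-wave sketch of the 4x4 Kohn-Luttinger matrix above: for a plane
    # wave the operators k_+/- and k_z become numbers, so H is a plain
    # Hermitian matrix. Energies in units of hbar^2/(2 m0) times k^2.
    # GaAs Luttinger parameters are assumed purely for illustration.
    g1, g2, g3 = 6.98, 2.06, 2.93

    def kl_matrix(kx, ky, kz):
        kp, km = kx + 1j * ky, kx - 1j * ky
        kpar2 = kx ** 2 + ky ** 2          # equals {k+, k-}/2 for numbers
        Dhh = (g1 + g2) * kpar2 + (g1 - 2 * g2) * kz ** 2
        Dlh = (g1 - g2) * kpar2 + (g1 + 2 * g2) * kz ** 2
        Am, Ap = np.sqrt(3) * g3 * km * kz, np.sqrt(3) * g3 * kp * kz
        Bm = (np.sqrt(3) / 2) * (g2 + g3) * km ** 2
        return np.array(
            [[Dhh,          Am,           Bm,           0],
             [np.conj(Am),  Dlh,          0,            Bm],
             [np.conj(Bm),  0,            Dhh,          Ap],
             [0,            np.conj(Bm),  np.conj(Ap),  Dlh]], dtype=complex)

    for k in ((0.02, 0.0, 0.0), (0.0, 0.0, 0.02)):
        E = np.linalg.eigvalsh(kl_matrix(*k))
        print("k =", k, "-> hole band energies:", np.round(E, 6))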
The Hamiltonian of the hole in the quantum dot system is
$$
\mathcal{H} = \mathcal{H}_{KL} + \mathcal{H}_{BP} + V(\rho) + V(z),
$$
where V(ρ) is an infinite barrier outside the semi-cylindrical cross-section, and V(z) is a double quantum well potential with infinitely high outer walls, whose finite inner barrier is due to the offset between the band edges in the well and barrier materials; ℋ_BP is the Bir–Pikus Hamiltonian [21].
By exploring cylindrical symmetry in the Kohn–Luttinger model, the wave function of a hole state can be written in the form
$$
\Psi_v = \sum_{j,n,m,\alpha} C_j^{n,m}(\alpha)\, F_j(z)\, f_{n,m}(\rho,\varphi)\, |\alpha\rangle.
$$
The indexes (j, n, m) label the quantization along the z-direction (j) and the in-plane (n, m) quantum numbers, respectively; α denotes the spin-up (|↑⟩) and spin-down (|↓⟩) periodic Bloch function character, namely |hh↑⟩, |lh↑⟩, |hh↓⟩ or |lh↓⟩; and, finally, C_j^{n,m}(α) are the weight coefficients in the basis set of envelope wavefunctions, F_j(z) f_{n,m}(ρ, φ), at a position (ρ, φ) inside the dot. The solutions for the in-plane motion, f_{n,m}(ρ, φ), are given by [24]
$$
f_{n,m}(\rho,\varphi) = \frac{2}{a\sqrt{\pi}\,\big|J_{n+1}(\mu_{n,m})\big|}\, J_n\!\Big(\mu_{n,m}\,\frac{\rho}{a}\Big)\, \sin(n\varphi), \qquad n = 1, 2, \ldots,
$$
for semi-cylindrical confinement (Figure 1c). In these expressions, μ_{n,m} is the m-th zero of the Bessel function of order n, J_n(x), whereas the form of the function F_j(z) depends on the potential profile along the z-direction between the dots. The depth of the quantum well is determined by the offset between the valence-band edges in the dot and the barrier materials. For the GaAs/In0.4Ga0.6As interface, the valence-band offset can be estimated as ΔE_v = 214 meV. By analytically solving the Schrödinger equation for holes and regarding the mismatch between the Luttinger parameters at the GaAs/In0.4Ga0.6As interfaces, the transcendental equation that determines all subband energies E_j and the corresponding wavefunctions is derived (see Appendix 1). The sign (±) in Eq. 16 provides them, respectively, with symmetric or antisymmetric character F_j^±(z). Taking advantage of this fact, the Hilbert space for the hole wavefunctions Ψ_v(r) can be split into two orthogonal subspaces, labeled I and II, that are classified according to the parity of the quantum number j. As a result, the Hilbert subspace I (II) gathers spinor states with spin-up (spin-down) components having odd j-values (even j-values) that are coupled with states with spin-down (spin-up) and odd j-values (even j-values). Hence, the eigenvalue problem for the Hamiltonian in Eq. 3 can be solved independently for each class of states I and II. The hole state wavefunction (4) for a given subspace can then be written as
$$
\Psi_v^{I(II)} = \sum_{j,n,m}
\begin{pmatrix}
C_{2j-1(2j)}^{n,m}(hh\uparrow)\, F_{2j-1(2j)}(z)\, f_{n,m}(\rho,\varphi)\, |hh\uparrow\rangle \\
C_{2j(2j-1)}^{n,m}(lh\uparrow)\, F_{2j(2j-1)}(z)\, f_{n,m}(\rho,\varphi)\, |lh\uparrow\rangle \\
C_{2j(2j-1)}^{n,m}(hh\downarrow)\, F_{2j(2j-1)}(z)\, f_{n,m}(\rho,\varphi)\, |hh\downarrow\rangle \\
C_{2j-1(2j)}^{n,m}(lh\downarrow)\, F_{2j-1(2j)}(z)\, f_{n,m}(\rho,\varphi)\, |lh\downarrow\rangle
\end{pmatrix}.
$$
The hole states of the semi-cylindrical QD system are calculated by exact diagonalization of the Hamiltonian ℋ on the finite basis-set expansion given by Eq. 6, using a standard numerical diagonalization technique. The matrix elements of the momentum operators k̂_±², k̂_± and k̂_z involved in the off-diagonal terms (Eq. 2) of the Hamiltonian ℋ_KL are given in Appendix 2.
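A minimal numerical sketch of the in-plane basis of Eq. 5 (Python with SciPy; the radius value is an illustrative assumption) computes the Bessel zeros μ_{n,m} and checks the normalization of f_{n,m} over the half-disk:

    import numpy as np
    from scipy.special import jn_zeros, jv

    # Numerical sketch of the in-plane basis in Eq. 5: mu_{n,m} is the m-th
    # zero of J_n and f_{n,m} is normalized over the half-disk
    # (0 < phi < pi, rho < a). The radius a is an illustrative assumption.
    a = 140.0    # Angstrom, of the order of the dot radii in Table 1

    def f(n, m, rho, phi):
        mu_nm = jn_zeros(n, m)[-1]                     # m-th zero of J_n
        norm = 2.0 / (a * np.sqrt(np.pi) * abs(jv(n + 1, mu_nm)))
        return norm * jv(n, mu_nm * rho / a) * np.sin(n * phi)

    # Check the normalization of f_{1,1} by quadrature over the half-disk.
    rho = np.linspace(0.0, a, 600)
    phi = np.linspace(0.0, np.pi, 600)
    R, PHI = np.meshgrid(rho, phi)
    val = f(1, 1, R, PHI) ** 2 * R                 # area element rho drho dphi
    integral = val.sum() * (rho[1] - rho[0]) * (phi[1] - phi[0])
    print("norm of f_{1,1}:", integral)            # should be close to 1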
As shown schematically in Figure 1, effects associated with isotropic and anisotropic spatial confinements are simulated in the calculation by changing the lateral sizes, D_[011] and D_[01̄1], in the (100) plane as well as the inter-dot distance (d). Two geometry cases will be studied: (i) uncorrelated dots, which consider isotropic spatial confinement in the (100) plane, with D_[011] = D_[01̄1], without inter-dot coupling; the spin quantization axis (z-axis) is chosen along the direction [100] (Figure 1a) and the 2D dot distribution is random; (ii) correlated dots, which consider anisotropic spatial confinement (D_[011] ≠ D_[01̄1]) and include inter-dot coupling (Figure 1b), leading to a chain-like 1D dot alignment; here, the spin quantization axis (z-axis) must be set along the [01̄1] direction [25, 26].
Figure 2
Oscillator strength contours fulfilling Pol_[011] = Pol_[01̄1] for correlated dots with inter-dot distance d = 160 Å and strain order factor ε_∥ = −0.3%.
These two models were tested and compared in order to search for the main qualitative differences between the optical emission probabilities for light polarized along and perpendicular to the z-axis, respectively. This modeling tests the different behavior of optical emissions associated fundamentally with the difference between heavy-hole (hh) and light-hole (lh) longitudinal and transversal ellipsoidal effective masses, as well as the effects originating from the strain fields on these hole energy levels.
The oscillator strength for optical electric fields linearly polarized along [01̄1] and parallel to [011] (see Figure 1) can be calculated as $\mathrm{Pol}_{[011]([0\bar{1}1])} = |\langle \Psi_{\mathrm{cond}}|\,\mathbf{p}\cdot\hat e_{[011]([0\bar{1}1])}\,|\Psi_{\mathrm{val}}\rangle|^2$. For uncorrelated dot arrays, showing mostly random distribution (case (i)), they are given by
$$
\mathrm{Pol}_{[011]([0\bar{1}1])} = 2\,\Big|\frac{C_1^{1,0}(hh\uparrow)}{\sqrt{2}}\,L_{hh}^1 + \frac{C_1^{1,0}(lh\uparrow)}{\sqrt{6}}\,L_{lh}^1 -(+)\, i\,\frac{C_1^{1,0}(hh\downarrow)}{\sqrt{2}}\,L_{hh}^1 -(+)\, i\,\frac{C_1^{1,0}(lh\downarrow)}{\sqrt{6}}\,L_{lh}^1\Big|^2 P^2,
$$
where $P = \langle s|p_x|x\rangle = \langle s|p_y|y\rangle = \langle s|p_z|z\rangle$ is the isotropic conduction-valence-band momentum matrix element between functions at the Γ-point, $L_{hh(lh)}^j = \int F_j^{\mathrm{cond}}(z)\, F_j^{hh(lh)}(z)\, dz$ is the overlap between the j-th electron and hole envelope functions along the z-axis, and the factor 2 is due to double spin degeneracy.
All coefficients C_j^{n,m}(α) shown in Eq. 7 are real when calculated for the cylindrical uncorrelated dot-array case using the expansion set in Eq. 3. This result leads to identical oscillator strengths and, consequently, equal PL intensities for both optical linear polarizations. More specifically,
$$
\mathrm{Pol}_{[011]} = \mathrm{Pol}_{[0\bar{1}1]},
$$
according to Eq. 4, and this identity is independent of QD size. Besides, neither hydrostatic nor axial strain contributions would induce changes to Eq. 8 in this symmetric case (unless anisotropic strains are applied). Therefore, a distribution of cylindrical uncorrelated dots over the (100) plane would lead to identical linear PL-emission intensities polarized along and perpendicular to the z-axis.
In correlated arrays showing preferential dot diffusion, the compressive strain can be relaxed by forming a 1D arrangement, as occurs for the strain distribution in free-standing superlattices. In this case, the in-plane strain is defined by ε_∥ = ε_xx = ε_yy = (a − a_w)/a_w, where the lateral lattice constant a can be calculated as [27]
$$
a = \frac{a_w\, L_w/(S_w a_w^2) + a_b\, L_b/(S_b a_b^2)}{L_w/(S_w a_w^2) + L_b/(S_b a_b^2)}.
$$
Here, S_α = (S11 + S12)_α is the sum of elastic compliance constants and L_α (a_α) is the width (bulk lattice constant) of the corresponding layer region, α = w (well) or b (barrier). In this way, a 3% strain can be relaxed to a value near 1%; a rough numerical illustration is sketched below. Although the shear strain contribution, which affects the separation between the hh and lh subbands, becomes relaxed, the hydrostatic strain component leads to an effective reduction of the inter-dot potential barrier, which enhances the inter-dot coupling and tunneling. The envelope function spreading along the direction [01̄1] favors the confinement of a carrier with higher in-plane effective mass, which leads to the exchange of the ground-state character, since m_lh^[01̄1] > m_hh^[011].
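In the sketch below (Python), the lattice constants are standard bulk values while the layer widths and the compliance ratio are illustrative guesses, not values fitted to the samples:

    # Quick numerical illustration of Eq. 9 for the laterally relaxed lattice
    # constant of a free-standing-like stack. The lattice constants are
    # standard bulk values (Vegard estimate for the alloy); the layer widths
    # and the compliance ratio are illustrative guesses.
    a_w, a_b = 5.8153, 5.6533     # In0.4Ga0.6As and GaAs, in Angstrom
    L_w, L_b = 60.0, 160.0        # well (dot) and barrier widths, Angstrom
    S_w, S_b = 1.0, 1.0           # compliance sums; ratio ~ 1 assumed

    w = L_w / (S_w * a_w ** 2)
    b = L_b / (S_b * a_b ** 2)
    a = (a_w * w + a_b * b) / (w + b)
    eps_par = (a - a_w) / a_w     # in-plane strain of the well material

    print(f"a = {a:.4f} A, eps_par = {eps_par:.2%}")
    # Fully strained to GaAs this would be about -2.8%; partial relaxation
    # pulls it toward the smaller values used as strain order factors above.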
The effects associated with the anisotropic confinement, within the inter-dot coupled model and simulated by a semi-cylindrical dot shape (see Figure 1c), use only the subset of the expansion functions in Eq. 5 that complies with null boundary conditions at the flat part of the semi-cylinder. The corresponding linearly crossed polarized optical matrix elements, for this correlated dot-array model (case (ii)), are given by
$$
\mathrm{Pol}_{[0\bar{1}1]} = 2\left\{\Big|\sqrt{\tfrac{2}{3}}\,C_1^{1,0}(hh)\,L_{hh}^1\Big|^2 + \Big|\sqrt{\tfrac{2}{3}}\,C_2^{1,0}(hh)\,L_{hh}^2\Big|^2\right\} P^2,
$$
$$
\mathrm{Pol}_{[011]} = 2\left\{\Big|\frac{C_1^{1,0}(hh)}{\sqrt{2}}\,L_{hh}^1 - \frac{C_1^{1,0}(lh)}{\sqrt{6}}\,L_{lh}^1\Big|^2 + \Big|\frac{C_2^{1,0}(hh)}{\sqrt{2}}\,L_{hh}^2 - \frac{C_2^{1,0}(lh)}{\sqrt{6}}\,L_{lh}^2\Big|^2\right\} P^2.
$$
Here, the factor 2 occurs due to the summation over the subbands j = 1, 2, since these states are nearly degenerate for large inter-dot separation d. It is clear that the identity in Eq. 8 has changed and no longer holds for all values of the inter-dot distance and QD sizes. We will show below that the mass anisotropy of the hole ground state might be held responsible for these anisotropic optical emission intensities once the dot confinement strength becomes relaxed in certain directions, whether by dot-size anisotropy and/or by inter-dot coupling tuned by the strain fields.
First of all, let us analyze the effect of the spatial confinement in the case of a single dot with the semi-cylindrical shape, namely the limiting case d → ∞ shown in Figure 1c. As the strength of the spatial confinement is relaxed along the direction [01̄1] by increasing the QD size, the topmost valence band becomes occupied by a state with a stronger lh-character and reduced hh-contribution [4, 28, 29]. This effect is caused by the strong hole mass anisotropy, namely m_hh^[01̄1] > m_lh^[011] while m_lh^[01̄1] > m_hh^[011]. It can be noted, from simple arguments, that the hh- or lh-mass character of the valence-band ground state can be interchanged by weakening the spatial confinement strength in the direction [01̄1]. In the weak confinement regime, the total energy determining the level position is mainly inversely proportional to the effective mass, as
$$
E_{hh} \propto \frac{1}{m_{hh}^{[0\bar{1}1]}\, D_{[0\bar{1}1]}^2} + \frac{1}{m_{hh}^{[011]}\, D_{[011]}^2}, \qquad
E_{lh} \propto \frac{1}{m_{lh}^{[0\bar{1}1]}\, D_{[0\bar{1}1]}^2} + \frac{1}{m_{lh}^{[011]}\, D_{[011]}^2},
$$
where D_[011] and D_[01̄1] denote mean confining lengths. Consequently, by tuning the confinement anisotropically, the condition E_lh < E_hh can be attained due to the mass anisotropy of the carriers; a rough numerical estimate of this crossover is sketched below. As a result, the corresponding envelope functions must be more extended in one direction than in the other. Thus, the corresponding PL transitions allowed for a certain light polarization can probe the anisotropic character of the Bloch functions that, in the multi-band calculations, are determined by the values of the expansion coefficients in Eq. 4. It is noted, from Eq. 10, that a state having small hh-character and, consequently, small values of the coefficients C_1^{1,0}(hh) and C_2^{1,0}(hh), produces a smaller oscillator strength for optical transitions polarized along the inter-dot coupling direction [01̄1].
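The estimate below (Python) uses the scaling of Eqs. 12–13 with GaAs-like Luttinger parameters and the standard axis-resolved mass approximations; all of these values are assumptions for illustration, and strain contributions are ignored:

    # Rough estimate of the hh/lh crossover from the scaling in Eqs. 12-13.
    # GaAs-like Luttinger parameters and the standard axis-resolved mass
    # approximations (units of m0) are assumptions for illustration; strain
    # contributions are ignored here.
    g1, g2, g3 = 6.98, 2.06, 2.93
    m_hh_z, m_lh_z = 1.0 / (g1 - 2 * g2), 1.0 / (g1 + 2 * g2)   # along z
    m_hh_ip, m_lh_ip = 1.0 / (g1 + g2), 1.0 / (g1 - g2)         # in-plane

    D_011 = 280.0                                # fixed transverse size, Angstrom
    for D_z in (200.0, 300.0, 400.0, 600.0):     # size along [0-11] (z-axis)
        E_hh = 1.0 / (m_hh_z * D_z ** 2) + 1.0 / (m_hh_ip * D_011 ** 2)
        E_lh = 1.0 / (m_lh_z * D_z ** 2) + 1.0 / (m_lh_ip * D_011 ** 2)
        top = "hh-like" if E_hh < E_lh else "lh-like"
        print(f"D_[0-11] = {D_z:5.0f} A  ->  topmost hole level is {top}")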
Figure 2 shows the oscillator strength values calculated for two coupled semi-cylindrical QDs with two values of the transverse diameter, D_[011], as a function of the axial length, D_[01̄1] (see Figure 1c). Here, we have estimated the strain strength that holds at the uncorrelated dot-array condition D_[01̄1] = D_[011], and we confirm that the bigger the transverse size of the dot array (Figure 1b), the smaller the strain order factor must be. Furthermore, for compressive strain ε_∥ > ε, the crossing point Pol_[01̄1] = Pol_[011] can be shifted toward the dotted line. For dilation strain, with ε_∥ < ε, the crossing point is shifted away from the uncorrelated dot condition, and this condition can be attained in self-assembled QDs grown along the [100] direction. Certainly, the shear strain field distribution is able to tune the equal-oscillator-strength condition for these mutually perpendicular polarized emissions in isolated anisotropic QDs.
Analogously to the exchange of ground-state character induced by anisotropic confinement and shear strain fields, this effect can also be produced by electronic coupling between nearest-neighboring QDs, an effect that leads to the enhancement of the effective value of D_[01̄1]. The interchange of ground-state character is highly favored in coupled dots by increasing the inter-dot tunneling, as can be seen in Figure 3, which leads to the envelope function spreading along the coupling direction [01̄1]. In order to show this effect, we have used the combination of dots with finite inter-dot separation, d. Note, in Figure 3, that coupled dots show a left-shifted crossing point for equal oscillator strength when compared to the uncorrelated dot case. As discussed before, this shift can be further modified by shear strain fields.
Figure 3
Oscillator strength contours fulfilling Pol_[011] = Pol_[01̄1] for correlated dots with inter-dot distances d = 160 Å (solid line) and 330 Å (dashed line) and strain order factors ε_∥ = −0.3% (red), ε_∥ = −0.4% (green) and ε_∥ = −0.5% (blue).
For the limiting cases (see Figure 4), D_[01̄1] ≫ D_[011] and D_[01̄1] ≪ D_[011], the oscillator strengths for polarized emissions attain the conditions Pol_[01̄1] ≪ Pol_[011] and Pol_[01̄1] ≫ Pol_[011], respectively, and these results are attributed to the anisotropy of the hole effective masses. The crossing point where the polarized emissions have equal intensities can be shifted by the shear strain contribution to the hh- and lh-energy level positions. In Figure 5, it can be observed that the crossing points are shifted to the right as the strain order factor and/or the inter-dot distance are increased. Furthermore, two asymptotic limits, D_[01̄1] > 400 Å and D_[01̄1] < 200 Å, where the crossing points coincide, were found, respectively, for various strain strengths and for different inter-dot distances.
Figure 4
Calculated oscillator strengths for crossed linear optical polarizations along the directions [01̄1] (red line) and [011] (blue line) for two coupled QDs (Figure 1c) with semi-cylindrical shape and axis in the [01̄1] direction, with (a) D_[01̄1] = 280 Å, d = 160 Å under strain order ε_∥ = −0.3% (solid line) and −0.9% (dashed line); (b) D_[01̄1] = 350 Å, d = 330 Å and ε_∥ = −0.1% (solid line), −0.2% (dashed line) and −0.3% (dashed-dotted line). The crossing point stands for isotropic optical emission.
Figure 5
Calculated oscillator strengths for crossed linear optical polarizations for a strained system of two coupled QDs with different inter-dot distances d = 160 Å (dashed line), 330 Å (dashed-dotted line) and infinite (solid line), the latter corresponding to a single (isolated) QD. A lateral size D_[011] = 350 Å and a strain order factor ε_∥ = −0.2% were used.
Experimental Confirmation of the Proposed Theory
Experiments that confirm this modeling were performed using In0.4Ga0.6As QDs grown by molecular beam epitaxy on semi-insulating (100) GaAs. The QDs were obtained using the Stranski–Krastanov growth mode. Two sets of samples were prepared for the experiments: (A) QDs with strong anisotropy in shape along the [01̄1] direction and with partial ordering along it; (B) QDs with weak or no anisotropy on the (100) surface and large separation in both in-plane directions. The shape and the distribution of the QDs were controlled by the arsenic background. The use of an As2 or As4 background during the growth allows control of the group-III element diffusion on GaAs (100) surfaces, providing different dot samples with the same composition but different shapes and distributions. Details of the growth mechanisms and the processes involved in controlling diffusion by the background arsenic environment are described in Ref. [19].
Two sets of samples A were grown under an As4 background. In one set, the layer of dots was left uncapped for morphology analysis, and in the other, the QDs were buried with GaAs for low-temperature PL analysis. The other two sets of samples B were grown under the same conditions as sets A, except under an As2 background. The surface morphologies of the two uncapped samples were characterized by atomic force microscopy (AFM), as shown in Figure 6, imaged by a Nanoscope IV in tapping mode using a high-resolution silicon tip. The (1 × 1) μm AFM images show the morphologies of the In0.4Ga0.6As uncapped dot samples. The mean dot size and the center-to-center distance along the [01̄1] direction for both sets are displayed in Table 1. The AFM pictures clearly show the effect of the different arsenic backgrounds on both dot formation and distribution. The predominantly anisotropic dot shape and distribution along the [01̄1] direction is obtained for samples grown under the As4 environment. Finally, these sets of samples enable us to use sample B as the reference for uncoupled QD arrays with mostly isotropic distribution in the (100) plane.
Figure 6
One-layer 1 × 1 μm AFM images of In0.4Ga0.6As QDs in samples grown under different conditions. Sample A (left) shows 1D chain-like ordering along the [01̄1] direction. Sample B (right) shows a mostly isotropic, randomized dot distribution in the (001) plane.
Table 1
Average QD parameters with dispersion obtained from a Gaussian fit of the AFM data

Sample | Density (cm−2) | D_[01̄1] (Å) | D_[011] (Å) | h (Å)   | d (Å)
A      | 3.9 × 10^10    | 280 ± 12    | 220 ± 10    | 63 ± 12 | 160 ± 30
B      | 1.9 × 10^10    | 350 ± 15    | 350 ± 15    | 90 ± 15 | 330 ± 75

D_[01̄1]: dot width along [01̄1]; D_[011]: dot width along [011]; h: QD height; d: inter-dot distance along the "chain direction" [01̄1]
Grazing-incidence X-ray diffraction (GID) measurements were performed on both samples at the XRD2 beamline of the Brazilian Synchrotron Light Laboratory (LNLS), using a 4 + 2 axis diffractometer. The X-ray photon energy was fixed at 10 keV. Since both samples were capped by a 50 nm GaAs layer, the incidence angle was fixed at 0.28°, slightly above the GaAs critical angle, maximizing the signal from the buried quantum dots. The diffracted signal was measured by integrating the exit angle from 0 to 1.2° [30].
Figure 7a and 7b show longitudinal θ−2θ scans in the vicinity of the in-plane (022) and (02̄2) reflections for samples A and B, respectively. Such scans are sensitive to the strain relaxation inside the In0.4Ga0.6As QDs and the surrounding GaAs lattice. For all scans, diffuse intensity is observed surrounding the narrow and intense GaAs Bragg peak, located at |H| = |K| = 2. For sample A, the longitudinal scan performed in the vicinity of the GaAs (022) reflection exhibits a much broader profile than the scan measured with the sample rotated by 90°, close to the GaAs (02̄2) reflection. Such behavior indicates that a more effective strain relaxation of the islands may take place along the [022] direction, while a more strained lattice profile is found along the [02̄2] direction. The intensity distribution in both profiles of Figure 7a is almost symmetric with respect to the GaAs peak position, denoting the existence of compressively strained InGaAs inside the QDs, as well as in the GaAs matrix surrounding the QDs [31]. Similar diffraction profiles are observed in the longitudinal scans performed on sample B (Figure 7b). For this sample, the difference between the widths of the diffuse intensity in the (022) and (02̄2) scans is not as pronounced as observed for sample A, indicating a less anisotropic relaxation.
Figure 7
Radial scans in the vicinity of the GaAs (022) (solid dots) and (02̄2) (open dots) reflections for samples A (a) and B (b). Lateral size of iso-strain regions in samples A (c) and B (d), obtained from the width of transversal scans.
In order to quantify the strain relaxation inside the QDs of both samples, transversal scans were performed at several positions along the longitudinal profiles shown in Figure 7a and 7b. These scans (not shown here) are measured by fixing the θ−2θ condition and varying solely the sample rotation angle θ. In momentum-transfer space, the angular momentum transfer q_a = (4π/λ) sin(2θ/2) sin(Δθ) is varied, where Δθ is the angular deviation from the θ−2θ condition. Such a procedure allows one to obtain the average lateral size L of regions inside the QDs with constant strain status by evaluating the width Δq_a of the transversal scans, L = 2π/Δq_a [30, 32]. Values obtained for the local lateral size of iso-strain regions as a function of the in-plane strain status for samples A and B in the [022] and [02̄2] directions are shown in Figure 7c and 7d, respectively. For both samples, the lateral size of iso-strain regions along the QD chain direction is larger than along the axis perpendicular to the chains. The ratio L_[02̄2]/L_[022], which is a quantitative indicator of the anisotropic lattice relaxation inside the QDs, is larger for sample A than for sample B, corroborating the qualitative information inferred from the widths of the longitudinal scans.
Some considerations must be drawn before extending the analysis of the data shown in Figure 7c and 7d. For uncapped QDs, the relaxation of the lattice parameter is monotonic from their base to their apex [32]. Capped QDs, in contrast, exhibit a non-monotonic gradient, with lateral and vertical strain variations. This condition generally implies the existence of similar in-plane strain status at the island base and apex, both in contact with the GaAs matrix. It is therefore impossible to resolve vertically the position of iso-strain areas for the capped QDs with our GID measurements. Nevertheless, the lateral sizes observed represent a good approximation of the in-plane area of iso-strain regions projected on the substrate surface plane. Such an approach allows for a visualization of the anisotropic strain relaxation. Since the diffraction signal observed at (|H|, |K|) > 2 is related to the existence of compressively strained GaAs surrounding the QDs [31], maps with the projected view of iso-strain areas were extracted from the experimental data by taking into account the L values for (|H|, |K|) < 2, which are directly related to compressively strained In0.4Ga0.6As from the islands. (Tensile-strained GaAs at the bottom and at the apex of the island also contributes to the diffracted intensity at |H| = |K| < 2. However, the total volume of material with local lattice parameter larger than that of GaAs outside the island is considerably smaller than the amount of material contained inside the islands. For a discussion of tensile-strained substrate material see [33].) These projection maps for QDs from samples A and B are shown in Figure 8a and 8b, respectively. The iso-strain projection areas were drawn following the condition that they are contained within curves delimited by
$$
\frac{x^2}{L_{[022]}^2} + \frac{y^2}{L_{[0\bar{2}2]}^2} \leq 1,
$$
where x and y are the in-plane coordinates along the [011] and [ 0 1 ¯ 1 ] directions, respectively, considering the plane origin at the central QD position. Figure 8 shows the iso-strain areas for an in-plane region of approximately 1,100 Å × 700 Å, which contains 9 QDs for sample A and 4 QDs for sample B (see Table 1). The color scale in these maps refers to the in-plane strain with respect to bulk GaAs.
Figure 8
In-plane projection of iso-strain regions for a field of view with several islands for samples A (a) and B (b). The in-plane strain represented in the color scale is relative to the GaAs bulk lattice.
From Figure 8a, one clearly observes that the iso-strain contour lines from one QD of sample A almost reach the neighboring QDs along the [01̄1] chain direction. An asymmetry ratio L_[02̄2]/L_[022] of 1.7 is found for the broader iso-strain contour lines of the QDs in this sample, pointing again to a more pronounced strain relaxation along the [011] axis. The physical presence of very close QDs along the chains may therefore induce a modulation of the strain field that allows for a gentle strain relaxation in the [01̄1] direction. In sample B (Figure 8b), the asymmetric shape of the iso-strain regions is still observed, but with a ratio L_[02̄2]/L_[022] of 1.35. Although an elongation is observed along the [01̄1] direction, the QDs are too far apart from each other and do not strongly influence the strain field of the neighboring QDs in this direction.
Since the GID measurements do not directly reveal the height above the substrate of each iso-strain region, finite element method simulations were performed using a commercial software package to provide complementary information on the strain configuration of the capped islands. In our simulations, a three-dimensional box containing a single GaAs-capped In0.4Ga0.6As island was created for each sample, with periodic boundary conditions at all lateral edges in order to take into account the symmetry of the QD chains and the possible interaction with the strain field from neighboring QDs. A 15 Å thick wetting layer of nominal concentration was inserted between the islands and the substrate, following Ref. [34]. The island profiles used in this simulation were extracted from the AFM measurements on uncapped islands (Figure 6) that resulted in the dimensions of Table 1. The nominal composition was kept, thus assuming a negligible deviation of the island stoichiometry from the nominal values. (Anomalous grazing-incidence diffraction measurements performed at the Ga K-edge do not point to deviations, within an error bar of 7%, from the nominal In/Ga content inside the QDs.) Such assumptions consider that the islands do not undergo dramatic changes in morphology or composition under capping, which is a valid approximation for the growth temperatures used here and the reduced strain with respect to pure InAs islands [35]. Finally, a 500 Å thick cap layer was added to the simulation, as represented in Figure 9a.
Two-dimensional cuts of the simulated data are shown in Figure 9b, d and f for sample A and Figure 9c, e and g for sample B. The selected cuts are schematically depicted in Figure 9a and were chosen to be at the island bottom (b and c), middle (d and e), and top (f and g). Since the representation used in Figure 8 cannot be directly correlated to the Cartesian in-plane strain components x and y, the maps of Figure 9b–g were drawn as a function of the axial (first) principal strain component. Such a principal component analysis allows for a reduction of the dimensionality of the data set, providing a resulting representation with radial symmetry. The axial strain component is given by [36]
$$
\varepsilon_A = \frac{\varepsilon_{xx}+\varepsilon_{yy}}{2} + \sqrt{\left(\frac{\varepsilon_{xx}-\varepsilon_{yy}}{2}\right)^{2} + \varepsilon_{xy}^{2}},
$$
where ε_xy is the in-plane shear strain and ε_xx, ε_yy are the normal in-plane strains. For all principal strain maps, the color scale represents the deviation of the local lattice parameter with respect to the bulk local lattice parameter. Therefore, higher principal strain values are found at positions where the In0.4Ga0.6As lattice of the islands is fully strained to the GaAs lattice constant. Finite values of the axial strain component are also observed in regions surrounding the islands, in which the GaAs local lattice is affected by the proximity of the island. Selected contour level edges are marked by dark lines in all maps as a guide to the eye.
Figure 9
(a) Representation of the two-dimensional cuts shown in panels (b–g), performed on the finite element method simulations with periodic boundary conditions at the substrate box edges. The color contours represent variations of the first (axial) principal strain, which allows a qualitative comparison with the GID data of Figure 8. Cuts at the bottom (b), middle (d) and top (f) of the average island of sample A show an elongated strain profile along the [01̄1] direction. Similar cuts for the average island of sample B are seen in (c), (e) and (g).
The maps generated by in-plane cuts in the simulation of the QDs of sample A clearly exhibit elongated contours along the [01̄1] direction, most notably for the cuts at the island base and middle. This indicates that, for lower in-plane strain conditions, the lattice surrounding the islands behaves as semi-continuous wires along the [01̄1] direction. In the QDs of sample B, an elongation of the axial strain contour levels is also observed along the chain direction for all maps. However, the anisotropic effect is much weaker than in the results obtained for sample A.
The effects of the different dot size distributions and of the inter-dot coupling have been analyzed by low-temperature linearly polarized PL measurements carried out on samples A and B buried with GaAs. The samples were placed in a closed-cycle helium cryostat (Janis CCS-150) and excited using a 532 nm continuous-wave YAG laser (Coherent Verdi V10, 10 W). The PL signal was dispersed by a monochromator (SpectraPro 2500i, 0.5 m focal length) and detected by a liquid-nitrogen-cooled InGaAs photodiode detector array (Princeton Instruments, model 7498-0001). Figure 10a and 10b show the PL intensity at 10 K for samples A and B, respectively, with the emission spectra for each sample collected in two linear polarizations, namely along [01̄1] and along [011].
Figure 10
PL spectra for crossed linear polarizations, taken at T = 10 K with excitation wavelength λ = 532 nm, along the [01̄1] and [011] directions for samples (a) A and (b) B. The degree of linear polarization, (Pol_[01̄1] − Pol_[011])/(Pol_[01̄1] + Pol_[011]), is included in these panels. (c) PL peak position as a function of the excitation intensity.
In Figure 10a, one may see a polarization degree around 6%, as might be expected from the elongation of the quantum dot profile revealed by the AFM images (Figure 6) and the strain distribution (Figure 8). As highlighted in Figures 2 and 3, the oscillator strength grows for emissions linearly polarized along the larger dot-size direction. This behavior is enhanced for inter-dot separations up to d ~ 160 Å; when d is further reduced, the inter-dot tunneling probability increases considerably and the effect is enhanced further. The PL intensity polarized along the coupling direction [01̄1] is also enhanced in coupled QDs by the reduction of the barrier heights due to hydrostatic strain of the order of 1%. Besides, the anisotropic PL emissions from sample A, shown in Figure 10a, can be qualitatively reproduced by the oscillator strengths shown in Figure 3, calculated using the nominal values for both samples. As seen in Figure 3, an effective increase in inter-dot tunneling (distance d ~ 160 Å) would lead to the relaxation of the confinement along the [01̄1] direction. These effects would lead to a hole ground-state character exchange from predominantly hh to lh, and to the intensity difference between these cross-polarized emissions, experimentally confirmed by Figure 10a.
For the isotropic case, PL emissions occur when D_[011] ≈ D_[01̄1], a condition well fulfilled by the cylindrical model of Figure 1b. By changing the dot shape and the coupling direction in the (100) plane, the model shows that the condition Pol_[011] ≈ Pol_[01̄1] can be obtained for the semi-cylindrical geometry only for a small combination of values that emulates an uncoupled dot distribution in the (100) plane, if strain effects are included in the Hamiltonian. According to the theoretical modeling, an isotropic dot distribution in the (100) plane (case (i)) accounts for isotropic crossed-polarized PL emissions, as shown in Figure 10b for sample B. However, according to Figure 10b, a small polarization degree is still present in the symmetric QDs, associated with the elongation that remains, as revealed by Figure 8b. Such a feature might come from the anisotropic diffusion rate of indium atoms during growth, since indium has a higher mobility than gallium. Furthermore, indium diffusion is faster along the [01̄1] than along the [110] direction, and as a result the quantum dots of sample B are not completely symmetric [37, 38].
To confirm the results of the X-ray measurements, Figure 10c displays the shift of the spectral peak position as a function of the excitation intensity. Note, for sample A, a shift toward higher energies as the excitation intensity grows. Such a blue-shift for the elongated dots has been associated with the screening of the built-in electric field due to the presence of strain. On the other hand, for sample B no remarkable energy shift is observed, showing that the strain is not as pronounced as in the previous case [39].
The control and simulation of size anisotropy and effective inter-dot tunneling effects, as described in this work, are important issues to be addressed during the characterization of ordered sets of coupled dots. The strain fields present during the growth process of these QDs have led to the appearance of anisotropic geometric shapes, mostly elongated along the preferential direction. These effects can be probed by the polarized optical responses of different samples. In summary, we have shown that the shape, spatial distribution and inter-dot coupling of InGaAs self-assembled QDs can be probed and characterized by using linearly polarized PL emissions. Valence-band effects due to admixture between hole states and strongly anisotropic effective masses have led to different PL intensities in samples with lateral QD ordering forming "chain-like" structures. The envelope function model used here to describe the polarized optical responses showed fairly good agreement with the structural AFM and X-ray data and may be used to predict or characterize the strength of inter-dot coupling and/or anisotropic dot shape and distribution.
Appendix 1: Double Quantum Well Potential
After matching the wavefunctions fulfilling the hole Schrödinger equation at the interfaces, using $\left.\partial_z \ln F_j(z)\right|_{z=l^-} = \beta\,\left.\partial_z \ln F_j(z)\right|_{z=l^+}$ and $\left.\partial_z \ln F_j(z)\right|_{z=(l+d)^-} = \beta^{-1}\,\left.\partial_z \ln F_j(z)\right|_{z=(l+d)^+}$, we are able to obtain the transcendental equation

$$\beta\,\frac{K}{k}\,\tan(kl) = \tanh^{\pm 1}\!\left(\frac{Kd}{2}\right),$$

where $k = \sqrt{2 m_b |E_j|}/\hbar$ and $K = \sqrt{2 m_w |V - E_j|}/\hbar$, and $\beta = m_b/m_w$ is the ratio between the hole effective masses in the barrier and well regions, $m_b$ and $m_w$, respectively; $l$ is the well width, $d$ the inter-dot distance, and $V$ the barrier height. Numerical solution of Eq. 16 yields the energy levels $E_j^{\pm}$ with the corresponding wavefunctions of the symmetric (+) and antisymmetric (-) hole states,

$$F_j^{\pm}(z) = \begin{cases} A_{\pm}(l,d)\,\sin(kz), & \text{left well} \\[4pt] A_{\pm}(l,d)\,\sin(kl)\,\dfrac{e^{-K(z-l)} \pm e^{K(z-l-d)}}{1 \pm e^{-Kd}}, & \text{barrier} \\[4pt] \pm A_{\pm}(l,d)\,\sin(k(2l+d-z)), & \text{right well} \end{cases}$$

$$A_{+}(l,d) = \sqrt{2kK}\Big/\sqrt{2kKl - K\sin(2kl) + k\,\frac{\sin^2(kl)}{\sinh^2(Kd/2)}\,\big(\sinh(Kd) - Kd\big)}$$
$$A_{-}(l,d) = \sqrt{2kK}\Big/\sqrt{2kKl - K\sin(2kl) + k\,\frac{\sin^2(kl)}{\cosh^2(Kd/2)}\,\big(\sinh(Kd) + Kd\big)}$$
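The transcendental equation above is solved numerically; the following is a minimal sketch of such a solver, not taken from the paper, with all parameter values being illustrative assumptions (hbar = 1 units, with the level E measured from the bottom of the wells).

```python
# Sketch: root-finding for the transcendental equation of Appendix 1,
#   beta*(K/k)*tan(k*l) = tanh^{+/-1}(K*d/2),
# giving the symmetric (+) and antisymmetric (-) double-well hole levels.
import numpy as np
from scipy.optimize import brentq

hbar = 1.0
m_w, m_b = 0.5, 0.6        # hole effective masses in the well/barrier (assumed)
V, l, d = 50.0, 1.0, 0.4   # barrier height, well width, inter-dot distance (assumed)
beta = m_b / m_w

def mismatch(E, parity):
    """beta*(K/k)*tan(k*l) minus tanh^{parity}(K*d/2), for E in (0, V)."""
    k = np.sqrt(2 * m_b * E) / hbar
    K = np.sqrt(2 * m_w * (V - E)) / hbar
    th = np.tanh(K * d / 2)
    return beta * (K / k) * np.tan(k * l) - (th if parity > 0 else 1 / th)

def levels(parity, n_grid=20000):
    """Bracket sign changes on a grid and refine with brentq,
    skipping the spurious sign changes at the poles of tan(k*l)."""
    E = np.linspace(1e-6, V - 1e-6, n_grid)
    f = np.array([mismatch(e, parity) for e in E])
    roots = []
    for i in range(n_grid - 1):
        if f[i] * f[i + 1] < 0 and abs(f[i]) + abs(f[i + 1]) < 1e3:
            roots.append(brentq(mismatch, E[i], E[i + 1], args=(parity,)))
    return roots

print("E_j^+ (symmetric):    ", levels(+1))
print("E_j^- (antisymmetric):", levels(-1))
```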
Appendix 2: Matrix Elements
The matrix elements of the momentum operators are necessary in order to build the matrix form of the Kohn–Luttinger (KL) Hamiltonian. In polar coordinates the operators $\hat{k}_{\pm}$ are written as

$$\hat{k}_{\pm} = -i e^{\pm i\varphi}\left(\partial_{\rho} \pm \frac{i}{\rho}\,\partial_{\varphi}\right).$$

Projecting on the wavefunctions (5), $g^{\pm}_{t,t'} \equiv a\,\langle t|\hat{k}_{\pm}|t'\rangle$ with t = (n, m) and t' = (n', m'), it is straightforward to show that
$$g^{+}_{t,t'} = i^{\,n'-n}\,|n'-n|\;T^{\,n',m'}_{\,n\pm1,m}, \qquad \text{if } n' = n \pm 1,$$

$$T^{\,n',m'}_{\,n,m} = \frac{\mu_{n',m'}/\mu_{n,m}}{\left(\mu_{n',m'}/\mu_{n,m}\right)^{2} - 1},$$

$$g^{+}_{t,t'} = \frac{2 n' \mu_{t'}}{\pi}\,\frac{1 + (-1)^{\,n+n'}}{J_{n+1}(\mu_{t})\,J_{n'+1}(\mu_{t'})}\;\zeta^{\,n',m'}_{\,n,m}, \qquad \text{if } n' \neq n \pm 1,$$

where the $\zeta^{t'}_{t}$ are numbers ruled by

$$\zeta^{t'}_{t} = \int_{0}^{\mu_{t'}} \eta\; J_{n}\!\left(\frac{\mu_{t}}{\mu_{t'}}\,\eta\right) \left[\frac{J_{n'-1}(\eta)}{(1-n')^{2} - n^{2}} - \frac{J_{n'+1}(\eta)}{(1+n')^{2} - n^{2}}\right] d\eta.$$
In the particular case where t = t', Eq. 20 reduces to $\zeta^{t}_{t} = 0$. The $g$ matrix elements also obey the relation

$$g^{-}_{t,t'} = (-1)^{\,n+n'+1}\,g^{+}_{t,t'}.$$
The other matrix elements, those of the higher-order operators $\hat{k}_{\pm}^{2}$, are evaluated numerically using Eqs. 18–21 and the matrix identity $\langle t|\hat{A}\hat{B}|t'\rangle = \sum_{p} \langle t|\hat{A}|p\rangle\langle p|\hat{B}|t'\rangle$.
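In a truncated basis this closure identity is simply a matrix product; the tiny sketch below (with a random stand-in matrix, purely an assumption) shows how the higher-order elements would be assembled.

```python
# Closure identity in a truncated basis: <t|A B|t'> = sum_p <t|A|p><p|B|t'>
# is just a matrix product, so the matrix of k_+^2 is the square of the
# matrix of k_+ (up to truncation error from the finite basis).
import numpy as np

N = 12                                   # truncation size (assumption)
rng = np.random.default_rng(0)
g_plus = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # stand-in for <t|k_+|t'>

g_plus_sq = g_plus @ g_plus              # matrix of k_+^2 in the same basis
# Spot check of the explicit sum over intermediate states p:
t, tp = 3, 7
assert np.isclose(g_plus_sq[t, tp],
                  sum(g_plus[t, p] * g_plus[p, tp] for p in range(N)))
```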
It is worth showing that the matrix elements of the diagonal terms in $D_{hh(lh)}$ satisfy

$$\langle n',m'|\{\hat{k}_{+},\hat{k}_{-}\}|n,m\rangle = \frac{\mu_{n,m}^{2}}{a^{2}}\,\delta_{n,n'}\,\delta_{m,m'}.$$

Taking into account the loss of translational invariance along the z direction, by replacing the wave-vector component $k_z$ with the operator $-i\partial_z$, it is convenient to write the resulting matrix elements in the symmetrized form

$$\frac{1}{2}\left(\langle j|\hat{k}_{z}|j'\rangle + \langle j'|\hat{k}_{z}|j\rangle\right),$$
where index j stands for the piecewise wavefunctions (17). The resulting integrals in z-direction are solved numerically.
The authors acknowledge financial support from the agencies FAPESP and CNPq (GEM, VL-R), CONACYT/Mexico and FAPEMIG (LV-L), LNLS-MCT (AM), ICTP/Trieste (CT-G), and the National Science Foundation of the U.S. through Grant DMR-0520550 (BLL, YuIM). LV-L thanks E. Gomez for technical assistance.
Authors’ Affiliations
Departamento de Física, Universidade Federal de São Carlos
Instituto de Física, Universidade Federal de Uberlândia
Arkansas Institute for Nanoscale Materials Science and Engineering, University of Arkansas
Laboratório Nacional de Luz Síncrotron
Instituto de Física de São Carlos, Universidade de São Paulo
Faculty of Physics, Havana University
Department of Electrical Engineering, University of California
1. Ohshima T: Phys Rev A. 2000, 62: 062316. 10.1103/PhysRevA.62.062316
2. Li SS, Xia JB, Liu JL, Yang FH, Niu ZC, Feng SL, Zheng HZ: J Appl Phys. 2001, 90: 6151. 10.1063/1.1416855
3. Li SS, Long GL, Bai FS, Feng SL, Zheng HZ: Proc Natl Acad Sci USA. 2001, 98(21): 11847. 10.1073/pnas.191373698
4. Prado SJ, Trallero-Giner C, Alcalde AM, Lopez-Richard V, Marques GE: Phys Rev B. 2004, 69: 201310. 10.1103/PhysRevB.69.201310
5. Lopez-Richard V, Alcalde AM, Prado SJ, Marques GE, Trallero-Giner C: Appl Phys Lett. 2005, 87: 231101. 10.1063/1.2138354
6. Lopez-Richard V, Prado SJ, Marques GE, Trallero-Giner C, Alcalde AM: Appl Phys Lett. 2006, 88: 052101. 10.1063/1.2168499
7. Mlinar V, Tadić M, Peeters FM: Phys Rev B. 2006, 73: 235336. 10.1103/PhysRevB.73.235336
8. Wang ZM, Holmes K, Mazur YI, Salamo GJ: Appl Phys Lett. 2004, 84: 1931. 10.1063/1.1669064
9. Karlsson KF, Troncale V, Oberli DY, Malko A, Pelucchi E, Rudra A, Kapon E: Appl Phys Lett. 2006, 89: 251113. 10.1063/1.2402241
10. Troncale V, Karlsson KF, Oberli DY, Byszewski M, Malko A, Pelucchi E, Rudra A, Kapon E: J Appl Phys. 2007, 101: 081703. 10.1063/1.2722729
11. Švrček V: Nano Micro Lett. 2009, 1: 40.
12. Botsoa J, Lysenko V, Géloën A, Marty O, Bluet JM, Guillot G: Appl Phys Lett. 2008, 92: 173902. 10.1063/1.2919731
13. Sheng W, Xu SJ: Phys Rev B. 2008, 77: 113305. 10.1103/PhysRevB.77.113305
14. Sheng W: Appl Phys Lett. 2006, 89: 173129. 10.1063/1.2370871
15. Favero I, Cassabois G, Jankovic A, Ferreira R, Darson D, Voisin C, Delalande C, Roussignol P, Badolato A, Petroff PM, Gerard JM: Appl Phys Lett. 2005, 86: 041904. 10.1063/1.1854733
16. Cortez S, Krebs O, Voisin P, Gerard JM: Phys Rev B. 2001, 63: 233306. 10.1103/PhysRevB.63.233306
17. Mlinar V, Tadić M, Partoens B, Peeters FM: Phys Rev B. 2005, 71: 205305. 10.1103/PhysRevB.71.205305
18. Margapoti E, Worschech L, Mahapatra S, Brunner K, Forchel A, Alves FM, Lopez-Richard V, Marques GE, Bougerol C: Phys Rev B. 2008, 77: 073308. 10.1103/PhysRevB.77.073308
19. Marega E Jr, Waar ZA, Hussein M, Salamo GJ: Mater Res Soc Symp Proc. 2007, 959: 0959-M17-16.
20. Luttinger JM, Kohn W: Phys Rev. 1955, 97: 869. 10.1103/PhysRev.97.869
21. Fishman G: Phys Rev B. 1995, 52: 11132. 10.1103/PhysRevB.52.11132
22. Tadić M, Peeters FM, Janssens KL: Phys Rev B. 2002, 65: 165333. 10.1103/PhysRevB.65.165333
23. Cesar DF, Teodoro MD, Tsuzuki H, Lopez-Richard V, Marques GE, Rino JP, Lourenço SA, Marega E Jr, Dias IFL, Duarte JL, González-Borrero PP, Salamo GJ: Phys Rev B. 2010, 81: 233301. 10.1103/PhysRevB.81.233301
24. Trallero-Herrero C, Trallero-Giner C, Ulloa S, Perez-Alvarez R: Phys Rev E. 2001, 64: 056237. 10.1103/PhysRevE.64.056237
25. Xia JB: Phys Rev B. 1991, 43: 9856. 10.1103/PhysRevB.43.9856
26. Fishman G: Phys Rev B. 1995, 52: 11132. 10.1103/PhysRevB.52.11132
27. Mathieu H, Allegre J, Chatt A, Lefebvre P, Faurie JP: Phys Rev B. 1988, 38: 7740. 10.1103/PhysRevB.38.7740
28. Saito H, Nishi K, Sugou S, Sugimoto Y: Appl Phys Lett. 1997, 71: 590. 10.1063/1.119802
29. Noda S, Abe T, Tamura M: Phys Rev B. 1998, 58: 7181. 10.1103/PhysRevB.58.7181
30. Malachias A, Magalhães-Paniago R, Neves BRA, Rodrigues WN, Moreira MVB, Pfannes H-D, de Oliveira AG, Kycia S, Metzger TH: Appl Phys Lett. 2001, 79: 4342. 10.1063/1.1427421
31. Roch T, Holý V, Hesse A, Stangl J, Fromherz T, Bauer G, Metzger TH, Ferrer S: Phys Rev B. 2002, 65: 245324. 10.1103/PhysRevB.65.245324
32. Kegel I, Metzger TH, Fratz P, Peisl J, Lorke A, Garcia JM, Petroff PM: Europhys Lett. 1999, 45: 222. 10.1209/epl/i1999-00150-y
33. Magalhães-Paniago R, et al.: Phys Rev B. 2002, 66: 245312.
34. Morkoc D, Sverdlov B, Gao G-B: Proc IEEE. 1993, 81: 493. 10.1109/5.219338
35. Mashita M, Hiyama Y, Arai K, Koo B-H, Yao T: Jpn J Appl Phys. 2000, 39: 4435. 10.1143/JJAP.39.4435
36. Chou PC, Pagano NJ: Elasticity: Tensor, Dyadic, and Engineering Approaches. Dover Publications, New York; 1992.
37. Granados D, García JM: Appl Phys Lett. 2003, 82: 2401. 10.1063/1.1566799
38. Lorke A, Blossey R, García JM, Bichler M, Abstreiter G: Mater Sci Eng B. 2002, 88: 225. 10.1016/S0921-5107(01)00870-4
39. Teodoro MD, Campo VL Jr, Lopez-Richard V, Marega E Jr, Marques GE, Galvao Gobato Y, Iikawa F, Brasil MJSP, AbuWaar ZY, Dorogan VG, Mazur YI, Benamara M, Salamo GJ: Phys Rev Lett. 2010, 104: 086401. 10.1103/PhysRevLett.104.086401
© Villegas-Lelovsky et al. 2010
|
67d3fe8886e838aa |
33 Coulomb Functions: Physical Applications
§33.22 Particle Scattering and Atomic and Molecular Spectra
§33.22(i) Schrödinger Equation
With $e$ denoting here the elementary charge, the Coulomb potential between two point particles with charges $Z_1 e, Z_2 e$ and masses $m_1, m_2$ separated by a distance $s$ is $V(s) = Z_1 Z_2 e^2/(4\pi\varepsilon_0 s) = Z_1 Z_2 \alpha\hbar c/s$, where $Z_j$ are atomic numbers, $\varepsilon_0$ is the electric constant, $\alpha$ is the fine structure constant, and $\hbar$ is the reduced Planck's constant. The reduced mass is $m = m_1 m_2/(m_1+m_2)$, and at energy of relative motion $E$ with relative orbital angular momentum $\ell$, the Schrödinger equation for the radial wave function $w(s)$ is given by

33.22.1 $\left(-\frac{\hbar^2}{2m}\left(\frac{d^2}{ds^2} - \frac{\ell(\ell+1)}{s^2}\right) + \frac{Z_1 Z_2 \alpha\hbar c}{s}\right) w = E w.$

With the substitutions

33.22.2 $k = (2mE/\hbar^2)^{1/2}, \qquad Z = m Z_1 Z_2 \alpha c/\hbar, \qquad x = s,$

(33.22.1) becomes

33.22.3 $\frac{d^2 w}{dx^2} + \left(k^2 - \frac{2Z}{x} - \frac{\ell(\ell+1)}{x^2}\right) w = 0.$
§33.22(ii) Definitions of Variables
k Scaling
The k-scaled variables $\rho$ and $\eta$ of §33.2 are given by

33.22.4 $\rho = s\,(2mE/\hbar^2)^{1/2}, \qquad \eta = Z_1 Z_2 \alpha c\,(m/(2E))^{1/2}.$

At positive energies $E > 0$, $\rho \geq 0$, and:

Attractive potentials: $Z_1 Z_2 < 0$, $\eta < 0$.
Zero potential ($V = 0$): $Z_1 Z_2 = 0$, $\eta = 0$.
Repulsive potentials: $Z_1 Z_2 > 0$, $\eta > 0$.
Positive-energy functions correspond to processes such as Rutherford scattering and Coulomb excitation of nuclei (Alder et al. (1956)), and atomic photo-ionization and electron-ion collisions (Bethe and Salpeter (1977)).
At negative energies $E < 0$, both $\rho$ and $\eta$ are purely imaginary. The negative-energy functions are widely used in the description of atomic and molecular spectra; see Bethe and Salpeter (1977), Seaton (1983), and Aymar et al. (1996). In these applications, the Z-scaled variables $r$ and $\epsilon$ are more convenient.
Z Scaling
The Z-scaled variables $r$ and $\epsilon$ of §33.14 are given by

33.22.5 $r = -Z_1 Z_2 (m c \alpha/\hbar)\, s, \qquad \epsilon = E/(Z_1^2 Z_2^2 m c^2 \alpha^2/2).$

For $Z_1 Z_2 = -1$ and $m = m_e$, the electron mass, the scaling factors in (33.22.5) reduce to the Bohr radius, $a_0 = \hbar/(m_e c \alpha)$, and to a multiple of the Rydberg constant.

Attractive potentials: $Z_1 Z_2 < 0$, $r > 0$.
Zero potential ($V = 0$): $Z_1 Z_2 = 0$, $r = 0$.
Repulsive potentials: $Z_1 Z_2 > 0$, $r < 0$.
ik Scaling
The ik-scaled variables $z$ and $\kappa$ of §13.2 are given by

33.22.6 $z = 2is\,(2mE/\hbar^2)^{1/2}, \qquad \kappa = i Z_1 Z_2 \alpha c\,(m/(2E))^{1/2}.$
Attractive potentials: Z1Z2<0, κ<0.
Zero potential (V=0): Z1Z2=0, κ=0.
Repulsive potentials: Z1Z2>0, κ>0.
Customary variables are (ϵ,r) in atomic physics and (η,ρ) in atomic and nuclear physics. Both variable sets may be used for attractive and repulsive potentials: the (ϵ,r) set cannot be used for a zero potential because this would imply r=0 for all s, and the (η,ρ) set cannot be used for zero energy E because this would imply ρ=0 always.
§33.22(iii) Conversions Between Variables
33.22.7 $r = -\eta\rho, \qquad \epsilon = 1/\eta^2$ (Z from k).
33.22.8 $z = 2i\rho, \qquad \kappa = i\eta$ (ik from k).
33.22.9 $\rho = z/(2i), \qquad \eta = \kappa/i$ (k from ik).
33.22.10 $r = \kappa z/2, \qquad \epsilon = -1/\kappa^2$ (Z from ik).
33.22.11 $\eta = \pm\epsilon^{-1/2}, \qquad \rho = -r/\eta$ (k from Z).
33.22.12 $\kappa = \pm(-\epsilon)^{-1/2}, \qquad z = 2r/\kappa$ (ik from Z).
Resolution of the ambiguous signs in (33.22.11), (33.22.12) depends on the sign of Z/k in (33.22.3). See also §§33.14(ii), 33.14(iii), 33.22(i), and 33.22(ii).
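A small sketch, not part of the DLMF text, implementing the conversions (33.22.7)–(33.22.12) as plain functions. The caller must resolve the sign ambiguity of (33.22.11)/(33.22.12); here this is done with a `repulsive` flag, a name introduced purely for illustration (repulsive means $\eta > 0$).

```python
# Conversions between the k-, Z- and ik-scaled variable pairs of Sect. 33.22(iii).
import math

def k_to_Z(eta, rho):                      # (33.22.7)
    return 1.0 / eta**2, -eta * rho        # (eps, r)

def k_to_ik(eta, rho):                     # (33.22.8)
    return 1j * eta, 2j * rho              # (kappa, z)

def ik_to_k(kappa, z):                     # (33.22.9)
    return kappa / 1j, z / (2j)            # (eta, rho)

def ik_to_Z(kappa, z):                     # (33.22.10)
    return -1.0 / kappa**2, kappa * z / 2  # (eps, r)

def Z_to_k(eps, r, repulsive=True):        # (33.22.11), positive-energy case
    eta = eps**-0.5 if repulsive else -(eps**-0.5)
    return eta, -r / eta                   # (eta, rho)

# Round trip k -> Z -> k reproduces the inputs (repulsive, E > 0):
eta, rho = 1.5, 2.0
eps, r = k_to_Z(eta, rho)
eta2, rho2 = Z_to_k(eps, r)
assert math.isclose(eta2, eta) and math.isclose(rho2, rho)
```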
§33.22(iv) Klein–Gordon and Dirac Equations
The relativistic motion of spinless particles in a Coulomb field, as encountered in pionic atoms and pion-nucleon scattering (Backenstoss (1970)), is described by a Klein–Gordon equation equivalent to (33.2.1); see Barnett (1981a). The motion of a relativistic electron in a Coulomb field, which arises in the theory of the electronic structure of heavy elements (Johnson (2007)), is described by a Dirac equation. The solutions to this equation are closely related to the Coulomb functions; see Greiner et al. (1985).
§33.22(v) Asymptotic Solutions
The Coulomb solutions of the Schrödinger and Klein–Gordon equations are almost always used in the external region, outside the range of any non-Coulomb forces or couplings.
For scattering problems, the interior solution is then matched to a linear combination of a pair of Coulomb functions, $F_\ell(\eta,\rho)$ and $G_\ell(\eta,\rho)$, or $f(\epsilon,\ell;r)$ and $h(\epsilon,\ell;r)$, to determine the scattering S-matrix and also the correct normalization of the interior wave solutions; see Bloch et al. (1951).
For bound-state problems only the exponentially decaying solution is required, usually taken to be the Whittaker function $W_{-\eta,\ell+\frac{1}{2}}(2\rho)$. The functions $\phi_{n,\ell}(r)$ defined by (33.14.14) are the hydrogenic bound states in attractive Coulomb potentials; their polynomial components are often called associated Laguerre functions; see Christy and Duck (1961) and Bethe and Salpeter (1977).
§33.22(vi) Solutions Inside the Turning Point
The penetrability of repulsive Coulomb potential barriers is normally expressed in terms of the quantity $\rho/(F_\ell^2(\eta,\rho)+G_\ell^2(\eta,\rho))$ (Mott and Massey (1956, pp. 63–65)). The WKBJ approximations of §33.23(vii) may also be used to estimate the penetrability.
§33.22(vii) Complex Variables and Parameters
The Coulomb functions given in this chapter are most commonly evaluated for real values of $\rho$, $r$, $\eta$, $\epsilon$ and nonnegative integer values of $\ell$, but they may be continued analytically to complex arguments and order as indicated in §33.13.
Examples of applications to noninteger and/or complex variables are as follows.
• Scattering at complex energies. See for example McDonald and Nuttall (1969).
• Searches for resonances as poles of the S-matrix in the complex half-plane $\Im k < 0$. See for example Csótó and Hale (1997).
• Regge poles at complex values of $\ell$. See for example Takemasa et al. (1979).
• Eigenstates using complex-rotated coordinates $r \to r e^{i\theta}$, so that resonances have square-integrable eigenfunctions. See for example Halley et al. (1993).
• Solution of relativistic Coulomb equations. See for example Cooper et al. (1979) and Barnett (1981b).
• Gravitational radiation. See for example Berti and Cardoso (2006).
For further examples see Humblet (1984). |
57c77c27295bf840 |
Math in the Media
Tony Phillips' Take on Math in the Media
A monthly survey of math news
October 2001
*The Gordian unknot. Alexander the Great cut the knot in 333 BC, and thereby destroyed important mathematical evidence. What was this knot that no one could untie? Keith Devlin reports in the September 13, 2001 Guardian that ``A Polish physicist [Piotr Pieranski of Poznan] and a Swiss biologist [Andrzej Stasiak of Lausanne] have used computer simulation to recreate what might have been the Gordian knot.'' His piece is entitled ``Unravelling the myth.'' Pieranski and Stasiak argue that the knot could not have had any free ends, so the cord was actually a circle. But if the circle had been topologically knotted, the problem would have been mathematically impossible, and therefore not a fair challenge. So the circle itself was tied into what had to be an unknot, and only the thickness of the cord made it impossible to loosen it. For example, the knot might have been tied in a wet cord which was then allowed to dry, and perhaps to shrink itself into an impossible configuration. Pieranski and Stasiak, motivated by interest in string theory and in the knotting of biological molecules, respectively, used a computer program to simulate the manipulation of such knots, and have found one so obdurate that maybe it has the structure of the original puzzle that Alexander ``solved.'' Devlin's article is available online. Pieranski's home page has animations of the computer program in action.
*Answering an Age-Old Cry: When Will I Use This Math? is the title of a piece by Timothy Jack Ward in the House and Home section of the September 6, 2001 New York Times. The answer is found in the work of Jhane Barnes, a designer of menswear, carpets, textiles, furniture and throws. The math gets used in the designs, and Ward's enthusiasm is over Barnes' use of fractals. These are not your run-of-the-mill Julia sets, but rather a harnessing of fractal rhythm into the succession of graphic elements. Wired Online also has a piece on Jhane Barnes, ``Fashion Nerd,'' by Michael Sand: ``Using a host of computer programs to incorporate symmetry, mathematics, and fractal geometry into her work, she's the only major fashion designer out there who's using technology as a true creative tool.''
*The Abel prize is the name of a new ``top maths prize,'' as Nature puts it in their September 13 2001 ``News in brief.'' The prize is being set up by the Norwegian government in honor of that country's greatest mathematician. The prize reportedly is aimed at bringing recognition of research achievements in mathematics up to the Nobel level. It will be given every year (starting in 2003) and the money is good: NKr 5 million (approx US$ 550,000).
*Photo Solitons. Solitons are solutions to a non-linear wave equation. They have been observed in nature since 1844, when John Scott Russell chased a ``solitary wave'' as it sped down the Edinburgh to Glasgow canal without losing its shape. This phenomenon in another context turned out to be the key to understanding a strange phenomenon called ``Fermi-Pasta-Ulam recurrence'' (1953). In the computer simulation of the oscillations of a string consisting of 64 particles with non-linear interaction, the initial shape of the string dissolved as expected into a superposition of non-coherent modes, but after a certain time the modes magically reassembled into the original configuration. This was the ``recurrence.'' In a News and Views piece (``Déjà vu in optics'') in the September 20, 2001 Nature, Nail Akhmediev explains how the phenomenon was initially understood theoretically as a solitary wave in the solutions of the Korteweg-de Vries equation, the mathematical model for the original system, and how it is now understood that ``essentially the Fermi-Pasta-Ulam recurrence is a periodic solution of the non-linear Schrödinger equation.'' Now this phenomenon has been observed in a real physical system, using light beams in an optical fibre. The experiment was reported this year in Physical Review Letters by Van Simaeys, Emplit and Haelterman. ``Because they took great care when setting up the initial conditions, the recurrence they saw was almost perfect.''
-Tony Phillips
Stony Brook
|
7ec16aeeaa14a6f3 |
Perhaps the most fundamental differential operator on Euclidean space {{\bf R}^d} is the Laplacian
\displaystyle \Delta := \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}.
The Laplacian is a linear translation-invariant operator, and as such is necessarily diagonalised by the Fourier transform
\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx.
Indeed, we have
\displaystyle \widehat{\Delta f}(\xi) = - 4 \pi^2 |\xi|^2 \hat f(\xi)
for any suitably nice function {f} (e.g. in the Schwartz class; alternatively, one can work in very rough classes, such as the space of tempered distributions, provided of course that one is willing to interpret all operators in a distributional or weak sense).
Because of this explicit diagonalisation, it is a straightforward manner to define spectral multipliers {m(-\Delta)} of the Laplacian for any (measurable, polynomial growth) function {m: [0,+\infty) \rightarrow {\bf C}}, by the formula
\displaystyle \widehat{m(-\Delta) f}(\xi) := m( 4\pi^2 |\xi|^2 ) \hat f(\xi).
(The presence of the minus sign in front of the Laplacian has some minor technical advantages, as it makes {-\Delta} positive semi-definite. One can also define spectral multipliers more abstractly from general functional calculus, after establishing that the Laplacian is essentially self-adjoint.) Many of these multipliers are of importance in PDE and analysis, such as the fractional derivative operators {(-\Delta)^{s/2}}, the heat propagators {e^{t\Delta}}, the (free) Schrödinger propagators {e^{it\Delta}}, the wave propagators {e^{\pm i t \sqrt{-\Delta}}} (or {\cos(t \sqrt{-\Delta})} and {\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}}, depending on one’s conventions), the spectral projections {1_I(\sqrt{-\Delta})}, the Bochner-Riesz summation operators {(1 + \frac{\Delta}{4\pi^2 R^2})_+^\delta}, or the resolvents {R(z) := (-\Delta-z)^{-1}}.
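As a concrete discretized illustration of this definition (my addition, not from the original post), here is a sketch applying a spectral multiplier on a periodic 1d grid with the FFT; the grid size, box length and test multipliers are assumptions of the sketch.

```python
# Applying m(-Delta) on a periodic grid via the FFT, in the 2*pi Fourier
# convention used above, so that -Delta has symbol 4*pi^2*|xi|^2.
import numpy as np

def spectral_multiplier(f, m, L):
    """Apply m(-Delta) to samples f of a periodic function on [0, L)."""
    n = f.size
    xi = np.fft.fftfreq(n, d=L / n)          # frequencies for e^{-2 pi i x xi}
    return np.fft.ifft(m(4 * np.pi**2 * xi**2) * np.fft.fft(f))

L, n = 1.0, 256
x = np.arange(n) * L / n
f = np.sin(2 * np.pi * x)
# m(lam) = lam recovers -Delta f = (2 pi)^2 sin(2 pi x):
g = spectral_multiplier(f, lambda lam: lam, L)
print(np.allclose(g.real, (2 * np.pi)**2 * f))        # True
# m(lam) = exp(-t lam) is the heat propagator e^{t Delta}:
heat = spectral_multiplier(f, lambda lam: np.exp(-0.1 * lam), L)
```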
Each of these families of multipliers is related to the others, by means of various integral transforms (and also, in some cases, by analytic continuation); several of the scalar identities below are sanity-checked numerically in the sketch following this list. For instance:
1. Using the Laplace transform, one can express (sufficiently smooth) multipliers in terms of heat operators. For instance, using the identity
\displaystyle \lambda^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{-t\lambda}\ dt
(using analytic continuation if necessary to make the right-hand side well-defined), with {\Gamma} being the Gamma function, we can write the fractional derivative operators in terms of heat kernels:
\displaystyle (-\Delta)^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{t\Delta}\ dt. \ \ \ \ \ (1)
2. Using analytic continuation, one can connect heat operators {e^{t\Delta}} to Schrödinger operators {e^{it\Delta}}, a process also known as Wick rotation. Analytic continuation is a notoriously unstable process, and so it is difficult to use analytic continuation to obtain any quantitative estimates on (say) Schrödinger operators from their heat counterparts; however, this procedure can be useful for propagating identities from one family to another. For instance, one can derive the fundamental solution for the Schrödinger equation from the fundamental solution for the heat equation by this method.
3. Using the Fourier inversion formula, one can write general multipliers as integral combinations of Schrödinger or wave propagators; for instance, if {z} lies in the upper half plane {{\bf H} := \{ z \in {\bf C}: \hbox{Im} z > 0 \}}, one has
\displaystyle \frac{1}{x-z} = i\int_0^\infty e^{-itx} e^{itz}\ dt
for any real number {x}, and thus we can write resolvents in terms of Schrödinger propagators:
\displaystyle R(z) = i\int_0^\infty e^{it\Delta} e^{itz}\ dt. \ \ \ \ \ (2)
In a similar vein, if {k \in {\bf H}}, then
\displaystyle \frac{1}{x^2-k^2} = \frac{i}{k} \int_0^\infty \cos(tx) e^{ikt}\ dt
for any {x>0}, so one can also write resolvents in terms of wave propagators:
\displaystyle R(k^2) = \frac{i}{k} \int_0^\infty \cos(t\sqrt{-\Delta}) e^{ikt}\ dt. \ \ \ \ \ (3)
4. Using the Cauchy integral formula, one can express (sufficiently holomorphic) multipliers in terms of resolvents (or limits of resolvents). For instance, if {t > 0}, then from the Cauchy integral formula (and Jordan's lemma) one has
\displaystyle e^{itx} = \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} \frac{e^{ity}}{y-x-i\epsilon}\ dy
for any {x \in {\bf R}}, and so one can (formally, at least) write Schrödinger propagators in terms of resolvents:
\displaystyle e^{-it\Delta} = - \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} e^{ity} R(y-i\epsilon)\ dy. \ \ \ \ \ (4)
5. The imaginary part of {\frac{1}{\pi} \frac{1}{x-(y+i\epsilon)}} is the Poisson kernel {\frac{\epsilon}{\pi} \frac{1}{(y-x)^2+\epsilon^2}}, which is an approximation to the identity. As a consequence, for any reasonable function {m(x)}, one has (formally, at least)
\displaystyle m(x) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} \frac{1}{x-(y+i\epsilon)}) m(y)\ dy
which leads (again formally) to the ability to express arbitrary multipliers in terms of imaginary (or skew-adjoint) parts of resolvents:
\displaystyle m(-\Delta) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} R(y+i\epsilon)) m(y)\ dy. \ \ \ \ \ (5)
Among other things, this type of formula (with {-\Delta} replaced by a more general self-adjoint operator) is used in the resolvent-based approach to the spectral theorem (by using the limiting imaginary part of resolvents to build spectral measure). Note that one can also express {\hbox{Im} R(y+i\epsilon)} as {\frac{1}{2i} (R(y+i\epsilon) - R(y-i\epsilon))}.
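Several of these identities can be sanity-checked numerically at the scalar level, replacing {-\Delta} by multiplication by a real number. The sketch below is my addition, with all numerical values chosen arbitrarily; it checks the scalar versions of (1), (2), (3) and (5).

```python
# Scalar-level numerical checks of the subordination, resolvent and
# Poisson-kernel identities above.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def cquad(f, a, b, **kw):
    """quad for complex-valued integrands (real and imaginary parts)."""
    re, _ = quad(lambda t: np.real(f(t)), a, b, **kw)
    im, _ = quad(lambda t: np.imag(f(t)), a, b, **kw)
    return re + 1j * im

# (1): lam^{s/2} = (1/Gamma(-s/2)) int_0^oo t^{-1-s/2} e^{-t lam} dt, for s < 0
lam, s = 3.7, -1.0
val1 = quad(lambda t: t**(-1 - s/2) * np.exp(-t * lam), 0, np.inf)[0] / gamma(-s/2)
print(val1, lam**(s/2))

# (2): 1/(x - z) = i int_0^oo e^{-itx} e^{itz} dt, Im z > 0
x, z = 2.0, 1.0 + 0.5j
val2 = 1j * cquad(lambda t: np.exp(-1j*t*x + 1j*t*z), 0, np.inf, limit=200)
print(val2, 1/(x - z))

# (3): 1/(x^2 - k^2) = (i/k) int_0^oo cos(tx) e^{ikt} dt, Im k > 0
k = 1.5 + 0.3j
val3 = (1j/k) * cquad(lambda t: np.cos(t*x) * np.exp(1j*k*t), 0, np.inf, limit=200)
print(val3, 1/(x**2 - k**2))

# (5): m(x) = lim (1/pi) int Im(1/(x - (y + i eps))) m(y) dy
m, eps = (lambda y: np.exp(-y**2)), 1e-3
val5 = quad(lambda y: (eps/np.pi) / ((y - x)**2 + eps**2) * m(y),
            -50, 50, points=[x], limit=200)[0]
print(val5, m(x))     # agree to O(eps)
```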
Remark 1 The ability of heat operators, Schrödinger propagators, wave propagators, or resolvents to generate other spectral multipliers can be viewed as a sort of manifestation of the Stone-Weierstrass theorem (though with the caveat that the spectrum of the Laplacian is non-compact and so the Stone-Weierstrass theorem does not directly apply). Indeed, observe the *-algebra type properties
\displaystyle e^{s\Delta} e^{t\Delta} = e^{(s+t)\Delta}; \quad (e^{s\Delta})^* = e^{s\Delta}
\displaystyle e^{is\Delta} e^{it\Delta} = e^{i(s+t)\Delta}; \quad (e^{is\Delta})^* = e^{-is\Delta}
\displaystyle e^{is\sqrt{-\Delta}} e^{it\sqrt{-\Delta}} = e^{i(s+t)\sqrt{-\Delta}}; \quad (e^{is\sqrt{-\Delta}})^* = e^{-is\sqrt{-\Delta}}
\displaystyle R(z) R(w) = \frac{R(w)-R(z)}{z-w}; \quad R(z)^* = R(\overline{z}).
Because of these relationships, it is possible (in principle, at least), to leverage one’s understanding one family of spectral multipliers to gain control on another family of multipliers. For instance, the fact that the heat operators {e^{t\Delta}} have non-negative kernel (a fact which can be seen from the maximum principle, or from the Brownian motion interpretation of the heat kernels) implies (by (1)) that the fractional integral operators {(-\Delta)^{-s/2}} for {s>0} also have non-negative kernel. Or, the fact that the wave equation enjoys finite speed of propagation (and hence that the wave propagators {\cos(t\sqrt{-\Delta})} have distributional convolution kernel localised to the ball of radius {|t|} centred at the origin), can be used (by (3)) to show that the resolvents {R(k^2)} have a convolution kernel that is essentially localised to the ball of radius {O( 1 / |\hbox{Im}(k)| )} around the origin.
In this post, I would like to continue this theme by using the resolvents {R(z) = (-\Delta-z)^{-1}} to control other spectral multipliers. These resolvents are well-defined whenever {z} lies outside of the spectrum {[0,+\infty)} of the operator {-\Delta}. In the model three-dimensional case {d=3}, they can be defined explicitly by the formula
\displaystyle R(k^2) f(x) = \int_{{\bf R}^3} \frac{e^{ik|x-y|}}{4\pi |x-y|} f(y)\ dy
whenever {k} lives in the upper half-plane {\{ k \in {\bf C}: \hbox{Im}(k) > 0 \}}, ensuring the absolute convergence of the integral for test functions {f}. (In general dimension, explicit formulas are still available, but involve Bessel functions. But asymptotically at least, and ignoring higher order terms, one simply replaces {\frac{e^{ik|x-y|}}{4\pi |x-y|}} by {\frac{e^{ik|x-y|}}{c_d |x-y|^{d-2}}} for some explicit constant {c_d}.) It is an instructive exercise to verify that this resolvent indeed inverts the operator {-\Delta-k^2}, either by using Fourier analysis or by Green’s theorem.
Henceforth we restrict attention to three dimensions {d=3} for simplicity. One consequence of the above explicit formula is that for positive real {\lambda > 0}, the resolvents {R(\lambda+i\epsilon)} and {R(\lambda-i\epsilon)} tend to different limits as {\epsilon \rightarrow 0}, reflecting the jump discontinuity in the resolvent function at the spectrum; as one can guess from formulae such as (4) or (5), such limits are of interest for understanding many other spectral multipliers. Indeed, for any test function {f}, we see that
\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda+i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy
\displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda-i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{-i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy.
Both of these functions
\displaystyle u_\pm(x) := \int_{{\bf R}^3} \frac{e^{\pm i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy
solve the Helmholtz equation
\displaystyle (-\Delta-\lambda) u_\pm = f, \ \ \ \ \ (6)
but have different asymptotics at infinity. Indeed, if {\int_{{\bf R}^3} f(y)\ dy = A}, then we have the asymptotic
\displaystyle u_\pm(x) = \frac{A e^{\pm i \sqrt{\lambda}|x|}}{4\pi|x|} + O( \frac{1}{|x|^2}) \ \ \ \ \ (7)
as {|x| \rightarrow \infty}, leading also to the Sommerfeld radiation condition
\displaystyle u_\pm(x) = O(\frac{1}{|x|}); \quad (\partial_r \mp i\sqrt{\lambda}) u_\pm(x) = O( \frac{1}{|x|^2}) \ \ \ \ \ (8)
where {\partial_r := \frac{x}{|x|} \cdot \nabla_x} is the outgoing radial derivative. Indeed, one can show using an integration by parts argument that {u_\pm} is the unique solution of the Helmholtz equation (6) obeying (8) (see below). {u_+} is known as the outward radiating solution of the Helmholtz equation (6), and {u_-} is known as the inward radiating solution. Indeed, if one views the function {u_\pm(t,x) := e^{-i\lambda t} u_\pm(x)} as a solution to the inhomogeneous Schrödinger equation
\displaystyle (i\partial_t + \Delta) u_\pm = - e^{-i\lambda t} f
and using the de Broglie law that a solution to such an equation with wave number {k \in {\bf R}^3} (i.e. resembling {A e^{i k \cdot x}} for some amplitude {A}) should propagate at (group) velocity {2k}, we see (heuristically, at least) that the outward radiating solution will indeed propagate radially away from the origin at speed {2\sqrt{\lambda}}, while the inward radiating solution propagates inward at the same speed.
There is a useful quantitative version of the convergence
\displaystyle R(\lambda \pm i\epsilon) f \rightarrow u_\pm, \ \ \ \ \ (9)
known as the limiting absorption principle:
Theorem 1 (Limiting absorption principle) Let {f} be a test function on {{\bf R}^3}, let {\lambda > 0}, and let {\sigma > 0}. Then one has
\displaystyle \| R(\lambda \pm i\epsilon) f \|_{H^{0,-1/2-\sigma}({\bf R}^3)} \leq C_\sigma \lambda^{-1/2} \|f\|_{H^{0,1/2+\sigma}({\bf R}^3)}
for all {\epsilon > 0}, where {C_\sigma > 0} depends only on {\sigma}, and {H^{0,s}({\bf R}^3)} is the weighted norm
\displaystyle \|f\|_{H^{0,s}({\bf R}^3)} := \| \langle x \rangle^s f \|_{L^2_x({\bf R}^3)}
and {\langle x \rangle := (1+|x|^2)^{1/2}}.
This principle allows one to extend the convergence (9) from test functions {f} to all functions in the weighted space {H^{0,1/2+\sigma}} by a density argument (though the radiation condition (8) has to be adapted suitably for this scale of spaces when doing so). The weighted space {H^{0,-1/2-\sigma}} on the left-hand side is optimal, as can be seen from the asymptotic (7); a duality argument similarly shows that the weighted space {H^{0,1/2+\sigma}} on the right-hand side is also optimal.
We prove this theorem below the fold. As observed long ago by Kato (and also reproduced below), this estimate is equivalent (via a Fourier transform in the spectral variable {\lambda}) to a useful estimate for the free Schrödinger equation known as the local smoothing estimate, which in particular implies the well-known RAGE theorem for that equation; it also has similar consequences for the free wave equation. As we shall see, it also encodes some spectral information about the Laplacian; for instance, it can be used to show that the Laplacian has no eigenvalues, resonances, or singular continuous spectrum. These spectral facts are already obvious from the Fourier transform representation of the Laplacian, but the point is that the limiting absorption principle also applies to more general operators for which the explicit diagonalisation afforded by the Fourier transform is not available. (Igor Rodnianski and I are working on a paper regarding this topic, about which I hope to say more soon.)
In order to illustrate the main ideas and suppress technical details, I will be a little loose with some of the rigorous details of the arguments, and in particular will be manipulating limits and integrals at a somewhat formal level.
5082e852dacce79a |
We're making a video presentation on the topic of eigenvectors and eigenvalues. Unfortunately we have only reached the theoretical part of the discussion. Any comments on practical applications would be appreciated.
I'd say that physics was pretty much an application of eigenvalues and eigenvectors. :-) In particular, finding the normal modes of oscillation for a system with $n$ degrees of freedom comes down to finding eigenvalues/vectors of an $n$-by-$n$ matrix. – Robin Chapman Sep 29 '10 at 11:50
Please add more context: Who is your intended audience, and what scientific background can be assumed? What form is the presentation (e.g., video lecture like OpenCourseWare, animated demo like the Geometry Center, interpretive dance, etc.)? – S. Carnahan Sep 29 '10 at 12:00
10 Answers
The problem of ranking the outcomes of a search engine like Google is solved in terms of an invariant measure on the net, seen as a Markov chain. Finding the invariant measure requires the spectral analysis of the associated matrix.
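A minimal sketch of this idea; the tiny link graph, damping value and iteration count below are illustrative assumptions.

```python
# PageRank as the invariant measure of a Markov chain, via power iteration.
import numpy as np

links = np.array([[0, 1, 1, 0],     # row i -> pages that page i links to
                  [1, 0, 0, 1],
                  [0, 1, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
P = links / links.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
d = 0.85                                       # damping factor (assumption)
G = d * P + (1 - d) / 4                        # "Google matrix"

rank = np.full(4, 0.25)
for _ in range(100):                           # power iteration: rank <- rank @ G
    rank = rank @ G
print(rank / rank.sum())                       # stationary measure ~ page ranking
```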
I would comment on Pietro's answer, but I don't have enough reputation; for a marvelously-titled explanation of Google's Pagerank, see The $25,000,000,000 Eigenvector.
Google's PageRank system is probably the most canonical example; however, others include:
-Dynamical Systems If you are able to express a model in terms of a matrix acting on vectors, one can look at the iterations and ask what occurs. This can be done to model the life cycle of some species in an environment (bacteria on a petri dish, wolf/sheep interaction, the Fibonacci sequence as the spread of a population of bunnies, etc...). These examples are fairly small, however you can certainly have massive systems to model, and if your matrix is diagonalizable, the iterations of this map correspond to iterations of a diagonal matrix (very easy to do!) instead of the standard $m^{2}$ operations to multiply out an $m\times m$ matrix. Think about a $1 000 000 \times 1 000 000$ matrix $M$, where you're looking at whether a certain species will die out (i.e., iterating $M^{n}$ and checking as $n\to\infty$; quite the time saver!). A sketch of this diagonalization trick appears at the end of this answer.
-Graph theory As an undergrad one of my summer research projects looked into special graphs called (3,6)-fullerenes, where we found that, looking at the adjacency matrix of the graph, one could pick 3 well-chosen eigenvalues and their corresponding eigenvectors and generate nice 3d plots of the graphs, whereas other choices would produce degenerate images, involving some twisted 2d surface.
-Differential equations One can use eigenvalues and eigenvectors to express the solutions to certain differential equations, which is one of the main reasons the theory was developed in the first place!
I would highly recommend reading the wikipedia article, as it covers many more examples than any one reply here will likely contain, with examples along the way! (Schrödinger equation, Molecular Orbitals, Geology and Glaciology, Factor Analysis, Vibration Analysis, Eigenfaces, Tensor of Inertia, Stress Tensor, Eigenvalues of a Graph)
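As a sketch of the diagonalization trick mentioned in the dynamical-systems item above, here is the textbook Fibonacci matrix iterated through its eigendecomposition; the matrix and the exponent are just the standard example.

```python
# M^n via the eigendecomposition M = Q D Q^{-1}, so M^n = Q D^n Q^{-1}.
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, 0.0]])                     # Fibonacci matrix
vals, Q = np.linalg.eig(M)
n = 20
Mn = Q @ np.diag(vals**n) @ np.linalg.inv(Q)   # M^n from the eigenvalues alone
print(round(Mn[0, 1]))                         # F_20 = 6765
print(np.allclose(Mn, np.linalg.matrix_power(M, n)))   # True
```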
All of Quantum Mechanics is based on the notion of eigenvectors and eigenvalues. Observables are represented by hermitian operators Q, their determinate states are eigenvectors of Q, and a measurement of the observable can only yield an eigenvalue of the corresponding operator Q. If you measure an observable in the state $\psi$ of a system and find the eigenvalue $a$ as the result, the state of the system just after the measurement will be the normed projection of $\psi$ onto the eigenvector associated to $a$. And so on and so forth.
Of course Quantum Physics is not mathematically trivial: the arena is infinite-dimensional Hilbert space (or more complicated functional-analytic structures like Gelfand triples), operators are not bounded, etc. However, in the extremely fast growing field of Quantum Computing the algebra is mostly limited to finite-dimensional spaces and their operators.
Finally, let me mention that Frank Wilczek, a winner of the 2004 Nobel Prize in Physics, has interestingly reminisced that as a student he found Quantum Mechanics easier than Classical Mechanics because of the nice axiomatization alluded to above.
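A small sketch of the measurement postulate described at the start of this answer, in the finite-dimensional setting mentioned above; the observable and state below are illustrative assumptions.

```python
# Diagonalize a Hermitian observable, draw an outcome with Born probabilities,
# and project the state onto the corresponding eigenvector.
import numpy as np

Q = np.array([[0.0, 1.0], [1.0, 0.0]])        # Hermitian observable (sigma_x)
psi = np.array([1.0, 0.0], dtype=complex)     # normalized state
vals, vecs = np.linalg.eigh(Q)
probs = np.abs(vecs.conj().T @ psi)**2        # Born rule: |<a|psi>|^2
outcome = np.random.choice(len(vals), p=probs)
a = vals[outcome]                             # the measured eigenvalue
post = vecs[:, outcome] * (vecs[:, outcome].conj() @ psi)
post /= np.linalg.norm(post)                  # normed projection of psi
print(a, post)
```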
For visual appeal, you should look into the area of pendulums. There is a good demonstration with swinging bottles, I recall, and this does depend on eigenvalues that are nearly equal. Do a Web search on "coupled pendulums".
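A sketch of the nearly-equal-eigenvalue point for two pendulums joined by a weak spring; the stiffness values are illustrative assumptions.

```python
# Normal-mode frequencies of two weakly coupled pendulums: x'' = -K x, so the
# mode frequencies are the square roots of the eigenvalues of K.
import numpy as np

w0_sq, coupling = 9.8, 0.2                      # g/L and spring coupling (assumed)
K = np.array([[w0_sq + coupling, -coupling],
              [-coupling, w0_sq + coupling]])
eigvals, modes = np.linalg.eigh(K)
print(np.sqrt(eigvals))   # two nearly equal frequencies -> slow beating
print(modes)              # in-phase (1,1) and out-of-phase (1,-1) modes, up to sign
```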
Principal Component Analysis is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences. It is very difficult to visualize data in high dimensional space, but PCA can be used to analyze such data. From the data set a covariance matrix is formed, and then the eigenvalues and eigenvectors of that covariance matrix are found. These eigenvalues and eigenvectors can then be compared to figure out the contribution of a particular feature in the data set. Thus PCA can be successfully applied to reduce the dimension of the data.
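A minimal PCA sketch along these lines; the synthetic 3-feature data set is an assumption.

```python
# PCA via the eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
X[:, 2] = 2 * X[:, 0] + 0.01 * rng.normal(size=500)   # third feature ~ redundant

Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)                          # covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
order = eigvals.argsort()[::-1]                       # sort descending
print(eigvals[order] / eigvals.sum())                 # variance explained per component
X_reduced = Xc @ eigvecs[:, order[:2]]                # keep the top two directions
```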
In telecommunications the so-called "beam-forming" algorithm, in the case of multiple antennas, requires the calculation of eigenvectors.
I think the book Spectra of Graphs: Theory and Applications by Dragoš M. Cvetković, Michael Doob, and Horst Sachs is a very good source for practical applications of eigenvalues and eigenvectors.
In communication theory, coding theory and cryptography, the minimum distance of a code is a very important parameter in decoding, and it is also very important in code-based cryptography (for example the McEliece cryptosystem). It is interesting that the second largest eigenvalue of the graph related to a code can determine a good lower bound for the minimum distance of the code.
Another interesting application is rigid body rotation theory. No matter how complicated an object looks, there's always (at least) a set of three mutually orthogonal directions around which it can rotate perfectly without precession.
Maybe not something you can base a whole lecture on, but it's a nice remark.
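For completeness, a tiny sketch (with an arbitrary symmetric tensor as an assumption) showing that the principal axes, being eigenvectors of a symmetric matrix, come out mutually orthogonal.

```python
# Principal rotation axes = eigenvectors of the symmetric inertia tensor.
import numpy as np

I = np.array([[ 4.0, -1.0,  0.5],
              [-1.0,  3.0, -0.2],
              [ 0.5, -0.2,  5.0]])      # an arbitrary symmetric inertia tensor
moments, axes = np.linalg.eigh(I)       # principal moments and axes
print(moments)
print(np.allclose(axes.T @ axes, np.eye(3)))   # axes are orthonormal: True
```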
|
fab2c8a25d0c025e | Time evolution of the density operator next up previous
Time evolution of the density operator
The time evolution of the density operator $\rho(t)$ can be predicted directly from the Schrödinger equation. Since $\rho(t)$ is given by

$\rho(t) = \sum_k w_k\,|\psi_k(t)\rangle\langle\psi_k(t)|,$

the time derivative is given by

$\frac{\partial\rho}{\partial t} = \sum_k w_k\left[\left(\frac{\partial}{\partial t}|\psi_k(t)\rangle\right)\langle\psi_k(t)| + |\psi_k(t)\rangle\left(\frac{\partial}{\partial t}\langle\psi_k(t)|\right)\right] = \frac{1}{i\hbar}\,[H,\rho],$

where the last equality follows from the Schrödinger equation $i\hbar\,\partial_t|\psi_k(t)\rangle = H|\psi_k(t)\rangle$ together with the fact that the Schrödinger equation for the bra state vector $\langle\psi_k(t)|$ is

$-i\hbar\,\frac{\partial}{\partial t}\langle\psi_k(t)| = \langle\psi_k(t)|H.$

Note that the equation of motion for $\rho(t)$ differs from the usual Heisenberg equation by a minus sign! Since $\rho(t)$ is constructed from state vectors, it is not an observable like other hermitian operators, so there is no reason to expect that its time evolution will be the same. The general solution to its equation of motion is

$\rho(t) = e^{-iHt/\hbar}\,\rho(0)\,e^{iHt/\hbar}.$

The equation of motion for $\rho(t)$ can be cast into a quantum Liouville equation by introducing the operator

$iL\,(\cdot) = \frac{1}{i\hbar}\,[\,\cdot\,,H].$

In terms of iL, it can be seen that $\rho$ satisfies

$\frac{\partial\rho}{\partial t} = -iL\rho, \qquad \rho(t) = e^{-iLt}\rho(0).$

What kind of operator is iL? It acts on an operator and returns another operator. Thus, it is not an operator in the ordinary sense, but is known as a superoperator or tetradic operator (see S. Mukamel, Principles of Nonlinear Optical Spectroscopy, Oxford University Press, New York (1995)).

Defining the evolution equation for $\rho$ this way, we have a perfect analogy between the density matrix and the state vector. The two equations of motion are

$\frac{\partial}{\partial t}|\psi(t)\rangle = -\frac{i}{\hbar}H|\psi(t)\rangle, \qquad \frac{\partial\rho}{\partial t} = -iL\rho.$

We also have an analogy with the evolution of the classical phase space distribution $f(x,p,t)$, which satisfies

$\frac{\partial f}{\partial t} = -iL_{\mathrm{cl}}\,f,$

with $iL_{\mathrm{cl}} = \{\,\cdot\,,H\}$ being the classical Liouville operator. Again, we see that the classical limit of the commutator $[\,\cdot\,,H]/(i\hbar)$ is the Poisson bracket.
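A minimal numerical sketch (not part of the original notes) of the two equivalent pictures above for a two-level system; hbar = 1 and the Hamiltonian below are illustrative assumptions.

```python
# Propagate rho(t) = U rho(0) U^dagger with U = exp(-i H t / hbar), and check
# against a direct Euler integration of d(rho)/dt = -(i/hbar)[H, rho].
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)   # illustrative Hamiltonian
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex) # pure initial state

t = 2.0
U = expm(-1j * H * t / hbar)
rho_exact = U @ rho0 @ U.conj().T             # unitary-conjugation solution

rho, dt = rho0.copy(), 1e-4                   # direct integration of the
for _ in range(int(t / dt)):                  # quantum Liouville equation
    rho = rho + dt * (-1j / hbar) * (H @ rho - rho @ H)
print(np.max(np.abs(rho - rho_exact)))        # small, O(dt)
```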
Mark Tuckerman
Tue May 9 19:40:24 EDT 2000 |
997e45a0895b3c9d | Electron configuration
Electron atomic and molecular orbitals
A Bohr Diagram of lithium
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals.[1] For example, the electron configuration of the neon atom is 1s2 2s2 2p6.
Electronic configurations describe electrons as each moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, an energy is associated with each electron configuration and, upon certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. The concept is also useful for describing the chemical bonds that hold atoms together. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
See also: Electron shell
[Table: subshell labels s (ℓ = 0) and p (ℓ = 1)]
Electron configuration was first conceived of under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states, which share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin—each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually noted by an up-arrow) and one with a spin −1/2 (with a down-arrow).
A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell.
The numbers of electrons that can occupy each shell and each subshell arises from the equations of quantum mechanics,[2] in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.[3]
Notation
See also: Atomic orbital
Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic orbital labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each orbital (or set of orbitals sharing the same label) placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). Phosphorus (atomic number 15) is as follows: 1s2 2s2 2p6 3s2 3p3.
For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used, since all but the last few subshells are identical to those of one or another of the noble gases. Phosphorus, for instance, differs from neon (1s2 2s2 2p6) only by the presence of a third shell. Thus, the electron configuration of neon is pulled out, and phosphorus is written as follows: [Ne] 3s2 3p3. This convention is useful as it is the electrons in the outermost shell which most determine the chemistry of the element.
For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies which is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti.
The superscript 1 for a singly occupied orbital is not compulsory. It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, l, of 0, 1, 2 or 3 respectively. After "f", the sequence continues alphabetically "g", "h", "i"... (l = 4, 5, 6...), skipping "j", although orbitals of these types are rarely required.[4][5]
The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).
Energy — ground state and excited states
The energy associated to an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state.
As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p orbital, to obtain the 1s2 2s2 2p6 3p1 configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm.
Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case for example to excite a 2p electron to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration.
The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule.
History
Niels Bohr (1923) was the first to propose that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[6] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6).
The following year, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[7] However neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).
Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect must be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925):[8]
It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [l], j [ml] and m [ms].
The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[9] see below) for the order in which atomic orbitals are filled with electrons.
Atoms: Aufbau principle and Madelung rule
The Aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[10]
a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
The approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes orbitals outside the range of the diagram, starting with 8s.)
The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936,[9] and later given a theoretical justification by V.M. Klechkowski[11]
1. Orbitals are filled in the order of increasing n+l;
2. Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s)
In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (Uuo, Z = 118).
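As a small illustration (not part of the article), Madelung's rule can be implemented directly by sorting subshells on (n + l, n) and filling by the Aufbau principle; this reproduces, for example, the phosphorus configuration quoted earlier, while of course missing the exceptions discussed below.

```python
# Generate the Madelung filling order and build Aufbau configurations.
letters = 's p d f g h i'.split()

subshells = [(n, l) for n in range(1, 9) for l in range(n) if l < len(letters)]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # Madelung's rule

def configuration(Z):
    parts, remaining = [], Z
    for n, l in subshells:
        if remaining <= 0:
            break
        e = min(2 * (2 * l + 1), remaining)             # subshell capacity
        parts.append(f"{n}{letters[l]}{e}")
        remaining -= e
    return ' '.join(parts)

print(configuration(15))   # 1s2 2s2 2p6 3s2 3p3  (phosphorus)
print(configuration(24))   # chromium per the rule; the real atom is [Ar] 4s1 3d5
```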
The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry.
Periodic table
Electron configuration table
The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. In general, the periodicity of the periodic table in terms of periodic table blocks is clearly due to the number of electrons (2, 6, 10, 14...) needed to fill s, p, d, and f subshells.
The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.[12] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[13] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.
Shortcomings of the Aufbau principle
The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly[14] (although there are mathematical approximations available, such as the Hartree–Fock method).
The fact that the Aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all, that, within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift.)
Ionization of the transition metals
The naïve application of the Aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n+l = 4 (n = 4, l = 0) while the 3d-orbital has n+l = 5 (n = 3, l = 2). After calcium, most neutral atoms in the first series of transition metals (Sc-Zn) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons".
The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals.[15] The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ...
This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree-Fock method of atomic structure calculation.[16]
Similar ion-like 3dx4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6 with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom.
Other exceptions to Madelung's rule
There are several more exceptions to Madelung's rule among the heavier elements, and it becomes increasingly difficult to resort to simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[17] which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[18] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[19] The table below shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page).
Electron shells filled in violation of Madelung's rule[20]

| Period 4 | Period 5 | Period 6 | Period 7 |
|---|---|---|---|
| | | Lanthanum 57: [Xe] 6s2 5d1 | Actinium 89: [Rn] 7s2 6d1 |
| | | Cerium 58: [Xe] 6s2 4f1 5d1 | Thorium 90: [Rn] 7s2 6d2 |
| | | Praseodymium 59: [Xe] 6s2 4f3 | Protactinium 91: [Rn] 7s2 5f2 6d1 |
| | | Neodymium 60: [Xe] 6s2 4f4 | Uranium 92: [Rn] 7s2 5f3 6d1 |
| | | Promethium 61: [Xe] 6s2 4f5 | Neptunium 93: [Rn] 7s2 5f4 6d1 |
| | | Samarium 62: [Xe] 6s2 4f6 | Plutonium 94: [Rn] 7s2 5f6 |
| | | Europium 63: [Xe] 6s2 4f7 | Americium 95: [Rn] 7s2 5f7 |
| | | Gadolinium 64: [Xe] 6s2 4f7 5d1 | Curium 96: [Rn] 7s2 5f7 6d1 |
| | | Terbium 65: [Xe] 6s2 4f9 | Berkelium 97: [Rn] 7s2 5f9 |
| Scandium 21: [Ar] 4s2 3d1 | Yttrium 39: [Kr] 5s2 4d1 | Lutetium 71: [Xe] 6s2 4f14 5d1 | Lawrencium 103: [Rn] 7s2 5f14 7p1 |
| Titanium 22: [Ar] 4s2 3d2 | Zirconium 40: [Kr] 5s2 4d2 | Hafnium 72: [Xe] 6s2 4f14 5d2 | Rutherfordium 104: [Rn] 7s2 5f14 6d2 |
| Vanadium 23: [Ar] 4s2 3d3 | Niobium 41: [Kr] 5s1 4d4 | Tantalum 73: [Xe] 6s2 4f14 5d3 | |
| Chromium 24: [Ar] 4s1 3d5 | Molybdenum 42: [Kr] 5s1 4d5 | Tungsten 74: [Xe] 6s2 4f14 5d4 | |
| Manganese 25: [Ar] 4s2 3d5 | Technetium 43: [Kr] 5s2 4d5 | Rhenium 75: [Xe] 6s2 4f14 5d5 | |
| Iron 26: [Ar] 4s2 3d6 | Ruthenium 44: [Kr] 5s1 4d7 | Osmium 76: [Xe] 6s2 4f14 5d6 | |
| Cobalt 27: [Ar] 4s2 3d7 | Rhodium 45: [Kr] 5s1 4d8 | Iridium 77: [Xe] 6s2 4f14 5d7 | |
| Nickel 28: [Ar] 4s2 3d8 or [Ar] 4s1 3d9 (disputed)[21] | Palladium 46: [Kr] 4d10 | Platinum 78: [Xe] 6s1 4f14 5d9 | |
| Copper 29: [Ar] 4s1 3d10 | Silver 47: [Kr] 5s1 4d10 | Gold 79: [Xe] 6s1 4f14 5d10 | |
| Zinc 30: [Ar] 4s2 3d10 | Cadmium 48: [Kr] 5s2 4d10 | Mercury 80: [Xe] 6s2 4f14 5d10 | |
The electron-shell configuration of elements beyond rutherfordium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120.[22]
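To see where the naive rule succeeds and fails, here is a hedged Python sketch (again illustrative, not from the source) that writes the configuration predicted by filling the Madelung order with up to 2(2l + 1) electrons per subshell; comparing its output with the table above shows which elements violate the rule:

```python
def naive_aufbau(z: int) -> str:
    """Naive Aufbau configuration: fill subshells in Madelung order,
    2*(2l + 1) electrons per subshell. Ignores all exceptions."""
    letters = "spdfghi"
    order = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], z
    for n, l in order:
        if remaining <= 0:
            break
        e = min(remaining, 2 * (2 * l + 1))
        parts.append(f"{n}{letters[l]}{e}")
        remaining -= e
    return " ".join(parts)

print(naive_aufbau(24))  # ... 4s2 3d4, but real chromium is ... 4s1 3d5
print(naive_aufbau(29))  # ... 4s2 3d9, but real copper is ... 4s1 3d10
```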
Electron configuration in molecules
In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[23] rather than the atomic orbital labels used for atoms and monatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.
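The unpaired-electron count behind that paramagnetism is a direct consequence of Hund's rule. Here is a small sketch of the counting argument (an illustration under the stated filling rule, not a molecular orbital calculation):

```python
def unpaired_electrons(n_electrons: int, n_degenerate_orbitals: int) -> int:
    """Hund's rule filling of one degenerate level:
    singly occupy every orbital before any pairing."""
    singles = min(n_electrons, n_degenerate_orbitals)
    paired = n_electrons - singles       # electrons forced to pair up
    return singles - paired

# Two electrons in the two degenerate pi* orbitals of O2:
print(unpaired_electrons(2, 2))  # 2 unpaired spins -> paramagnetic
```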
The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings.
Electron configuration in solids
In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.
Applications
The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified form of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.
This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the Aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method which discards the model.
For atoms or molecules with more than one electron, the motions of the electrons are correlated, and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations, and therefore the notion of electronic configuration remains essential for multi-electron systems.
A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.
References
1. ^ a b IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "configuration (electronic)".
2. ^ a b In formal terms, the quantum numbers n, l and m arise from the fact that the solutions to the time-independent Schrödinger equation for hydrogen-like atoms are based on spherical harmonics.
4. ^ Weisstein, Eric W. (2007). "Electron Orbital". wolfram.
5. ^ Ebbing, Darrell D.; Gammon, Steven D. (2007-01-12). General Chemistry. p. 284. ISBN 978-0-618-73879-3.
6. ^ Bohr, Niels (1923). "Über die Anwendung der Quantentheorie auf den Atombau. I". Zeitschrift für Physik 13: 117. Bibcode:1923ZPhy...13..117B. doi:10.1007/BF01328209.
7. ^ Stoner, E.C. (1924). "The distribution of electrons among atomic levels". Philosophical Magazine (6th Ser.) 48 (286): 719–36. doi:10.1080/14786442408634535.
8. ^ Pauli, Wolfgang (1925). "Über den Einfluss der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt". Zeitschrift für Physik 31: 373. Bibcode:1925ZPhy...31..373P. doi:10.1007/BF02980592. English translation from Scerri, Eric R. (1991). "The Electron Configuration Model, Quantum Mechanics and Reduction". Br. J. Phil. Sci. 42 (3): 309–25. doi:10.1093/bjps/42.3.309.
9. ^ a b Madelung, Erwin (1936). Mathematische Hilfsmittel des Physikers. Berlin: Springer.
11. ^ Wong, D. Pan (1979). "Theoretical justification of Madelung's rule". Journal of Chemical Education 56 (11): 714–18. Bibcode:1979JChEd..56..714W. doi:10.1021/ed056p714.
12. ^ The similarities in chemical properties and the numerical relationship between the atomic weights of calcium, strontium and barium was first noted by Johann Wolfgang Döbereiner in 1817.
13. ^ Scerri, Eric R. (1998). "How Good Is the Quantum Mechanical Explanation of the Periodic System?". Journal of Chemical Education 75 (11): 1384–85. Bibcode:1998JChEd..75.1384S. doi:10.1021/ed075p1384. Ostrovsky, V.N. (2005). "On Recent Discussion Concerning Quantum Justification of the Periodic Table of the Elements". Foundations of Chemistry 7 (3): 235–39. doi:10.1007/s10698-005-2141-y.
14. ^ Electrons are identical particles, a fact that is sometimes referred to as "indistinguishability of electrons". A one-electron solution to a many-electron system would imply that the electrons could be distinguished from one another, and there is strong experimental evidence that they can't be. The exact solution of a many-electron system is a n-body problem with n ≥ 3 (the nucleus counts as one of the "bodies"): such problems have evaded analytical solution since at least the time of Euler.
15. ^ There are some cases in the second and third series where the electron remains in an s-orbital.
16. ^ Melrose, Melvyn P.; Scerri, Eric R. (1996). "Why the 4s Orbital is Occupied before the 3d". Journal of Chemical Education 73 (6): 498–503. Bibcode:1996JChEd..73..498M. doi:10.1021/ed073p498.
17. ^ Meek, Terry L.; Allen, Leland C. (2002). "Configuration irregularities: deviations from the Madelung rule and inversion of orbital energy levels". Chem. Phys. Lett. 362 (5–6): 362–64. Bibcode:2002CPL...362..362M. doi:10.1016/S0009-2614(02)00919-3.
18. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "relativistic effects".
19. ^ Pyykkö, Pekka (1988). "Relativistic effects in structural chemistry". Chem. Rev. 88 (3): 563–94. doi:10.1021/cr00085a006.
20. ^ Miessler, G. L.; Tarr, D. A. (1999). Inorganic Chemistry (2nd ed.). Prentice-Hall. p. 38.
21. ^ Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press. pp. 239–240. ISBN 0-19-530573-6.
23. ^ The labels are written in lowercase to indicate that they correspond to one-electron functions. They are numbered consecutively for each symmetry type (irreducible representation in the character table of the point group for the molecule), starting from the orbital of lowest energy for that type.
Erwin Schrödinger
[shroh-ding-er, shrey-; Ger. shrœ-ding-uhr]
Schrödinger, Erwin, 1887-1961, Austrian theoretical physicist. He was educated at Vienna, taught at Breslau and Zürich, and was professor at the Univ. of Berlin (1927-33), fellow of Magdalen College, Oxford (1933-36), and professor at the Univ. of Graz (1936-38), the Dublin Institute for Advanced Studies (1940-57), and the Univ. of Vienna (1957-61). Schrödinger is known for his mathematical development of wave mechanics (1926), a form of quantum mechanics (see quantum theory), and his formulation of the wave equation that bears his name. The Schrödinger equation is the most widely used mathematical tool of the modern quantum theory. For this work he shared the 1933 Nobel Prize in Physics with P. A. M. Dirac.
See studies by C. W. Kilmister, ed. (1987) and W. J. Moore (1989).
Schrödinger's cat is a seemingly paradoxical thought experiment devised by Erwin Schrödinger that attempts to illustrate the incompleteness of the theory of quantum mechanics when going from subatomic to macroscopic systems.
The original formulation of Schrödinger's cat
In 1935, Schrödinger published an essay describing the conceptual problems in quantum mechanics. A brief paragraph in this essay described the cat paradox: a cat is sealed in a chamber with a Geiger counter, a tiny amount of radioactive substance, and a flask of poison rigged to shatter if a single atom decays; after an hour, the quantum-mechanical description of the whole system appears to contain the living and the dead cat in equal parts.
Adaptations in science fiction
It was not long before science-fiction writers picked up this evocative concept, often using it in a humorous vein. Several have taken the thought experiment a step further, pointing out extra complications which might arise should the experiment actually be performed.
For example, in his novel American Gods, Neil Gaiman has a character observe, "if they don't ever open the box to feed it'll eventually just be two different kinds of dead." Likewise, Terry Pratchett's Lords and Ladies adds the issue of a third possible state, in the case of Greebo, "Bloody Furious." Douglas Adams describes an attempt to enact the experiment in Dirk Gently's Holistic Detective Agency. By using clairvoyance to see inside the box, it was found that the cat was neither alive nor dead, but missing, and Dirk's services were employed in order to recover it.
In "Schrödinger's Cat-Sitter" by F. Gwynplaine MacIntyre (published in Analog magazine, July/August 2001), a time-traveler named Smedley Faversham visits the past to interview Erwin Schrödinger but gets tricked into taking care of Schrödinger's wife's cat while she is away and Schrödinger is visiting Max Planck. In attempting to take care of the cat, Faversham inadvertently locks it in a cabinet with a Geiger counter, a vial of acid, and a hammer, unintentionally enacting Schrödinger's thought experiment, but with results that remain as uncertain as in the original case.
Yet another example of the cat in popular fiction is the cat Quark, from Jeff Noon's book Automated Alice. In it, Alice faces the question "Am I real, or am I fake?", which is much like "Is it alive, or is it dead?" Near the end of the book, Alice encounters a cat named Quark, who became invisible after being locked in a box and having a strange substance poured in, mixing it with a chameleon. The character draws on both the Cheshire Cat and Schrödinger's cat, the Cheshire Cat and the Alice books having already been likened to the experiment.
The American science-fiction writer and psychologist Robert Anton Wilson wrote the Schrödinger's Cat trilogy as the spiritual sequel to The Illuminatus! Trilogy. The storyline of this novel interweaves many characters who live in parallel universes, and each part of the novel is numbered as "Part One".
In Dan Simmons' books Endymion and The Rise of Endymion, one of the main protagonists is sentenced to death by being locked in a larger version of a Schrödinger's cat-box, so that random chance, rather than any single person, is responsible for his eventual death.
In Flatterland
On a somewhat more serious level, Ian Stewart's novel Flatterland (a sequel to Flatland) attempts to explain many concepts in modern mathematics and physics through the device of having a young female Flatlander explore other parts of the "Mathiverse." Schrödinger's Cat is just one of the many strange Mathiverse denizens she and her guide meet; the cat is still uncertain whether it is alive or dead, long after it left the box. Her guide, the Space Hopper, reassures the Cat with a modern view of quantum decoherence. Ursula K. Le Guin wrote a story entitled "Schrödinger's Cat" in 1974 (reprinted in The Compass Rose, published in 1982), which also deals with decoherence. Greg Egan's novel Quarantine, billed as "a story of quantum catastrophe," features an alternative solution to the paradox: in Egan's version of quantum mechanics, the wave function does not collapse naturally. Only certain living things, human beings among them, collapse the wave function of things they observe. Humans are therefore highly dangerous to other lifeforms which require the full diversity of uncollapsed wavefunctions to survive.
The novel OTEC is set inside an artificial reality and raises questions about quantum behavior inside simulated realities. In the simulated realities, the uncertainty principle (and Schrödinger's cat) is expected behavior. (Finite CPU limitations force the reality generator to invoke record locking on quantum-level measurements of the position and energy of electrons.)
Even with a cubic mile of nanotech computer, CPU power is finite, and it is not wasted on real-time calculations of electron position and energy, calculations that would not be referenced in any case. OTEC also asserts that while natural realities may or may not have Schrödinger's cats, artificial realities must have them.
In Quarantine
As Egan notes, Schrödinger's hypothetical cat is one of the most familiar illustrations of quantum-mechanical oddities. In Quarantine, a physicist asks the narrator, an ex-cop and private investigator, if he has ever heard of "the quantum measurement problem." The narrator is naturally confused, but when asked if he's heard of Schrödinger's cat, he replies, "Of course."
In The Cat Who Walked Through Walls
The title character (though not a main character) of R.A. Heinlein's The Cat Who Walked Through Walls, a kitten named Pixel, is of indeterminate existence and as such has the ability to turn up in places that are specifically sealed to outside access. When this ability is questioned, the answer is "He's Schrödinger's cat", leading to the response, "Well, tell Schrödinger to come get his cat," or words to that effect.
Animals other than cats
Fiction writers have confined other animals besides cats in such contraptions. Dan Simmons's novel Endymion begins with hero Raul Endymion sentenced to death by imprisonment in a Schrödinger box.
In the fortieth-anniversary Doctor Who audio drama "Zagreus" (2003), the Doctor is locked in a lead-lined box also containing cyanide in an effort to explain his situation of being neither dead nor alive. Afterwards, the Doctor does mention that he has met Schrödinger's Cat.
Kosuke Fujishima's manga series Ah! My Goddess featured a play on Schrödinger's cat. During one storyline, a storage room was expanded to infinite proportions, and the main characters encountered a Schrödinger's Whale, a rare species with the ability to travel through space-time in a five-dimensional quantum state.
In the eroge (Japanese erotic game) Itsuka Todoku Anosora ni, made by Lump of Sugar, the setting is the city of Koumeishi. In one story line, a main heroine, Konome, explains the story of Schrödinger's cat. Later, one can see that Koumeishi itself is in the same situation as Schrödinger's cat: sealed off so that no one can enter or leave, while the people inside are given all their basic needs. They still live in their city with some understanding of the outside world, but they cannot question their existence or form any intention to leave. Thus Koumeishi exists as part of Tokyo and, at the same time, not as part of Tokyo.
Another, less apparent, reference to Schrödinger's cat comes from the popular collection of short, humorous stories, The Bastard Operator From Hell, written by Simon Travaglia. While attempting to trick the CEO of the company that he works for into upgrading their telecoms systems, the narrator (affectionately referred to as 'the Bastard') makes up a false explanation for why the company experiences low bandwidth during a videoconferencing session: "It's a problem with Heisenberg's certainty principle of video compression... It's a famous quantum physics experiment which videoed cats in boxes. The more cats, the more certainty that you'll get quantum disturbance in video compression."
In this instance the author pays homage to Heisenberg, who ultimately influenced the creation of Schrödinger's hypothesis. It also, more obviously and more humorously, asserts that Heisenberg completed the experiment (which he did not even theorize) and that, in place of the killing apparatus inside the box, there were video cameras. This would make no sense to an educated person, yet it fooled the CEO because of his interest in videoconferencing. While the claim that one will experience quantum disturbance in live videoconferences because of the length of the session (or the number of cats sitting in on it) is unfounded, the rest of the Bastard's statements about a bandwidth upgrade do 'fix' the problem.
In LOLcats
Not surprisingly, Schrödinger's experiment with cats has led to his being satirized extensively by the lolcat community.
Schrödinger's Cat used in television series
In Bones (TV series)
In an episode entitled "The Pain in the Heart" (aired 5/19/08), Dr. Jack Hodgins tells Dr. Zack Addy that a crime scene is like Schrödinger's cat: the lab was itself a crime scene, so they could not disturb the scene, yet they could not solve the crime without entering the lab.
In The Big Bang Theory
In the episode "The Tangerine Factor", which aired 05/19/08, Leonard's attempt to arrange a date with Penny results in both Penny and Leonard seeking Sheldon's advice. Sheldon advises Penny that, just like Schrödinger's cat being alive and dead at the same time, her date with Leonard currently has both good and bad probabilistic outcomes; the only way to find out is to "open the box", in other words to collapse the wave function of the uncertain date into a specific outcome. Penny misunderstands Sheldon's argument and interprets his advice as general encouragement to go on the date; a long session of Sheldon trying to get his point across to Penny apparently ensues, with Sheldon reciting the definition of Schrödinger's cat every time. Later Sheldon mentions Schrödinger's cat to Leonard, who instantly grasps the implied wave-function collapse as "brilliant". (As previously established in the series, when Leonard attempts to ask Leslie out, the "success of the whole date is determined by chemistry of good-bye kiss at the very end".) At the appointed hour, when Leonard comes by to pick up Penny, she is clearly even more uncomfortable, concerned that going out on a date may ruin their friendship. Leonard mentions Schrödinger's cat to Penny, to which she replies that she has heard "far too much about Schrödinger's cat". Leonard interprets that as a sign of approval and passionately kisses Penny. The probability function collapses into a clear, determined outcome: Penny enjoys the kiss and clearly has no more fears or concerns about going out with Leonard. Recognizing her own chemistry with Leonard, Penny finally shows she understands the Schrödinger's cat analogy by remarking that "the cat is alive".
In CSI: Crime Scene Investigation
In an episode entitled "The Theory of Everything", the dead elderly couple had a gravestone for a cat named Schrödinger Martin. This is a reference to the theme of the episode, which is that everyone in the cases is connected by string theory.
In Doctor Who
The 2007 episode "Blink" features a race of beings referred to as Weeping Angels. They are described as "quantum-locked": they freeze into stone whenever they are observed, but they can move, and prove deadly, when unobserved.
In Numb3rs
While Don is burdened by the possibility of having wrongly sent an innocent man to jail based on flawed evidence, his brother Charlie and physics professor Larry remark that the evidence "proving Don right and wrong at the same time" is the "old paradox of Schrödinger's cat". Don's father then asks if that's "that Persian that keeps hiding out in [his] garage". (Episode first aired 1 April 2005.)
In Futurama
A montage of some of Professor Farnsworth's achievements includes Schrödinger's Kit Kat Club. Episode 2ACV10 - "A Clone of my Own"
In Hellsing
A Nazi character in the popular manga Hellsing by Kouta Hirano, a young boy with cat ears named Schrödinger, can be in several places at once.
In Higurashi no Naku Koro ni
An enigmatic character in the series, Frederica Bernkastel, writes in one of her poems about the experiment Schrödinger performed on the cat, concluding on the sad note that it died.
In House M.D.
In an episode entitled "The Right Stuff", while talking about the appearance of Alison Cameron (supposedly in Arizona), Doctor Wilson remarks, "...since she's not a dead cat, it is scientifically impossible for her to be in two places at once."
The central character Gregory House replies, "Physics joke: don't hear enough of those."
In Six Feet Under
In an episode entitled "A Perfect Circle", the character Nate has a vision of watching a made-up television show that discusses the theory in brief.
In Sliders
Quinn has a pet cat named Schrödinger.
In Stargate SG-1
Samantha Carter gives a pet cat named Schrödinger to Narim, a member of the Tollan, a society several centuries ahead of Earth in technology. After Carter explains the name Schrödinger, Narim comments that in his society this is called Kulivrian physics. When Carter asks if he has studied it, he replies: "Yeah, I've studied it... in among other misconceptions of elementary science."
In Yu-Gi-Oh! GX
Dr. Eisenstein uses a card known as "Schrödinger's Cat".
In The West Wing
In Season 7 Ellie Bartlet gets married at the White House. The name of the band playing the reception is Schrödinger's Cats.
In video games
In Digital Devil Saga
In this game produced by Atlus, there is an enigmatic cat-like creature, revealed to have some connection to God, that the main character can see throughout the games.
In Wild Arms 3
The character of Shady the Cat, owned by a Maya Schrödinger, is based on Schrödinger's cat, and is claustrophobic as a result of the "experiment."
In NetHack
One of the monsters encountered in this text-based game is called the 'Quantum Mechanic', and it often carries a chest. Opening the chest, one would (sometimes) find the corpse of Schrödinger's cat.
Determining the Radial Part of a Wave Function
In quantum physics, you can determine the radial part of a wave function when you work on problems that have a central potential. With central potential problems, you’re able to separate the wave function into a radial part (which depends on the form of the potential) and an angular part, which is a spherical harmonic.
You can give the radial part of the wave function the name Rnl(r), where n is a quantum number corresponding to the quantum state of the radial part of the wave function and l is the total angular momentum quantum number. The radial part is independent of the angles, so it can't depend on m, the quantum number of the z component of the angular momentum. In other words, the wave function for particles in central potentials looks like the following equation in spherical coordinates:

\psi(r, \theta, \phi) = R_{nl}(r) \, Y_{lm}(\theta, \phi)

The next step is to solve for Rnl(r) in general. Substituting

\psi(r, \theta, \phi) = R_{nl}(r) \, Y_{lm}(\theta, \phi)

from the preceding equation into the Schrödinger equation,

\left[ -\frac{\hbar^2}{2m} \nabla^2 + V(r) \right] \psi(r, \theta, \phi) = E \, \psi(r, \theta, \phi)

gives you

-\frac{\hbar^2}{2m} \left[ \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 \frac{\partial}{\partial r} \right) - \frac{\mathbf{L}^2}{\hbar^2 r^2} \right] R_{nl}(r) \, Y_{lm}(\theta,\phi) + V(r) \, R_{nl}(r) \, Y_{lm}(\theta,\phi) = E \, R_{nl}(r) \, Y_{lm}(\theta,\phi)

Okay, what can you make of this? First, note that the spherical harmonics are eigenfunctions of L2 (that's the whole reason for using them), with eigenvalue

\mathbf{L}^2 \, Y_{lm}(\theta,\phi) = \hbar^2 \, l(l+1) \, Y_{lm}(\theta,\phi)

So the last term in this equation is simply

\frac{\hbar^2 \, l(l+1)}{2m r^2} R_{nl}(r) \, Y_{lm}(\theta,\phi)

That means that the equation takes the form

-\frac{\hbar^2}{2m} \frac{1}{r^2}\frac{d}{dr}\left( r^2 \frac{dR_{nl}(r)}{dr} \right) Y_{lm}(\theta,\phi) + \left[ \frac{\hbar^2 l(l+1)}{2m r^2} + V(r) \right] R_{nl}(r) \, Y_{lm}(\theta,\phi) = E \, R_{nl}(r) \, Y_{lm}(\theta,\phi)

which, after dividing out the spherical harmonic Ylm(θ, φ), equals

-\frac{\hbar^2}{2m} \frac{1}{r^2}\frac{d}{dr}\left( r^2 \frac{dR_{nl}(r)}{dr} \right) + \left[ \frac{\hbar^2 l(l+1)}{2m r^2} + V(r) \right] R_{nl}(r) = E \, R_{nl}(r)

The preceding equation is the one you use to determine the radial part of the wave function, Rnl(r). It's called the radial equation for a central potential.

When you solve the radial equation for Rnl(r), you can then find the full wave function ψ(r, θ, φ) = Rnl(r) Ylm(θ, φ), because you already know the spherical harmonic Ylm(θ, φ). So, you're simply finding the solution to the radial equation.
Schrödinger equation
For a more general introduction to the topic, see Introduction to quantum mechanics.
In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of some physical system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.[1]
In classical mechanics, the equation of motion is Newton's second law, and equivalent formulations are the Euler–Lagrange equations and Hamilton's equations. All of these formulations are used to solve for the motion of a mechanical system and mathematically predict what the system will do at any time beyond the initial settings and configuration of the system.
In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but (in general) a linear partial differential equation. The differential equation describes the wave function of the system, also called the quantum state or state vector.[2]:1–2
The concept of a state vector is a fundamental postulate of quantum mechanics. But Schrödinger's equation, although often presented as a postulate, can in fact be derived from symmetry principles.[3]:Chapter 3
In the standard interpretation of quantum mechanics, the wave function is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe.[4]:292ff
Like Newton's second law (F = ma), the Schrödinger equation can be mathematically transformed into other formulations such as Werner Heisenberg's matrix mechanics, and Richard Feynman's path integral formulation. Also like Newton's second law, the Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem that is not as severe in matrix mechanics and completely absent in the path integral formulation.
Time-dependent equation
The form of the Schrödinger equation depends on the physical situation (see below for special cases). The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[5]:143
Time-dependent Schrödinger equation (general)
where i is the imaginary unit, ħ is Planck's constant divided by 2π, the symbol "∂/∂t" indicates a partial derivative with respect to time t, Ψ is the wave function of the quantum system, and \hat{H} is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation).
A wave function that satisfies the non-relativistic Schrödinger equation with V=0. In other words, this corresponds to a particle traveling freely through empty space. The real part of the wave function is plotted here.
The most famous example is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field; see the Pauli equation):
Time-dependent Schrödinger equation (single non-relativistic particle)
i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},t) = \left [ \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t)\right ] \Psi(\mathbf{r},t)
where m is the particle's mass, V is its potential energy, ∇2 is the Laplacian, and Ψ is the wave function (more precisely, in this context, it is called the "position-space wave function"). In plain language, it means "total energy equals kinetic energy plus potential energy", but the terms take unfamiliar forms for reasons explained below.
Given the particular differential operators involved, this is a linear partial differential equation. It is also a diffusion equation, but unlike the heat equation, this one is also a wave equation given the imaginary unit present in the transient term.
The term "Schrödinger equation" can refer to both the general equation (first box above), or the specific nonrelativistic version (second box above and variations thereof). The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in various complicated expressions for the Hamiltonian. The specific nonrelativistic version is a simplified approximation to reality, which is quite accurate in many situations, but very inaccurate in others (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, the Hamiltonian operator is set up for the system, accounting for the kinetic and potential energy of the particles constituting the system, then inserted into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system.
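For intuition about what "solving for the wave function" looks like in practice, here is a schematic Python sketch (an addition, with arbitrary demo parameters) that propagates a one-dimensional Gaussian wave packet under the non-relativistic equation using the split-step Fourier method, in units where ħ = m = 1:

```python
import numpy as np

# Grid and initial Gaussian wave packet (units with hbar = m = 1)
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = np.exp(-x**2 / 4) * np.exp(1j * 1.0 * x)   # packet with mean momentum 1
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

V = np.zeros_like(x)          # free particle; any V(x) can be substituted
dt, steps = 0.01, 1000

for _ in range(steps):
    psi *= np.exp(-0.5j * V * dt)                                     # half step: potential
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))    # full step: kinetic
    psi *= np.exp(-0.5j * V * dt)                                     # half step: potential

prob = np.abs(psi)**2
print("norm =", np.trapz(prob, x))     # stays ~1: the evolution is unitary
print("<x>  =", np.trapz(x * prob, x)) # packet has drifted to ~ t * <p>/m = 10
```

Substituting any potential array for V turns the same loop into a solver for a particle in that potential.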
Time-independent equation
Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary".
The time-independent Schrödinger equation predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and moreover if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. The time-independent Schrödinger equation is the equation describing stationary states. (It is only used when the Hamiltonian itself is not dependent on time. In general, the wave function still has a time dependency.)
Time-independent Schrödinger equation (general)
E\Psi=\hat H \Psi
In words, the equation states:
When the Hamiltonian operator acts on a certain wave function Ψ, and the result is proportional to the same wave function Ψ, then Ψ is a stationary state, and the proportionality constant, E, is the energy of the state Ψ.
The time-independent Schrödinger equation is discussed further below. In linear algebra terminology, this equation is an eigenvalue equation.
As before, the most famous manifestation is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field):
Time-independent Schrödinger equation (single non-relativistic particle)
E \Psi(\mathbf{r}) = \left[ \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) \right] \Psi(\mathbf{r})
with definitions as above.
The Schrödinger equation, and its solutions, introduced a breakthrough in thinking about physics. Schrödinger's equation was the first of its type, and solutions led to consequences that were very unusual and unexpected for the time.
Total, kinetic, and potential energy
The overall form of the equation is not unusual or unexpected as it uses the principle of the conservation of energy. The terms of the nonrelativistic Schrödinger equation can be interpreted as total energy of the system, equal to the system kinetic energy plus the system potential energy. In this respect, it is just the same as in classical physics.
The Schrödinger equation predicts that if certain properties of a system are measured, the result may be quantized, meaning that only specific discrete values can occur. One example is energy quantization: the energy of an electron in an atom is always one of the quantized energy levels, a fact discovered via atomic spectroscopy. (Energy quantization is discussed below.) Another example is quantization of angular momentum. This was an assumption in the earlier Bohr model of the atom, but it is a prediction of the Schrödinger equation.
Another result of the Schrödinger equation is that not every measurement gives a quantized result in quantum mechanics. For example, position, momentum, time, and (in some situations) energy can have any value across a continuous range.[6]:165-167
Measurement and uncertainty
In classical mechanics, a particle has, at every moment, an exact position and an exact momentum. These values change deterministically as the particle moves according to Newton's laws. In quantum mechanics, particles do not have exactly determined properties, and when they are measured, the result is randomly drawn from a probability distribution. The Schrödinger equation predicts what the probability distributions are, but fundamentally cannot predict the exact result of each measurement.
The Heisenberg uncertainty principle is the statement of the inherent measurement uncertainty in quantum mechanics. It states that the more precisely a particle's position is known, the less precisely its momentum is known, and vice versa.
Quantum tunneling
Main article: Quantum tunneling
Quantum tunneling through a barrier. A particle coming from the left does not have enough energy to climb the barrier. However, it can sometimes "tunnel" to the other side.
In classical physics, when a ball is rolled slowly up a large hill, it will come to a stop and roll back, because it doesn't have enough energy to get over the top of the hill to the other side. However, the Schrödinger equation predicts that there is a small probability that the ball will get to the other side of the hill, even if it has too little energy to reach the top. This is called quantum tunneling. It is related to the uncertainty principle: Although the ball seems to be on one side of the hill, its position is uncertain so there is a chance of finding it on the other side.
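The size of this tunneling probability can be estimated in closed form for an idealized rectangular barrier. The sketch below evaluates the standard textbook transmission coefficient for a particle of energy E < V0 hitting a barrier of height V0 and width a; this formula is a well-known result of solving the time-independent equation piecewise, though it is not quoted in this article, and the numbers are arbitrary:

```python
import numpy as np

def transmission(E, V0, a, m=1.0, hbar=1.0):
    """Transmission probability through a rectangular barrier for E < V0:
    T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

# Classically forbidden, yet the probability is not zero:
print(transmission(E=0.5, V0=1.0, a=1.0))   # ~0.42
print(transmission(E=0.5, V0=1.0, a=5.0))   # ~2e-4: drops steeply with width
```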
Particles as waves[edit]
A double slit experiment showing the accumulation of electrons on a screen as time passes.
The nonrelativistic Schrödinger equation is a type of partial differential equation called a wave equation. Therefore it is often said particles can exhibit behavior usually attributed to waves. In most modern interpretations this description is reversed – the quantum state, i.e. wave, is the only genuine physical reality, and under the appropriate conditions it can show features of particle-like behavior.
Two-slit diffraction is a famous example of the strange behaviors that waves regularly display, that are not intuitively associated with particles. The overlapping waves from the two slits cancel each other out in some locations, and reinforce each other in other locations, causing a complex pattern to emerge. Intuitively, one would not expect this pattern from firing a single particle at the slits, because the particle should pass through one slit or the other, not a complex overlap of both.
However, since the Schrödinger equation is a wave equation, a single particle fired through a double-slit does show this same pattern (figure on left). Note: The experiment must be repeated many times for the complex pattern to emerge. The appearance of the pattern proves that each electron passes through both slits simultaneously.[7][8][9] Although this is counterintuitive, the prediction is correct; in particular, electron diffraction and neutron diffraction are well understood and widely used in science and engineering.
Related to diffraction, particles also display superposition and interference.
The superposition property allows the particle to be in a quantum superposition of two or more states with different classical properties at the same time. For example, a particle can have several different energies at the same time, and can be in several different locations at the same time. In the above example, a particle can pass through two slits at the same time. This superposition is still a single quantum state, as shown by the interference effects, even though that conflicts with classical intuition.
Interpretation of the wave function
The Schrödinger equation provides a way to calculate the possible wave functions of a system and how they dynamically change in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. Interpretations of quantum mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results of experimental measurements.
An important aspect is the relationship between the Schrödinger equation and wavefunction collapse. In the oldest Copenhagen interpretation, particles follow the Schrödinger equation except during wavefunction collapse, during which they behave entirely differently. The advent of quantum decoherence theory allowed alternative approaches (such as the Everett many-worlds interpretation and consistent histories), wherein the Schrödinger equation is always satisfied, and wavefunction collapse should be explained as a consequence of the Schrödinger equation.
Historical background and development
Following Max Planck's quantization of light (see black body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in special relativity, it followed that the momentum p of a photon is inversely proportional to its wavelength λ, or proportional to its wavenumber k.
p = \frac{h}{\lambda} = \hbar k
where h is Planck's constant. Louis de Broglie hypothesized that this is true for all particles, even particles such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.[10] These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L according to:
L = n{h \over 2\pi} = n\hbar.
According to de Broglie the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit:
n \lambda = 2 \pi r.\,
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r.
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation.[11] Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation, and solve for its energy eigenvalues for the hydrogen atom. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen.[12]
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William R. Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system — the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.[13] A modern version of his reasoning is reproduced below. The equation he found is:[14]
i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},\,t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t) + V(\mathbf{r})\Psi(\mathbf{r},\,t).
However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections.[15][16] Schrödinger used the relativistic energy momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
\left(E + {e^2\over r} \right)^2 \psi(x) = - \nabla^2\psi(x) + m^2 \psi(x).
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin in December 1925.[17]
While at the cabin, Schrödinger decided that his earlier non-relativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. Despite difficulties solving the differential equation for hydrogen (he had help from his friend the mathematician Hermann Weyl[18]:3), Schrödinger showed that his non-relativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.[18]:1[19] In the paper, Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(x, t), moving in a potential well V created by the proton. This computation accurately reproduced the energy levels of the Bohr model. In a paper, Schrödinger himself explained this equation as follows:
The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog.
—Erwin Schrödinger[20]
This 1926 paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as opposed to Heisenberg's matrix mechanics, which he considered overly formal.[21]
The Schrödinger equation details the behavior of ψ but says nothing of its nature. Schrödinger tried to interpret it as a charge density in his fourth paper, but he was unsuccessful.[22]:219 In 1926, just a few days after Schrödinger's fourth and final paper was published, Max Born successfully interpreted ψ as the probability amplitude, whose absolute square is equal to probability density.[22]:220 Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities—much like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory— and never reconciled with the Copenhagen interpretation.[23]
Louis de Broglie in his later years proposed a real valued wave function connected to the complex wave function by a proportionality constant and developed the De Broglie–Bohm theory.
The wave equation for particles
The Schrödinger equation is mathematically a wave equation, since the solutions are functions which describe wave-like motions. Wave equations in physics can normally be derived from other physical laws: the wave equation for mechanical vibrations on strings and in matter can be derived from Newton's laws, where the wave function represents the displacement of matter, and electromagnetic waves from Maxwell's equations, where the wave functions are electric and magnetic fields. The basis for Schrödinger's equation, on the other hand, is the energy of the system and a separate postulate of quantum mechanics: the wave function is a description of the system.[24] The Schrödinger equation is therefore a new concept in itself; as Feynman put it:
Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger.
—Richard Feynman[25]
The equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the De Broglie relations. The solution is the wave function Ψ, which contains all the information that can be known about the system. In the Copenhagen interpretation, the modulus of Ψ is related to the probability the particles are in some spatial configuration at some instant of time. Solving the equation for Ψ can be used to predict how the particles will behave under the influence of the specified potential and with each other.
The Schrödinger equation was developed principally from the de Broglie hypothesis, as a wave equation that would describe particles,[26] and can be constructed as shown informally in the following sections.[27] For a more rigorous description of Schrödinger's equation, see also the references.[28]
Consistency with energy conservation
The total energy E of a particle is the sum of kinetic energy T and potential energy V, this sum is also the frequent expression for the Hamiltonian H in classical mechanics:
E = T + V =H \,\!
Explicitly, for a particle in one dimension with position x, mass m and momentum p, and potential energy V which generally varies with position and time t:
E = \frac{p^2}{2m}+V(x,t)=H.
For three dimensions, the position vector r and momentum vector p must be used:
E = \frac{\mathbf{p}\cdot\mathbf{p}}{2m}+V(\mathbf{r},t)=H
This formalism can be extended to any fixed number of particles: the total energy of the system is then the sum of the kinetic energies of the particles plus the total potential energy, which is again the Hamiltonian. However, there can be interactions between the particles (an N-body problem), so the potential energy V can change as the spatial configuration of particles changes, and possibly with time. The potential energy, in general, is not the sum of the separate potential energies for each particle; it is a function of all the spatial positions of the particles. Explicitly:
E=\sum_{n=1}^N \frac{\mathbf{p}_n\cdot\mathbf{p}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2\cdots\mathbf{r}_N,t) = H \,\!
The simplest wavefunction is a plane wave of the form:
\Psi(\mathbf{r},t) = A e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} \,\!
where the A is the amplitude, k the wavevector, and ω the angular frequency, of the plane wave. In general, physical situations are not purely described by plane waves, so for generality the superposition principle is required; any wave can be made by superposition of sinusoidal plane waves. So if the equation is linear, a linear combination of plane waves is also an allowed solution. Hence a necessary and separate requirement is that the Schrödinger equation is a linear differential equation.
For discrete k the sum is a superposition of plane waves:
\Psi(\mathbf{r},t) = \sum_{n=1}^\infty A_n e^{i(\mathbf{k}_n\cdot\mathbf{r}-\omega_n t)} \,\!
for some real amplitude coefficients An, and for continuous k the sum becomes an integral, the Fourier transform of a momentum space wavefunction:[29]
\Psi(\mathbf{r},t) = \frac{1}{(\sqrt{2\pi})^3}\int\Phi(\mathbf{k})e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}d^3\mathbf{k} \,\!
where d3k = dkxdkydkz is the differential volume element in k-space, and the integrals are taken over all k-space. The momentum wavefunction Φ(k) arises in the integrand since the position and momentum space wavefunctions are Fourier transforms of each other.
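To make the superposition concrete, the following short sketch (an addition; all numbers are arbitrary) builds a localized packet by summing a finite set of plane waves with Gaussian-weighted amplitudes An, a discrete stand-in for the Fourier integral above:

```python
import numpy as np

x = np.linspace(-20, 20, 2000)
k0, sigma_k = 2.0, 0.3
ks = np.linspace(k0 - 3 * sigma_k, k0 + 3 * sigma_k, 61)   # discrete wavevectors
A = np.exp(-((ks - k0)**2) / (2 * sigma_k**2))             # Gaussian amplitudes A_n

# Superpose plane waves at t = 0: Psi(x) = sum_n A_n exp(i k_n x)
psi = (A[:, None] * np.exp(1j * ks[:, None] * x[None, :])).sum(axis=0)

# Each plane wave fills all space; the weighted sum is localized near x = 0
print(np.abs(psi).max(), np.abs(psi)[0])   # large central peak, small tails
```

Each individual plane wave fills all of space, yet the weighted sum interferes destructively away from the origin, leaving a localized packet.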
Consistency with the De Broglie relations
Diagrammatic summary of the quantities related to the wavefunction, as used in de Broglie's hypothesis and development of the Schrödinger equation.[26]
Einstein's light quanta hypothesis (1905) states that the energy E of a photon is proportional to the frequency ν (or angular frequency, ω = 2πν) of the corresponding quantum wavepacket of light:
E = h\nu = \hbar \omega \,\!
Likewise De Broglie's hypothesis (1924) states that any particle can be associated with a wave, and that the momentum p of the particle is inversely proportional to the wavelength λ of such a wave (or proportional to the wavenumber, k = 2π/λ), in one dimension, by:
p = \frac{h}{\lambda} = \hbar k\;,
while in three dimensions, wavelength λ is related to the magnitude of the wavevector k:
\mathbf{p} = \hbar \mathbf{k}\,,\quad |\mathbf{k}| = \frac{2\pi}{\lambda} \,.
The Planck–Einstein and de Broglie relations illuminate the deep connections between energy with time, and space with momentum, and express wave–particle duality. In practice, natural units comprising ħ = 1 are used, as the De Broglie equations reduce to identities: allowing momentum, wavenumber, energy and frequency to be used interchangeably, to prevent duplication of quantities, and reduce the number of dimensions of related quantities. For familiarity SI units are still used in this article.
Schrödinger's insight was to express the phase of a plane wave as a complex phase factor using these relations:
\Psi = Ae^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} = Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar}
and to realize that the first order partial derivatives with respect to space
\nabla\Psi = \dfrac{i}{\hbar}\mathbf{p}Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar} = \dfrac{i}{\hbar}\mathbf{p}\Psi
and time
\dfrac{\partial \Psi}{\partial t} = -\dfrac{i E}{\hbar} Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar} = -\dfrac{i E}{\hbar} \Psi
Another postulate of quantum mechanics is that all observables are represented by linear Hermitian operators which act on the wavefunction, and the eigenvalues of the operator are the values the observable takes. The previous derivatives are consistent with the energy operator, corresponding to the time derivative,
\hat{E} \Psi = i\hbar\dfrac{\partial}{\partial t}\Psi = E\Psi
where E are the energy eigenvalues, and the momentum operator, corresponding to the spatial derivatives (the gradient ∇),
\hat{\mathbf{p}} \Psi = -i\hbar\nabla \Psi = \mathbf{p} \Psi
where p is a vector of the momentum eigenvalues. In the above, the "hats" ( ^ ) indicate these observables are operators, not simply ordinary numbers or vectors. The energy and momentum operators are differential operators, while the potential energy function V is just a multiplicative factor.
Substituting the energy and momentum operators into the classical energy conservation equation obtains the operator:
E= \dfrac{\mathbf{p}\cdot\mathbf{p}}{2m}+V \quad \rightarrow \quad \hat{E} = \dfrac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V
so in terms of derivatives with respect to time and space, acting this operator on the wavefunction Ψ immediately led Schrödinger to his equation:
i\hbar\dfrac{\partial \Psi}{\partial t}= -\dfrac{\hbar^2}{2m}\nabla^2\Psi +V\Psi
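This consistency can be checked mechanically: substituting the free plane wave (V = 0) into the equation just obtained should reduce it to the classical relation E = p²/2m. A small sympy sketch of that check (an addition, purely illustrative):

```python
import sympy as sp

x, t, p, E, m, hbar = sp.symbols('x t p E m hbar', positive=True)
Psi = sp.exp(sp.I * (p * x - E * t) / hbar)      # free plane wave in one dimension

lhs = sp.I * hbar * sp.diff(Psi, t)              # i hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2)    # -(hbar^2 / 2m) d^2 Psi / dx^2

print(sp.simplify((lhs - rhs) / Psi))            # E - p**2/(2*m): zero iff E = p^2/2m
```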
Wave–particle duality can be assessed from these equations as follows. The kinetic energy T is related to the square of momentum p. As the particle's momentum increases, the kinetic energy increases more rapidly, but since the wavenumber |k| increases the wavelength λ decreases. In terms of ordinary scalar and vector quantities (not operators):
\mathbf{p}\cdot\mathbf{p} \propto \mathbf{k}\cdot\mathbf{k} \propto T \propto \dfrac{1}{\lambda^2}
The kinetic energy is also proportional to the second spatial derivatives, so it is also proportional to the magnitude of the curvature of the wave, in terms of operators:
\hat{T} \Psi = \frac{-\hbar^2}{2m}\nabla\cdot\nabla \Psi \, \propto \, \nabla^2 \Psi \,.
As the curvature increases, the amplitude of the wave alternates between positive and negative more rapidly, and also shortens the wavelength. So the inverse relation between momentum and wavelength is consistent with the energy the particle has, and so the energy of the particle has a connection to a wave, all in the same mathematical formulation.[26]
Wave and particle motion
Increasing levels of wavepacket localization, meaning the particle has a more localized position.
In the limit ħ → 0, the particle's position and momentum become known exactly. This is equivalent to the classical particle.
Schrödinger required that a wave packet solution near position r with wavevector near k will move along the trajectory determined by classical mechanics for times short enough for the spread in k (and hence in velocity) not to substantially increase the spread in r . Since, for a given spread in k, the spread in velocity is proportional to Planck's constant ħ, it is sometimes said that in the limit as ħ approaches zero, the equations of classical mechanics are restored from quantum mechanics.[30] Great care is required in how that limit is taken, and in what cases.
The short-wavelength limit is equivalent to ħ tending to zero, because this is the limiting case of increasing the wave packet localization to the definite position of the particle (see images right). Using the Heisenberg uncertainty principle for position and momentum, the product of the uncertainties in position and momentum becomes zero as ħ → 0:
\sigma(x) \sigma(p_x) \geqslant \frac{\hbar}{2} \quad \rightarrow \quad \sigma(x) \sigma(p_x) \geqslant 0 \,\!
where σ denotes the (root mean square) measurement uncertainty in x and px (and similarly for the y and z directions), which implies that the position and momentum can be known to arbitrary precision in this limit.
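For finite ħ, the bound is saturated by a Gaussian packet, which makes for a quick numerical sanity check (an addition; the width and grid are arbitrary):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

sigma = 2.0                                   # position width of the packet
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

prob_x = np.abs(psi)**2                       # position distribution (mean 0)
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)

k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)  # angular wavenumbers
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= prob_k.sum()                        # momentum distribution (mean 0)
sigma_p = hbar * np.sqrt(np.sum(k**2 * prob_k))

print(sigma_x * sigma_p, hbar / 2)            # both ~0.5: the Gaussian saturates the bound
```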
The Schrödinger equation in its general form
i\hbar \frac{\partial}{\partial t} \Psi\left(\mathbf{r},t\right) = \hat{H} \Psi\left(\mathbf{r},t\right) \,\!
is closely related to the Hamilton–Jacobi equation (HJE)
\frac{\partial}{\partial t} S(q_i,t) = H\left(q_i,\frac{\partial S}{\partial q_i},t \right) \,\!
where S is action and H is the Hamiltonian function (not operator). Here the generalized coordinates qi for i = 1, 2, 3 (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = (q1, q2, q3) = (x, y, z).[30]
Substituting
\Psi = \sqrt{\rho(\mathbf{r},t)} e^{iS(\mathbf{r},t)/\hbar}\,\!
where ρ is the probability density, into the Schrödinger equation and then taking the limit ħ → 0 in the resulting equation, yields the Hamilton–Jacobi equation.
The implications are:
• The motion of a particle, described by a (short-wavelength) wave packet solution to the Schrödinger equation, is also described by the Hamilton–Jacobi equation of motion.
• The Schrödinger equation includes the wavefunction, so its wave packet solution implies the position of a (quantum) particle is fuzzily spread out in wave fronts. On the contrary, the Hamilton–Jacobi equation applies to a (classical) particle of definite position and momentum, instead the position and momentum at all times (the trajectory) are deterministic and can be simultaneously known.
Non-relativistic quantum mechanics
The quantum mechanics of particles without accounting for the effects of special relativity, for example particles propagating at speeds much less than light, is known as non-relativistic quantum mechanics. Following are several forms of Schrödinger's equation in this context for different situations: time independence and dependence, one and three spatial dimensions, and one and N particles.
In actuality, the particles constituting the system do not have the numerical labels used in theory. The language of mathematics forces us to label the positions of particles one way or another; otherwise there would be no way to tell which symbols refer to which particle.[28]
Time independent
If the Hamiltonian is not an explicit function of time, the equation is separable into a product of spatial and temporal parts. In general, the wavefunction takes the form:
\Psi(\text{space coords},t)=\psi(\text{space coords})\tau(t)\,.
where ψ(space coords) is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ(t) is a function of time only.
Substituting for Ψ into the Schrödinger equation for the relevant number of particles in the relevant number of dimensions, solving by separation of variables implies the general solution of the time-dependent equation has the form:[14]
\Psi(\text{space coords},t) = \psi(\text{space coords}) e^{-i{E t/\hbar}} \,.
Since the time dependent phase factor is always the same, only the spatial part needs to be solved for in time independent problems. Additionally, the energy operator \hat{E} = i \hbar \partial / \partial t \,\! can always be replaced by the energy eigenvalue E, thus the time independent Schrödinger equation is an eigenvalue equation for the Hamiltonian operator:[5]:143ff
\hat{H} \psi = E \psi
This is true for any number of particles in any number of dimensions (in a time independent potential). This case describes the standing wave solutions of the time-dependent equation, which are the states with definite energy (instead of a probability distribution of different energies). In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels.
For bound states, the energy eigenvalues from this equation form a discrete spectrum of values, so the energy is quantized. More generally, the energy eigenstates form a basis – any wavefunction may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
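A concrete way to see this completeness statement is to discretize the Hamiltonian, which turns it into a finite Hermitian matrix whose eigenvectors form an orthonormal basis. The sketch below (a particle in a box via finite differences, with ħ = m = 1; the grid size and potential are illustrative choices, not from the text) does exactly that:

import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on N interior grid points of [0, L]
# with hard walls (psi = 0 at the endpoints); hbar = m = 1.
N, L = 500, 1.0
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)
V = np.zeros(N)                              # particle in a box

main = 1.0 / dx**2 + V                       # diagonal entries
off = -0.5 / dx**2 * np.ones(N - 1)          # off-diagonals (kinetic term)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)                   # real eigenvalues, orthonormal columns
print(E[:3])                                 # ~ n^2 pi^2 / 2 = 4.93, 19.7, 44.4
print(np.allclose(psi @ psi.T, np.eye(N)))   # True: the eigenvectors form a basis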
One-dimensional examples
For a particle in one dimension, the Hamiltonian is:
\hat{H} = \frac{\hat{p}^2}{2m} + V(x) \,, \quad \hat{p} = -i\hbar \frac{d}{d x}
and substituting this into the general Schrödinger equation gives:
-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\psi(x) + V(x)\psi(x) = E\psi(x)
This is the only case in which the Schrödinger equation is an ordinary differential equation, rather than a partial differential equation. The general solutions are always of the form:
\Psi(x,t)=\psi(x) e^{-iEt/\hbar} \, .
For N particles in one dimension, the Hamiltonian is:
\hat{H} = \sum_{n=1}^{N}\frac{\hat{p}_n^2}{2m_n} + V(x_1,x_2,\cdots x_N) \,,\quad \hat{p}_n = -i\hbar \frac{\partial}{\partial x_n}
where the position of particle n is xn. The corresponding Schrödinger equation is:
-\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\frac{\partial^2}{\partial x_n^2}\psi(x_1,x_2,\cdots x_N) + V(x_1,x_2,\cdots x_N)\psi(x_1,x_2,\cdots x_N) = E\psi(x_1,x_2,\cdots x_N) \, .
so the general solutions have the form:
\Psi(x_1,x_2,\cdots x_N,t) = e^{-iEt/\hbar}\psi(x_1,x_2\cdots x_N)
For non-interacting distinguishable particles,[6]:458 the potential of the system only influences each particle separately, so the total potential energy is the sum of potential energies for each particle:
V(x_1,x_2,\cdots x_N) = \sum_{n=1}^N V(x_n) \, .
and the wavefunction can be written as a product of the wavefunctions for each particle:
\Psi(x_1,x_2,\cdots x_N,t) = e^{-i{E t/\hbar}}\prod_{n=1}^N\psi(x_n) \, ,
For non-interacting identical particles, the potential is still a sum, but the wavefunction is a bit more complicated – it is a sum over the permutations of products of the separate wavefunctions, to account for particle exchange. In general, for interacting particles, the above decompositions are not possible.
Free particle
For no potential, V = 0, so the particle is free and the equation reads:[5]:151ff
E \psi = -\frac{\hbar^2}{2m}{d^2 \psi \over d x^2}\,
which has oscillatory solutions for E > 0 (the Cn are arbitrary constants):
\psi_E(x) = C_1 e^{i\sqrt{2mE/\hbar^2}\,x} + C_2 e^{-i\sqrt{2mE/\hbar^2}\,x}\,
and exponential solutions for E < 0
\psi_{-|E|}(x) = C_1 e^{\sqrt{2m|E|/\hbar^2}\,x} + C_2 e^{-\sqrt{2m|E|/\hbar^2}\,x}.\,
The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with periodic or fixed boundary conditions.
Constant potential
Animation of a de Broglie wave incident on a barrier.
For a constant potential, V = V0, the solution is oscillatory for E > V0 and exponential for E < V0, corresponding to energies that are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, due to quantum tunneling. If the potential V0 grows without bound at large distances, the motion is classically confined to a finite region. Viewed far enough away, every solution then reduces to an exponential; the condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.[29]
Harmonic oscillator
A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and forth. (C–H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. Stationary states, or energy eigenstates, which are solutions to the time-independent Schrödinger Equation, are shown in C,D,E,F, but not G or H.
The Schrödinger equation for this situation is
E\psi = -\frac{\hbar^2}{2m}\frac{d^2}{d x^2}\psi + \frac{1}{2}m\omega^2x^2\psi
It is a notable quantum system to solve, since the solutions are exact (but complicated – in terms of Hermite polynomials), and it can describe or at least approximate a wide variety of other systems, including vibrating atoms, molecules,[31] and atoms or ions in lattices,[32] as well as other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
There is a family of solutions – in the position basis they are
\psi_n(x) = \frac{1}{\sqrt{2^n\,n!}} \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right)
where n = 0,1,2..., and the functions Hn are the Hermite polynomials.
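As a numerical sanity check on these solutions, one can construct the first few eigenfunctions and verify that they are orthonormal (a sketch with m = ω = ħ = 1; numpy's hermval evaluates the physicists' Hermite polynomials used here):

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

x = np.linspace(-10, 10, 4001)

def psi(n, x):
    # n-th oscillator eigenfunction for m = omega = hbar = 1
    coef = np.zeros(n + 1)
    coef[n] = 1.0                            # selects H_n
    norm = (2**n * factorial(n))**-0.5 * pi**-0.25
    return norm * hermval(x, coef) * np.exp(-x**2 / 2)

# <psi_m | psi_n> should come out as the identity matrix
overlaps = [[round(np.trapz(psi(m, x) * psi(n, x), x), 6)
             for n in range(4)] for m in range(4)]
print(overlaps)                              # ~ 4x4 identity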
Three-dimensional examples
The extension from one dimension to three dimensions is straightforward: all position and momentum operators are replaced by their three-dimensional expressions, and the partial derivative with respect to space is replaced by the gradient operator.
The Hamiltonian for one particle in three dimensions is:
\hat{H} = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V(\mathbf{r}) \,, \quad \hat{\mathbf{p}} = -i\hbar \nabla
generating the equation:
-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r}) = E\psi(\mathbf{r})
with stationary state solutions of the form:
\Psi(\mathbf{r},t) = \psi(\mathbf{r}) e^{-iEt/\hbar}
where the position of the particle is r. Two useful coordinate systems for solving the Schrödinger equation are Cartesian coordinates so that r = (x, y, z) and spherical polar coordinates so that r = (r, θ, φ), although other orthogonal coordinates are useful for solving the equation for systems with certain geometric symmetries.
For N particles in three dimensions, the Hamiltonian is:
\hat{H} = \sum_{n=1}^{N}\frac{\hat{\mathbf{p}}_n\cdot\hat{\mathbf{p}}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) \,,\quad \hat{\mathbf{p}}_n = -i\hbar \nabla_n
where the position of particle n is rn and the gradient operators are partial derivatives with respect to the particle's position coordinates. In Cartesian coordinates, for particle n, the position vector is rn = (xn, yn, zn) while the gradient and Laplacian operator are respectively:
\nabla_n = \mathbf{e}_x \frac{\partial}{\partial x_n} + \mathbf{e}_y\frac{\partial}{\partial y_n} + \mathbf{e}_z\frac{\partial}{\partial z_n}\,,\quad \nabla_n^2 = \nabla_n\cdot\nabla_n = \frac{\partial^2}{{\partial x_n}^2} + \frac{\partial^2}{{\partial y_n}^2} + \frac{\partial^2}{{\partial z_n}^2}
The Schrödinger equation is:
-\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N)\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) = E\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N)
with stationary state solutions:
\Psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N,t) = e^{-iEt/\hbar}\psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N)
Again, for non-interacting distinguishable particles the potential is the sum of particle potentials
V(\mathbf{r}_1,\mathbf{r}_2,\cdots \mathbf{r}_N) = \sum_{n=1}^N V(\mathbf{r}_n)
and the wavefunction is a product of the particle wavefunctions
\Psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N,t) = e^{-i{E t/\hbar}}\prod_{n=1}^N\psi(\mathbf{r}_n) \, .
For non-interacting identical particles, the potential is a sum but the wavefunction is a sum over permutations of products. The previous two equations do not apply to interacting particles.
Following are examples where exact solutions are known. See the main articles for further details.
Hydrogen atom
This form of the Schrödinger equation can be applied to the hydrogen atom:[24][26]
E \psi = -\frac{\hbar^2}{2\mu}\nabla^2\psi - \frac{e^2}{4\pi\epsilon_0 r}\psi
where e is the electron charge, r is the position of the electron (r = |r| is the magnitude of the position), the potential term is due to the Coulomb interaction, wherein ε0 is the electric constant (permittivity of free space) and
\mu = \frac{m_em_p}{m_e+m_p}
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass mp and the electron of mass me. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass is used in place of the electron mass since the electron and proton orbit each other about a common centre of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The wavefunction for hydrogen is a function of the electron's coordinates, and in fact can be separated into functions of each coordinate.[33] Usually this is done in spherical polar coordinates:
\psi(r,\theta,\phi) = R(r)Y_\ell^m(\theta, \phi) = R(r)\Theta(\theta)\Phi(\phi)
where R are radial functions and \scriptstyle Y_{\ell}^{m}(\theta, \phi ) \, are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximate methods. The family of solutions is:[34]
\psi_{n\ell m}(r,\theta,\phi) = \sqrt {{\left ( \frac{2}{n a_0} \right )}^3\frac{(n-\ell-1)!}{2n[(n+\ell)!]} } e^{- r/na_0} \left(\frac{2r}{na_0}\right)^{\ell} L_{n-\ell-1}^{2\ell+1}\left(\frac{2r}{na_0}\right) \cdot Y_{\ell}^{m}(\theta, \phi )
where:
\begin{align} n & = 1,2,3, \dots \\ \ell & = 0,1,2, \dots, n-1 \\ m & = -\ell,\dots,\ell \end{align}
NB: generalized Laguerre polynomials are defined differently by different authors—see main article on them and the hydrogen atom.
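The radial factor of this solution family can be checked numerically; the sketch below verifies that ∫|R_nl|² r² dr = 1 for a few (n, ℓ) pairs, using scipy's generalized Laguerre polynomials (which follow the same convention as the formula above; a0 = 1 and the grid are arbitrary choices):

import numpy as np
from scipy.special import genlaguerre
from math import factorial

def R(n, l, r, a0=1.0):
    # Radial part of psi_nlm, matching the normalization above
    rho = 2 * r / (n * a0)
    norm = np.sqrt((2 / (n * a0))**3
                   * factorial(n - l - 1) / (2 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2*l + 1)(rho)

r = np.linspace(1e-8, 200.0, 200001)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    print(n, l, np.trapz(R(n, l, r)**2 * r**2, r))   # each ~ 1.0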
Two-electron atoms or ions
The equation for any two-electron system, such as the neutral helium atom (He, Z = 2), the negative hydrogen ion (H−, Z = 1), or the positive lithium ion (Li+, Z = 3) is:[27]
E\psi = -\hbar^2\left[\frac{1}{2\mu}\left(\nabla_1^2 +\nabla_2^2 \right) + \frac{1}{M}\nabla_1\cdot\nabla_2\right] \psi + \frac{e^2}{4\pi\epsilon_0}\left[ \frac{1}{r_{12}} -Z\left( \frac{1}{r_1}+\frac{1}{r_2} \right) \right] \psi
where r1 is the position of one electron (r1 = |r1| is its magnitude), r2 is the position of the other electron (r2 = |r2| is the magnitude), r12 = |r12| is the magnitude of the separation between them given by
|\mathbf{r}_{12}| = |\mathbf{r}_2 - \mathbf{r}_1 | \,\!
μ is again the two-body reduced mass of an electron with respect to the nucleus of mass M, so this time
\mu = \frac{m_e M}{m_e+M} \,\!
and Z is the atomic number for the element (not a quantum number).
The cross-term of the two Laplacians
-\frac{\hbar^2}{M}\nabla_1\cdot\nabla_2
is known as the mass polarization term, which arises due to the motion of the atomic nucleus. The wavefunction is a function of the two electrons' positions:
\psi = \psi(\mathbf{r}_1,\mathbf{r}_2).
There is no closed form solution for this equation.
Time dependent
This is the equation of motion for the quantum state. In the most general form, it is written:[5]:143ff
i\hbar \frac{\partial}{\partial t} \Psi = \hat{H} \Psi
and the solution, the wavefunction, is a function of all the particle coordinates of the system and time. Following are specific cases.
For one particle in one dimension, the Hamiltonian
\hat{H} = \frac{\hat{p}^2}{2m} + V(x,t) \,,\quad \hat{p} = -i\hbar \frac{\partial}{\partial x}
generates the equation:
i\hbar\frac{\partial}{\partial t}\Psi(x,t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(x,t) + V(x,t)\Psi(x,t)
For N particles in one dimension, the Hamiltonian is:
\hat{H} = \sum_{n=1}^{N}\frac{\hat{p}_n^2}{2m_n} + V(x_1,x_2,\cdots x_N,t) \,,\quad \hat{p}_n = -i\hbar \frac{\partial}{\partial x_n}
where the position of particle n is xn, generating the equation:
i\hbar\frac{\partial}{\partial t}\Psi(x_1,x_2\cdots x_N,t) = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\frac{\partial^2}{\partial x_n^2}\Psi(x_1,x_2\cdots x_N,t) + V(x_1,x_2\cdots x_N,t)\Psi(x_1,x_2\cdots x_N,t) \, .
For one particle in three dimensions, the Hamiltonian is:
\hat{H} = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V(\mathbf{r},t) \,,\quad \hat{\mathbf{p}} = -i\hbar \nabla
generating the equation:
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r},t)\Psi(\mathbf{r},t)
For N particles in three dimensions, the Hamiltonian is:
\hat{H} = \sum_{n=1}^{N}\frac{\hat{\mathbf{p}}_n\cdot\hat{\mathbf{p}}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t) \,,\quad \hat{\mathbf{p}}_n = -i\hbar \nabla_n
where the position of particle n is rn, generating the equation:[5]:141
i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t) = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t) + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t)\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t)
This last equation is in a very high dimension, so the solutions are not easy to visualize.
Solution methods
General techniques for solving the equation include perturbation theory, the variational principle (used below for the ground-state energy), the WKB approximation, and numerical discretization.
Properties
The Schrödinger equation has the following properties: some are useful, but there are shortcomings. Ultimately, these properties arise from the Hamiltonian used, and from the solutions to the equation.
Linearity
In the development above, the Schrödinger equation was made to be linear for generality, though this has other implications. If two wave functions ψ1 and ψ2 are solutions, then so is any linear combination of the two:
\displaystyle \psi = a\psi_1 + b \psi_2
where a and b are any complex numbers (the sum can be extended for any number of wavefunctions). This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over all achievable single-state solutions. For example, consider a wave function \Psi (x,t) such that the wave function is a product of two functions: one time independent, and one time dependent. If the states of definite energy found using the time-independent Schrödinger equation are given by \psi_{E_n}(x) with amplitudes A_n, and the time-dependent phase factor is given by
e^{{-iE_n t}/\hbar},
then a valid general solution is
\displaystyle \Psi(x,t) = \sum\limits_{n} A_n \psi_{E_n}(x) e^{{-iE_n t}/\hbar}.
Additionally, the ability to scale solutions allows one to solve for a wave function without normalizing it first. If one has a set of normalized solutions \psi_n, then
\displaystyle \Psi = \sum\limits_{n} A_n \psi_n
can be normalized by ensuring that
\displaystyle \sum\limits_{n}|A_n|^2 = 1.
This is much more convenient than having to verify that
\displaystyle \int\limits_{-\infty}^{\infty}|\Psi(x)|^2\,dx = \int\limits_{-\infty}^{\infty}\Psi(x)\Psi^{*}(x)\,dx = 1.
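A tiny illustration of the convenience (the coefficients here are hypothetical, and the ψ_n are assumed to be any orthonormal set of solutions):

import numpy as np

A = np.array([3.0, 4.0j, 1.0 - 2.0j])   # arbitrary expansion coefficients
A = A / np.linalg.norm(A)                # divide by sqrt(sum of |A_n|^2)
print(np.sum(np.abs(A)**2))              # 1.0 -- the superposition is normalized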
Real energy eigenstates
For the time-independent equation, an additional feature of linearity follows: if two wave functions ψ1 and ψ2 are solutions to the time-independent equation with the same energy E, then so is any linear combination:
\hat H (a\psi_1 + b \psi_2 ) = a \hat H \psi_1 + b \hat H \psi_2 = E (a \psi_1 + b\psi_2).
Two different solutions with the same energy are called degenerate.[29]
In an arbitrary potential, if a wave function ψ solves the time-independent equation, so does its complex conjugate ψ*. By taking linear combinations, the real and imaginary parts of ψ are each solutions. If there is no degeneracy, they can only differ by a factor.
In the time-dependent equation, complex-conjugate waves move in opposite directions. If Ψ(x, t) is one solution, then so is Ψ*(x, –t). The symmetry of complex conjugation is called time-reversal symmetry.
Space and time derivatives
Continuity of the wavefunction and its first spatial derivative (in the x direction, y and z coordinates not shown), at some time t.
The Schrödinger equation is first order in time and second order in space. This describes the time evolution of a quantum state, meaning it determines the future amplitude from the present one.
Explicitly for one particle in 3d Cartesian coordinates – the equation is
i\hbar{\partial \Psi \over \partial t} = - {\hbar^2\over 2m} \left ( {\partial^2 \Psi \over \partial x^2} + {\partial^2 \Psi \over \partial y^2} + {\partial^2 \Psi \over \partial z^2} \right ) + V(x,y,z,t)\Psi.\,\!
Since the equation is first order in time, the initial value (at t = 0) of the wavefunction,
\Psi(x,y,z,0) \,\!
can be freely specified. Likewise, since the equation is second order in space, the wavefunction and its first-order spatial derivatives
\begin{align} & \Psi(x_b,y_b,z_b,t) \\ & \frac{\partial}{\partial x}\Psi(x_b,y_b,z_b,t) \quad \frac{\partial}{\partial y}\Psi(x_b,y_b,z_b,t) \quad \frac{\partial}{\partial z}\Psi(x_b,y_b,z_b,t) \end{align} \,\!
can be freely specified at a given set of points, where xb, yb, zb are a set of points describing boundary b (derivatives are evaluated at the boundaries). Typically there are one or two boundaries, such as the step potential and particle in a box respectively.
As the first order derivatives are arbitrary, the wavefunction can be a continuously differentiable function of space, since at any boundary the gradient of the wavefunction can be matched.
By contrast, wave equations in physics are usually second order in time; notable examples are the family of classical wave equations and the quantum Klein–Gordon equation.
Local conservation of probability[edit]
The Schrödinger equation is consistent with probability conservation. Multiplying the Schrödinger equation on the right by the complex conjugate wavefunction, and multiplying the wavefunction to the left of the complex conjugate of the Schrödinger equation, and subtracting, gives the continuity equation for probability:[35]
{ \partial \over \partial t} \rho\left(\mathbf{r},t\right) + \nabla \cdot \mathbf{j} = 0,
where
\rho = \Psi^*\Psi = |\Psi|^2 \,\!
is the probability density (probability per unit volume, * denotes complex conjugate), and
\mathbf{j} = {1 \over 2m} \left( \Psi^*\hat{\mathbf{p}}\Psi - \Psi\hat{\mathbf{p}}\Psi^* \right)\,\!
is the probability current (flow per unit area).
Hence predictions from the Schrödinger equation do not violate probability conservation.
Positive energy
If the potential is bounded from below, meaning there is a minimum value of potential energy, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows. (See also below).
For any linear operator \hat{A} bounded from below, the eigenvector with the smallest eigenvalue is the vector ψ that minimizes the quantity
\langle \psi|\hat{A}|\psi\rangle
over all ψ which are normalized.[35] In this way, the smallest eigenvalue is expressed through the variational principle. For the Schrödinger Hamiltonian \hat{H} bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of
\langle \psi|\hat{H}|\psi\rangle = \int \psi^*(\mathbf{r}) \left[ - \frac{\hbar^2}{2m} \nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r})\right] d^3\mathbf{r} = \int \left[ \frac{\hbar^2}{2m}|\nabla\psi|^2 + V(\mathbf{r}) |\psi|^2 \right] d^3\mathbf{r} = \langle \hat{H}\rangle
(using integration by parts). Due to the squared complex modulus of ψ (which is positive definite), the right-hand side is never less than the lowest value of V(x). In particular, the ground state energy is positive when V(x) is everywhere positive.
For potentials which are bounded below and are not infinite over a region, there is a ground state which minimizes the integral above. This lowest-energy wavefunction is real and positive definite: it can increase and decrease, but is positive for all positions. It physically cannot change sign: if it did, smoothing out the bends at the sign change (to lower the energy) would rapidly reduce the gradient contribution to the integral, and hence the kinetic energy, while the potential energy would change more slowly; the smoothed candidate would then have lower total energy, so a sign-changing wavefunction cannot be the minimizer.
The lack of sign changes also shows that the ground state is nondegenerate: if there were two ground states with common energy E, not proportional to each other, some linear combination of the two would also be a ground state and would vanish somewhere, contradicting the absence of sign changes.
Analytic continuation to diffusion
The above properties (positive definiteness of energy) allow the analytic continuation of the Schrödinger equation to be identified with a stochastic (diffusion) process. This can be interpreted as the Huygens–Fresnel principle applied to de Broglie waves; the spreading wavefronts are diffusive probability amplitudes.[35]
For a particle in a random walk (again for which V = 0), the continuation is to let:[36]
\tau = it \,, \!
substituting into the time-dependent Schrödinger equation gives:
{\partial \over \partial \tau} \Psi(\mathbf{r},\tau) = \frac{\hbar}{2m} \nabla ^2 \Psi(\mathbf{r},\tau),
which has the same form as the diffusion equation, with diffusion coefficient ħ/2m.
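This correspondence is easy to test numerically: stepping the Wick-rotated free equation forward in τ spreads an initial profile exactly like heat with D = ħ/2m, so the profile's variance should grow by 2Dτ. A minimal explicit-Euler sketch (ħ = m = 1; the grid parameters are arbitrary choices):

import numpy as np

hbar = m = 1.0
D = hbar / (2 * m)                       # diffusion coefficient from above
N, dx, dtau = 2000, 0.1, 0.002           # stable: dtau < dx**2 / (2*D)
x = (np.arange(N) - N // 2) * dx
psi = np.exp(-x**2 / 2)                  # initial Gaussian profile, variance 1

def variance(f):
    return np.trapz(x**2 * f, x) / np.trapz(f, x)

v0, steps = variance(psi), 5000
for _ in range(steps):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi += dtau * D * lap                # d(psi)/d(tau) = D * laplacian(psi)

print(variance(psi) - v0, 2 * D * steps * dtau)   # both ~ 10.0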
Relativistic quantum mechanics
Relativistic quantum mechanics is obtained where quantum mechanics and special relativity simultaneously apply. The general form of the Schrödinger equation is still applicable, but the Hamiltonian operators are much less obvious, and more complicated.
One wishes to build relativistic wave equations from the relativistic energy–momentum relation
E^2 = (pc)^2 + (m_0c^2)^2 \, ,
instead of classical energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation was the first such equation to be obtained, even before the non-relativistic one, and applies to massive spinless particles. The Dirac equation arose from taking the "square root" of the Klein–Gordon equation by factorizing the entire relativistic wave operator into a product of two operators – one of these is the operator for the entire Dirac equation.
The Dirac equation introduces spin matrices, in the particular case of the Dirac equation they are the gamma matrices for spin-1/2 particles, and the solutions are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle. In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of position and momentum operators, but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.
Quantum field theory
The general equation is also valid and used in quantum field theory, both in relativistic and non-relativistic situations. However, the solution ψ is no longer interpreted as a "wave", but more like a "field".
References
1. ^ Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules". Physical Review 28 (6): 1049–1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/PhysRev.28.1049. Archived from the original on 17 December 2008.
3. ^ Ballentine, Leslie (1998), Quantum Mechanics: A Modern Development, World Scientific Publishing Co., ISBN 9810241054
4. ^ Laloe, Franck (2012), Do We Really Understand Quantum Mechanics, Cambridge University Press, ISBN 978-1-107-02501-1
5. ^ a b c d e Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Kluwer Academic/Plenum Publishers. ISBN 978-0-306-44790-7.
6. ^ a b Nouredine Zettili (17 February 2009). Quantum Mechanics: Concepts and Applications. John Wiley & Sons. ISBN 978-0-470-02678-6.
7. ^ O. Donati, G. F. Missiroli, G. Pozzi (May 1973). "An Experiment on Electron Interference". American Journal of Physics 41: 639–644.
8. ^ Brian Greene, The Elegant Universe, p. 110
9. ^ Feynman Lectures on Physics (Vol. 3), R. Feynman, R.B. Leighton, M. Sands, Addison-Wesley, 1965, ISBN 0-201-02118-8
10. ^ de Broglie, L. (1925). "Recherches sur la théorie des quanta" [On the Theory of Quanta]. Annales de Physique 10 (3): 22–128. Translated version.
11. ^ Weissman, M.B.; V. V. Iliev and I. Gutman (2008). "A pioneer remembered: biographical notes about Arthur Constant Lunn". Communications in Mathematical and in Computer Chemistry 59 (3): 687–708.
12. ^ Kamen, Martin D. (1985). Radiant Science, Dark Politics. Berkeley and Los Angeles, CA: University of California Press. pp. 29–32. ISBN 0-520-04929-2.
13. ^ Schrodinger, E. (1984). Collected papers. Friedrich Vieweg und Sohn. ISBN 3-7001-0573-8. See introduction to first 1926 paper.
14. ^ a b Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) ISBN 0-89573-752-3
15. ^ Sommerfeld, A. (1919). Atombau und Spektrallinien. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7.
16. ^ For an English source, see Haar, T. The Old Quantum Theory.
17. ^ Rhodes, R. (1986). Making of the Atomic Bomb. Touchstone. ISBN 0-671-44133-7.
18. ^ a b Erwin Schrödinger (1982). Collected Papers on Wave Mechanics: Third Edition. American Mathematical Soc. ISBN 978-0-8218-3524-1.
19. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem; von Erwin Schrödinger". Annalen der Physik: 361–377.
20. ^ Erwin Schrödinger, "The Present Situation in Quantum Mechanics," p. 9 of 22. The English version was translated by John D. Trimmer. The translation first appeared in Proceedings of the American Philosophical Society, 124, 323–38. It later appeared as Section I.11 of Part I of Quantum Theory and Measurement by J.A. Wheeler and W.H. Zurek, eds., Princeton University Press, New Jersey, 1983.
21. ^ Einstein, A.; et. al. Letters on Wave Mechanics: Schrodinger–Planck–Einstein–Lorentz.
22. ^ a b c Moore, W.J. (1992). Schrödinger: Life and Thought. Cambridge University Press. ISBN 0-521-43767-9.
23. ^ Even in his last year of life, as shown in a letter to Max Born, Schrödinger never accepted the Copenhagen interpretation.[22]:220
24. ^ a b Molecular Quantum Mechanics Parts I and II: An Introduction to Quantum Chemistry (Volume 1), P.W. Atkins, Oxford University Press, 1977, ISBN 0-19-855129-0
26. ^ a b c d Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1
27. ^ a b Physics of Atoms and Molecules, B.H. Bransden, C.J.Joachain, Longman, 1983, ISBN 0-582-44401-2
28. ^ a b Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd Edition), R. Resnick, R. Eisberg, John Wiley & Sons, 1985, ISBN 978-0-471-87373-0
29. ^ a b c Quantum Mechanics Demystified, D. McMahon, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9
30. ^ a b Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
31. ^ Physical chemistry, P.W. Atkins, Oxford University Press, 1978, ISBN 0-19-855148-7
32. ^ Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John Wiley & Sons, 2010, ISBN 978-0-471-92804-1
33. ^ Physics for Scientists and Engineers – with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008, ISBN 0-7167-8964-7
34. ^ David Griffiths (2008). Introduction to elementary particles. Wiley-VCH. pp. 162–. ISBN 978-3-527-40601-2. Retrieved 27 June 2011.
35. ^ a b c Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, ISBN 978-0-13-146100-0
36. ^
Teaching graduate analysis has inspired me to think about the completeness theorem for Fourier series and the more difficult Plancherel theorem for the Fourier transform on $\mathbb{R}$. There are several ways to prove that the Fourier basis is complete for $L^2(S^1)$. The approach that I find the most interesting, because it uses general tools with more general consequences, is to apply the spectral theorem to the Laplace operator on a circle. It is not difficult to show that the Laplace operator is a self-adjoint affiliated operator, i.e., the healthy type of unbounded operator for which the spectral theorem applies. It's easy to explicitly solve for the point eigenstates of the Laplace operator. Then you can use a Fredholm argument, or ultimately the Arzela-Ascoli theorem, to show that the Laplace operator is reciprocal to a compact operator, and therefore has no continuous spectrum. The argument is to integrate by parts. Suppose that $$\langle -\Delta \psi, \psi\rangle = \langle \vec{\nabla} \psi, \vec{\nabla} \psi \rangle \le E$$ for some energy $E$, whether or not $\psi$ is an eigenstate and even whether or not it has unit norm. Then $\psi$ is microscopically controlled and there is only a compact space of such $\psi$ except for adding a constant. The payoff of this abstract proof is the harmonic completeness theorem for the Laplace operator on any compact manifold $M$ with or without boundary. It also works when $\psi$ is a section of a vector bundle with a connection.
My question is whether there is a nice generalization of this approach to obtain a structure theorem for the Laplace operator, or the Schrödinger equation, in non-compact cases. Suppose that $M$ is an infinite complete Riemannian manifold with some kind of controlled geometry. For instance, say that $M$ is quasiisometric to $\mathbb{R}^n$ and has pinched curvature. (Or say that $M$ is amenable and has pinched curvature.) Maybe we also have the Laplace operator plus some sort of controlled potential --- say a smooth, bounded potential with bounded derivatives. Then can you say that the spectrum of the Laplace or Schrödinger operator is completely described by controlled solutions to the PDE, which can be interpreted as "almost normalizable" states?
There is one case of this that is important but too straightforward. If $M$ is the universal cover of a torus $T$, and if its optional potential is likewise periodic, then you can use "Bloch's theorem". In other words you can solve the problem for flat line bundles on $T$, where you always just have a point spectrum, and then lift this to a mixed continuous and point spectrum upstairs. So you can derive the existence of a fancy spectrum that is not really explicit, but the non-compactness is handled using an explicit method. I think that this method yields a cute proof of the Plancherel theorem for $\mathbb{R}$ (and $\mathbb{R}^n$ of course): Parseval's theorem as described above gives you Fourier completeness for both $S^1$ and $\mathbb{Z}$, and you can splice them together using the Bloch picture to get completeness for $\mathbb{R}$.
Only a simple remark. In the non-compact case, the paradigmatic example is the harmonic oscillator $$ -\Delta_{\mathbb R^d}+\frac{\vert x\vert^2}{4} $$ with spectrum $\frac{d}{2}+\mathbb N$. The eigenvectors are the Hermite functions with an explicit expression from the so-called Maxwellian $\psi_0=(2\pi)^{-d/4}\exp{-\frac{\vert x\vert^2}{4}}$ and the creation operators $(\alpha!)^{-1/2}(\frac{x}{2}-\frac{d}{dx})^\alpha \psi_0$. In one dimension the operator $-\frac{d^2}{dx^2}+x^4$ (quartic oscillator) has also a compact resolvent, but nothing explicit is known about the eigenfunctions. – Bazin May 2 '12 at 13:53
More subtle is the compactness of the resolvent of the 2D $$ -\Delta_{\mathbb R^2}+x^2y^2. $$ – Bazin May 2 '12 at 13:54
I just saw this playing around on meta.... Are you asking a question beyond that spectrally almost every solution is polynomially bounded? – Helge Aug 15 '12 at 18:53
@Helge - That's part of the story, but in the ordinary Plancherel theorem, not the hardest part to state or prove. You would also want some statement about the spectral measure (that is, the projection-valued measure produced by the spectral theorem) associated to the Laplace or Schrodinger operator. Again, if you have a Laplace operator on a closed manifold, there is an algorithm to diagonalize it completely. The completeness theorem is considered very important, and not just the fact that you can find eigenfunctions. – Greg Kuperberg Aug 18 '12 at 3:16
2 Answers
Since this has not been mentioned, let me point to the Weyl-Stone-Titchmarsh-Kodaira theorem which gives the generalized Fourier transform and Plancherel formula of a selfadjoint Sturm-Liouville operator. The ODE section in Dunford-Schwartz II presents this. See also the nice original paper Kodaira (1949). The (one-dimensional) Schrödinger operator with periodic potential (Hill's operator) is also treated in Kodaira's paper.
In several variables, scattering theory provides Plancherel theorems. For the Dirichlet Laplacian in the exterior of a compact obstacle, one can find a result of this kind in chapter 9 of M.E. Taylor's book PDE II. Formula (2.15) in that chapter is the Plancherel theorem of the Fourier transform $\Phi$ defined in (2.8).
Stone's formula represents the (projection-valued) spectral measure of a selfadjoint operator as the limit of the resolvent at the real axis. It is a key ingredient in proofs of these results.
Too big to fit well as comment: There is a seeming-technicality which is important to not overlook, the question of whether a symmetric operator is "essentially self-adjoint" or not. As I discovered only embarrassingly belatedly, this "essential self-adjointness" has a very precise meaning, namely, that the given symmetric operator has a unique self-adjoint extension, which then is necessarily given by its (graph-) closure. In many natural situations, Laplacians and such are essentially self-adjoint. But with any boundary conditions, this tends not to be the case, exactly as in the simplest Sturm-Liouville problems on finite intervals, not even getting to the Weyl-Kodaira-Titchmarsh complications.
Gerd Grubb's relatively recent book on "Distributions and operators" discusses such stuff.
The broader notion of Friedrichs' canonical self-adjoint extension of a symmetric (edit! :) semi-bounded operator is very useful here. At the same time, for symmetric operators that are not essentially self-adjoint, the case of $\Delta$ on $[a,b]$ with varying boundary conditions (to ensure symmetric-ness) shows that there is a continuum of mutually incomparable self-adjoint extensions.
Thus, on $[0,2\pi]$, the Dirichlet boundary conditions give $\sin nx/2$ for integer $n$ as orthonormal basis, while the boundary conditions that values and first derivatives match at endpoints give the "usual" Fourier series, in effect on a circle, by connecting the endpoints.
This most-trivial example already shows that the spectrum, even in the happy-simple discrete case, is different depending on boundary conditions.
1d24d06f87fcafa6 | Archive for January, 2009
Physics Friday 57
January 30, 2009
Consider a single particle wavefunction (in the position basis) $\Psi(\mathbf{r},t)$. Then for a volume V, the probability P that the particle will be measured to be in V is
P=\int_V \Psi^*\Psi\,dV.
The time derivative of this is:
\frac{dP}{dt}=\int_V\frac{\partial}{\partial t}\left(\Psi^*\Psi\right)dV.
Now, we recall that $|\Psi|^2=\Psi^*\Psi$, where * indicates the complex conjugate. Thus, via the product rule,
\frac{\partial}{\partial t}\left(\Psi^*\Psi\right)=\Psi^*\frac{\partial\Psi}{\partial t}+\Psi\frac{\partial\Psi^*}{\partial t}.
Now, consider the time-dependent Schrödinger equation:
i\hbar\frac{\partial\Psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\Psi+V\Psi
Solving for the time derivative, we get
\frac{\partial\Psi}{\partial t}=\frac{i\hbar}{2m}\nabla^2\Psi-\frac{i}{\hbar}V\Psi
And taking the complex conjugate of that:
\frac{\partial\Psi^*}{\partial t}=-\frac{i\hbar}{2m}\nabla^2\Psi^*+\frac{i}{\hbar}V\Psi^*
(Note that the potential is real.)
Adding these (the first multiplied by Ψ*, the second by Ψ), we get
\Psi^*\frac{\partial\Psi}{\partial t}+\Psi\frac{\partial\Psi^*}{\partial t}=\frac{i\hbar}{2m}\left(\Psi^*\nabla^2\Psi-\Psi\nabla^2\Psi^*\right)
(the potential energy terms cancel).
Here, we need to use some vector calculus; namely, the product rule for the divergence operator $\vec{\nabla}\cdot$: for scalar-valued function φ and vector field $\vec{F}$, the divergence of their product is given by
\vec{\nabla}\cdot\left(\varphi\vec{F}\right)=\varphi\left(\vec{\nabla}\cdot\vec{F}\right)+\vec{F}\cdot\vec{\nabla}\varphi
Now, if our vector field is itself the gradient of a scalar function ψ, then $\vec{F}=\vec{\nabla}\psi$, and
\vec{\nabla}\cdot\left(\varphi\vec{\nabla}\psi\right)=\varphi\nabla^2\psi+\vec{\nabla}\varphi\cdot\vec{\nabla}\psi
Swapping φ and ψ,
\vec{\nabla}\cdot\left(\psi\vec{\nabla}\varphi\right)=\psi\nabla^2\varphi+\vec{\nabla}\psi\cdot\vec{\nabla}\varphi
And taking the difference, we find
\vec{\nabla}\cdot\left(\varphi\vec{\nabla}\psi-\psi\vec{\nabla}\varphi\right)=\varphi\nabla^2\psi-\psi\nabla^2\varphi
Putting in our wavefunction and its conjugate for φ and ψ,
\frac{\partial}{\partial t}\left(\Psi^*\Psi\right)=\frac{i\hbar}{2m}\vec{\nabla}\cdot\left(\Psi^*\vec{\nabla}\Psi-\Psi\vec{\nabla}\Psi^*\right)
Let us define the vector-valued function $\vec{j}$ from the argument of the divergence in the integrand:
\vec{j}=\frac{i\hbar}{2m}\left(\Psi\vec{\nabla}\Psi^*-\Psi^*\vec{\nabla}\Psi\right)=\frac{\hbar}{m}\operatorname{Im}\left(\Psi^*\vec{\nabla}\Psi\right),
so that $\frac{\partial}{\partial t}\left(\Psi^*\Psi\right)=-\vec{\nabla}\cdot\vec{j}$.
Now, recalling that the probability P is the integral over V of the probability density $\rho=\Psi^*\Psi$, we have, in terms of the probability density,
\frac{dP}{dt}=\int_V\frac{\partial\rho}{\partial t}\,dV=-\int_V\vec{\nabla}\cdot\vec{j}\,dV,
and as this holds for all V, the integrand must vanish:
\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot\vec{j}=0.
Now, this should look familiar to some of you. For any conserved quantity ρ with a flux given by the function $\vec{j}$, and no sources or sinks, the quantity and flux obey the continuity equation
\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot\vec{j}=0.
[For example, if ρ is electric charge density, then the conservation of electric charge gives
\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot\vec{J}=0,
where $\vec{J}$ is the electric current density.]
Now, as the probability density for the particle must always integrate over all space to unity, we similarly expect it to be a conserved quantity. Thus the above result is our continuity equation for quantum probability, and so as defined above is our probability current (or probability flux). It has units of probability/(area × time), or probability density times velocity.
Note that the continuity equation tells us that for a stationary state, the divergence of the probability current must be zero. However, this does not mean that the current itself must be zero. Consider the three-dimensional plane wave $\Psi=Ae^{i(\vec{k}\cdot\vec{r}-\omega t)}$. For it,
\vec{j}=|A|^2\frac{\hbar\vec{k}}{m}.
Note that as $\frac{\hbar\vec{k}}{m}=\frac{\vec{p}}{m}$ is the particle's velocity, the probability current of the plane wave, a stationary state, is the amplitude squared times the particle velocity.
Lastly, consider a wavefunction which has the same complex phase for all locations at any given time; that is $\Psi(\mathbf{r},t)=R(\mathbf{r},t)e^{i\theta(t)}$, where $R(\mathbf{r},t)$ is a real-valued function. Then $\vec{\nabla}\Psi=e^{i\theta}\vec{\nabla}R$, and $\Psi\vec{\nabla}\Psi^*-\Psi^*\vec{\nabla}\Psi=R\vec{\nabla}R-R\vec{\nabla}R=0$, and so we see the probability current is zero for such wavefunctions (one example of which are solutions to the particle in a one-dimensional box).
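Both claims are easy to check numerically, using the equivalent form j = (ħ/m) Im(Ψ*∇Ψ) of the definition above (ħ = m = 1; the amplitude and wavenumber below are arbitrary):

import numpy as np

hbar = m = 1.0
A, k = 0.7, 2.5
x = np.linspace(0, 10, 10001)

def current(psi):
    # j = (hbar/m) Im(psi* dpsi/dx), via a finite-difference gradient
    return (hbar / m) * np.imag(np.conj(psi) * np.gradient(psi, x))

plane = A * np.exp(1j * k * x)                       # plane wave
print(current(plane).mean(), A**2 * hbar * k / m)    # both ~ 1.225

flat_phase = A * np.exp(-(x - 5)**2) * np.exp(0.3j)  # same phase everywhere
print(np.abs(current(flat_phase)).max())             # ~ 0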
Monday Math 56
January 26, 2009
Consider a sequence of polynomials Pn(x), where Pn is an nth degree polynomial. Let us also have the relationship that $P_n'(x)=nP_{n-1}(x)$. Such a sequence is known as an Appell sequence, and includes a number of notable polynomial sequences (including the Hermite polynomials mentioned in a recent Physics Friday post). Note that the sequence Pn(x)=xn is a trivial example.
Now, we can obtain $P_{n+1}$ from $P_n$ via the relation
$P_{n+1}'(x)=(n+1)P_n(x)$, or, taking the definite integral of both sides, $P_{n+1}(b)-P_{n+1}(a)=(n+1)\int_a^bP_n(t)\,dt$.
Letting our integral limits become x and x+1, we have:
$P_{n+1}(x+1)-P_{n+1}(x)=(n+1)\int_x^{x+1}P_n(t)\,dt$. Now, note that $P_{n+1}(x+1)$ is a degree-(n+1) polynomial with the same coefficient on the $x^{n+1}$ term as $P_{n+1}(x)$, and thus $P_{n+1}(x+1)-P_{n+1}(x)$ must be a polynomial of degree no higher than n; in fact, one can check and find it must be a degree-n polynomial.
So let us then choose the Appell sequence defined by
\int_x^{x+1}B_n(t)\,dt=x^n.
We can compute the first few terms immediately:
$\int_x^{x+1}B_0(t)\,dt=1$ gives us $B_0(x)=1$.
Similarly, we can find $B_1(x)=x-\frac{1}{2}$ and $B_2(x)=x^2-x+\frac{1}{6}$,
and so on.
Note that by putting x=0 in our defining equation, we have $\int_0^1B_n(t)\,dt=0$ for n≥1 (and equal to 1 for n=0).
This sequence is known as the Bernoulli polynomials. Amongst its properties is the symmetry relation
$B_n(1-x)=(-1)^nB_n(x)$,
which tells us that $B_n(1)=(-1)^nB_n(0)$.
By differentiating the defining equation with respect to x, we get:
B_n(x+1)-B_n(x)=nx^{n-1}.
Now, consider the integral $\int_1^{m+1}B_n(t)\,dt$, for m a positive integer. Breaking up the region of integration into unit intervals, we find
\int_1^{m+1}B_n(t)\,dt=\sum_{k=1}^{m}\int_k^{k+1}B_n(t)\,dt,
and using our defining relation $\int_x^{x+1}B_n(t)\,dt=x^n$, we find that
\int_1^{m+1}B_n(t)\,dt=\sum_{k=1}^{m}k^n.
And thus we see:
\sum_{k=1}^{m}k^n=\frac{B_{n+1}(m+1)-B_{n+1}(1)}{n+1}.
This is Faulhaber's formula for the sum of the nth power of the first m integers in terms of the Bernoulli polynomials (see here).
The values of the Bernoulli polynomials at x=0 (the constant term of the polynomial) are known as the Bernoulli numbers: $B_n=B_n(0)$.
The first few values are $B_0=1$, $B_1=-\frac{1}{2}$, $B_2=\frac{1}{6}$, $B_3=0$, $B_4=-\frac{1}{30}$, $B_5=0$, $B_6=\frac{1}{42}$.
For odd n>1, Bn=0; $B_1=-\frac{1}{2}$ is the only non-zero Bernoulli number for odd n. The even Bernoulli numbers, as one may observe from the above examples, alternate in sign.
Now, we recall that $B_n'(x)=nB_{n-1}(x)$.
Thus the coefficient of the $x^k$ term in $B_n(x)$ is $\frac{n}{k}$ times the coefficient of the $x^{k-1}$ term of $B_{n-1}(x)$; thus the coefficient of x in $B_n(x)$ is equal to $nB_{n-1}$. Similarly, the coefficient of $x^2$ in $B_n(x)$ is equal to $\binom{n}{2}B_{n-2}$; the $x^3$ coefficient is $\binom{n}{3}B_{n-3}$, and so on.
Thus we see:
B_n(x)=\sum_{k=0}^{n}\binom{n}{k}B_{n-k}\,x^k
(proof by induction can verify this formula).
From this, and our symmetry relation, we find a recursive relation for the Bernoulli numbers:
\sum_{k=0}^{n}\binom{n+1}{k}B_k=0\quad\text{for }n\ge1,
which can be solved for Bn in terms of B0, B1, B2, …, Bn−1:
B_n=-\frac{1}{n+1}\sum_{k=0}^{n-1}\binom{n+1}{k}B_k,
which, with B0=1, can also be used to define the Bernoulli sequence.
Happy 25th Birthday…
January 24, 2009
…to the Apple Macintosh.
Physics Friday 56
January 23, 2009
Consider a rigid spherical body of density ρ. Now, imagine a test mass m located inside this body at a distance r from the center of the sphere. By the shell theorem, only the mass interior to radius r contributes, so it will experience a gravitational force
F_g=-\frac{Gm}{r^2}\cdot\frac{4}{3}\pi r^3\rho=-\frac{4}{3}\pi G\rho m\,r,
with the negative sign indicating a force opposite in direction to the displacement r from the sphere center (an inward force). We see the force varies linearly with r. Thus, the gradient of the gravity is
\frac{dF_g}{dr}=-\frac{4}{3}\pi G\rho m,
a constant. Here, the negative sign indicates that the gradient of the sphere's gravitational self-attraction compresses the object.
Now, let us consider a mass M with its center of gravity a distance d from the center of our sphere. Let us consider a displacement r from the center of the sphere on the line between the centers of the sphere and M, with positive r being toward the mass M. Then the gravitational force on our test mass due to our external mass is
F_M=\frac{GmM}{(d-r)^2},
where the positive sign indicates the force is in the direction of positive r.
This, in turn, has a gradient
\frac{dF_M}{dr}=\frac{2GmM}{(d-r)^3}\approx\frac{2GmM}{d^3}\qquad(r\ll d).
Here, the positive sign indicates that this tidal gradient puts the object under tension.
Note here that the tidal stretching increases as one approaches the mass M, while the self-attraction compression is constant. In fact, the sum of the two gradients becomes positive, and the tidal force dominates, when
\frac{2GmM}{d^3}>\frac{4}{3}\pi G\rho m,\qquad\text{i.e., when}\qquad d<d_{\text{eff}}=\left(\frac{3(2M)}{4\pi\rho}\right)^{1/3},
where $d_{\text{eff}}$ is the radius of a sphere of density ρ with a mass 2M.
Thus, note that if d<deff, we see that the tidal forces overcome the self-attraction down to the center of the sphere, and thus a stable rigid body held together by gravitational self-attraction alone cannot exist. If our body of mass M is a sphere of radius R, it has density $\rho_M=\frac{3M}{4\pi R^3}$. Thus, writing deff in terms of ρM and R,
d_{\text{eff}}=\left(\frac{2\rho_M}{\rho}\right)^{1/3}R.
A real body would also have a material tensile strength to help hold it together, and also would deform from spherical into tidal bulges under the tidal force; however, there still remains a similar limit, inside which a self-gravitating celestial body (our sphere) orbiting a larger body (the mass M) will disintegrate under the tidal forces. This is called the Roche limit.
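Plugging rough round numbers into d_eff = (2ρM/ρ)^{1/3} R gives a feel for the scale; the Earth and Moon figures below are approximate textbook values, not taken from the text:

# Rigid-body limit for a satellite of density rho held together only by
# self-gravity, orbiting a primary of radius R and density rho_M.
R_earth = 6.371e6         # m   (primary radius; approximate)
rho_earth = 5510.0        # kg/m^3 (primary mean density; approximate)
rho_moon = 3340.0         # kg/m^3 (satellite mean density; approximate)

d_eff = (2 * rho_earth / rho_moon)**(1.0 / 3.0) * R_earth
print(d_eff / 1e3)        # ~ 9500 km, far inside the Moon's ~384,400 km orbit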
Monday Math 55
January 19, 2009
How might one perform the unit-square integral
I(s)=\int_0^1\int_0^1\frac{1-x}{1-xy}\left(-\ln(xy)\right)^s\,dy\,dx\,?
First, let us swap the order of integration:
Next, we approach the inner integral by performing the u-substitution $u=-\ln(xy)$.
The limits become y=0 → u=∞, y=1 → u=-ln(x).
So, with this substitution, we find , and
for 0<x<1, ln(x)<0, so -ln(x)>0; -ln(x)→∞ as x→0, and -ln(1)=0, so the integration region can be seen as between and x=1, u>0. So, reversing the order of the double integral,
We found previously that $\zeta(s)=\frac{1}{\Gamma(s)}\int_0^\infty\frac{u^{s-1}}{e^u-1}\,du$, and the gamma function is defined as $\Gamma(s)=\int_0^\infty u^{s-1}e^{-u}\,du$; this means
\int_0^1\int_0^1\frac{1-x}{1-xy}\left(-\ln(xy)\right)^s\,dy\,dx=\Gamma(s+2)\left(\zeta(s+2)-\frac{1}{s+1}\right).
This is called Hadjicostas's formula, and holds for s any complex number with $\operatorname{Re}(s)>-2$.
Note that as the pole of ζ(s) at s=1 is of order one, the Laurent series is of the form
\zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\gamma_n(s-1)^n,
where the $\gamma_n$ are called the Stieltjes constants, and $\gamma_0=\gamma$, the Euler–Mascheroni constant; the singularity in the term $\zeta(s+2)-\frac{1}{s+1}$ at s=−1 is thus removable, with
\lim_{s\to-1}\left(\zeta(s+2)-\frac{1}{s+1}\right)=\gamma.
Thus we have
\int_0^1\int_0^1\frac{1-x}{(1-xy)\left(-\ln(xy)\right)}\,dy\,dx=\gamma.
Similarly, with the related integral
we can use the same substitution and change of order:
What if we instead try the integral
Performing the same u-substitution:
Reversing the order of the double integration, and performing the inner integral,
We found previously that , so then
(for )
and, in analogy to I(s), we can also find that.
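The swap-the-order machinery used in this post can also be sanity-checked numerically on the simplest unit-square integral of this family, the classic identity ∫₀¹∫₀¹ dx dy/(1−xy) = ζ(2) (chosen here purely for illustration; it is not the integral evaluated above):

from scipy.integrate import dblquad
from math import pi

# The integrand has an integrable singularity at (1,1); dblquad may emit
# an accuracy warning but converges to zeta(2) = pi^2/6.
val, err = dblquad(lambda y, x: 1.0 / (1.0 - x * y), 0.0, 1.0, 0.0, 1.0)
print(val, pi**2 / 6)     # both ~ 1.6449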
Physics Friday 55
January 16, 2009
A cursory examination of the expectation for energy shows that the quantum harmonic oscillator, with potential $V(x)=\frac{1}{2}m\omega^2x^2$, must have a positive ground-state energy. For this potential, the (time-independent) Schrödinger equation becomes
E\psi=-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}+\frac{1}{2}m\omega^2x^2\psi.
Making the ansatz $\psi(x)=h(x)e^{-\frac{m\omega x^2}{2\hbar}}$, and plugging this into our differential equation, we get
-\frac{\hbar^2}{2m}\left(h''-\frac{2m\omega}{\hbar}xh'-\frac{m\omega}{\hbar}h+\frac{m^2\omega^2}{\hbar^2}x^2h\right)e^{-\frac{m\omega x^2}{2\hbar}}+\frac{1}{2}m\omega^2x^2h\,e^{-\frac{m\omega x^2}{2\hbar}}=Eh\,e^{-\frac{m\omega x^2}{2\hbar}},
or, eliminating the common Gaussian factor (the $x^2$ terms cancel),
-\frac{\hbar^2}{2m}h''+\hbar\omega xh'+\frac{\hbar\omega}{2}h=Eh.
Applying the power series method, we plug in $h(x)=\sum_{k=0}^\infty a_kx^k$ to find
\sum_{k=0}^\infty\left[-\frac{\hbar^2}{2m}(k+2)(k+1)a_{k+2}+\hbar\omega\left(k+\frac{1}{2}\right)a_k-Ea_k\right]x^k=0.
Which gives recursion relation
a_{k+2}=\frac{2m}{\hbar^2}\cdot\frac{\hbar\omega\left(k+\frac{1}{2}\right)-E}{(k+1)(k+2)}\,a_k.
Monday Math 54
January 12, 2009
Let us consider an integral of the form
I(\lambda)=\int_a^b e^{-f(x)/\lambda}\,dx,
with parameter λ>0, and where f(x) has a single local minimum in the interval (a,b). The method of steepest descent, also known as saddle-point integration, is a useful method for approximating such an integral in the small-λ limit. Namely, if the local minimum occurs at x=x0, then we can expand f(x) in the Taylor series about this point; as it is a local minimum, f′(x0)=0 and f″(x0)≥0. Here, we assume that this second derivative is nonzero. Thus, our Taylor series is
f(x)=f(x_0)+\tfrac{1}{2}f''(x_0)(x-x_0)^2+\cdots
Now, we note that as λ→0+, the term in the exponent becomes ever more negative, and the integrand approaches zero; the local minimum at x0 is the "slowest" to approach, and thus the function near that point comes to dominate the rest of the integral; and so
I(\lambda)\approx e^{-f(x_0)/\lambda}\int_a^b e^{-\frac{f''(x_0)}{2\lambda}(x-x_0)^2}\,dx.
Now, we see that the last integral is part of a Gaussian integral; the peak of our Gaussian is in the region of integration, and as λ→0+, the Gaussian's width goes to zero as well, so that the tails become negligible in the limit, and
\int_a^b e^{-\frac{f''(x_0)}{2\lambda}(x-x_0)^2}\,dx\approx\int_{-\infty}^{\infty}e^{-\frac{f''(x_0)}{2\lambda}(x-x_0)^2}\,dx.
Now, recall that the integral of the Gaussian over all real numbers is given by
\int_{-\infty}^{\infty}e^{-\alpha u^2}\,du=\sqrt{\frac{\pi}{\alpha}}.
Thus, with $\alpha=\frac{f''(x_0)}{2\lambda}$,
and so
I(\lambda)\approx\sqrt{\frac{2\pi\lambda}{f''(x_0)}}\,e^{-f(x_0)/\lambda}.
Let us perform an example: $n!=\int_0^\infty x^ne^{-x}\,dx$. This does not fit the form as it is, but can be transformed to do so. First, let us make the substitution $x=nu$. Then we have:
n!=n^{n+1}\int_0^\infty u^ne^{-nu}\,du.
Second, we note that for positive u, $u^ne^{-nu}=e^{-n(u-\ln u)}$.
Thus, we have
n!=n^{n+1}\int_0^\infty e^{-(u-\ln u)/(1/n)}\,du,
and the integral above fits our form with $f(u)=u-\ln u$ and $\lambda=\frac{1}{n}$; thus our approximation will be for n→∞. Now, $f'(u)=1-\frac{1}{u}$, which is zero for u=1. $f''(u)=\frac{1}{u^2}$, so $f''(1)=1$, and $f(1)=1$. Thus, our first saddle-point approximation says
\int_0^\infty e^{-n(u-\ln u)}\,du\approx\sqrt{\frac{2\pi}{n}}\,e^{-n},
and so
n!\approx n^{n+1}\sqrt{\frac{2\pi}{n}}\,e^{-n}=\sqrt{2\pi n}\left(\frac{n}{e}\right)^n,
which you may recognize as Stirling’s approximation (see here).
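A quick numeric check of how accurate the leading-order term already is (the ratio approaches 1 from below, roughly like 1 − 1/(12n)):

from math import factorial, sqrt, pi, e

for n in (5, 10, 50):
    stirling = sqrt(2 * pi * n) * (n / e)**n
    print(n, stirling / factorial(n))   # 0.9835, 0.9917, 0.99833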
Physics Friday 54
January 9, 2009
Quantum Mechanics and Momentum
Part 11: Angular Momentum Eigenfunctions
In our previous part, we introduced the ladder operators, and showed that they generate a “ladder” of simulataneous eigenstates to L2 and Lz. Now, we examine the angular momentum operators in spherical coordinates (r,θ,φ).
Using our conversions and chain rule, we find the linear momentum operator components to be:
From these, we find
\hat{L}_x=i\hbar\left(\sin\phi\frac{\partial}{\partial\theta}+\cot\theta\cos\phi\frac{\partial}{\partial\phi}\right),\quad \hat{L}_y=i\hbar\left(-\cos\phi\frac{\partial}{\partial\theta}+\cot\theta\sin\phi\frac{\partial}{\partial\phi}\right),\quad \hat{L}_z=-i\hbar\frac{\partial}{\partial\phi}.
Note that all of these are fully independent of the radial coordinate.
Summing the squares of these, we can find that
\hat{L}^2=-\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right].
This then gives us our eigenvalue relation as being
-\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}\right]=aY.
Compare this to the angular portion of Laplace's equation in spherical coordinates. We see then that the eigenfunctions of L2 are (any radial function times) spherical harmonics:
Y_j^m(\theta,\phi)\propto P_j^m(\cos\theta)\,e^{im\phi},
and to make our wavefunctions 2π-periodic in φ, we require that m (and hence j) be an integer. This means that the eigenvalues of Lz should be integer multiples of ℏ, ranging from –jℏ to jℏ.
Applying $\hat{L}_z=-i\hbar\frac{\partial}{\partial\phi}$ to $Y_j^m$, we find
\hat{L}_zY_j^m=-i\hbar(im)Y_j^m=m\hbar\,Y_j^m,
and so $b=m\hbar$.
Lastly, we note that it can be shown, via more than one method, that if the potential depends purely on the radial coordinate, V(r), so that the problem is spherically symmetric, then the quantum hamiltonian commutes with the angular momentum operators L2 and Lz, so that one may simultaneously specify the energy, the square magnitude of the angular momentum, and the projection onto one axis of the angular momentum. This is very useful in determining the electron wavefunctions for the hydrogen atom.
Monday Math 53
January 5, 2009
Previously, we introduced the spherical harmonics. We note that as written, the spherical harmonic is real for m=0, but complex for m≠0. However, one can find (normalized) linear combinations of and for m≠0 that are real, namely
for m>0 and for m<0. With , we have that the first few real spherical harmonics are:
Let us examine where these are zero. First, is a non-zero constant, and so is zero nowhere. For l=1, we have
, which is zero when , which is the plane ; , which is zero when , which is the plane ; and , which is zero when , which is the plane . Thus, when l=1, we will have a single surface (a plane) on which the harmonic is zero; such a surface is called a node.
We see that for , we have , which gives the cones and ().
when , or on the planes x=0 and z=0.
when , or on the planes y=0 and z=0.
becomes , which becomes the planes x=y and x=-y.
Lastly, becomes , which becomes the planes x=0 and y=0.
In fact, the real harmonics always have l nodes, all of which are vertical planes (constant φ), the plane z=0 (θ=π/2), or cones with the z-axis as their axis (constant θ≠π/2).
Physics Friday 53
January 2, 2009
Quantum Mechanics and Momentum
Part 10: Ladder Operators
Last week, we introduced the quantum angular momentum operator L, and found some properties of its magnitude and components. Most notably, that only one component may be fixed at a time, and that component, usually the z component, and the magnitude squared may be fixed simultaneously. So now, let us look at the common eigenfunctions of L2 and Lz. We want eigenfunctions Y so that $\hat{L}^2Y=aY$ and $\hat{L}_zY=bY$.
Now, let us define two "ladder operators" L+ and L− by
\hat{L}_\pm=\hat{L}_x\pm i\hat{L}_y.
Let us examine the operation of L+ on our Lz eigenvalue equation:
\hat{L}_+\hat{L}_zY=b\,\hat{L}_+Y.
Now, from the definition of the commutator, $\hat{L}_z\hat{L}_+=\hat{L}_+\hat{L}_z+[\hat{L}_z,\hat{L}_+]$,
and from our definition of L+,
[\hat{L}_z,\hat{L}_+]=[\hat{L}_z,\hat{L}_x]+i[\hat{L}_z,\hat{L}_y]=i\hbar\hat{L}_y+i(-i\hbar\hat{L}_x)=\hbar\hat{L}_+.
So $\hat{L}_z\hat{L}_+=\hat{L}_+\hat{L}_z+\hbar\hat{L}_+$, and
\hat{L}_z\left(\hat{L}_+Y\right)=\hat{L}_+\left(\hat{L}_zY\right)+\hbar\hat{L}_+Y=(b+\hbar)\left(\hat{L}_+Y\right),
which tells us that $\hat{L}_+Y$ is also an eigenstate of Lz, with "raised" eigenvalue $b+\hbar$. Thus, we call L+ the "raising operator." Repeated application of this shows that for n applications of the raising operator,
\hat{L}_z\left(\hat{L}_+^nY\right)=(b+n\hbar)\left(\hat{L}_+^nY\right).
Similarly, we can show that $\hat{L}_-=\hat{L}_x-i\hat{L}_y$ gives
\hat{L}_z\left(\hat{L}_-Y\right)=(b-\hbar)\left(\hat{L}_-Y\right),
and so it is called the "lowering operator."
Now, let us examine how L2 behaves on these raised and lowered eigenstates. The commutators of L2 with our raising and lowering operators are
[\hat{L}^2,\hat{L}_\pm]=[\hat{L}^2,\hat{L}_x]\pm i[\hat{L}^2,\hat{L}_y]=0,
and so
\hat{L}^2\left(\hat{L}_\pm Y\right)=\hat{L}_\pm\left(\hat{L}^2Y\right)=a\left(\hat{L}_\pm Y\right),
and the ladder of eigenvalues …, b−2ℏ, b−ℏ, b, b+ℏ, b+2ℏ, … generated by the raising and lowering operators are all eigenstates of L2 with eigenvalue a.
We have eigenfunctions Y, with $\hat{L}^2Y=aY$ and $\hat{L}_zY=bY$, with eigenvalues a and b.
Thus, $\langle\hat{L}^2\rangle=a$ and $\langle\hat{L}_z^2\rangle=b^2$.
Subtracting the latter from the former, and recalling that $\hat{L}^2-\hat{L}_z^2=\hat{L}_x^2+\hat{L}_y^2$, we find:
\langle\hat{L}_x^2+\hat{L}_y^2\rangle=a-b^2.
Note that the middle term above corresponds to a non-negative physical quantity; thus we expect that $a-b^2$ cannot be negative: $b^2\le a$. This, in turn, means that our ladder must be bounded both above and below; there exist $b_{\max}$ and $b_{\min}$ such that $\hat{L}_+Y_{b_{\max}}=0$ and $\hat{L}_-Y_{b_{\min}}=0$.
So then we have $\hat{L}_-\hat{L}_+Y_{b_{\max}}=0$. But
\hat{L}_-\hat{L}_+=\hat{L}^2-\hat{L}_z^2-\hbar\hat{L}_z,
so applying this to $Y_{b_{\max}}$ gives $a-b_{\max}^2-\hbar b_{\max}=0$.
Analogously, using $\hat{L}_+\hat{L}_-Y_{b_{\min}}=0$, and $\hat{L}_+\hat{L}_-=\hat{L}^2-\hat{L}_z^2+\hbar\hat{L}_z$, one finds
a-b_{\min}^2+\hbar b_{\min}=0.
Taking the difference of these two equations, and thus cancelling a, one finds:
\left(b_{\max}+b_{\min}\right)\left(b_{\max}-b_{\min}+\hbar\right)=0\quad\Rightarrow\quad b_{\min}=-b_{\max},
as $b_{\max}\ge b_{\min}$ (so the second factor is positive).
Combine this with the knowledge that all eigenvalues on the ladder are separated by units of ℏ, and we see $b_{\max}-b_{\min}=2b_{\max}=n\hbar$, for some non-negative integer n, and so
$b_{\max}=j\hbar$, $b_{\min}=-j\hbar$, where $j=\frac{n}{2}$,
giving us our eigenvalue ladder $b=-j\hbar,(-j+1)\hbar,\dots,(j-1)\hbar,j\hbar$.
And as $a=b_{\max}^2+\hbar b_{\max}$, we find $a=\hbar^2j(j+1)$.
The One-Dimensional Finite-Difference Time-Domain (FDTD) Algorithm Applied to the Schrödinger Equation
The code below illustrates the use of the FDTD algorithm to solve the one-dimensional Schrödinger equation for simple potentials. It only requires Numpy and Matplotlib.
All the mathematical details are described in this PDF: Schrodinger_FDTD.pdf
In these figures the potential is shaded in arbitrary units in yellow, while the total energy of the wavepacket is plotted as a green line, in the same units as the potential. So while the energy units are not those on the left axis, both energy plots use the same units and can thus be validly compared relative to one another.
Depending on the particle energy, the yellow region may be classically forbidden (when the green line is inside the yellow region).
The wavepacket starts at t=0 as (step potential shown):
And at the end of the simulation it can look like this, depending on the actual potential height:
This illustrates the tunneling through a thin barrier, depending on the barrier height. In the second case, a classical particle would completely bounce off since its energy is lower than the potential barrier:
# Quantum Mechanical Simulation using Finite-Difference
# Time-Domain (FDTD) Method
# This script simulates a probability wave in the presence of multiple
# potentials. The simulation is carried out by using the FDTD algorithm
# applied to the Schrodinger equation. The program is intended to act as
# a demonstration of the FDTD algorithm and can be used as an educational
# aid for quantum mechanics and numerical methods. The simulation
# parameters are defined in the code constants and can be freely
# manipulated to see different behaviors.
# NOTES
# The probability density plots are amplified by a factor for visual
# purposes. The psi_p quantity contains the actual probability density
# without any rescaling.
# BEWARE: The time step, dt, has strict requirements or else the
# simulation becomes unstable.
# The code has three built-in potential functions for demonstration.
# 1) Constant potential: Demonstrates a free particle with dispersion.
# 2) Step potential: Demonstrates transmission and reflection.
# 3) Potential barrier: Demonstrates tunneling.
# By tweaking the height of the potential (V0 below) as well as the
# barrier thickness (THCK below), you can see different behaviors: full
# reflection with no noticeable transmission, transmission and
# reflection, or mostly transmission with tunneling.
# This script requires pylab and numpy to be installed with
# Python or else it will not run.
# Author: James Nagel <>
# 5/25/07
# Updates by Fernando Perez <>, 7/28/07
# Numerical and plotting libraries
import numpy as np
import pylab
# Set pylab to interactive mode so plots update when run outside ipython
pylab.ion()
# Utility functions
# Defines a quick Gaussian pulse function to act as an envelope to the wave
# function.
def Gaussian(x, t, sigma):
    """A Gaussian curve.
    x = variable
    t = shift of the center
    sigma = standard deviation"""
    return np.exp(-(x - t)**2 / (2 * sigma**2))
def free(npts):
    "Free particle."
    return np.zeros(npts)

def step(npts, v0):
    "Potential step."
    v = free(npts)
    v[npts//2:] = v0            # integer division for a valid index
    return v

def barrier(npts, v0, thickness):
    "Barrier potential."
    v = free(npts)
    v[npts//2:npts//2 + thickness] = v0
    return v
def fillax(x, y, *args, **kw):
    """Fill the space between an array of y values and the x axis.
    All args/kwargs are passed to the pylab.fill function.
    Returns the value of the pylab.fill() call.
    """
    xx = np.concatenate((x, np.array([x[-1], x[0]], x.dtype)))
    yy = np.concatenate((y, np.zeros(2, y.dtype)))
    return pylab.fill(xx, yy, *args, **kw)
# Simulation Constants. Be sure to include decimal points on appropriate
# variables so they become floats instead of integers.
N = 1200 # Number of spatial points.
T = 5*N # Number of time steps. 5*N is a nice value for terminating
# before anything reaches the boundaries.
Tp = 50 # Number of time steps to increment before updating the plot.
dx = 1.0e0 # Spatial resolution
m = 1.0e0 # Particle mass
hbar = 1.0e0 # Planck's constant (reduced)
X = dx*np.linspace(0,N,N) # Spatial axis.
# Potential parameters. By playing with the type of potential and the height
# and thickness (for barriers), you'll see the various transmission/reflection
# regimes of quantum mechanical tunneling.
V0 = 1.0e-2 # Potential amplitude (used for steps and barriers)
THCK = 15 # "Thickness" of the potential barrier (if appropriate
# V-function is chosen)
# Uncomment the potential type you want to use here:
# Zero potential, packet propagates freely.
#POTENTIAL = 'free'
# Potential step. The height (V0) of the potential chosen above will determine
# the amount of reflection/transmission you'll observe
POTENTIAL = 'step'
# Potential barrier. Note that BOTH the potential height (V0) and thickness
# of the barrier (THCK) affect the amount of tunneling vs reflection you'll
# observe.
#POTENTIAL = 'barrier'
# Initial wave function constants
sigma = 40.0 # Standard deviation on the Gaussian envelope (remember Heisenberg
# uncertainty).
x0 = round(N/2) - 5*sigma # Time shift
k0 = np.pi/20 # Wavenumber (note that energy is a function of k)
# Energy for a localized gaussian wavepacket interacting with a localized
# potential (so the interaction term can be neglected by computing the energy
# integral over a region where V=0)
E = (hbar**2/2.0/m)*(k0**2+0.5/sigma**2)
# Code begins
# You shouldn't need to change anything below unless you want to actually play
# with the numerical algorithm or modify the plotting.
# Fill in the appropriate potential function (is there a Python equivalent to
# the SWITCH statement?).
if POTENTIAL=='free':
    V = free(N)
elif POTENTIAL=='step':
    V = step(N,V0)
elif POTENTIAL=='barrier':
    V = barrier(N,V0,THCK)
else:
    raise ValueError("Unrecognized potential type: %s" % POTENTIAL)
# More simulation parameters. The maximum stable time step is a function of
# the potential, V.
Vmax = V.max() # Maximum potential of the domain.
dt = hbar/(2*hbar**2/(m*dx**2)+Vmax) # Critical time step.
c1 = hbar*dt/(m*dx**2) # Constant coefficient 1.
c2 = 2*dt/hbar # Constant coefficient 2.
c2V = c2*V # pre-compute outside of update loop
# Print summary info
print('One-dimensional Schrodinger equation - time evolution')
print('Wavepacket energy:   ', E)
print('Potential type:      ', POTENTIAL)
print('Potential height V0: ', V0)
print('Barrier thickness:   ', THCK)
# Wave functions. Three states represent past, present, and future.
psi_r = np.zeros((3,N)) # Real
psi_i = np.zeros((3,N)) # Imaginary
psi_p = np.zeros(N,) # Observable probability (magnitude-squared
# of the complex wave function).
# Temporal indexing constants, used for accessing rows of the wavefunctions.
PA = 0 # Past
PR = 1 # Present
FU = 2 # Future
# Initialize wave function. A present-only state will "split" with half the
# wave function propagating to the left and the other half to the right.
# Including a "past" state will cause it to propagate one way.
xn = range(1, N//2)
x = X[xn]/dx # Normalized position coordinate
gg = Gaussian(x,x0,sigma)
cx = np.cos(k0*x)
sx = np.sin(k0*x)
psi_r[PR,xn] = cx*gg
psi_i[PR,xn] = sx*gg
psi_r[PA,xn] = cx*gg
psi_i[PA,xn] = sx*gg
# Initial normalization of wavefunctions
# Compute the observable probability.
psi_p = psi_r[PR]**2 + psi_i[PR]**2
# Normalize the wave functions so that the total probability in the simulation
# is equal to 1.
P = dx * psi_p.sum() # Total probability.
nrm = np.sqrt(P)
psi_r /= nrm
psi_i /= nrm
psi_p /= P
# Initialize the figure and axes.
xmin = X.min()
xmax = X.max()
ymax = 1.5*(psi_r[PR]).max()
# Initialize the plots with their own line objects. The figures plot MUCH
# faster if you simply update the lines as opposed to redrawing the entire
# figure. For reference, include the potential function as well.
lineR, = pylab.plot(X,psi_r[PR],'b',alpha=0.7,label='Real')
lineI, = pylab.plot(X,psi_i[PR],'r',alpha=0.7,label='Imag')
lineP, = pylab.plot(X,6*psi_p,'k',label='Prob')
pylab.title('Potential height: %.2e' % V0)
# For non-zero potentials, plot them and shade the classically forbidden region
# in light red, as well as drawing a green line at the wavepacket's total
# energy, in the same units the potential is being plotted.
if Vmax != 0:
    # Scaling factor for energies, so they fit in the same plot as the
    # wavefunctions
    Efac = ymax/2.0/Vmax
    V_plot = V*Efac
    pylab.plot(X,V_plot,':k',zorder=0)   # Potential line.
    fillax(X,V_plot, facecolor='y', alpha=0.2,zorder=0)
    # Plot the wavefunction energy, in the same scale as the potential
    pylab.axhline(E*Efac,color='g',label='Energy',zorder=1)
pylab.legend(loc='lower right')
# Pylab appears to reset the xlim after plotting the E line, so fix it back
# manually.
pylab.xlim(xmin,xmax)
# Direct index assignment is MUCH faster than using a spatial FOR loop, so
# these constants are used in the update equations. Remember that Python uses
# zero-based indexing.
IDX1 = range(1,N-1) # psi [ k ]
IDX2 = range(2,N) # psi [ k + 1 ]
IDX3 = range(0,N-2) # psi [ k - 1 ]
for t in range(T+1):
    # Precompute a couple of indexing constants, this speeds up the
    # computation
    psi_rPR = psi_r[PR]
    psi_iPR = psi_i[PR]
    # Apply the update equations.
    psi_i[FU,IDX1] = psi_i[PA,IDX1] + \
                     c1*(psi_rPR[IDX2] - 2*psi_rPR[IDX1] +
                         psi_rPR[IDX3])
    psi_i[FU] -= c2V*psi_r[PR]
    psi_r[FU,IDX1] = psi_r[PA,IDX1] - \
                     c1*(psi_iPR[IDX2] - 2*psi_iPR[IDX1] +
                         psi_iPR[IDX3])
    psi_r[FU] += c2V*psi_i[PR]
    # Increment the time steps. PR -> PA and FU -> PR
    psi_r[PA] = psi_rPR
    psi_r[PR] = psi_r[FU]
    psi_i[PA] = psi_iPR
    psi_i[PR] = psi_i[FU]
    # Only plot after a few iterations to make the simulation run faster.
    if t % Tp == 0:
        # Compute observable probability for the plot.
        psi_p = psi_r[PR]**2 + psi_i[PR]**2
        # Update the plots. Note: we plot the probability density amplified
        # by a factor so it's a bit easier to see.
        lineR.set_data(X, psi_r[PR])
        lineI.set_data(X, psi_i[PR])
        lineP.set_data(X, 6*psi_p)
        pylab.draw()
        pylab.pause(1e-3)
# So the windows don't auto-close at the end if run outside ipython
pylab.ioff()
pylab.show()
This month I am at MSRI, for the programs of Ergodic Theory and Additive Combinatorics, and Analysis on Singular Spaces, that are currently ongoing here. This week I am giving three lectures on the correspondence principle, and on finitary versions of ergodic theory, for the introductory workshop in the former program. The article here broadly describes the content of these talks (which are slightly different in theme from what was announced in the abstract, due to some recent developments). [These lectures were also recorded on video and should be available from the MSRI web site within a few months.]
As many readers may already know, my good friend and fellow mathematical blogger Tim Gowers, having wrapped up work on the Princeton Companion to Mathematics (which I believe is now in press), has begun another mathematical initiative, namely a “Tricks Wiki” to act as a repository for mathematical tricks and techniques. Tim has already started the ball rolling with several seed articles on his own blog, and asked me to also contribute some articles. (As I understand it, these articles will be migrated to the Wiki in a few months, once it is fully set up, and then they will evolve with edits and contributions by anyone who wishes to pitch in, in the spirit of Wikipedia; in particular, articles are not intended to be permanently authored or signed by any single contributor.)
So today I’d like to start by extracting some material from an old post of mine on “Amplification, arbitrage, and the tensor power trick” (as well as from some of the comments), and converting it to the Tricks Wiki format, while also taking the opportunity to add a few more examples.
Title: The tensor power trick
Quick description: If one wants to prove an inequality X \leq Y for some non-negative quantities X, Y, but can only see how to prove a quasi-inequality X \leq CY that loses a multiplicative constant C, then try to replace all objects involved in the problem by “tensor powers” of themselves and apply the quasi-inequality to those powers. If all goes well, one can show that X^M \leq C Y^M for all M \geq 1, with a constant C which is independent of M, which implies that X \leq Y as desired by taking M^{th} roots and then taking limits as M \to \infty.
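To spell out the mechanism in a toy setting (an idealised sketch; real applications require finding the right notion of "tensor power"): suppose the quantities X, Y are functions of an object f which are multiplicative with respect to tensor powers, thus X(f^{\otimes M}) = X(f)^M and Y(f^{\otimes M}) = Y(f)^M, and that one can prove the lossy bound X(g) \leq C Y(g) uniformly in g. Applying this bound with g = f^{\otimes M} gives

\displaystyle X(f)^M = X(f^{\otimes M}) \leq C Y(f^{\otimes M}) = C Y(f)^M

and hence X(f) \leq C^{1/M} Y(f); letting M \to \infty sends C^{1/M} to 1 and recovers the lossless bound X(f) \leq Y(f).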
Jim Colliander, Mark Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper “Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation“, which we have submitted to Inventiones Mathematicae. This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation
-i u_t + \Delta u = |u|^2 u (1)
in two spatial dimensions, thus u is a function from {\Bbb R} \times {\Bbb T}^2 to {\Bbb C}. This equation has three important conserved quantities: the mass
M(u) = M(u(t)) := \int_{{\Bbb T}^2} |u(t,x)|^2\ dx
the momentum
\vec p(u) = \vec p(u(t)) = \int_{{\Bbb T}^2} \hbox{Im}( \nabla u(t,x) \overline{u(t,x)} )\ dx
and the energy
E(u) = E(u(t)) := \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\ dx.
(These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether’s theorem.) Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth u_0: {\Bbb T}^2 \to {\Bbb C} there is a unique global smooth solution u: {\Bbb R} \times {\Bbb T}^2 \to {\Bbb C} to (1) with initial data u(0,x) = u_0(x), whose mass, momentum, and energy remain constant for all time.
However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time. In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity. This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically).
To illustrate how this can happen, let us normalise the torus as {\Bbb T}^2 = ({\Bbb R}/2\pi {\Bbb Z})^2. A simple example of a frequency cascade would be a scenario in which solution u(t,x) = u(t,x_1,x_2) starts off at a low frequency at time zero, e.g. u(0,x) = A e^{i x_1} for some constant amplitude A, and ends up at a high frequency at a later time T, e.g. u(T,x) = A e^{i N x_1} for some large frequency N. This scenario is consistent with conservation of mass, but not conservation of energy or momentum and thus does not actually occur for solutions to (1). A more complicated example would be a solution supported on two low frequencies at time zero, e.g. u(0,x) = A e^{ix_1} + A e^{-ix_1}, and ends up at two high frequencies later, e.g. u(T,x) = A e^{iNx_1} + A e^{-iNx_1}. This scenario is consistent with conservation of mass and momentum, but not energy. Finally, consider the scenario which starts off at u(0,x) = A e^{i Nx_1} + A e^{iNx_2} and ends up at u(T,x) = A + A e^{i(N x_1 + N x_2)}. This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency N and ends up with half of its mass at the slightly higher frequency \sqrt{2} N, with the other half of its mass at the zero frequency. More generally, given four frequencies n_1, n_2, n_3, n_4 \in {\Bbb Z}^2 which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies n_1, n_3 and propagates to frequencies n_2, n_4.
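One can verify the compatibility of the last scenario with all three conservation laws directly (a quick back-of-the-envelope check, using the orthogonality of the Fourier modes e^{in \cdot x} on the torus): both A e^{iNx_1} + A e^{iNx_2} and A + A e^{i(Nx_1 + Nx_2)} have mass 2 (2\pi)^2 A^2 and momentum (2\pi)^2 A^2 (N,N), while the kinetic part of the energy is

\displaystyle \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u|^2\ dx = \frac{(2\pi)^2 A^2}{2} (|n_1|^2 + |n_2|^2) = (2\pi)^2 A^2 N^2

in both states, where n_1, n_2 are the two frequencies present, since |(N,0)|^2 + |(0,N)|^2 = |(0,0)|^2 + |(N,N)|^2 = 2N^2; the potential term \frac{1}{4} \int_{{\Bbb T}^2} |u|^4\ dx evaluates to \frac{3}{2} (2\pi)^2 A^4 for either superposition of two equal-amplitude modes.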
One way to measure a frequency cascade quantitatively is to use the Sobolev norms H^s({\Bbb T}^2) for s > 1; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large. (Note that mass and energy conservation ensure that the H^s({\Bbb T}^2) norms stay bounded for 0 \leq s \leq 1.) For instance, in the cascade from u(0,x) = A e^{i Nx_1} + A e^{iNx_2} to u(T,x) = A + A e^{i(N x_1 + N x_2)}, the H^s({\Bbb T}^2) norm is roughly 2^{1/2} A N^s at time zero and 2^{s/2} A N^s at time T, leading to a slight increase in that norm for s > 1. Numerical evidence then suggests the following
Conjecture. (Weak turbulence) There exist smooth solutions u(t,x) to (1) such that \|u(t)\|_{H^s({\Bbb T}^2)} goes to infinity as t \to \infty for any s > 1.
We were not able to establish this conjecture, but we have the following partial result (“weak weak turbulence”, if you will):
Theorem. Given any \varepsilon > 0, K > 0, s > 1, there exists a smooth solution u(t,x) to (1) such that \|u(0)\|_{H^s({\Bbb T}^2)} \leq \varepsilon and \|u(T)\|_{H^s({\Bbb T}^2)} > K for some time T.
This is in marked contrast to (1) in one spatial dimension {\Bbb T}, which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum which serve to keep all H^s({\Bbb T}^2) norms bounded in time. It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture). Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear solution, even for small data. (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio \|u(T)\|_{H^s({\Bbb T}^2)} / \|u(0)\|_{H^s({\Bbb T}^2)} can be made arbitrarily large when s > 1, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.) Intuitively, the problem is that the torus is compact and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.
Prodded by several comments, I have finally decided to write up some of my thoughts on time management here. I actually have been drafting something about this subject for a while, but I soon realised that my own experience with time management is still very much a work in progress (you should see my backlog of papers that need writing up) and I don’t yet have a coherent or definitive philosophy on this topic (other than my advice on writing papers, for instance my page on rapid prototyping). Also, I can only talk about my own personal experiences, which probably do not generalise to all personality types or work situations, though perhaps readers may wish to contribute their own thoughts, experiences, or suggestions in the comments here. [I should also add that I don’t always follow my own advice on these matters, often to my own regret.]
I can maybe make some unorganised comments, though. Firstly, I am very lucky to have some excellent collaborators who put a lot of effort into our joint papers; many of the papers appearing recently on this blog, for instance, were to a large extent handled by co-authors. Generally, I find that papers written in collaboration take longer than singly-authored papers, but the net effort expended per author is significantly less (and the quality of writing higher). Also, I find that I can work on many joint papers in parallel (since the ball is often in another co-author’s court, or is pending some other development), but only on one single-authored paper at a time.
[For reasons having to do with the academic calendar, many more of these papers get finished during the summer than any other time of year, but many of these projects have actually been gestating for quite some time. (There should be a joint paper appearing shortly which we have been working on for about three or four years, for instance; and I have been thinking about the global regularity problem for wave maps on and off (mostly off) since about 2000.) So a paper being released every week does not actually correspond to a week being the time needed to conceive and then write up a paper; there is in fact quite a long pipeline of development which mostly happens out of public view.]
I have just uploaded to the arXiv the third installment of my “heatwave” project, entitled “Global regularity of wave maps V. Large data local well-posedness in the energy class“. This (rather technical) paper establishes another of the key ingredients necessary to establish the global existence of smooth wave maps from 2+1-dimensional spacetime {\Bbb R}^{1+2} to hyperbolic space \mathbf{H} = \mathbf{H}^m. Specifically, a large data local well-posedness result is established, constructing a local solution from any initial data with finite (but possibly quite large) energy, and furthermore showing that the solution depends continuously on the initial data in the energy topology. (This topology was constructed in my previous paper.) Once one has this result, the only remaining task is to show a “Palais-Smale property” for wave maps, in that if singularities form in the wave maps equation, then there exists a non-trivial minimal-energy blowup solution, whose orbit is almost periodic modulo the symmetries of the equation. I anticipate this to be the most difficult component of the whole project; it is the subject of the fourth (and hopefully final) installment of this series.
This local result is closely related to the small energy global regularity theory developed in recent years by myself, by Krieger, and by Tataru; in particular, it relies on the complicated function spaces used in that theory (which ultimately originate from a precursor paper of Tataru). The main new difficulties here are to extend the small energy theory to large energy (by localising time suitably), and to establish continuous dependence on the data (i.e. two solutions which are initially close in the energy topology need to stay close in that topology). The former difficulty is in principle manageable by exploiting finite speed of propagation (using the fact (arising from the monotone convergence theorem) that large energy data becomes small energy data at sufficiently small spatial scales), but for technical reasons (having to do with my choice of gauge) I was not able to do this and had to deal with the large energy case directly (and in any case, a genuinely large energy theory is going to be needed to construct the minimal energy blowup solution in the next paper). The latter difficulty is in principle resolvable by adapting the existence theory to differences of solutions, rather than to individual solutions, but the nonlinear choice of gauge adds a rather tedious amount of complexity to the task of making this rigorous. (It may be that simpler gauges, such as the Coulomb gauge, may be usable here, at least in the case m=2 of the hyperbolic plane (cf. the work of Krieger), but such gauges cause additional analytic problems as they do not renormalise the nonlinearity as strongly as the caloric gauge. The paper of Tataru establishes these goals, but assumes an isometric embedding of the target manifold into a Euclidean space, which is unfortunately not available for hyperbolic space targets.)
The main technical difficulty that had to be overcome in the paper was that there were two different time variables t, s (one for the wave maps equation and one for the heat flow), and three types of PDE (hyperbolic, parabolic, and ODE) that one has to solve forward in t, forward in s, and backwards in s respectively. In order to close the argument in the large energy case, this necessitated a rather complicated iteration-type scheme, in which one solved for the caloric gauge, established parabolic regularity estimates for that gauge, propagated a “wave-tension field” by the heat flow, and then solved a wave maps type equation using that field as a forcing term. The argument can eventually be closed using mostly “off-the-shelf” function space estimates from previous papers, but is remarkably lengthy, especially when analysing differences of two solutions. (One drawback of using off-the-shelf estimates, though, is that one does not get particularly good control of the solution over extended periods of time; in particular, the spaces used here cannot detect the decay of the solution over extended periods of time (unlike, say, Strichartz spaces L^q_t L^r_x for q < \infty) and so will not be able to supply the long-time perturbation theory that will be needed in the next paper in this series. I believe I know how to re-engineer these spaces to achieve this, though, and the details should follow in the forthcoming paper.)
Van Vu and I have just uploaded to the arXiv our new paper, “Random matrices: Universality of ESDs and the circular law“, with an appendix by Manjunath Krishnapur (and some numerical data and graphs by Philip Wood). One of the things we do in this paper (which was our original motivation for this project) was to finally establish the endpoint case of the circular law (in both strong and weak forms) for random iid matrices A_n = (a_{ij})_{1 \leq i,j \leq n}, where the coefficients a_{ij} are iid random variables with mean zero and unit variance. (The strong circular law says that with probability 1, the empirical spectral distribution (ESD) of the normalised eigenvalues \frac{1}{\sqrt{n}} \lambda_1, \ldots, \frac{1}{\sqrt{n}} \lambda_n of the matrix A_n converges to the uniform distribution on the unit circle as n \to \infty. The weak circular law asserts the same thing, but with convergence in probability rather than almost sure convergence; this is in complete analogy with the weak and strong law of large numbers, and in fact this law is used in the proof.) In a previous paper, we had established the same claim but under the additional assumption that the (2+\eta)^{th} moment {\Bbb E} |a_{ij}|^{2+\eta} was finite for some \eta > 0; this builds upon a significant body of earlier work by Mehta, Girko, Bai, Bai-Silverstein, Gotze-Tikhomirov, and Pan-Zhou, as discussed in the blog article for the previous paper.
As it turned out, though, in the course of this project we found a more general universality principle (or invariance principle) which implied our results about the circular law, but is perhaps more interesting in its own right. Observe that the statement of the circular law can be split into two sub-statements:
1. (Universality for iid ensembles) In the asymptotic limit n \to \infty, the ESD of the random matrix A_n is independent of the choice of distribution of the coefficients, so long as they are normalised in mean and variance. In particular, the ESD of such a matrix is asymptotically the same as that of a (real or complex) gaussian matrix G_n with the same mean and variance.
2. (Circular law for gaussian matrices) In the asymptotic limit n \to \infty, the ESD of a gaussian matrix G_n converges to the circular law.
The reason we single out the gaussian matrix ensemble G_n is that it has a much richer algebraic structure (for instance, the real (resp. complex) gaussian ensemble is invariant under right and left multiplication by the orthogonal group O(n) (resp. the unitary group U(n))). Because of this, it is possible to compute the eigenvalue distribution very explicitly by algebraic means (for instance, using the machinery of orthogonal polynomials). In particular, the circular law for complex gaussian matrices (Statement 2 above) was established all the way back in 1967 by Mehta, using an explicit formula for the distribution of the ESD in this case due to Ginibre.
These highly algebraic techniques completely break down for more general iid ensembles, such as the Bernoulli ensemble of matrices whose entries are +1 or -1 with an equal probability of each. Nevertheless, it is a remarkable phenomenon – which has been referred to as universality in the literature, for instance in this survey by Deift – that the spectral properties of random matrices for non-algebraic ensembles are in many cases asymptotically indistinguishable in the limit n \to \infty from that of algebraic ensembles with the same mean and variance (i.e. Statement 1 above). One might view this as a sort of “non-Hermitian, non-commutative” analogue of the universality phenomenon represented by the central limit theorem, in which the limiting distribution of a normalised average
\displaystyle \overline{X}_n := \frac{1}{\sqrt{n}} (X_1 + \ldots + X_n ) (1)
of an iid sequence depends only on the mean and variance of the elements of that sequence (assuming of course that these quantities are finite), and not on the underlying distribution. (The Hermitian non-commutative analogue of the CLT is known as Wigner’s semicircular law.)
Previous approaches to the circular law did not build upon the gaussian case, but instead proceeded directly, in particular controlling the ESD of a random matrix A_n via estimates on the Stieltjes transform
\displaystyle \frac{1}{n} \log |\det( \frac{1}{\sqrt{n}} A_n - zI )| (2)
of that matrix for complex numbers z. This method required a combination of delicate analysis (in particular, a bound on the least singular values of \frac{1}{\sqrt{n}} A_n - zI), and algebra (in order to compute and then invert the Stieltjes transform). [As a general rule, and oversimplifying somewhat, algebra tends to be used to control main terms in a computation, while analysis is used to control error terms.]
What we discovered while working on our paper was that the algebra and analysis could be largely decoupled from each other: that one could establish a universality principle (Statement 1 above) by relying primarily on tools from analysis (most notably the bound on least singular values mentioned earlier, but also Talagrand’s concentration of measure inequality, and a universality principle for the singular value distribution of random matrices due to Dozier and Silverstein), so that the algebraic heavy lifting only needs to be done in the gaussian case (Statement 2 above) where the task is greatly simplified by all the additional algebraic structure available in that setting. This suggests a possible strategy to proving other conjectures in random matrices (for instance concerning the eigenvalue spacing distribution of random iid matrices), by first establishing universality to swap the general random matrix ensemble with an algebraic ensemble (without fully understanding the limiting behaviour of either), and then using highly algebraic tools to understand the latter ensemble. (There is now a sophisticated theory in place to deal with the latter task, but the former task – understanding universality – is still only poorly understood in many cases.)
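To see Statement 1 in action numerically (a minimal illustrative sketch, not taken from the paper), one can sample an iid sign matrix, normalise the spectrum by \sqrt{n}, and watch the eigenvalues fill out the unit disk:

import numpy as np

n = 1000
A = np.random.choice([-1.0, 1.0], size=(n, n))  # iid Bernoulli entries: mean 0, variance 1
lam = np.linalg.eigvals(A) / np.sqrt(n)         # normalised eigenvalues
# As n grows, the eigenvalues equidistribute on the unit disk (circular law);
# the same picture appears for any mean-zero, unit-variance entry distribution.
print('fraction inside unit disk:', np.mean(np.abs(lam) <= 1.0))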
Discover Dialogue: Chemist George Whitesides
What is intelligence? Where does life come from? Those are the big questions
By David Ewing Duncan | Wednesday, December 03, 2003
George Whitesides, 64, is the Mallinckrodt Professor of Chemistry at Harvard University. He is a Renaissance thinker whose ideas crisscross scientific disciplines and an outspoken critic who is fond of reminding scientists that they really understand very little. He believes that science is rapidly changing from separate studies of biology, chemistry, and physics to a new discipline that combines all three. Although he tries to confine his research to basic science, he holds many patents and has spun off companies, including several working on soft lithography, a method for building nanostuff.
Do you have a word for what you do?
W: I don’t have a word for it. We apply physical science to biology, we apply physics to materials science, we think about chemical principles in microfluidic devices. We are working in three or four areas. There is nanotech, where the thrust is to develop methods that enable people to make small structures easily. Then there are emergence, complexity, and self-assembly, the notion of complex systems putting themselves together and developing characteristic behaviors. This area of complexity is to me one of the big areas of science, whether the subject is the power breakdown on the East Coast or how a reactor works. The last area is tools for analyzing the cell, understanding how to control it.
So you’re a tinkerer?
W: Tinkerer is a funny word because it has in some sense an Edisonian quality, and an ideal project for our group is something in which we start with a question that’s fundamental science, such as: Where does lightning come from? And we try to understand it, and then find out how to apply it, and then build a prototype, and then do research engineering, and 10 years later there’s a start-up.
How did a chemist become an inventor?
W: Part of it is curiosity. Part of it is talking to all sorts of people, trying to consult in various areas. The chemistry is typically only a very small part.
The disciplines—biology, chemistry, physics—seem to be coalescing.
W: It is all beginning to come together. One issue at the university is that you have to have a set of courses that add up to a coherent whole. But the courses are taught along one set of axes, and research is being done along a completely different set of axes now. It is an interesting disconnect, and a big problem. What do I teach? I teach a general course on molecular biology for anyone who wants to know about molecular biology. At the end of the course you have a pretty good idea what’s going on. And then I teach another, more specialized course for chemists.
The place where this kind of education really goes on is with graduate students because when they eventually get jobs, it is in chemistry departments, and bioengineering departments, and chemical engineering departments, and biology departments—all over the map.
Some of them go to biotech companies . . .
W: Yes, although many students are not attracted to big companies these days. If you are in electronics you may still have to go to the big companies because the best possible talent is still in electronics and telecommunications. But in biotechnology there are still many interesting opportunities in start-ups.
Who is helping you?
W: The business of the lab is doing first experiments. And that requires a big group with a lot of skills, imaginative people. The graduate students are mostly chemists and materials scientists, with a few biologists; the postdoctoral students are everything—they’re electrical engineers, and chemical engineers, and physicists, and biologists, and M.D.-Ph.D.s.
Is a lot of time spent sitting around and chewing over things?
W: Particularly at the beginning of projects. Once something is started and the experiments are working, then that becomes, in a proper sense, more normal science, where one can proceed according to principles that are well understood. But the business of figuring out where to get a foothold in a new area can get pretty difficult. It requires a fair amount of intuition. It’s one place where someone older can make a contribution. I’ve seen a lot of projects start and fail and succeed and so on and so forth. Often you have some helpful instinct to bring to the story.
What do you mean by “complexity”?
W: Take the weather, or take air traffic control, or take electric power grids. They are complex systems with components that interact with one another. Things happen in these systems that we really, really do not understand.
Can you describe an example of a complexity experiment?
W: This is more of a physics experiment than biology: Take a polystyrene dish and put a bunch of little steel balls into it, then put a magnet underneath. The balls kind of follow the poles of the magnet around and roll around on the inside of the dish in a circle. Then they do something that to me is really quite amazing: That disorganized cloud resolves itself into a series of concentric rings of balls, all of them following one another. The individual balls also resolve themselves into a very highly ordered pattern, so they are equally spaced in each ring, and the rings are equally spaced relative to one another. The whole system spontaneously organizes itself in a very complicated way. Then after about 10 minutes the whole thing suddenly freezes onto the surface. How does that happen?
I find it immensely interesting because what’s going on there relates to such problems as where lightning comes from. Turns out we don’t know. What if you shuffle your feet across the carpet and there’s a spark? Where does that come from? We don’t know really where that comes from either. The world is just full of neat stuff that we don’t know the origins of.
Are you trying to come up with some basic physical explanations for where things come from?
W: When you look at systems that self-organize into complex behaviors and understand what the rules are, the bigger question becomes whether there are situations in which your understanding of the individual cases can help you predict the behavior of the complex system. Are there layers of complexity that are not predictable? This turns out to be very important in biology. The genomics guys will tell you that by understanding the gene you understand the cell. Then there are people—I happen to be one of them—who say that the gene is interesting but probably largely irrelevant to much of what goes on in the cell. So what strategy do you use to understand a system that is as complicated as an organism? Do you look at behavior and try to understand that, or do you really follow science’s historically honored reductionist approach and pick it apart from the bottom and then rebuild it from there? Science has tended to do both, but for the last 30 to 40 years in biology there has been a great enthusiasm for reductionism.
So you’re using the really small to predict the really large?
W: Yes. I’m not sure that in my lifetime we will make these connections across all scales, but recently we have been working with things that are small. The nice thing about small things is that it enables you to look for new phenomena, and it enables you to put a lot of information into a small area. In a practical sense you can make a laboratory on a chip kind of thing.
Will we ever be able to understand these systems?
W: The question is: If you have a system with enough moving parts, can you really predict all of its behaviors? Is there something that these systems share in common that would form a new kind of science, or is it all idiosyncratic and special? And I don’t know the answer to that right now.
What about extraordinarily complex questions, such as the origin of life?
W: This notion of life as a traceable continuum with increasing complexity is something that I think is going to be pretty troubling to work out.
With so little understood, it doesn’t sound like this is the end of science, as science writer John Horgan proposed in his landmark book.
W: It should have been called The End of Physics. I mean, the big questions to me are: What is intelligence? Where does life come from? What is self-awareness? These are questions that are at least as interesting as questions like: Why do things fall down? Those who are Horgan physicists will tell you that there can’t be anything fundamentally new, that it’s all built into the Schrödinger equation. But the fact of the matter is that’s not a useful statement because the Schrödinger equation is terrific for the hydrogen atom and pretty much useless for everything else. It has no predictive value. I don’t think that any amount of information I have on the laws of attraction between atoms is going to tell me why someone plays the piano particularly well.
Why is a simple cell so fascinating to you?
W: What can be more interesting than life? What is a cell? What is an animal? Life in a certain sense is a sack of chemical reactions. That’s one way to look at it. Another is to say that it’s an entity that is compartmentalized, energy dissipating, adaptive, and self-replicating. A third way of looking at it is to say it’s a network of catalytic reactions, and amazingly, it replicates itself. But I haven’t said anything yet. I’ve just given names to things I don’t understand.
What do we understand?
W: We understand a lot about chemistry and physics, and some about biology. But even here there is a lot we don’t understand intuitively. You know, chemistry has been around for a while, and we all know about covalent bonds. Except it turns out that we don’t really understand covalent bonds at all. Can I predict the bond energy of H2O? The answer is: exactly. And the computer that has predicted this took a series of integrals that I understand conceptually, which contain all sorts of different terms—nuclear repulsion, delocalization, this and that; it’s taken all of that and come up with the right answer. But I actually don’t understand it. And that’s the simplest molecule. What about a cell?
Then how can you hope to comprehend it?
W: We’re going to start to try to make things that have primitive, lifelike characteristics. If you can make something that replicates itself and has some of these lifelike characteristics, then perhaps one is moving in the right direction. And I think we do understand in principle many things about how a cell works. But in practice, we don’t.
Are we making any progress with the promise of nanotechnology?
W: There is evolutionary nanotechnology. There are people at places like Texas Instruments and Intel doing clever engineering to make things smaller and smaller. They have working microprocessors with “wires” that are only 90 nanometers across.
What about the more revolutionary stuff?
W: Interestingly, the area where there has been the most activity and the most progress has been in the interface between chemistry and materials science. There are many new kinds of materials. There are quantum dots for fluorescence and color, and buckyballs for electrical conductivity. There’s an absolute explosion of people making structures that are very, very small.
Why is it all taking so long?
W: I sense a little impatience in some people about the progression of nanotechnology, but if you take biotechnology, we didn’t have just one or two things that did the job. It wasn’t just the double helix, it was the double helix and the polymerase chain reaction and cloning, and on and on. Many different technologies have to be on the shelf for an area of technology to explode with applications.
You have suggested a kind of periodic table for clusters of atoms that might be used as basic building materials.
W: Yes, maybe these structures—clusters of atoms—form a sort of a periodic table of their own, and there are a series of elements that are clusters of atoms rather than being individual atoms because you can begin to make matter in new ways out of them. Why would you want to do that? The answer is, we don’t know right now. But there is a wonderful opportunity for discovery. I’m willing to predict with great confidence that over the course of 20 years this is going to produce stuff that’s very useful.
Are you also studying tiny structures in biology to get nano ideas from nature?
W: Yes. When one thinks about what’s discussed in nanotechnology, it is little motors and little batteries and things like that—those are already in cells. But they don’t exist at all in the way that futurists have thought about them, as devices with little motors and propellers. Bacteria use tails—flagella—which are a lot different from propellers. So one of the most interesting parts of the science in this area is to look at the biology from the vantage of an engineer, to say: Here’s a rotary motor, here’s how batteries operate in that world. How can I mimic those ideas outside a cell?
Do you worry about any of this going wrong? Did you read Michael Crichton’s book Prey, about nanotech machines running amok?
W: The difficulty with ideas like Prey is that they tend to ignore some very basic notions, like the logic of size. You’ve got to have these nano devices powered, and they have to have a certain size to talk to one another. It is not an accident that bacteria are one micron by three microns, not nanometers. It takes about that much space, even at a molecular level, to store the information in the machinery to make them do the things that they do. It’s also not an accident that they eat glucose—they need power.
What about futurist Bill Joy’s warning: that self-replicating nanodevices could proliferate and consume the planet, turning it into a gray goo?
W: As far as I can see, it’s complete nonsense. If there were new kinds of self-replicators, there might be a problem. But there are not. The level of hard science in these ideas is really very low.
Cheat Sheet
Quantum Physics For Dummies
In dabbling in quantum physics, you come across spin operators and commutation relationships, and many formulae, principles, and effects named for people such as the Hamiltonian, the Heisenberg Uncertainty Principle, the Schrödinger Equation, and the Compton Effect.
Quantum Physics and the Hamiltonian
One of the central problems of quantum mechanics is to calculate the energy levels of a system. The energy operator, called the Hamiltonian, abbreviated H, gives you the total energy. Finding the energy levels of a system breaks down to finding the eigenvalues of the problem

H \left| \psi \right\rangle = E \left| \psi \right\rangle

In matrix form, the eigenvalues E can be found by solving the secular equation:

\det(H - E I) = 0
Quantum Physics and the Heisenberg Uncertainty Principle
The Heisenberg Uncertainty Principle says that the product of the uncertainties in position and momentum is bounded below. This relation holds for all three dimensions:

\Delta x\, \Delta p_x \geq \frac{\hbar}{2} \qquad \Delta y\, \Delta p_y \geq \frac{\hbar}{2} \qquad \Delta z\, \Delta p_z \geq \frac{\hbar}{2}
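To get a feel for the scale involved, here is a tiny illustrative computation (not part of the original cheat sheet):

hbar = 1.054571817e-34    # reduced Planck constant, J*s

dx = 1e-10                # position known to about one atomic diameter, m
dp_min = hbar / (2 * dx)  # smallest momentum uncertainty the principle allows
print(dp_min)             # ~5.3e-25 kg*m/s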
Quantum Physics and the Schrödinger Equation
When a quantum mechanical state can be described by a wave function \psi(x), then \psi(x) is a solution of the Schrödinger equation, which is written in terms of the potential V(x) and energy E like so:

-\frac{\hbar^2}{2m} \frac{d^2 \psi(x)}{dx^2} + V(x)\, \psi(x) = E\, \psi(x)

The Schrödinger equation works in three dimensions as well:

-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}) + V(\mathbf{r})\, \psi(\mathbf{r}) = E\, \psi(\mathbf{r})
Spin Operators and Commutation in Quantum Physics

The spin operators S_x, S_y, S_z satisfy the standard angular momentum commutation relations (and each commutes with the total spin S^2):

[S_x, S_y] = i\hbar S_z \qquad [S_y, S_z] = i\hbar S_x \qquad [S_z, S_x] = i\hbar S_y \qquad [S^2, S_i] = 0
Quantum Physics and the Compton Effect
In quantum physics, you may deal with the Compton effect of X-ray and gamma-ray scattering in matter. To calculate these effects, use the following formula, which assumes that the light is represented by a photon with energy E = h\nu and that its momentum is p = E/c:

\lambda' - \lambda = \frac{h}{m_e c} \left( 1 - \cos\theta \right)
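For example, a short numerical check (an illustrative sketch; the constants are standard SI values):

import math

h = 6.62607015e-34       # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron rest mass, kg
c = 2.99792458e8         # speed of light, m/s

theta = math.pi / 2      # photon scattered through 90 degrees
shift = (h / (m_e * c)) * (1 - math.cos(theta))
print(shift)             # ~2.43e-12 m: one electron Compton wavelength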
Molecular orbital diagram
A molecular orbital diagram, or MO diagram, is a qualitative descriptive tool explaining chemical bonding in molecules in terms of molecular orbital theory in general and the linear combination of atomic orbitals (LCAO) molecular orbital method in particular.[1][2][3] A fundamental principle of these theories is that as atoms bond to form molecules, a certain number of atomic orbitals combine to form the same number of molecular orbitals, although the electrons involved may be redistributed among the orbitals. This tool is very well suited for simple diatomic molecules such as dihydrogen, dioxygen, and carbon monoxide but becomes more complex when discussing even comparatively simple polyatomic molecules, such as methane. MO diagrams can explain why some molecules exist and others do not. They can also predict bond strength, as well as the electronic transitions that can take place.
Qualitative MO theory was introduced in 1928 by Robert S. Mulliken [4][5] and Friedrich Hund.[6] A mathematical description was provided by contributions from Douglas Hartree in 1928 [7] and Vladimir Fock in 1930.[8]
Molecular orbital diagrams are diagrams of molecular orbital (MO) energy levels, shown as short horizontal lines in the center, flanked by constituent atomic orbital (AO) energy levels for comparison, with the energy levels increasing from the bottom to the top. Lines, often dashed diagonal lines, connect MO levels with their constituent AO levels. Degenerate energy levels are commonly shown side by side. Appropriate AO and MO levels are filled with electrons symbolized by small vertical arrows, whose directions indicate the electron spins. The AO or MO shapes themselves are often not shown on these diagrams. For a diatomic molecule, an MO diagram effectively shows the energetics of the bond between the two atoms, whose AO unbonded energies are shown on the sides. For simple polyatomic molecules with a "central atom" such as methane (CH4) or carbon dioxide (CO2), a MO diagram may show one of the identical bonds to the central atom. For other polyatomic molecules, an MO diagram may show one or more bonds of interest in the molecules, leaving others out for simplicity. Often even for simple molecules, AO and MO levels of inner orbitals and their electrons may be omitted from a diagram for simplicity.
In MO theory molecular orbitals form by the overlap of atomic orbitals. The atomic orbital energy correlates with electronegativity as more electronegative atoms hold their electrons more tightly, lowering their energies. MO modelling is only valid when the atomic orbitals have comparable energy; when the energies differ greatly the mode of bonding becomes ionic. A second condition for overlapping atomic orbitals is that they have the same symmetry.
MO diagram for dihydrogen. Here electrons are shown by dots.
Two atomic orbitals can overlap in two ways depending on their phase relationship. The phase of an orbital is a direct consequence of the wave-like properties of electrons. In graphical representations of orbitals, orbital phase is depicted either by a plus or minus sign (which has no relationship to electric charge) or by shading one lobe. The sign of the phase itself does not have physical meaning except when mixing orbitals to form molecular orbitals.
Two same-sign orbitals have a constructive overlap forming a molecular orbital with the bulk of the electron density located between the two nuclei. This MO is called the bonding orbital and its energy is lower than that of the original atomic orbitals. A bond involving molecular orbitals which are symmetric with respect to rotation around the bond axis is called a sigma bond (σ-bond). If the phase changes, the bond becomes a pi bond (π-bond). Symmetry labels are further defined by whether the orbital maintains its original character after an inversion about its center; if it does, it is defined gerade, g. If the orbital does not maintain its original character, it is ungerade, u.
Atomic orbitals can also interact with each other out-of-phase which leads to destructive cancellation and no electron density between the two nuclei at the so-called nodal plane depicted as a perpendicular dashed line. In this anti-bonding MO with energy much higher than the original AO's, any electrons present are located in lobes pointing away from the central internuclear axis. For a corresponding σ-bonding orbital, such an orbital would be symmetrical but differentiated from it by an asterisk as in σ*. For a π-bond, corresponding bonding and antibonding orbitals would not have such symmetry around the bond axis and be designated π and π*, respectively.
The next step in constructing an MO diagram is filling the newly formed molecular orbitals with electrons. Three general rules apply:
• The Aufbau principle states that orbitals are filled starting with the lowest energy
• The Pauli exclusion principle states that the maximum number of electrons occupying an orbital is two, with opposite spins
• Hund's rule states that when there are several MO's with equal energy, the electrons occupy the MO's one at a time before two electrons occupy the same MO.
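As a concrete illustration of the three rules, here is a minimal sketch (not part of the original article; the MO labels and Hartree energies for dioxygen are taken from Table 1 further down):

def fill_mos(levels, n_electrons):
    """Fill MOs with electrons: levels is a list of (name, energy, degeneracy).
    Lowest energy first (Aufbau), at most two electrons per orbital (Pauli),
    and degenerate orbitals are singly occupied before any pairing (Hund)."""
    occupancy = {}
    for name, energy, degeneracy in sorted(levels, key=lambda lvl: lvl[1]):
        electrons = [0] * degeneracy
        for pass_number in (1, 2):     # pass 1: single occupation; pass 2: pairing
            for i in range(degeneracy):
                if n_electrons > 0 and electrons[i] == pass_number - 1:
                    electrons[i] += 1
                    n_electrons -= 1
        occupancy[name] = electrons
    return occupancy

# The 12 valence electrons of dioxygen: the two degenerate 1πg orbitals end
# up singly occupied, the paramagnetic diradical discussed later on.
print(fill_mos([('2σg', -1.6488, 1), ('2σu', -1.0987, 1), ('3σg', -0.7358, 1),
                ('1πu', -0.7052, 2), ('1πg', -0.5319, 2)], 12))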
The filled MO highest in energy is called the Highest Occupied Molecular Orbital or HOMO and the empty MO just above it is then the Lowest Unoccupied Molecular Orbital or LUMO. The electrons in the bonding MO's are called bonding electrons and any electrons in the antibonding orbital would be called antibonding electrons. The reduction in energy of these electrons is the driving force for chemical bond formation. Whenever mixing for an atomic orbital is not possible for reasons of symmetry or energy, a non-bonding MO is created, which is often quite similar to and has energy level equal or close to its constituent AO, thus not contributing to bonding energetics. The resulting electron configuration can be described in terms of bond type, parity and occupancy, for example dihydrogen 1σg². Alternatively it can be written as a molecular term symbol, e.g. 1Σg+ for dihydrogen. Sometimes, the letter n is used to designate a non-bonding orbital.
For a stable bond, the bond order, defined as
Bond Order = (number of electrons in bonding MOs − number of electrons in antibonding MOs) / 2
must be positive.
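The arithmetic is trivial; a one-line sketch (illustrative only) applied to diatomics discussed in this article:

def bond_order(n_bonding, n_antibonding):
    """Bond order from counts of electrons in bonding and antibonding MOs."""
    return (n_bonding - n_antibonding) / 2.0

print(bond_order(2, 0))  # dihydrogen -> 1.0
print(bond_order(2, 2))  # dihelium   -> 0.0, so no stable bond
print(bond_order(8, 4))  # dioxygen (valence electrons only) -> 2.0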
The relative order in MO energies and occupancy corresponds with electronic transitions found in photoelectron spectroscopy (PES). In this way it is possible to experimentally verify MO theory. In general, sharp PES transitions indicate nonbonding electrons and broad bands are indicative of bonding and antibonding delocalized electrons. Bands can resolve into fine structure with spacings corresponding to vibrational modes of the molecular cation (see Franck–Condon principle). PES energies are different from ionisation energies, which relate to the energy required to strip off the nth electron after the first n − 1 electrons have been removed. MO diagrams with energy values can be obtained mathematically using the Hartree–Fock method. The starting point for any MO diagram is a predefined molecular geometry for the molecule in question. An exact relationship between geometry and orbital energies is given in Walsh diagrams.
s-p mixing
In molecules, orbitals of the same symmetry are able to mix. As the s-p gap increases (C<N<O<F), such mixing loses its importance, leading to the inversion of 3σg and 1πu MO levels in homonuclear diatomics between N2 and O2.
Diatomic MO diagrams
The smallest molecule, hydrogen gas, exists as dihydrogen (H-H) with a single covalent bond between two hydrogen atoms. As each hydrogen atom has a single 1s atomic orbital for its electron, the bond forms by overlap of these two atomic orbitals. In figure 1 the two atomic orbitals are depicted on the left and on the right. The vertical axis always represents the orbital energies. Each atomic orbital is singly occupied with an up or down arrow representing an electron.
MO diagram of dihydrogen
Application of MO theory for dihydrogen results in having both electrons in the bonding MO with electron configuration 1σg². The bond order for dihydrogen is (2-0)/2 = 1. The photoelectron spectrum of dihydrogen shows a single set of multiplets between 16 and 18 eV (electron volts).[9]
The dihydrogen MO diagram helps explain how a bond breaks. When applying energy to dihydrogen, a molecular electronic transition takes place when one electron in the bonding MO is promoted to the antibonding MO. The result is that there is no longer a net gain in energy.
Bond breaking in MO diagram
Dihelium and diberyllium
Dihelium (He-He) is a hypothetical molecule and MO theory helps to explain why dihelium does not exist in nature. The MO diagram for dihelium looks very similar to that of dihydrogen, but each helium has two electrons in its 1s atomic orbital rather than one for hydrogen, so there are now four electrons to place in the newly formed molecular orbitals.
MO diagram of dihelium
The only way to accomplish this is by occupying both the bonding and antibonding orbitals with two electrons each, which reduces the bond order ((2−2)/2) to zero and cancels the net energy stabilization. However, by removing one electron from dihelium, the stable gas-phase species He2+ ion is formed with bond order 1/2.
Another molecule that is precluded based on this principle is diberyllium. Beryllium has an electron configuration 1s22s2, so there are again two electrons in the valence level. However, the 2s can mix with the 2p orbitals in diberyllium, whereas there are no p orbitals in the valence level of hydrogen or helium. This mixing makes the antibonding 1σu orbital slightly less antibonding than the bonding 1σg orbital is bonding, with a net effect that the whole configuration has a slight bonding nature. Hence the diberyllium molecule exists (and has been observed in the gas phase).[10] It nevertheless still has a low dissociation energy of only 59 kJ·mol−1.[10]
MO theory correctly predicts that dilithium is a stable molecule with bond order 1 (configuration 1σg²1σu²2σg²). The 1s MOs are completely filled and do not participate in bonding.
MO diagram of dilithium
Dilithium is a gas-phase molecule with a much lower bond strength than dihydrogen because the 2s electrons are further removed from the nucleus. In a more detailed analysis both the 1σ orbitals have higher energies than the 1s AO and the occupied 2σ is also higher in energy than the 2s AO (see table 1).
The MO diagram for diboron (B-B electron configuration boron: 1s22s22p1) requires the introduction of an atomic orbital overlap model for p orbitals. The three dumbbell-shaped p-orbitals have equal energy and are oriented mutually perpendicularly (or orthogonally). The p-orbitals oriented in the x-direction (px) can overlap end-on forming a bonding (symmetrical) sigma orbital and an antibonding sigma* molecular orbital. In contrast to the sigma 1s MO's, the sigma 2p has some non-bonding electron density at either side of the nuclei and the sigma* 2p has some electron density between the nuclei.
The other two p-orbitals, py and pz, can overlap side-on. The resulting bonding orbital has its electron density in the shape of two lobes above and below the plane of the molecule. The orbital is not symmetric around the molecular axis and is therefore a pi orbital. The antibonding pi orbital (also asymmetrical) has four lobes pointing away from the nuclei. Both py and pz orbitals form a pair of pi orbitals equal in energy (degenerate) and can have higher or lower energies than that of the sigma orbital.
In diboron the 1s and 2s electrons do not participate in bonding but the single electrons in the 2p orbitals occupy the 2πpy and the 2πpz MO's resulting in bond order 1. Because the electrons have equal energy (they are degenerate) diboron is a diradical and since the spins are parallel the compound is paramagnetic.
MO diagram of diboron
Like diboron, dicarbon (C-C, electron configuration per atom 1s²2s²2p²; MO configuration 2σg²2σu²1πu⁴) is a reactive gas-phase molecule. The molecule can be described as having two pi bonds but without a sigma bond.[11]
The bond order for dinitrogen (2σg²2σu²1πu⁴3σg²) is three because two electrons are now also added in the 3σ MO. The MO diagram correlates with the experimental photoelectron spectrum for nitrogen.[12] The 1σ electrons can be matched to a peak at 410 eV (broad), the 2σg electrons at 37 eV (broad), the 2σu electrons at 19 eV (doublet), the 1πu⁴ electrons at 17 eV (multiplets), and finally the 3σg² at 15.5 eV (sharp).
MO treatment of dioxygen is different from that of the previous diatomic molecules because the pσ MO is now lower in energy than the 2π orbitals. This is attributed to interaction between the 2s MO and the 2pz MO.[13] Distributing 8 electrons over 6 molecular orbitals leaves the final two electrons as a degenerate pair in the 2pπ* antibonding orbitals resulting in a bond order of 2. As in diboron, when these unpaired electrons have the same spin, this type of dioxygen called triplet oxygen is a paramagnetic diradical. When both HOMO electrons pair with opposite spins in one orbital, this other oxygen type is called singlet oxygen.
MO diagram of dioxygen
The bond order decreases and the bond length increases in the order O2+ (112.2 pm), O2 (121 pm), O2− (128 pm) and O22− (149 pm).[13]
Difluorine and dineon
In difluorine two additional electrons occupy the 2pπ* with a bond order of 1. In dineon Ne2 (as with dihelium) the number of bonding electrons equals the number of antibonding electrons and this compound does not exist.
MO energies overview
Table 1 gives an overview of MO energies for first row diatomic molecules calculated by the Hartree-Fock-Roothaan method, together with atomic orbital energies.
Table 1. Calculated MO energies for diatomic molecules in Hartrees [14]
          H2        Li2       B2        C2        N2        O2        F2
1σg      -0.5969   -2.4523   -7.7040  -11.3598  -15.6820  -20.7296  -26.4289
1σu                -2.4520   -7.7032  -11.3575  -15.6783  -20.7286  -26.4286
2σg                -0.1816   -0.7057   -1.0613   -1.4736   -1.6488   -1.7620
2σu                          -0.3637   -0.5172   -0.7780   -1.0987   -1.4997
3σg                                              -0.6350   -0.7358   -0.7504
1πu                          -0.3594   -0.4579   -0.6154   -0.7052   -0.8097
1πg                                                        -0.5319   -0.6682
1s (AO)  -0.5      -2.4778   -7.6953  -11.3255  -15.6289  -20.6686  -26.3829
2s (AO)            -0.1963   -0.4947   -0.7056   -0.9452   -1.2443   -1.5726
2p (AO)                      -0.3099   -0.4333   -0.5677   -0.6319   -0.7300
Heteronuclear diatomics
In heteronuclear diatomic molecules, mixing of atomic orbitals only occurs when the electronegativity values are similar. In carbon monoxide (CO, isoelectronic with dinitrogen) the oxygen 2s orbital is much lower in energy than the carbon 2s orbital and therefore the degree of mixing is low. The electron configuration 1σ²1σ*²2σ²2σ*²1π⁴3σ² is identical to that of nitrogen. The g and u subscripts no longer apply because the molecule lacks a center of symmetry.
In hydrogen fluoride (HF), the hydrogen 1s orbital can mix with the fluorine 2pz orbital to form a sigma bond because experimentally the energy of the hydrogen 1s orbital is comparable with that of the fluorine 2p orbitals. The HF electron configuration 1σ²2σ²3σ²1π⁴ reflects that the other electrons remain in three lone pairs and that the bond order is 1.
Multinuclear molecules
Carbon dioxide
Carbon dioxide, CO2, is a linear molecule with a total of sixteen bonding electrons in its valence shell. Carbon is the central atom of the molecule, and the principal axis, the z-axis, is visualized as a single axis that goes through the center of the carbon atom and the two oxygen atoms. By convention, blue atomic orbital lobes are positive phases and red atomic orbital lobes are negative phases, with respect to the wave function from the solution of the Schrödinger equation.[15] In carbon dioxide the carbon 2s (−19.4 eV), carbon 2p (−10.7 eV), and oxygen 2p (−15.9 eV) energies associated with the atomic orbitals are in proximity, whereas the oxygen 2s energy (−32.4 eV) is different.[16]
Carbon and each oxygen atom will have a 2s atomic orbital and a 2p atomic orbital, where the p orbital is divided into px, py, and pz. With these derived atomic orbitals, symmetry labels are deduced with respect to rotation about the principal axis: orbitals that undergo a phase change under this rotation form pi bonds (π),[17] while orbitals that undergo no phase change form sigma bonds (σ).[18] Symmetry labels are further defined by whether the atomic orbital maintains its original character after an inversion about the center atom: if it does, it is labeled gerade, g; if it does not, it is labeled ungerade, u. The final symmetry-labeled atomic orbital is now known as an irreducible representation.
Carbon dioxide’s molecular orbitals are made by the linear combination of atomic orbitals of the same irreducible representation that are also similar in atomic orbital energy. Significant atomic orbital overlap explains why sp bonding may occur.[19] Strong mixing of the oxygen 2s atomic orbitals is not to be expected; they remain as nonbonding degenerate molecular orbitals. The combination of similar atomic orbital/wave functions and the combination of atomic orbital/wave function inverses create particular energies associated with the nonbonding (no change), bonding (lower than either parent orbital energy), and antibonding (higher than either parent atomic orbital energy) molecular orbitals.
Water
Water (H2O) is a bent molecule (105°) with C2v molecular symmetry. The oxygen atomic orbitals are labeled according to their symmetry as a1 for the 2s orbital, and b1 (2px), b2 (2py) and a1 (2pz) for the four electrons in the 2p orbitals. The two hydrogen 1s orbitals are premixed to form an A1 (bonding) and a B2 (antibonding) MO.
C2v   E    C2   σv(xz)  σv'(yz)  linear, rotations  quadratic
A1    1    1    1       1        z                  x², y², z²
A2    1    1    −1      −1       Rz                 xy
B1    1    −1   1       −1       x, Ry              xz
B2    1    −1   −1      1        y, Rx              yz
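As a small sanity check on the reconstructed character table (my own sketch, not part of the original article), the character rows of C2v are mutually orthogonal, with squared norm equal to the group order |C2v| = 4:

```python
# Verify row orthogonality of the C2v character table over the classes E, C2, σv, σv'.
chars = {"A1": [1, 1, 1, 1], "A2": [1, 1, -1, -1],
         "B1": [1, -1, 1, -1], "B2": [1, -1, -1, 1]}
for a in chars:
    for b in chars:
        dot = sum(x * y for x, y in zip(chars[a], chars[b]))
        assert dot == (4 if a == b else 0)
print("C2v character rows are orthogonal")
```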
Mixing takes place between same-symmetry orbitals of comparable energy, resulting in a new set of MOs for water. The lowest-energy MO, 1a1, resembles the oxygen 2s AO with some mixing with the hydrogen A1 orbital. Next is the 1b2 MO, resulting from mixing of the oxygen b2 AO and the hydrogen B2 orbital, followed by the 2a1 MO created by mixing the a1 orbitals. Both MOs form the oxygen-to-hydrogen sigma bonds. The oxygen b1 AO (the p-orbital perpendicular to the molecular plane) alone forms the 1b1 MO, as it has no partner of the same symmetry to mix with; this MO is nonbonding. In agreement with this description, the photoelectron spectrum for water shows two broad peaks for the 1b2 MO (18.5 eV) and the 2a1 MO (14.5 eV) and a sharp peak for the nonbonding 1b1 MO at 12.5 eV. This MO treatment of water differs from the orbital hybridisation picture because the oxygen atom now has just one lone pair instead of two; contrary to the oversimplified VSEPR picture, water does not have two equivalent lone electron pairs resembling rabbit ears.[20]
Hydrogen sulfide (H2S) also has C2v symmetry with 8 valence electrons, but the bending angle is only 92°. As reflected in its PE spectrum, compared with water the 2a1 MO is stabilised (improved overlap) and the 1b2 MO is destabilized (poorer overlap).
References
1. ^ Clayden, Jonathan; Greeves, Nick; Warren, Stuart; Wothers, Peter (2001). Organic Chemistry (1st ed.). Oxford University Press. pp. 96–103. ISBN 978-0-19-850346-0.
2. ^ Organic Chemistry, Third Edition, Marye Anne Fox, James K. Whitesell, 2003, ISBN 978-0-7637-3586-9
3. ^ Organic Chemistry 3rd Ed. 2001, Paula Yurkanis Bruice, ISBN 0-13-017858-6
4. ^ Mulliken, R. (1928). "The Assignment of Quantum Numbers for Electrons in Molecules. I". Physical Review 32 (2): 186. Bibcode:1928PhRv...32..186M. doi:10.1103/PhysRev.32.186.
5. ^ Mulliken, R. (1928). "Electronic States and Band Spectrum Structure in Diatomic Molecules. VII. P2→S2 and S2→P2 Transitions". Physical Review 32 (3): 388. Bibcode:1928PhRv...32..388M. doi:10.1103/PhysRev.32.388.
6. ^ Hund, F. Z. Physik 1928, 51, 759.
7. ^ Hartree, D. R. Proc. Cambridge. Phil. Soc. 1928, 24, 89
8. ^ Fock, V. Z. Physik 1930, 61, 126
9. ^ hydrogen @ PES database
10. ^ a b Keeler, James; Wothers, Peter (2003). Why Chemical Reactions Happen. Oxford University Press. p. 74. ISBN 9780199249732.
11. ^ Shaik, S., Rzepa, H. S. and Hoffmann, R. (2013), One Molecule, Two Atoms, Three Views, Four Bonds? . Angew. Chem. Int. Ed., 52: 3020–3033. doi:10.1002/anie.201208206
12. ^ Bock, H.; Mollere, P. D. (1974). "Photoelectron spectra. An experimental approach to teaching molecular orbital models". Journal of Chemical Education 51 (8): 506. Bibcode:1974JChEd..51..506B. doi:10.1021/ed051p506.
13. ^ a b Modern Inorganic Chemistry William L. Jolly 1985 ISBN 0-07-032760-2
14. ^ Lawson, D. B.; Harrison, J. F. (2005). "Some Observations on Molecular Orbital Theory". Journal of Chemical Education 82 (8): 1205. doi:10.1021/ed082p1205.
15. ^ Housecroft, C. E.; Sharpe, A. G. (2008). Inorganic Chemistry (3rd ed.). Prentice Hall. p. 9. ISBN 978-0131755536.
16. ^ "An Introduction to Molecular Orbitals". Jean & volatron. ""1993"" ISBN 0-19-506918-8. p.192
17. ^ Housecroft, C. E.; Sharpe, A. G. (2008). Inorganic Chemistry (3rd ed.). Prentice Hall. p. 38. ISBN 978-0131755536.
18. ^ Housecroft, C. E.; Sharpe, A. G. (2008). Inorganic Chemistry (3rd ed.). Prentice Hall. p. 34. ISBN 978-0131755536.
19. ^ Housecroft, C. E.; Sharpe, A. G. (2008). Inorganic Chemistry (3rd ed.). Prentice Hall. p. 33. ISBN 978-0131755536.
20. ^ Laing, Michael (1987). "No rabbit ears on water. The structure of the water molecule: What should we tell the students?". Journal of Chemical Education 64: 124. Bibcode:1987JChEd..64..124L. doi:10.1021/ed064p124.
3 Basic Concepts and Solutions
In this section we present the basic framework for general relativity in higher dimensions, beginning with the definition of conserved charges in vacuum, i.e., mass and angular momentum, and the introduction of a set of dimensionless variables that are convenient for describing the phase space and phase diagrams of higher-dimensional rotating black holes. Then we introduce the Tangherlini solutions that generalize the four-dimensional Schwarzschild solution. The analysis that proves their classical stability is then reviewed. Black strings and black p-branes, and their Gregory–Laflamme instability, are briefly discussed for their relevance to novel kinds of rotating black holes.
3.1 Conserved charges
The Einstein–Hilbert action is generalized to higher dimensions in the form
$$ I = \frac{1}{16\pi G} \int d^d x \,\sqrt{-g}\, R \;+\; I_{\mathrm{matter}}. \tag{5} $$
This is a straightforward generalization, and the only aspect that deserves some attention is the implicit definition of Newton’s constant G in d dimensions. It enters the Einstein equations in the conventional form
$$ R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R = 8\pi G\, T_{\mu\nu}, \tag{6} $$
where $T^{\mu\nu} = 2(-g)^{-1/2}\, \delta I_{\mathrm{matter}}/\delta g_{\mu\nu}$. This definition of the gravitational coupling constant, without any additional dimension-dependent factors, has the notable advantage that the Bekenstein–Hawking entropy formula takes the same form
$$ S = \frac{\mathcal{A}_H}{4G} \tag{7} $$
in every dimension. This follows, e.g., from the standard Euclidean quantum gravity calculation of the entropy.
Mass, angular momenta, and other conserved charges of isolated systems are defined through comparison to the field created near asymptotic infinity by a weakly gravitating system ([154] gives a careful Hamiltonian analysis of conserved charges in higher-dimensional asymptotically-flat spacetimes). The Einstein equations for a small perturbation around flat Minkowski space
$$ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} \tag{8} $$
in linearized approximation take the conventional form
$$ \Box \bar{h}_{\mu\nu} = -16\pi G\, T_{\mu\nu}, \tag{9} $$
where $\bar{h}_{\mu\nu} = h_{\mu\nu} - \frac{1}{2}\, h\, \eta_{\mu\nu}$ and we have imposed the transverse gauge condition $\nabla^\mu \bar{h}_{\mu\nu} = 0$.
Since the sources are localized and we work at linearized perturbation order, the fields in the asymptotic region are the same as those created by point-like sources of mass $M$ and angular momentum with antisymmetric matrix $J_{ij}$, at the origin $x^k = 0$ of flat space in Cartesian coordinates,
$$ T_{tt} = M\, \delta^{(d-1)}(x^k), \tag{10} $$
$$ T_{ti} = -\frac{1}{2}\, J_{ij}\, \nabla_j\, \delta^{(d-1)}(x^k). \tag{11} $$
The equations are easily integrated, assuming stationarity, to find
$$ \bar{h}_{tt} = \frac{16\pi G}{(d-3)\,\Omega_{d-2}}\, \frac{M}{r^{d-3}}, \tag{12} $$
$$ \bar{h}_{ti} = -\frac{8\pi G}{\Omega_{d-2}}\, \frac{x^k J_{ki}}{r^{d-1}}, \tag{13} $$
where $r = \sqrt{x^i x^i}$, and $\Omega_{d-2} = 2\pi^{(d-1)/2}/\Gamma\!\left(\frac{d-1}{2}\right)$ is the area of a unit $(d-2)$-sphere. From here we recover the metric perturbation $h_{\mu\nu} = \bar{h}_{\mu\nu} - \frac{1}{d-2}\, \bar{h}\, \eta_{\mu\nu}$ as
$$ h_{tt} = \frac{16\pi G}{(d-2)\,\Omega_{d-2}}\, \frac{M}{r^{d-3}}, \tag{14} $$
$$ h_{ij} = \frac{16\pi G}{(d-2)(d-3)\,\Omega_{d-2}}\, \frac{M}{r^{d-3}}\, \delta_{ij}, \tag{15} $$
$$ h_{ti} = -\frac{8\pi G}{\Omega_{d-2}}\, \frac{x^k J_{ki}}{r^{d-1}}. \tag{16} $$
It is often convenient to have the off-diagonal rotation components of the metric in a different form. By making a suitable coordinate rotation, the angular momentum matrix $J_{ij}$ can be put into block-diagonal form, each block being a $2 \times 2$ antisymmetric matrix with parameter
$$ J_a \equiv J_{2a-1,\,2a}. \tag{17} $$
Here $a = 1, \ldots, N$ labels the different independent rotation planes. If we introduce polar coordinates on each of the planes,
$$ (x^{2a-1}, x^{2a}) = (r_a \cos\varphi_a,\; r_a \sin\varphi_a), \tag{18} $$
then (no sum over $a$)
$$ h_{t\varphi_a} = -\frac{8\pi G J_a}{\Omega_{d-2}}\, \frac{r_a^2}{r^{d-1}} = -\frac{8\pi G J_a}{\Omega_{d-2}}\, \frac{\mu_a^2}{r^{d-3}}. \tag{19} $$
In the last expression we have introduced the ‘direction cosines’
$$ \mu_a = \frac{r_a}{r}. \tag{20} $$
Given the abundance of black-hole solutions in higher dimensions, one is interested in comparing properties, such as the horizon area $\mathcal{A}_H$, of different solutions characterized by the same set of parameters $(M, J_a)$. A meaningful comparison between dimensionful magnitudes requires the introduction of a common scale, so the comparison is made between dimensionless magnitudes obtained by factoring out this scale. Since classical general relativity in vacuum is scale invariant, the common scale must be one of the physical parameters of the solutions, and a natural choice is the mass. Thus we introduce dimensionless quantities for the spins $j_a$ and the area $a_H$,
$$ j_a^{\,d-3} = c_J\, \frac{J_a^{\,d-3}}{G M^{d-2}}, \qquad a_H^{\,d-3} = c_{\mathcal{A}}\, \frac{\mathcal{A}_H^{\,d-3}}{(G M)^{d-2}}, \tag{21} $$
where the numerical constants are
$$ c_J = \frac{\Omega_{d-3}}{2^{d+1}}\, \frac{(d-2)^{d-2}}{(d-3)^{\frac{d-3}{2}}}, \qquad c_{\mathcal{A}} = \frac{\Omega_{d-3}}{2\,(16\pi)^{d-3}}\, (d-2)^{d-2} \left(\frac{d-4}{d-3}\right)^{\frac{d-3}{2}} \tag{22} $$
(these definitions follow the choices in [80]). Studying the entropy, or the area $\mathcal{A}_H$, as a function of $J_a$ for fixed mass is equivalent to finding the function $a_H(j_a)$.
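As a quick numerical illustration (my own, assuming the reconstruction of Eqs. (21)–(22) above is correct), the purely numerical constants $c_J$ and $c_{\mathcal{A}}$ can be evaluated directly:

```python
# Evaluate the constants c_J and c_A of Eq. (22) for a few dimensions d >= 5.
import math

def unit_sphere_area(n: int) -> float:
    """Area of a unit n-sphere: Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def c_J(d: int) -> float:
    return (unit_sphere_area(d - 3) / 2 ** (d + 1)
            * (d - 2) ** (d - 2) / (d - 3) ** ((d - 3) / 2))

def c_A(d: int) -> float:
    return (unit_sphere_area(d - 3) / (2 * (16 * math.pi) ** (d - 3))
            * (d - 2) ** (d - 2) * ((d - 4) / (d - 3)) ** ((d - 3) / 2))

for d in (5, 6, 7, 10):
    print(d, c_J(d), c_A(d))
```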
Note that, with our definition of the gravitational constant G, both the Newtonian gravitational potential energy,
$$ \Phi = -\frac{1}{2}\, h_{tt}, \tag{23} $$
and the force law (per unit mass)
$$ F = -\nabla \Phi = -\frac{(d-3)\, 8\pi G\, M}{(d-2)\,\Omega_{d-2}\, r^{d-2}}\, \hat{r} \tag{24} $$
acquire d-dependent numerical prefactors. Had we chosen to define Newton’s constant so as to absorb these factors in the expressions for $\Phi$ or $F$, Equation (7) would have been more complicated.
To warm up before dealing with black holes, we follow John Michell and Pierre-Simon de Laplace and compute, using Newtonian mechanics, the radius at which the escape velocity of a test particle in this field reaches the speed of light. The kinetic energy of a particle of unit mass with velocity $v = c = 1$ is $K = 1/2$, so the equation $K + \Phi = 0$ that determines the Michell–Laplace ‘horizon’ radius is
$$ h_{tt}(r = r_{\mathrm{ML}}) = 1 \quad \Rightarrow \quad r_{\mathrm{ML}} = \left( \frac{16\pi G M}{(d-2)\,\Omega_{d-2}} \right)^{\frac{1}{d-3}}. \tag{25} $$
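A short numerical check (my own illustration, in units with $G = M = c = 1$): the Michell–Laplace radius of Eq. (25) equals $\mu^{1/(d-3)}$, where $\mu \equiv 16\pi G M / ((d-2)\,\Omega_{d-2})$ is the combination that reappears as the ‘mass parameter’ of Eq. (28) below, and it reduces to the familiar $r = 2GM$ in $d = 4$.

```python
# Michell-Laplace horizon radius in d spacetime dimensions, Eq. (25).
import math

def unit_sphere_area(n: int) -> float:
    """Area of a unit n-sphere: Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def r_michell_laplace(d: int, G: float = 1.0, M: float = 1.0) -> float:
    mu = 16 * math.pi * G * M / ((d - 2) * unit_sphere_area(d - 2))
    return mu ** (1.0 / (d - 3))

for d in (4, 5, 6, 10):
    print(d, r_michell_laplace(d))
# d = 4 prints 2.0, i.e. the familiar Schwarzschild radius 2GM.
```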
We will see in the next section 3.2 that, just like in four dimensions, this is precisely equal to the horizon radius for a static black hole in higher dimensions.
3.2 The Schwarzschild–Tangherlini solution and black p-branes
Consider the linearized solution above for a static source (14) in spherical coordinates, and pass to a gauge where $r$ is the area radius,
$$ r \to r - \frac{8\pi G}{(d-2)(d-3)\,\Omega_{d-2}}\, \frac{M}{r^{d-3}}. \tag{26} $$
The linearized approximation to the field of a static source is then
$$ ds^2_{(\mathrm{lin})} = -\left(1 - \frac{\mu}{r^{d-3}}\right) dt^2 + \left(1 + \frac{\mu}{r^{d-3}}\right) dr^2 + r^2 d\Omega^2_{d-2}, \tag{27} $$
where, to lighten the notation, we have introduced the ‘mass parameter’
$$ \mu = \frac{16\pi G M}{(d-2)\,\Omega_{d-2}}. \tag{28} $$
This suggests that the Schwarzschild solution generalizes to higher dimensions in the form
$$ ds^2 = -\left(1 - \frac{\mu}{r^{d-3}}\right) dt^2 + \frac{dr^2}{1 - \frac{\mu}{r^{d-3}}} + r^2 d\Omega^2_{d-2}. \tag{29} $$
In essence, all we have done is change the radial falloff $1/r$ of the Newtonian potential to the d-dimensional one, $1/r^{d-3}$. As Tangherlini found in 1963 [232], this turns out to give the correct solution: it is straightforward to check that this metric is indeed Ricci flat. It is apparent that there is an event horizon at $r_0 = \mu^{1/(d-3)}$, which coincides with the Michell–Laplace result (25).
Having this elementary class of black-hole solutions, it is easy to construct other vacuum solutions with event horizons in d ≥ 5. The direct product of two Ricci-flat manifolds is itself a Ricci-flat manifold. So, given any vacuum black-hole solution ℬ of the Einstein equations in d dimensions, the metric
$$ ds^2_{d+p} = ds^2_d(\mathcal{B}) + \sum_{i=1}^{p} dx^i dx^i \tag{30} $$
describes a black p-brane, in which the black-hole horizon $\mathcal{H} \subset \mathcal{B}$ is extended to a horizon $\mathcal{H} \times \mathbb{R}^p$, or $\mathcal{H} \times \mathbb{T}^p$ if we identify periodically $x^i \sim x^i + L_i$. A simple way of obtaining another kind of vacuum solution is the following: unwrap one of the directions $x^i$, perform a boost $t \to t\cosh\alpha + x^i \sinh\alpha$, $x^i \to t\sinh\alpha + x^i \cosh\alpha$, and re-identify points periodically along the new coordinate $x^i$. Although locally equivalent to the static black brane, the new boosted black-brane solution is globally different from it.
These black brane spacetimes are not (globally) asymptotically flat, so we only introduce them insofar as they are relevant for understanding the physics of asymptotically-flat black holes.
3.3 Stability of the static black hole
The stability of the $d > 4$ Schwarzschild solution against linearized gravitational perturbations can be analyzed by decomposing such perturbations into scalar, vector and tensor types according to how they transform under the rotational-symmetry group $SO(d-1)$ [105, 163, 151]. Assuming a time dependence $e^{-i\omega t}$ and expanding in spherical harmonics on $S^{d-2}$, the equations governing each type of perturbation reduce to a single ODE governing the radial dependence. This equation can be written in the form of a time-independent Schrödinger equation with “energy” eigenvalue $\omega^2$.
In investigating stability, we consider perturbations that are regular on the future horizon and outgoing at infinity. An instability would correspond to a mode with $\mathrm{Im}\,\omega > 0$. For such modes, the boundary conditions at the horizon and infinity imply that the operator on the left-hand side (LHS) of the Schrödinger equation is self-adjoint, and hence $\omega^2$ is real. Therefore, an unstable mode must have negative $\omega^2$, i.e., purely imaginary $\omega$. For tensor modes, the potential in the Schrödinger equation is manifestly positive, hence $\omega^2 > 0$ and there is no instability [105]. For vectors and scalars, the potential is not everywhere positive. Nevertheless, it can be shown that the operator appearing on the LHS of the Schrödinger equation is positive, hence $\omega^2 > 0$ and there is no instability [151]. In conclusion, the $d > 4$ Schwarzschild solution is stable against linearized gravitational perturbations.
3.4 Gregory–Laflamme instability
The instabilities of black strings and black branes [118, 119] have been reviewed in [167, 128], so we shall be brief in this section and only mention the features that are most relevant to our subject. We shall only discuss neutral black holes and black branes; when charges are present, the problem becomes more complex.
This instability is the prototype for situations in which the size of the horizon is much larger in some directions than in others. Consider, as a simple, extreme case of this, the black string obtained by adding a flat direction $z$ to the Schwarzschild solution. One can decompose linearized gravitational perturbations into scalar, vector and tensor types according to how they transform with respect to transformations of the Schwarzschild coordinates. Scalar and vector perturbations of this solution are stable [117]. Tensor perturbations that are homogeneous along the z-direction are also stable, since they are the same as tensor perturbations of the Schwarzschild black hole. However, there appears to be an instability for long-wavelength tensor perturbations with nontrivial dependence on $z$: the frequency $\omega$ of perturbations $\sim e^{-i(\omega t - kz)}$ acquires a positive imaginary part when $k < k_{\mathrm{GL}} \sim 1/r_0$, where $r_0$ is the Schwarzschild horizon radius. Thus, if the string is compactified on a circle of length $L > 2\pi/k_{\mathrm{GL}} \sim r_0$, it becomes unstable. Of the unstable modes, the fastest one (with the largest imaginary frequency) occurs for $k$ roughly one half of $k_{\mathrm{GL}}$. The instability creates inhomogeneities along the direction of the string. Their evolution beyond the linear approximation has been followed numerically in [42]. It is unclear yet what the endpoint is; the inhomogeneities may well grow until a sphere pinches down to a singularity. In this case, the Planck scale will be reached along the evolution, and fragmentation of the black string into black holes, with a corresponding increase in the total horizon entropy, may occur.
Another important feature of this phenomenon is the appearance of a zero-mode (i.e., static) perturbation with $k = k_{\mathrm{GL}}$. Perturbing the black string with this mode yields a new static solution with inhomogeneities along the string direction [117, 120]. Following these static perturbations numerically beyond the linear approximation has given a new class of inhomogeneous black strings [248].
These results easily generalize to black p-branes: for a wavevector $k$ along the $p$ directions tangent to the brane, the perturbations $\sim \exp(-i\omega t + i k \cdot z)$ with $|k| \leq k_{\mathrm{GL}}$ are unstable. The value of $k_{\mathrm{GL}}$ depends on the codimension of the black brane, but not on $p$.
If we neglect , this is exactly the same as that of the two-dimensional simple harmonic oscillator of frequencies ω j . We will use this
formula in order to develop the DSN, which is a typical nonclassical quantum state. Because the transformed Hamiltonian is very simple, the quantum dynamics in the transformed system may be easily developed. Let us write the Schrödinger equations for elements of the transformed Hamiltonian as (25) where represent number state wave functions for each component of the decoupled systems described by . By means of the usual annihilation operator, (26) and the creation operator defined as the Hermitian adjoint of , one can identify the initial wave functions of the transformed system in number state such that (27) where (28) This formula of wave functions will be used in the next section in order to derive the DSN of the system. Displaced squeezed number state The DSNs are defined by first squeezing the number states and then displacing them. Like squeezed states, DSNs exhibit nonclassical properties of the quantum field in which the fluctuation
of a certain observable can be less than that in the vacuum state. This state is a generalized quantum state for dynamical systems and, in fact, equivalent to excited two-photon coherent states in quantum optics. If we consider that DSNs generalize and combine the features of well-known important states such as displaced number states (DNs) [22], squeezed number states [23], and two-photon coherent states (non-excited) [24], the study of DSNs may be very interesting. Different aspects of these states, including quantal statistics, entropy, entanglement, and position space representation with the correct overall phase, have been investigated in [17, 23, 25]. To obtain the DSN in the original system, we first derive the DSN in the transformed system according to its exact definition. Then, we will transform it inversely into
that of the original system. The squeeze operator in the transformed system is given by (29) where (30) Using the Baker-Campbell-Hausdorff relation that is given by [26] (31) where , the squeeze operator can be rewritten as (32) Let us express the DSN in the transformed system in the form (33) where represent two decoupled states which are drivable from (34) Here, are displacement operators in the transformed system, which are given by (35) where α j is an eigenvalue of at initial time. By considering Equation 26, we can confirm that (36) where q j c (t) and p j c (t) are classical solutions of the equation of motion in charge and current spaces, respectively, for the finally transformed system.
gingivalis [15]. SDS PAGE analysis of the V8 protease and α-haemolysin demonstrated that photosensitisation caused changes to the proteins which resulted in smearing of the protein bands. We propose that singlet oxygen may play a role in the inactivation of V8 protease as a protective effect is observed when photosensitisation is performed in the presence of the singlet oxygen scavenger L-tryptophan (data not shown). Conclusion In conclusion, the results of this study suggest that photosensitisation with methylene
blue and laser light of 665 nm may be able to reduce the virulence potential of S. aureus, as well as effectively killing the organism. Inactivation of α-haemolysin and sphingomyelinase is not affected by the presence of human serum, indicating that PDT may be effective against these toxins in vivo. Considering the extensive damage virulence factors can cause to host tissues,
the ability to inhibit their activity would be a highly desirable feature for any antimicrobial treatment regimen and would represent a significant advantage over conventional antibiotic strategies. Methods Light source A Periowave™ laser (Ondine Biopharma Inc., Canada), which emits light with a wavelength of 665 nm, was used for all irradiation experiments. For experimental purposes, the laser system was set up to give a power density of 32 mW/cm2. The power output of the laser was measured using a thermopile power meter (TPM-300CE, Genetic, Canada) and was found to be 73 mW at the plate surface. Photosensitiser Methylene blue (C16H18ClN3S.3H2O) was purchased from Sigma-Aldrich (UK). Stock solutions of 0.1 mg/ml were prepared in phosphate buffered saline (PBS) and kept in the dark at room temperature. Bacterial strains EMRSA-16 was maintained by weekly subculture on Blood Agar (Oxoid Ltd, UK), supplemented with 5% horse blood (E & O Laboratories Ltd). For experimental
purposes, bacteria were grown aerobically in Brain Heart Infusion broth (Oxoid Ltd, UK) at 37°C for 16 hours in a shaking incubator at 200 rpm. Cultures were centrifuged and resuspended in an equal volume of PBS and the optical density was adjusted to 0.05 at 600 nm, corresponding to approximately 1 × 107 colony forming units (CFU) per mL. The effect of photosensitiser dose on the lethal photosensitisation of EMRSA-16 Methylene blue was diluted in PBS to give final concentrations of 1, 5, 10 and 20 μM. 50 μL of methylene blue was added to an equal volume of the inoculum in triplicate wells of a sterile, flat-bottomed, untreated 96-well plate and irradiated with 665 nm laser light with an energy density of 1.93 J/cm2 (L+S+), with stirring. Three additional wells containing 50 μL methylene blue and 50 μL of the bacterial suspension were kept in the dark to assess the toxicity of the photosensitiser alone (L-S+).
PubMed 36. Aagaard P, Simonsen EB, Andersen JL, Magnusson P, Dyhre-Poulsen P: Increased rate of force development and neural drive of human skeletal muscle following resistance training. J Appl Physiol 2002, 93:1318–1326.PubMed 37. Sale DG: Influence of exercise and training on motor unit activation. Exerc Sport Sci Rev 1987, 15:95–151.CrossRefPubMed 38. Staron RS, Karapondo DL, Kraemer WJ, Fry AC, Gordon SE,
Falkel JE, Hagerman FC, Hikida RS: Skeletal muscle adaptations during early phase of heavy-resistance training in men and women. J Appl Physiol 1994, 76:1247–1255.PubMed 39. Aswar U, Mohan V, Bhaskaran S, Bodhankar L: Study of Galactomannan on Androgenic and Anabolic Activity in Male Rats. Pharmacology Online 2008, 56–65. 40. Ratamess NA: Adaptations to Anaerobic Training Programs. Essentials of Strength Training and Conditioning 2008, 3:94–119. Competing interests The authors declare that they have no competing interests. Authors’ contributions CW is the principal investigator. CP & BB assisted in data collection and coordinated the study. CP, CW, & LT analyzed data & wrote the manuscript. RK assisted in the grant preparation and securing grant funding. DW & LT analyzed blood variables. BC, LT, &
CF consulted on study design, manuscript review and preparation. All authors have read and approved the final manuscript.”
“Introduction Tennis is an intermittent sport with the actual playing time being 17-28% of total match duration [1]. The remainder of the time is recovery between points and games. On average, the rallies last 4.3-7.7 sec in men’s Grand Slam tournament matches [2]. At the stroke frequency of approximately 0.75 shots·sec-1 [2], the cumulative effect of the repetitive short-term high-intensity efforts throughout prolonged tennis matches could result in significant neuromuscular fatigue [1, 3], which in turn may impair certain aspects of skilled performance [4, 5]. Indeed, the stroke accuracy was significantly decreased in competitive tennis players near the point of volitional fatigue [6]. Stroke accuracy and velocity were also significantly decreased after a strenuous training session (average rating of
perceived exertion (RPE) 15.9/20) in well-trained tennis players [7]. One of the potential factors that may influence the skilled tennis performance is neural function. The central activation failure, changes in neurotransmitter levels and disturbance in excitation-contraction coupling have been suggested to play an important role in the development of fatigue in prolonged tennis matches [3, 8]. The decline in maximal voluntary contraction and electromyographic activity of knee extensor muscles occurred progressively during a 3-hour tennis match, indicating a decreasing number of motor units that are voluntarily recruited [3]. The impairments in neural functions in lower limbs may lead to the slower acceleration in movement and the inability to reach the optimal stroke position.
A total of 469 patients (264 women and 205 men; mean age 48.1 years) were enrolled, including 26 with gastric cancer, 64 with gastric ulcer, 131 with duodenal ulcer, 209 with gastritis & without IM and 39 with gastritis & IM. From each category, 32 isolates were randomly sampled
(the cancer group had just 26 isolates and all were selected). A total of 154 isolates were sampled, but 8 stored strains could not be successfully subcultured after refrigeration. Accordingly, 146 strains were finally obtained from patients with duodenal ulcer (n = 31), gastric ulcer (n = 32), gastric cancer (n = 24), gastritis with IM (n = 28), and gastritis without IM (n = 31). These 146 H. pylori isolates were analyzed for the cagA-genotype by polymerase chain reaction and for the intensity of p-CagA by in vitro co-culture with AGS cells (a human gastric adenocarcinoma epithelial cell line); further the p-CagA
intensity was defined as strong, weak, or sparse. Besides, in each patient, their gastric biopsies taken from both antrum and corpus for histology were reviewed by the updated Sydney’s system. Histological analysis of the gastric specimens Each gastric sample was stained with haematoxylin and eosin as well as with modified Giemsa stains to analyze for H. pylori density (HPD, range 0-5) and H. pylori-related histology by the updated Sydney’s system. The histological parameters included acute inflammation score (AIS, range 0-3; 0: none, 1: mild, 2: moderate, 3: severe), chronic inflammation score (CIS, range 1-3; 1: mild, 2: moderate, 3: severe), mucosal atrophy, and IM as applied in our previous studies [20, 21]. For each patient, the presence of atrophy or IM was defined as a positive histological finding in any specimen from the antrum or corpus. In each patient, the total HPD, AIS, and CIS were the sum of each score of the gastric specimens from antrum and corpus, and thus ranged from
0-10, 0-6, and 2-6, respectively. Based on the sum of HPD, the patients were categorized as loose (score ≤ 5), moderate (score within 6-8), and dense (score ≥ 9) H. pylori colonization, respectively. For the sum of AIS, mild, moderate, and severe acute inflammations were defined with scores ≤1, 2-3, or ≥4, respectively. Based on the sum of CIS, mild, moderate, and severe chronic inflammations were defined with scores ≤3, 4-5, or 6, respectively. Based on the specimens collected from both the antrum and corpus within the same patient, the topographical distribution of chronic gastritis was defined as follows: 1) very limited chronic gastritis, if the CIS scored was 1 for both antrum and corpus; 2) antrum-predominant gastritis, if the CIS score of the antrum was higher than the score of the corpus; and 3) corpus-predominant gastritis, if the corpus CIS was equal to or higher than that of the antrum [21]. Analysis of cagA genotype and type IV secretion system function of H. pylori All H.
Such an approach, however, entails risks linked to excessive commodification of nature and would need to be contextualised for different groups of stakeholders. A second challenge is that the problem of biodiversity loss is caused by a complex set of issues working at different levels. Recommendations about communication normally emphasise simplicity, but we argue that communication about biodiversity loss needs to incorporate or stress this complexity. Some argue that frameworks such as the drivers,
pressures, state, impacts, responses (DPSIR) approach could help to map the complex picture of issues linked to biodiversity and make this complexity more understandable and further manageable (see Rounsevell et al. 2010). This would, however, need to be complemented by defining concrete and potential policy recommendations (the ‘responses’ in the DPSIR framework) that could be employed to tackle problems. The third challenge is that biodiversity loss is a multi-dimensional problem that neither ecological science nor environmental policy can address alone. The problem of working in “silos”, as outlined earlier in this paper, does not help to tackle such problems. To understand and act for conservation and sustainable uses of biodiversity requires transdisciplinary approaches where various disciplines, stakeholders as well as policy makers take part in the co-construction of knowledge. However,
moving beyond silos is not just a challenge for scientists but also for policy: policy sectors other than just the environmental policy sector need to integrate biodiversity into their core focus areas. Only in this way will the complexities associated with biodiversity and its loss be taken into account to a sufficient extent by the wider policy community. The acknowledgement of heterogeneous policy communities raises a fundamental question for biodiversity-related
science-policy interfaces, namely how to identify and reach the most relevant target audiences. Biodiversity scientists may need to step onto uncomfortable ground, away from their favourite decision-makers in environmental policy sectors, for example by targeting also departments or sectors responsible for economic policies which are partly responsible for biodiversity loss. The basic message in the literature, and influencing our recommendations, is about the importance of jointly constructing knowledge and bringing together the scientific, institutional or policy knowledge. Thus, dialogue should be initiated with different target audiences, with special attention paid to other sectors that may be less familiar to biodiversity scientists, such as economic sectors and interest groups. There are ways to reach these groups. Firstly, biodiversity researchers could try to impact on the private actors by first altering the views of the related policy makers to implement top-down policies. This is unlikely until biodiversity is fully ‘mainstreamed’ across policy sectors.
An image of the microarray was taken and analysed using a designated reader and software (Alere Technologies GmbH, Jena, Germany). Analysis allowed the determination of the presence or absence of the target genes as well as, by comparison to a database
of reference strains, the assignment to clonal complexes as previously defined by MLST [41] and eBURST analysis of MLST data (http://saureus.mlst.net/eburst/). Sequence types which differ in nucleotide polymorphisms affecting MLST genes (such as ST22 and ST1117) cannot be differentiated. However, STs which originate from recombination events such as CC8/ST239 or CC30/ST34 [24, 25] can be identified as well as some other STs which differ from their respective parent lineage such as CC1/ST772 or CC8/ST72. Epidemic strains are defined and identified based on profiles and MLST data previously described [20, 21]. Acknowledgements The authors acknowledge the staff of the microbiology laboratory at the KFMC for collecting strains as well as Elke Müller (Alere
Technologies GmbH) for excellent technical assistance. Electronic supplementary material Additional file 1: Patient demographics and full hybridisation profiles. (PDF 836 KB) References 1. Humphreys H, Carroll JD, Keane CT, Cafferkey MT, Pomeroy HM, Coleman DC: Importation of methicillin-resistant Staphylococcus aureus from Baghdad to Dublin and subsequent nosocomial spread. J Hosp Infect 1990,15(2):127–135.PubMedCrossRef 2. Weber S, Ehricht R, Slickers P, Abdel-Wareth L, Donnelly G, Pitout M, Monecke S: Genetic fingerprinting of MRSA from Abu Dhabi. ECCMID: 2010, Vienna; 2010. 3. Fatholahzadeh B, Emaneini M, Aligholi M, Gilbert G, Taherikalani M, Jonaidi N, Eslampour MA, Feizabadi MM: Molecular characterization of methicillin-resistant Staphylococcus aureus clones from a teaching hospital in Tehran. Jpn J Infect Dis 2009,62(4):309–311.PubMed 4. Cirlan M, Saad M, Coman G, Bilal NE, Elbashier AM, Kreft D, Snijders S, van Leeuwen W, van Belkum A: International spread of major clones of methicillin resistant Staphylococcus
aureus: nosocomial endemicity of multi locus sequence type 239 in Saudi Arabia and Romania. Infect Genet Evol 2005,5(4):335–339.PubMedCrossRef 5. Alp E, Klaassen CH, Doganay M, Altoparlak U, Aydin K, Engin A, Kuzucu C, Ozakin C, Ozinel MA, Turhan O, et al.: MRSA genotypes in Turkey: persistence over 10 years of a single clone of ST239. J Infect 2009,58(6):433–438.PubMedCrossRef 6. Chongtrakool P, Ito T, Ma XX, Kondo Y, Trakulsomboon S, Tiensasitorn C, Jamklang M, Chavalit T, Song JH, Hiramatsu K: Staphylococcal cassette chromosome mec (SCCmec) typing of methicillin-resistant Staphylococcus aureus strains isolated in 11 Asian countries: a proposal for a new nomenclature for SCCmec elements. Antimicrob Agents Chemother 2006,50(3):1001–1012.PubMedCrossRef 7.
In the case of MPA, the self-resistance mechanism has not been elucidated. Figure 1 Role of IMPDH and MPA in GMP biosynthesis. MPA inhibits IMPDH. MPA: Mycophenolic acid. R: ribose 5′-monophosphate. IMP: inosine-5′-monophosphate, XMP: xanthosine-5′-monophosphate, guanosine-5′-monophosphate. GMP: Guanosine monophosphate. IMPDH: IMP dehydrogenase. The MPA biosynthetic gene cluster from Penicillium brevicompactum was identified only recently [12]. Interestingly, it turned out that the MPA gene cluster, in addition to the MPA biosynthetic genes, contains a putative IMPDH-encoding gene (mpaF). The study also revealed an additional putative IMPDH-encoding gene by probing the P. brevicompactum genomic
DNA [12]. A BLAST search using mpaF as query resulted in only a single IMPDH encoding gene per organism for all fully sequenced non-Penicillium
filamentous fungi (see the Results and Discussion section for details). Thus, the discovery of mpaF identifies P. brevicompactum as the first filamentous fungus known to feature two IMPDH encoding genes. In this study, we have identified additional species from the Penicillium subgenus Penicillium that contain two putative IMPDH encoding genes. Furthermore, we show that the two copies that are present in each fungus are dissimilar, and that one of them forms a new distinct group in a cladistic analysis. The IMPDH from the MPA cluster, mpaF, is the founding member of this novel group. The presence of mpaF within the biosynthesis cluster in P. brevicompactum hints at a role in MPA self-resistance. In this study, we examine this hypothesis and show that mpaF confers resistance to MPA when expressed in an otherwise highly sensitive non-producer
fungus Aspergillus nidulans. Results and discussion Expression of mpaF in A. nidulans confers resistance to MPA In order to investigate whether MpaFp from P. brevicompactum is resistant to MPA we transferred mpaF to a fungus, A. nidulans, which does not produce MPA. Specifically, we constructed a strain where the A. nidulans IMPDH structural gene (imdA) was replaced by the coding region of mpaF, see Figure 2A. The sensitivity of this strain towards MPA was then compared to a reference A. nidulans strain. As expected, the spot assays shown in Figure 2 demonstrate that the germination of WT spores is reduced due to MPA. This effect is most significant at media containing 100 and 200 μg/ml MPA where the viability is reduced by approximately two orders of magnitude as compared to the plate containing no MPA. The level of sensitivity of A. nidulans towards MPA is consistent with the toxic levels observed for other eukaryotic organisms [13, 14]. In contrast, MPA had little or no effect on spore viability of the strain NID495 where the gene encoding A. nidulans IMPDH (imdA) has been replaced by mpaF.
boulardii in acidic environments, most likely by preventing programmed cell death. In toto, given the observation that many of the proven health benefits of S. boulardii are dependent on cell viability, our data suggests that taking S. boulardii and AdoMet together may be a more effective treatment for gastrointestinal disorders than taking the probiotic yeast alone. Methods Yeast strains, plasmids, and growth conditions All experiments were done with isogenic Saccharomyces cerevisiae strains in the W303-1B background (MATα ade2, his3, leu2, trp1, ura3, ssd1-d2), and with Saccharomyces boulardii (Florastor, Lot No. 538) obtained
from Biocodex, Inc. (San Bruno, CA). For all the experiments described in this paper, cells were cultured and treated using standard yeast protocols [41]. Unless noted otherwise, all other drugs and reagents were purchased from SIGMA-Aldrich.
Ethanol-induced cell death assay Cells of the indicated strain and genotype were cultured in rich YPD media overnight, resuspended in fresh media, and allowed to reach exponential phase (an approximate OD600 value of 0.2). They were then resuspended in water or fresh media or in water or fresh media containing either 15% or 22% ethanol [33], and allowed to grow at 30°C for the indicated times. Next, they were either serially diluted onto YPD plates and cultured at 30°C for 2 days to test for viability or treated with the appropriate stain for the indicated test, and examined using a Zeiss LSM 700 Confocal Laser Scanning Microscope.
At least three independent cultures were tested and compared. Statistical significance was determined with the Student’s t-test. Acetic acid-induced cell death assay Cells of the indicated genotype were cultured in rich YPD media overnight, resuspended in fresh media, and allowed to reach exponential phase (an approximate OD600 value of 0.2). They were then resuspended in fresh media pH 3 or fresh media pH 3 containing 160 mM acetic acid, allowed to grow at 30°C with shaking for 2 hours. Next, they were treated with the appropriate stain for the indicated test, and examined using a Zeiss LSM 700 Confocal Laser Scanning Microscope. Hydrochloric acid-induced cell death assay Cells of the indicated genotype were cultured in rich YPD media overnight, resuspended in fresh media, and allowed to reach exponential phase (an approximate OD600 value of 0.2). They were then resuspended in water, water containing either 50 mM or 75 mM HCl, water containing 50 mM HCl and 2 mM AdoMet, or water containing 2 mM AdoMet alone. They were allowed to sit at room temperature for 1.5 hours. Then, they were either serially diluted onto YPD plates and cultured at 30°C for 2 days to test for viability or treated with the appropriate stain for the indicated test, and examined using a Zeiss LSM 700 Confocal Laser Scanning Microscope.
Classical Field Theory
A field in physics is something that associates with each point in space and with each instant in time a quantity.
The easiest way to think about a classical field is as a mattress. A mattress consists of many point masses that are connected by springs. The horizontal location of these point masses is the quantity that is associated with each point in space and time.
The point masses can oscillate, and these oscillations influence the neighboring point masses. This way, wave-like perturbations can move through the mattress, as everyone who has ever jumped around on a mattress knows.
If we now imagine that we zoom out such that the point masses become smaller and smaller we end up with a great approximation to a classical field. A classical field is nothing but the continuum limit of a mattress.
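As a toy illustration of this continuum limit (my own sketch, not part of the original page), one can simulate a one-dimensional "mattress" of point masses coupled by springs and watch a local pluck spread out as a wave:

<code python>
# Toy model: N point masses on a ring, coupled to their neighbors by springs
# (spring constant k, mass m), integrated with the symplectic Euler method.
# A single displaced mass in the middle sends wave-like perturbations outward.
N, k, m, dt = 200, 1.0, 1.0, 0.05
u = [0.0] * N          # displacements of the point masses
v = [0.0] * N          # velocities
u[N // 2] = 1.0        # "jump on the mattress": pluck the middle mass

for step in range(400):
    # acceleration of mass i from the stretch of its two neighboring springs
    a = [k / m * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N]) for i in range(N)]
    v = [vi + ai * dt for vi, ai in zip(v, a)]
    u = [ui + vi * dt for ui, vi in zip(u, v)]

print(max(u), min(u))  # the pluck has spread out into traveling waves
</code>

Shrinking the spacing between the masses while keeping the wave speed fixed turns this discrete system into the continuum field described next.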
In a field theory, we describe everything in terms of field configurations. Solutions of the field equations describe sequences of field configurations.
A classical field is a dynamical system with an infinite number of degrees of freedom. We describe fields mathematically by partial differential equations.
The action functional $S[\phi(x)]$ for a free real scalar field of mass $m$ is \begin{eqnarray} S[\phi(x)]\equiv \int d^{4}x \,\mathcal{L}(\phi,\partial_{\mu}\phi)= {1\over 2}\int d^{4}x \,\left(\partial_{\mu}\phi\partial^{\mu}\phi- {m^{2}}\phi^2\right). \end{eqnarray} The equations of motion are obtained by using the Euler-Lagrange equations \begin{eqnarray} \partial_{\mu}\left[\partial\mathcal{L}\over \partial(\partial_{\mu}\phi) \right]-{\partial\mathcal{L}\over \partial\phi}=0 \quad \Longrightarrow \quad (\partial_{\mu}\partial^{\mu}+m^{2})\phi=0. \label{eq:eomKG} \end{eqnarray}
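As a quick consistency check (my own illustration, not part of the original page), one can let sympy carry out the Euler-Lagrange derivation symbolically; the coordinate names and the $(+,-,-,-)$ signature below are choices made for this sketch:

<code python>
# Symbolic Euler-Lagrange check for the free real scalar field.
import sympy as sp

t, x, y, z, m = sp.symbols("t x y z m")
phi = sp.Function("phi")(t, x, y, z)
coords = (t, x, y, z)
signs = (1, -1, -1, -1)          # Minkowski signature (+,-,-,-)

dphi = [phi.diff(c) for c in coords]
# L = 1/2 (d_mu phi d^mu phi - m^2 phi^2)
L = sp.Rational(1, 2) * (sum(s * d**2 for s, d in zip(signs, dphi)) - m**2 * phi**2)

# Euler-Lagrange: d_mu [dL/d(d_mu phi)] - dL/dphi = 0
EL = sum(L.diff(d).diff(c) for d, c in zip(dphi, coords)) - L.diff(phi)
print(sp.simplify(EL))
# -> m**2*phi + d^2 phi/dt^2 - d^2 phi/dx^2 - ..., i.e. the Klein-Gordon operator
</code>

Setting the printed expression to zero reproduces the Klein-Gordon equation $(\partial_{\mu}\partial^{\mu}+m^{2})\phi=0$ derived above.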
The momentum canonically conjugated to the field $\phi(x)$ is given by \begin{eqnarray} \pi(x)\equiv {\partial\mathcal{L}\over \partial(\partial_{0}\phi)} ={\partial\phi\over\partial t}. \end{eqnarray}
The corresponding Hamiltonian function is \begin{eqnarray} H\equiv \int d^{3}x \left(\pi{\partial\phi\over\partial t}-\mathcal{L}\right) = {1\over 2}\int d^{3}x\left[ \pi^2+(\vec{\nabla}\phi)^{2}+m^{2}\phi^{2}\right]. \end{eqnarray}
In classical theories, we can write the equations of motion in terms of the Poisson brackets: \begin{eqnarray} \{A,B\}\equiv \int d^{3}x\left[{\delta {A}\over \delta \phi} {\delta{B}\over \delta\pi}- {\delta{A}\over \delta\pi}{\delta{B}\over \delta\phi} \right], \end{eqnarray} where ${\delta\over \delta \phi}$ denotes the functional derivative defined as \begin{eqnarray} {\delta A\over \delta\phi}\equiv {\partial\mathcal{A}\over \partial\phi}-\partial_{\mu}\left[{\partial\mathcal{A} \over \partial(\partial_{\mu}\phi)}\right]. \end{eqnarray} The canonically conjugated classical fields satisfy the following equal-time Poisson brackets \begin{eqnarray} \{\phi(t,\vec{x}),\phi(t,\vec{x}\,')\}&=&\{\pi(t,\vec{x}), \pi(t,\vec{x}\,')\}=0,\nonumber \\ \{\phi(t,\vec{x}),\pi(t,\vec{x}\,')\}&=&\delta(\vec{x}-\vec{x}\,'). \label{eq:etccr} \end{eqnarray}
Great Resources:
We consider first the sourceless equation in four dimensions,
$$ D_\mu F^{\mu\nu} = 0. \tag{2.65}$$
The first issue concerns the existence of regular solutions. If regular initial data is taken, will the solution evolve in a regular fashion, or will the nonlinearities produce singularities? This question has been answered: regular solutions to (2.65) do exist, and the same is true if one considers a larger system: scalar and spinor fields interacting with gauge fields [20].
However, physicists are not so interested in the general solution which depends on arbitrary initial data, but rather in specific solutions which reflect some physically interesting situation. For example, in the Maxwell theory we are interested in plane wave solutions.
Let us note that any Maxwell solution is a solution of the Yang-Mills equation, when one makes the Ansatz that the space and internal symmetry degrees of freedom decouple. If one forms $A_\mu^a(x) = \eta^a A_\mu (x)$ with $\eta^a$ constant and $A_\mu(x)$ satisfying the Maxwell equation, then $A_\mu^a(x)$ is a solution to the Yang-Mills equation, which we shall call "Abelian". Thus it is interesting to see whether there are plane wave solutions in the non-Abelian theory which are not Abelian. By "plane wave", we shall mean a configuration of finite energy ($0 < \mathcal{E} < \infty$), of constant direction for the Poynting vector $\mathcal{P}(x) = \hat{\mathcal{P}}\,|\mathcal{P}(x)|$ with $\hat{\mathcal{P}}$ constant, and with magnitude of the Poynting vector equal to the energy density, $\mathcal{E} = |\mathcal{P}(x)|$. Such solutions have been constructed [21], but unlike their Maxwell analogs, they do not seem to have any physical significance. Certainly, if gauge quanta are confined, one cannot make a coherent superposition of them to construct an observable plane wave. Alternatively, one may view the Maxwell waves as quantum mechanical wave functions for the photon. However, the non-Abelian plane waves solve a nonlinear equation; they cannot be superposed to form other solutions, and it is hard to see how they can be used as wave functions.
Another class of solutions, more appropriate to nonlinear field theories, are the celebrated solitons, which do have a quantum meaning - they are the starting point of a semi-classical description of coherently bound quantum states [22]. A soliton should be a static solution, have finite energy, and be stable in the sense that small perturbations do not grow exponentially in time. However, one proves with virial theorems that no such solution exists in the pure Yang-Mills theory in four, three or two dimensions [23].
Another tack that one can take is that of symmetry. Recall that the classical Yang-Mills theory in four dimensions possesses conformal $SO(4,2)$ symmetry. One may seek solutions invariant under the maximal compact subgroup, i.e. $SO(4) \times SO(2)$. This solution has been constructed [24]; it is called a "meron". But again no physical significance has been attached to it, or to its generalization which possesses the smaller compact symmetry group $SO(4)$ [25].
There are many other solutions to (2.65) that have been found [26], and while their discoverers invariably highlight some unique characteristic, no physical application has been given thus far - although doubtlessly they are mathematically interesting.
There is one more class of solutions, which I shall describe later. These do not solve the Yang-Mills equations (2.65) in Minkowski space, but rather in Euclidean space, and are called instantons (pseudoparticles). In fact instantons solve the self-duality equation
$$ ^\star F^{\mu\nu} = \pm F^{\mu\nu} $$
and then (the Euclidean-space analog of) (2.65) follows by the Bianchi identity. […] Of all the solutions, the instantons have interested mathematicians most; for physicists they give a semi-classical understanding of some of the topological effects that are present in Yang-Mills theory.
Topological Investigations of Quantized Gauge Theories, by R. Jackiw (1983)
Why is it interesting?
Classical field theory was for a long time the best framework to describe the fundamental forces of nature. The most notable examples of classical field theories are Newtonian gravity and classical Electrodynamics.
Like the Hamiltonian formalism for classical physics, the Schrödinger equation is not so much a specific equation as a framework for quantum mechanical equations generally. Once one has obtained the appropriate Hamiltonian, the time evolution of the state according to Schrödinger's equation proceeds rather as though $|\Psi\rangle$ were a classical field subject to some classical field equation such as Maxwell's. In fact, if $|\Psi\rangle$ describes the state of a single photon, then it turns out that Schrödinger's equation actually becomes Maxwell's equations! The equation for a single photon is precisely the same as the equation for an entire electromagnetic field. (However, there is an important difference in the type of solution for the equations that is allowed. Classical Maxwell fields are necessarily real whereas photon states are complex. There is also a so-called 'positive frequency' condition that the photon state must satisfy.) This fact is responsible for the Maxwell-field-wavelike behaviour and polarization of single photons that we caught glimpses of earlier. As another example, if $|\Psi\rangle$ describes the state of a single electron, then Schrödinger's equation becomes Dirac's remarkable wave equation for the electron, discovered in 1928 after Dirac had supplied much additional originality and insight.
The Emperor's New Mind by R. Penrose
While our aim is to discuss the quantized Yang-Mills theory, let us pause for a moment and examine the dynamical field equations in their classical setting. After all, the Maxwell theory, which is the antecedent and inspiration for the Yang-Mills theory, was thoroughly investigated within classical physics, with results that are quite relevant physically even when quantum effects are ignored. Unfortunately, no such physical success can be claimed here, though much of mathematical interest has been achieved.
theories/classical_field_theory.txt |
93078662c8fe5dce | We connect to each other through particles. Calls and texts ride flecks of light, Web sites and photographs load on electrons. All communication is, essentially, physical. Information is recorded and broadcast on actual objects, even those we cannot see.
Physicists also connect to the world when they communicate with it. They dispatch glints of light toward particles or atoms, and wait for this light to report back. The light interacts with the bits of matter, and how this interaction changes the light reveals a property or two of the bits—although this interaction often changes the bits, too. The term of art for such a candid affair is a measurement.
Particles even connect to each other using other particles. The force of electromagnetism between two electrons is conveyed by particles of light, and quarks huddle inside a proton because they exchange gluons. Physics is, essentially, the study of interactions.
Information is always conveyed through interactions, whether between particles or ourselves. We are compositions of particles who communicate with each other, and we learn about our surroundings by interacting with them. The better we understand such interactions, the better we understand the world and ourselves.
Physicists already know that interactions are local. As with city politics, the influence of particles is confined to their immediate precincts. Yet interactions remain difficult to describe. Physicists have to treat particles as individuals and add complex terms to their solitary existence to model their intimacies with other particles. The resulting equations are usually impossible to solve. So physicists have to approximate even for single particles, which can interact with themselves as a boat rolls in its own wake. Although physicists are meticulous, it is a wonder they ever succeed. Still, their contentions are the most accurate theories we have.
Quantum mechanics is the consummate theory of particles, so it naturally describes measurements and interactions. During the past few decades, as computers have nudged the quantum, the theory has been reframed to encompass information, too. What quantum mechanics implies for measurements and interactions is notoriously bizarre. Its implications for information are stranger still.
One of the strangest of these implications refutes the material basis of communication as well as common sense. Some physicists believe that we may be able to communicate without transmitting particles. In 2013 a once amateur physicist named Hatim Salih even devised a protocol, alongside professionals, in which information is obtained from a place where particles never travel. Information can be disembodied. Communication may not be so physical after all.
This past April, the early edition of a short article about Salih’s protocol appeared online in the Proceedings of the National Academy of Sciences. Most of the article’s 10 authors were members of the University of Science and Technology of China, at its branches in Shanghai and Hefei. The final author was Jian-Wei Pan, an eminent physicist who has also developed a constellation of satellites for communicating through quantum mechanics. He recently used this network for transmitting entangled particles over a distance of 1,200 kilometers.
Pan and his collaborators publish at a rate of more than one paper a month. But the paper that they published in April, co-written by Yuan Cao and Yu-Huai Li, was exceptional. They described an experiment in which they sent a black-and-white image of a Chinese knot to a computer, without transmitting any particles.
Extraordinary claims require extraordinary evidence—even the person whose work dug the original foundation for the team’s evidence, Lev Vaidman, doubts their claim. Vaidman and others have been arguing about how to interpret such results for a decade. And their communication is now changing how we understand the quantum theory.
Physicists strain to comprehend what quantum mechanics whispers about reality and what we can know about the material world. The theory, however, is starting to speak up. Physicists now question the uncertainty that the quantum theory has imposed, as even weak measurements reveal particulars that were once thought impossible. At stake are the very notions of measurements and interactions, and the foundations of the information technologies of the future.
For if we can process information without particles, we may build a computer that need not turn on, and we may be able to communicate with absolute secrecy. There would be nothing to intercept and nothing to hack. This possibility derives from the information contained inside wave functions—and from the way that the imaginary manifests as real. So before we can disembody communication, we must give body to the quantum theory.
Embodying Quantum Mechanics
The basic instrument of quantum mechanics, from which all its oddities are composed, is the wave function. Every possible state of a quantum object, every possible outcome of its measurement, is a solution to the Schrödinger equation. This simple equation resembles the one that describes moving waves—enough to have confused Erwin Schrödinger into naming its solutions wave functions—but quantum waves are abstract, not real. Unlike the solutions for ocean breakers or sound, wave functions always contain imaginary numbers.
To obtain real answers from this complex math, physicists multiply a wave function by its complex conjugate. The result is the probability of observing an object with the properties that the wave function details. Summing the squared magnitudes of all the solutions for any quantum object always totals 100 percent. The Schrödinger equation accounts for every possibility. It confounds, but it does not surprise.
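The bookkeeping is easy to see numerically. Below is a toy example of my own (not from the article): three complex amplitudes whose squared magnitudes behave as the Born rule demands.

```python
import numpy as np

# A toy state over three possible positions, with complex amplitudes.
psi = np.array([0.6, 0.8j, 0.0])

# Born rule: each probability is the amplitude times its complex conjugate.
probs = (psi.conj() * psi).real
print(probs)        # [0.36 0.64 0.  ]
print(probs.sum())  # 1.0 -- the possibilities always total 100 percent
```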
When we solve the Schrödinger equation to predict the location of a particle, there are usually many possibilities—much as there would be in establishing the precise location of surf. Positions and trajectories are ill-defined in quantum mechanics because of the well-known duality of particles and waves. But measurements offer a certainty that wave functions cannot. When we observe the location of an electron, we know it for sure. Such knowledge, however, has a price. Once we know the position, we cannot know the speed. If we measure the speed, we forfeit all knowledge of the position. This gnostic trade-off is called the Heisenberg uncertainty principle. Many other observables, such as time and energy, are equally incompatible.
One notable quirk of this mathematics is that combining solutions to the Schrödinger equation for any particular object, weighted by complex amplitudes, is also a possible solution. This is called a superposition, although that is a misnomer. One solution is not placed atop another, but rather they are added together into a blend. And as with juicing, the flavor of the whole surpasses what was added in.
Quantum mechanics is counterintuitive, and superpositions are why. We have never experienced one in our daily lives, despite the shifting probabilities and blends of truth that we live with. So, to understand superpositions, let’s consider a thought experiment that can be made real. This example illustrates most of the oddities of quantum mechanics, and underlies the actual experiments undertaken by Pan and his colleagues.
A Trick of Light
Point a laser toward a piece of glass coated partially with aluminum, as in a one-way mirror. If the glass is at an angle of 45 degrees relative to the incoming light, half the beam continues through and the other half reflects away, perpendicular to the original beam. There is no road less traveled by—the choice of path, like quantum mechanics, is perfectly random.
Now, set a regular mirror in each of these paths and reunite the beams. The light acts as a wave so the beams interfere with each other where they meet, producing a pattern of ripples on a fluorescent screen that glows where it is struck (Figure 1). The interference pattern on the screen looks like someone took a comb to it—a result equivalent to the famous double-slit experiment. But our setup has a fancier name—a Mach–Zehnder interferometer.
We can alter the pattern on the screen by inserting a pane of glass in one beam’s path. Glass slows the light, so the peaks and valleys of its waves no longer match those from the other beam. A certain thickness of glass slows one beam just enough so its peaks arrive with the valleys of the other. Different areas of the screen now turn dark, where the light in the two beams interfere with each other destructively. If we were to place a photon detector at such a spot, no light would register.
Physicists have learned how to produce single photons and how to detect them (even with their eyes), so they often conduct such experiments using particles rather than beams. When they direct photons one at a time toward a one-way mirror, otherwise known as a beam splitter, half continue blithely through and the other half reflect away—the same as before with a beam. Nothing changes for single particles. Although only one photon travels on either path at a time, an interference pattern still emerges on the screen. We can even alter the pattern by inserting a pane of glass. Photons still act like waves. But what does each photon now interfere with? The answer to that question is the essence of quantum mechanics.
A photon cannot split in half and interfere with itself—we always detect photons whole. The photons exist as superpositions, so perhaps they take both paths at once. To explain superpositions, writers often say that a particle exists in two places at a time. But this is wrong.
If we place detectors on both of a photon’s possible paths, one always clicks and the other does not. If we place a detector in one path, it clicks half the time. Yet when a detector registers the photons on either path, an interference pattern no longer appears on the screen. Even if we interact with photons but let them pass, just to know where they are, the pattern still disappears. The act of a measurement, the very acquisition of knowledge, alters the result. Once we observe a particle, it does not act like a wave. Light may lead a double life, but it only leads one life at a time.
Schrödinger believed that wave functions corresponded to real objects. Since 1926, most physicists have interpreted the wave function as an abstract parcel of knowledge, not an inhabitant of our world. There is a sense, however, in which the mathematics must be real.
Whenever a photon is described by a superposition of paths, they somehow interfere. If we ruin the superposition by distinguishing the paths, the interference always disappears. Whenever we find out which path a photon takes, the other path is no longer possible. Wave functions detail possibilities. So after one path becomes impossible, the wave function changes to reflect our knowledge of the world. Physicists say the wave function collapses. The quantum world collapses, too. Superpositions are more tenuous than any of our classical experiences.
Blowing Up
In 1993, Avshalom Elitzur and Vaidman pushed interferometers past the surreal and into the absurd, with a thought experiment that others would make real. Instead of a fluorescent screen, imagine a second beam-splitter where the paths reunite (Figure 2a). Now place a detector in line with each possible path after the splitter. The photons are equally likely to proceed to either detector. Alter one of the original paths again by adding a pane of glass, so there is destructive interference at one detector but not the other—a photon always registers in the second detector, but never in the first. We can actually observe this.
Now place an obstacle in one of the paths after the original split. Half the photons are absorbed and the other half travel the unimpeded path. These unimpeded photons should proceed as before, to the second detector. They do not. Half register in the first detector, which did not click when there were two paths (Figure 2b). The interference disappears because the other path is no longer possible. The photons definitely travel the path without the obstruction, but somehow they know what happens to the other path and change their behavior accordingly. In fact, a photon appearing in the forbidden detector—just once—is enough to intuit the presence of the obstruction.
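The arithmetic behind those fractions is compact enough to simulate. Here is a sketch of my own encoding (using one common convention in which reflection at a 50/50 splitter picks up a factor of i); the port and variable names are illustrative, not the authors'.

```python
import numpy as np

# One photon on two paths |a>, |b>; a 50/50 beam splitter mixes them.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0])  # the photon enters on path a

# Empty interferometer: splitter, free flight, splitter.
out = BS @ (BS @ photon)
print(np.abs(out) ** 2)  # [0. 1.] -- the first detector stays dark, the second always clicks

# Obstacle (or bomb) on path b: its amplitude is absorbed mid-flight.
mid = BS @ photon
p_boom = abs(mid[1]) ** 2          # 0.5 -- the photon struck the obstacle
survived = np.array([mid[0], 0])   # unnormalized state when it did not
out = BS @ survived
print(p_boom, np.abs(out) ** 2)    # 0.5 [0.25 0.25] -- the "forbidden" detector clicks in 25% of all runs
```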
Elitzur and Vaidman claimed their thought experiment was an example of the nonlocality of quantum mechanics. Two particles born together can exist in a superposition of complementary properties—and as the particles separate across the universe, we can measure the property of one and instantly know the other. This interdependence is called entanglement. Classical objects have distant influences—the moon orbits the Earth, magnets attract metals—but these influences are communicated through local interactions, traveling no faster than light. Particles separated by a universe, however, lose their superpositions immediately. The photons on our paths have no mass or charge, so they do not emit physical influences across space. Quanta are still local. Yet somehow the photon on the clear path knows about the obstacle in the other path instantaneously, without interacting with it at all. The photon acquired information from afar.
“It is common to think that unlike classical mechanics,” Elitzur and Vaidman explained, “quantum mechanics poses severe restrictions on the minimal disturbance of the system due to the measurement procedure.” This cannot be true. A path may be undisturbed, yet our observations will change. The mere presence of an obstacle on the other path acts like a measurement, conveying information to the photons and to us.
Dennis Gabor, who developed holograms, said that every observation requires a photon. But light does not have to strike an object to reveal it. We can see without looking. (This is neither a shell game nor ESP. Most photons in the real world have many more than two possible paths, and these usually cancel one another, leaving the straight, shortest path for light, which we observe. Most light acts as we classically believe.)
Elitzur and Vaidman, who were then working in Tel Aviv, plotted their idea more dramatically (Figure 2b). Instead of an inert obstacle in one path, they imagined a bomb set to explode when struck by a photon. If a photon travels that path, the bomb explodes and we know for certain that the photon was there. If a photon travels the clear path, we can still discern the presence of an obstacle—in this case, the bomb—without shining a light on it. The photons on the unimpeded path will register in the forbidden detector half the time, alerting us to the bomb’s presence. Elitzur and Vaidman called this an interaction-free measurement. Sir Roger Penrose, the noble mathematical physicist, called their insight a counterfactual. But the thought experiment is not counter to established fact. Evert du Marchie van Voorthuysen demonstrated interaction-free measurements using inexpensive instruments—and obstacles other than bombs—at a science expo in Groningen, in 1995. Afterward, physicists could not explain the demonstration any better than the spectators.
Wave functions and superpositions describe actual phenomena with actual consequences. The mathematics is set; the interpretation is not. Some physicists believe, again, that wave functions are real objects, similar to a magnetic field. Others contend that wave functions describe ensembles, not single particles. Still others take the mathematics so seriously that they argue superpositions create many worlds, one for each possibility.
Most physicists insist that the math details only the many possibilities of our one world. But Elitzur and Vaidman converted themselves to the more radical idea. “This paradox can be avoided in the framework of the many-worlds interpretation,” they wrote. If there are many worlds, interaction-free measurements are easy to explain. The wave function does not collapse and every possibility still exists, somewhere—we discern an obstacle here because an explosion happened in another universe.
In the reckoning of Elitzur and Vaidman the probability of conveying information without interactions, in any universe, is at best 50 percent. But in 1994, two young men who had recently completed their PhDs in the Bay Area—Mark Kasevich and Paul Kwiat—met in the lab of Anton Zeilinger in Innsbruck. Kasevich told Kwiat, and a few other colleagues in Austria, how they might improve the odds. If an obstacle transmits information without interactions half the time, more obstacles should transmit information more frequently. Repeatedly splitting the paths in an interferometer and inserting obstacles in them is akin to repeating measurements and gaining knowledge from each one. Kwiat and his colleagues called this an interrogation.
In theory, physicists could extract perfect information without interactions if they used an infinite number of obstacles. In experiments, Kwiat and his colleagues routed one of the paths of a photon through an obstacle six times and increased the number of interaction-free measurements to 70 percent. During the 1970s, two physicists at The University of Texas at Austin, Baidyanath Misra and E. C. George Sudarshan, studied the weird capacity for repeated measurements to prolong quantum effects. They called it Zeno’s paradox for quantum mechanics. The Greek philosopher had argued that measuring the position of an arrow repeatedly, as it progresses half the distance to its mark, implies the arrow never lands. Half a distance always remains.
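A toy model of my own makes the chained interrogation concrete: rotate the photon's path state by a small angle θ = π/2N at each of N steps, with a blocker that absorbs the second path's amplitude every cycle. (Kwiat's optical layout differed; this is only the underlying arithmetic.)

```python
import numpy as np

def interrogate(N):
    """Chained interaction-free measurement with N cycles (quantum Zeno effect)."""
    theta = np.pi / (2 * N)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])    # gentle beam-splitter rotation per cycle
    psi = np.array([1.0, 0.0])         # photon starts on path 1
    p_absorbed = 0.0
    for _ in range(N):
        psi = R @ psi
        p_absorbed += psi[1] ** 2      # the blocker absorbs path-2 amplitude...
        psi[1] = 0.0                   # ...and so projects the state back
    return psi[0] ** 2, p_absorbed     # P(still on path 1), P(hit the blocker)

for N in (1, 6, 25, 200):
    print(N, interrogate(N))
```

With the blocker present, the photon stays on path 1 with probability cos(π/2N)^2N, which tends to 1 as N grows (about 0.66 for N = 6, broadly in line with the 70 percent figure quoted above); without the blocker, the N small rotations compose into a quarter turn that carries the photon entirely onto path 2, so the final detector cleanly distinguishes the two cases.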
In the nearly 25 years since the introduction of counterfactuals, physicists have realized many applications that are less volatile than detecting bombs. In 1998, Kwiat and his collaborators at Los Alamos developed photographs of human hair inside an interferometer, on a path that light did not traverse. Two years later in England, two theorists, Graeme Mitchison and Richard Jozsa, described how to compute without interactions.
Quantum computers are hard to build, in part because measurements are heavy-handed. To know the outcome of an algorithm, we have to ruin the very superpositions on which such a computer runs. In 2006, Onur Hosten, Kwiat and other collaborators at the University of Illinois at Urbana–Champaign, appended a chain of quantum Zeno effects to counterfactuals and designed a quantum computer that could deliver information without running at all. “This is possible only in the realm of quantum computers,” they explained, “where the computer can exist in a quantum superposition of ‘running’ and ‘not running’ at the same time.”
When Vaidman read that theoretical computers need not be turned on to work, he thought Kwiat had bested him again, improving the efficiency of counterfactuals. But the idea is not as straightforward as discerning an obstacle on a path without light. As Vaidman says, Kwiat and his collaborators’ computer relies on “the absence of an object in a particular place, allegedly without [photons] being there.” But no information can come from nothing. After analyzing the experiment for several months, Vaidman explained that “the photon did not enter the interferometer, the photon never left the interferometer, but it was there.” The particle had to be where it could not, if information was derived from the absence of an object. Kwiat wrote that Vaidman’s interpretation is “nonsense.”
At the Electronics and Telecommunications Research Institute in South Korea, in 2009, Tae-Gon Noh took the next logical step. Instead of a fanciful computer that does not have to run, Noh applied counterfactuals “to a real-world communication task.” He developed a protocol for sending a key to unlock shared data. When a photon travels the unobstructed path in an interferometer, the information acquired about the other path—through which the photon could not have traveled—may be used to reveal the secret key. The crests and troughs of the light can be made to undulate up and down or side to side, and this binary property (called polarization) can be used to encode bits. Information can then be transmitted through the obstructed channel, which the receiver controls. The sender and receiver also share regular information, but if they follow a simple protocol, no one can eavesdrop or steal their key. There is nothing to intercept—the photons live and die, as Noh explained, inside the sender’s device. Even stranger than the lack of a signal, he said, is “the mere possibility that an eavesdropper can commit a crime is sufficient to detect the eavesdropper, even though the crime is not in fact carried out.” He compared counterfactuals to the preemptive arrests in the film Minority Report.
In 2011, Pan and a few other collaborators in Hefei realized Noh’s “engrossing” scheme in the real world, on a tabletop in their lab. They sent a secure key—at a rate of 51 bits per second—over a kilometer of fiber-optic cable, although not without significant errors. Pan and his group did not achieve the fidelity needed to convert their science into a technology, but they claimed, “we have given proof-in-principle demonstrations.” Some information really could travel without particles.
While living in England in 2009, a young man named Hatim Salih read Noh’s paper and asked himself, “Why didn’t I think of that?” He had a degree in electronics but had taught himself quantum physics after reading a few popular books by Roger Penrose and attending seminars in York. A year later Salih returned to his native Sudan, where he marketed solar panels, and a friend invited him to be a visiting researcher at the King Abdulaziz City for Science and Technology in Saudi Arabia. He did not have a PhD, but with a colleague there and two other theorists at Texas A&M University, he took “the logic of counterfactual communication to its natural conclusion.” As they explained, “using a chained version of the Zeno effect, information can be directly exchanged between Alice and Bob with no physical particles traveling between them, thus achieving direct counterfactual communication.” (Instead of labeling senders and receivers A and B, physicists call them Alice and Bob.)
First, Salih and his colleagues devised a protocol for communicating some information without particles. Split photons down two paths and reunite them at a second beam splitter, as before. Now do this again and again, adding one interferometer after another (Figure 3a). Alter the paths with special beam splitters, so the photons always proceed to the same detector at the end. The theoretical Bob, who controls obstacles in the series of paths, can use them to send information to Alice’s detectors. If he lets a photon through, the guaranteed click at the first detector is defined as a 0 in binary logic. If he blocks the paths after every split, a photon very likely appears in the second detector, a result defined as a 1. Thus Bob transmits information to Alice, even when he does not let some particles through.
In theory this set-up transmits 0s with certainty, but the counterfactual information—the 1s transmitted without particles—is less reliable. Photons from the unimpeded path occasionally pass to the other detector that registers 0s, even if there are hundreds of obstacles.
But Salih and his colleagues then claimed that they knew how to accomplish what no one had before: Make each bit counterfactual. It should be possible to transmit signals between a sender and receiver simply by blocking the paths that a photon should never take.
After the initial split for the photons in an interferometer, divide one of these two paths again. Now add one small interferometer after another on this path, placing obstacles that Bob controls in each (Figure 3b). Many small interferometers are thus nested inside a large one, and this can be done again and again. The obstacles on the interior paths act as repeated measurements, and the more interaction-free measurements there are, the more efficient the communication will be. The paths can even be made to interfere, so the particles that arrive at Alice’s detectors can never travel the paths that Bob obstructs. They are truly blocked. But the detectors will still register differently when he obstructs his paths or not. Bob sends information without interacting with any particles.
The protocol that Salih and his colleagues designed is difficult to imagine, even inside a lab. So they conceived another protocol using a similar interferometer, one developed by Albert Michelson to determine the existence of the aether during the 1880s (and used, more recently, to detect gravitational waves). In a Michelson interferometer, light is again divided onto two paths, but mirrors reflect the beams back to where they originally split. They interfere there. Experimenters can nest these interferometers and distinguish the light where it interferes by the two polarizations, which serve as the bits.
At the end of their paper, Salih and his colleagues declared, “we strongly challenge the long-standing assumption that information transfer requires physical particles to travel between sender and receiver.” In 2014, they even obtained a patent on direct communication without “physically real” entities. Salih then founded a company, called Qubet Research, to monetize the idea.
Weak Measurements, Strong Opinions
Lev Vaidman is a prolific commenter. Twelve of his most recent 25 papers are replies to other physicists or criticisms of their work. He is sometimes impolitic enough to assert that someone’s paper should never have been published, but he also lists his rejected papers on his Web site. He comments so frequently, he says, because “open discussion and disagreements help to move physics forward.”
Physicists can agree on the mathematics and the results of experiments, but still dispute their interpretations. Perhaps surprisingly—but also rationally—Vaidman doubts that communication happens in the absence of particles, as Salih and others describe. Vaidman has complained that Salih et al.’s protocol “was based on a naive classical approach to the past of the photons.” He consented that a “process is counterfactual if it occurs without any real physical particle traveling between the two parties. But what is the meaning of this definition? For the quantum particle, there is no clear definition of ‘traveling’.” Vaidman’s argument is not just about language, but what we can say about the world.
Vaidman insists that particles have no past. And if they don’t, we cannot actually know if one was ever near an object and interacted with it. When we measure a particle to find out, the wave function that could have told us collapses. We do not learn history from particles, we force history upon them.
But, in 1988, Vaidman and two colleagues at the University of South Carolina imagined a new kind of measurement—one that was so weak that it did not collapse quantum states. A weak measurement cannot release the information we seek from photons, but coupled with many such measurements and one strong one, it might. In fact, a weak measurement followed by a strong one gives us more information than we have any right to know. Sending an electron through a slight magnetic field and then a strong perpendicular field, for instance, reveals two incompatible properties at the same time. Weak measurements disclose what Heisenberg had deemed uncertain.
Vaidman and his colleagues have converted their theory of weak measurements into a new version of quantum mechanics. They combine the information from a weak measurement and a strong one into a single wave function. The past is set by the weak measurement, and Vaidman then builds a superposition between the particle’s past and its future to know what happened in between. When Vaidman applies his theory to counterfactuals, the photon always appears where it should not—on the obstructed path. Few understand his approach, and many doubt it. The results are imaginary numbers that give negative probabilities, which should be impossible.
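The formula at the core of this program is short. Here is a minimal numeric sketch of the weak value A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ from the 1988 paper; the particular states are my own illustration of how nearly orthogonal pre- and postselection pushes the value far outside the observable's spectrum.

```python
import numpy as np

def weak_value(phi, A, psi):
    # Weak value of observable A with preselection |psi> and postselection |phi>.
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # eigenvalues are only +1 and -1
theta = np.deg2rad(40)
psi = np.array([np.cos(theta), np.sin(theta)])    # preselected state
phi = np.array([1, -1]) / np.sqrt(2)              # nearly orthogonal postselection
print(weak_value(phi, sz, psi))                   # ~(11.4+0j), far outside [-1, 1]
```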
But in 2013, Ariel Danan and a few colleagues in Tel Aviv, including Vaidman, studied interaction-free measurements in actual weak experiments. They vibrated one of the mirrors on one of the paths inside an interferometer to locate the photons on this path. “The experiment is analogous to the following scenario,” they wrote. “If our radio plays Bach, we know that the photons come from a classical music station, but if we hear a traffic report, we know that the photons come from a local radio station.” What they heard was surprising. The photons flitted about, even on forbidden paths, guided by their wave functions.
Many physicists doubt that a photon that neither enters a path nor exits it can still somehow be there. Salih argues that Vaidman is using his own version of quantum mechanics, so he naturally believes that other interpretations are wrong. Salih even implied that Vaidman is telling photons what to say when other physicists interrogate them.
This past April, Pan and his colleagues wrote in their paper: “Although several publications are presently available regarding the theoretical aspects of [counterfactual communication], a faithful experimental demonstration, however, is missing.” It was time for an experiment on communication to speak. The group started planning their experiment to end the “heated debate” before Salih and others even formally published their idea.
An infinite number of interferometers was required for perfect communication, which Pan and his group acknowledged was impractical. So they simplified the protocol for Michelson interferometers and built four, with two smaller ones nested inside. They set their source of single photons, their beam splitters and their mirrors on a small table that was temperature-controlled and isolated against vibrations. The counterfactual communication would occur across 50 centimeters, inside a lab in Shanghai. Pan’s collaborators, Cao and Li, designed a number of possible images to send, and the group voted for a Chinese knot. As Cheng-Zhi Peng explained, “it is symmetric and beautiful.”
Jian-Wei Pan (seated at center) and colleagues in their lab in Shanghai. Credit: Bo Li
The group wrote software to run their experiment automatically, without any human interference. On May 31, 2013, they sat at a computer and waited through the night to see if the image loaded on a screen. They trusted their instruments, but they quietly hoped that nothing would appear. A negative result would imply that quantum mechanics is wrong. No one had ever observed that.
Over five hours, 10 kilobytes of information passed the 50 empty centimeters between the sender and receiver. Many of the bits had to be transmitted several times before they registered, and the computer was better at recognizing 1s than 0s. But a monochrome bitmap appeared through static, although the group had not transmitted any particles that they could discern. Once they saw the image, after sunrise, they disbanded to sleep before they celebrated. They posted a short article one year later but did not submit their paper for publication for more than three years. They were too busy building communication satellites, and they wanted some time to think about the result.
Pan and his colleagues are now working to transmit a picture in shades of gray, and they hope to send pure quantum information based on another protocol by Salih. To ensure that no photons pass through the transmission channel, they also plan to do a weak measurement to determine where the photon goes.
Although Pan is in the business of communication satellites, and counterfactuals pique banks and militaries, the group reported another potential application for their experiment: “imaging ancient arts where shining a light directly is not permitted.” Kwiat has implied that counterfactuals might not be useful for anything else. He wrote: “In order to achieve a high level of counterfactuality, one needs many cycles, and this greatly slows down the rate of communication.” Information moves slower without particles than with them.
Credit: Jen Christiansen; Source: “Direct counterfactual communication via quantum Zeno effect,” By Yuan Cao et al., PNAS, No. 19, May 9, 2017 (Chinese knot panels)
Pan and colleagues attribute the mystery of counterfactual communication to the wave/particle duality. Salih has another interpretation. “I believe this experiment has something to say in support of the reality of the quantum wave function: If physical particles did not transfer information then what did?” Imaginary wave functions may be the last preserve of the real.
Salih is now working on a proof of counterfactuals, using weak measurements, to outflank his critics. When I asked Vaidman what would convince him that no particles were ever transmitted, he replied, tautologically: “If an object was found in a counterfactual way, there should be zero trace near it.” Pan’s collaborators told me, perhaps jokingly: “Although our demonstration hasn’t solved the issue entirely, we do believe that our work shed some light on the discussion.”
Quantum mechanics has survived nearly 100 years, and the unorthodox theory remains fabulous. Experiments routinely verify its predictions, and the normative theories invented to reform it have failed. Physicists continue to uncover new ways to adapt its mysteries to information technology and realize its wonders in the world. They are still waiting for the theory to communicate its meaning to us, however—with or without particles. |
021be26e508049ca | Science X Newsletter Week 52
Dear ymilog,
Here is your customized Science X Newsletter for week 52:
Korean artificial sun sets the new world record of 20-sec-long operation at 100 million degrees
The Korea Superconducting Tokamak Advanced Research (KSTAR), a superconducting fusion device also known as the Korean artificial sun, set the new world record as it succeeded in maintaining the high temperature plasma for 20 seconds with an ion temperature over 100 million degrees (Celsius).
Artificial intelligence solves Schrödinger's equation
A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult.
In the northern sky in December is a beautiful cluster of stars known as the Pleiades, or the "seven sisters." Look carefully and you will probably count six stars. So why do we say there are seven of them?
Japanese spacecraft's gifts: Asteroid chips like charcoal
They resemble small fragments of charcoal, but the soil samples collected from an asteroid and returned to Earth by a Japanese spacecraft were hardly disappointing.
Jupiter and Saturn cheek-to-cheek in rare celestial dance
The solar system's two biggest planets, Jupiter and Saturn, came within planetary kissing range in Monday's evening sky, an intimacy that will not occur again until 2080.
Earthlings and astronauts chat away, via ham radio
The International Space Station cost more than $100 billion. A ham radio set can be had for a few hundred bucks.
Masks block 99.9% of large COVID-linked droplets: study
Face masks reduce the risk of spreading large COVID-linked droplets when speaking or coughing by up to 99.9 percent, according to a lab experiment with mechanical mannequins and human subjects, researchers said Wednesday.
Breast milk could help treat COVID-19 and protect babies
Health psychology professor Jennifer Hahn-Holbrook and incoming grad student Jessica Marino have a new study suggesting that the breastmilk of mothers who have recovered from COVID-19 contains strong antibodies to the virus.
Light travels at a speed of about 300,000,000 meters per second as light particles, photons, or equivalently as electromagnetic field waves. Experiments led by Hrvoje Petek, an R.K. Mellon professor in the Department of Physics and Astronomy examined ideas surrounding the origins of light, taking snapshots of light, stopping light and using it to change properties of matter.
New flower from 100 million years ago brings fresh holiday beauty to 2020
Oregon State University researchers have identified a spectacular new genus and species of flower from the mid-Cretaceous period, a male specimen whose sunburst-like reach for the heavens was frozen in time by Burmese amber.
Hydrogen is a sustainable source of clean energy that avoids toxic emissions and can add value to multiple sectors in the economy including transportation, power generation, metals manufacturing, among others. Technologies for storing and transporting hydrogen bridge the gap between sustainable energy production and fuel use, and therefore are an essential component of a viable hydrogen economy. But traditional means of storage and transportation are expensive and susceptible to contamination. As a result, researchers are searching for alternative techniques that are reliable, low-cost and simple. More-efficient hydrogen delivery systems would benefit many applications such as stationary power, portable power, and mobile vehicle industries.
Could COVID-19 have wiped out the Neandertals?
Everybody loves Neandertals, those big-brained brutes we supposedly outcompeted and ultimately replaced using our sharp tongues and quick, delicate minds. But did we really, though? Is it mathematically possible that we could yet be them, and they us?
Austrian court overturns virus mask mandate in schools
Austria's Constitutional Court ruled Wednesday that two government measures to fight the spread of coronavirus in schools, compulsory mask-wearing and splitting classes into two halves to be taught in alternate shifts, were illegal.
Making jet fuel out of carbon dioxide
A team of researchers affiliated with several institutions in the U.K. and one in Saudi Arabia has developed a way to produce jet fuel using carbon dioxide as a main ingredient. In their paper published in the journal Nature Communications, the group describes their process and its efficiency.
Remarkable new species of snake found hidden in a biodiversity collection
To be fair, the newly described Waray Dwarf Burrowing Snake (Levitonius mirus) is pretty great at hiding.
Study finds evidence of lasting immunity after mild or asymptomatic COVID-19 infection
New research involving scientists from Queen Mary University of London has found evidence of protective immunity in people up to four months after mild or asymptomatic COVID-19.
Study resolves the position of fleas on the tree of life
A study of more than 1,400 protein-coding genes of fleas has resolved one of the longest standing mysteries in the evolution of insects, reordering their placement in the tree of life and pinpointing who their closest relatives are.
New population of blue whales discovered in the western Indian Ocean
Anti-diarrhea drug drives cancer cells to cell death
The research group led by Dr. Sjoerd van Wijk from the Institute of Experimental Cancer Research in Paediatrics at Goethe University found evidence two years ago that the anti-diarrhea drug loperamide could be used to induce cell death in glioblastoma cell lines. They have now deciphered its mechanism of action and, in doing so, are opening new avenues for the development of novel treatment strategies.
COVID immunity lasts at least eight months, new data reveals
Australian researchers have revealed—for the first time—that people who have been infected with the COVID-19 virus have immune memory to protect against reinfection for at least eight months.
|
96a038880aa4ef88 | Clifford algebra
In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems.[1][2] The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford.
The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras.[3]
Introduction and basic properties
A Clifford algebra is a unital associative algebra that contains and is generated by a vector space V over a field K, where V is equipped with a quadratic form Q : V → K. The Clifford algebra Cl(V, Q) is the "freest" algebra generated by V subject to the condition[4]
v² = Q(v)1   for all v ∈ V,
where the product on the left is that of the algebra, and the 1 is its multiplicative identity. The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below.
Where V is a finite-dimensional real vector space and Q is nondegenerate, Cl(V, Q) may be identified by the label Clp,q(R), indicating that V has an orthogonal basis with p elements satisfying ei² = +1 and q satisfying ei² = −1, and where R indicates that this is a Clifford algebra over the reals; i.e. coefficients of elements of the algebra are real numbers. This basis may be found by orthogonal diagonalization.
The free algebra generated by V may be written as the tensor algebra ⊕n≥0 V ⊗ ⋯ ⊗ V, that is, the sum of the tensor product of n copies of V over all n, and so a Clifford algebra would be the quotient of this tensor algebra by the two-sided ideal generated by elements of the form v ⊗ v − Q(v)1 for all elements v ∈ V. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. uv). Its associativity follows from the associativity of the tensor product.
The Clifford algebra has a distinguished subspace V, being the image of the embedding map. Such a subspace cannot in general be uniquely determined given only a K-algebra isomorphic to the Clifford algebra.
If the characteristic of the ground field K is not 2, then one can rewrite this fundamental identity in the form
uv + vu = 2⟨u, v⟩1   for all u, v ∈ V,
where
⟨u, v⟩ = (Q(u + v) − Q(u) − Q(v))/2
is the symmetric bilinear form associated with Q, via the polarization identity.
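The equivalence is a one-line check, expanding the defining relation v² = Q(v)1 on a sum:

$$ uv + vu = (u+v)^2 - u^2 - v^2 = \bigl(Q(u+v) - Q(u) - Q(v)\bigr)\,1 = 2\langle u, v\rangle\, 1. $$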
Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case. In particular, if char(K) = 2 it is not true that a quadratic form uniquely determines a symmetric bilinear form satisfying Q(v) = ⟨v, v⟩, nor that every quadratic form admits an orthogonal basis. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed.
As a quantization of the exterior algebra
Clifford algebras are closely related to exterior algebras. Indeed, if Q = 0 then the Clifford algebra Cl(V, Q) is just the exterior algebra ⋀(V). For nonzero Q there exists a canonical linear isomorphism between ⋀(V) and Cl(V, Q) whenever the ground field K does not have characteristic two. That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication together with the distinguished subspace is strictly richer than the exterior product since it makes use of the extra information provided by Q.
The Clifford algebra is a filtered algebra; the associated graded algebra is the exterior algebra.
More precisely, Clifford algebras may be thought of as quantizations (cf. quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of the symmetric algebra.
Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras.
Universal property and construction
Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either the field of real numbers R, or the field of complex numbers C, or a finite field.
A Clifford algebra Cl(V, Q) is a pair (A, i),[5][6] where A is a unital associative algebra over K and i is a linear map i : V → Cl(V, Q) satisfying i(v)² = Q(v)1 for all v in V, defined by the following universal property: given any unital associative algebra A over K and any linear map j : V → A such that
j(v)² = Q(v)1A   for all v ∈ V
(where 1A denotes the multiplicative identity of A), there is a unique algebra homomorphism f : Cl(V, Q) → A such that the following diagram commutes (i.e. such that fi = j):
The quadratic form Q may be replaced by a (not necessarily symmetric) bilinear form ⟨⋅,⋅⟩ that has the property ⟨v, v⟩ = Q(v) for all v ∈ V, in which case an equivalent requirement on j is
j(v)j(v) = ⟨v, v⟩1A   for all v ∈ V.
When the characteristic of the field is not 2, this may be replaced by what is then an equivalent requirement,
j(v)j(w) + j(w)j(v) = (⟨v, w⟩ + ⟨w, v⟩)1A   for all v, w ∈ V,
where the bilinear form may additionally be restricted to being symmetric without loss of generality.
A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal IQ in T(V) generated by all elements of the form
v ⊗ v − Q(v)1   for all v ∈ V,
and define Cl(V, Q) as the quotient algebra
Cl(V, Q) = T(V)/IQ.
The ring product inherited by this quotient is sometimes referred to as the Clifford product[7] to distinguish it from the exterior product and the scalar product.
It is then straightforward to show that Cl(V, Q) contains V and satisfies the above universal property, so that Cl is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cl(V, Q). It also follows from this construction that i is injective. One usually drops the i and considers V as a linear subspace of Cl(V, Q).
The universal characterization of the Clifford algebra shows that the construction of Cl(V, Q) is functorial in nature. Namely, Cl can be considered as a functor from the category of vector spaces with quadratic forms (whose morphisms are linear maps preserving the quadratic form) to the category of associative algebras. The universal property guarantees that linear maps between vector spaces (preserving the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras.
Basis and dimension
Since V comes equipped with a quadratic form Q, in characteristic not equal to 2 there exist bases for V that are orthogonal. An orthogonal basis is one such that for the associated symmetric bilinear form
⟨ei, ej⟩ = 0 for i ≠ j, and ⟨ei, ei⟩ = Q(ei).
The fundamental Clifford identity implies that for an orthogonal basis
eiej = −ejei for i ≠ j, and ei² = Q(ei)1.
This makes manipulation of orthogonal basis vectors quite simple. Given a product of distinct orthogonal basis vectors of V, one can put them into a standard order while including an overall sign determined by the number of pairwise swaps needed to do so (i.e. the signature of the ordering permutation).
If the dimension of V over K is n and {e1, …, en} is an orthogonal basis of (V, Q), then Cl(V, Q) is free over K with a basis
{ ei1ei2 ⋯ eik | 1 ≤ i1 < i2 < ⋯ < ik ≤ n and 0 ≤ k ≤ n }.
The empty product (k = 0) is defined as the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford algebra is
dim Cl(V, Q) = Σk=0..n (n choose k) = 2ⁿ.
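To make the sign bookkeeping concrete, here is a small Python sketch of my own (the bitmask encoding and the names reorder_sign and blade_mul are illustrative, not standard library code). It multiplies basis blades using exactly the two rules above: distinct generators anticommute, repeated generators contract to their square.

```python
def reorder_sign(a: int, b: int) -> int:
    """Sign from sorting the product of two ascending blades.

    A blade is a bitmask: bit i set means e_i is a factor. Each pair
    (i in a, j in b) with i > j costs one anticommutation (one sign flip).
    """
    swaps, a = 0, a >> 1
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def blade_mul(a: int, b: int, squares):
    """Product of basis blades a and b, where squares[i] is the scalar e_i**2.

    Repeated generators contract via e_i**2; the survivors are a ^ b.
    Returns (coefficient, blade).
    """
    sign = reorder_sign(a, b)
    common = a & b
    for i, s in enumerate(squares):
        if (common >> i) & 1:
            sign *= s
    return sign, a ^ b

# Cl_{0,3}(R): all three generators square to -1 (e1, e2, e3 on bits 0, 1, 2).
squares = [-1, -1, -1]
print(blade_mul(0b001, 0b010, squares))  # (1, 3):  e1 e2 is the blade 0b011
print(blade_mul(0b010, 0b001, squares))  # (-1, 3): e2 e1 = -(e1 e2)
print(blade_mul(0b011, 0b011, squares))  # (-1, 0): (e1 e2)^2 = -1
```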
Examples: real and complex Clifford algebras
The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms.
Each of the algebras Clp,q(R) and Cln(C) is isomorphic to A or A ⊕ A, where A is a full matrix ring with entries from R, C, or H. For a complete classification of these algebras see Classification of Clifford algebras.
Real numbers
Clifford algebras are also sometimes referred to as geometric algebras, most often over the real numbers.
Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form:
Q(v) = v1² + ⋯ + vp² − vp+1² − ⋯ − vp+q²,
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Clp,q(R). The symbol Cln(R) means either Cln,0(R) or Cl0,n(R) depending on whether the author prefers positive-definite or negative-definite spaces.
A standard basis {e1, …, en} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1.
A few low-dimensional cases are:
Cl0,0(R) is naturally isomorphic to R since there are no nonzero vectors.
Cl0,1(R) is a two-dimensional algebra generated by e1 that squares to −1, and is algebra-isomorphic to C, the field of complex numbers.
Cl0,2(R) is a four-dimensional algebra spanned by {1, e1, e2, e1e2}. The latter three elements all square to −1 and anticommute, and so the algebra is isomorphic to the quaternions H.
Cl0,3(R) is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, the split-biquaternions.
Complex numbers
One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension n is equivalent to the standard diagonal form
Q(z) = z1² + z2² + ⋯ + zn².
Thus, for each dimension n, up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form. We will denote the Clifford algebra on Cn with the standard quadratic form by Cln(C).
For the first few cases one finds that
Cl0(C) ≅ C, the complex numbers
Cl1(C) ≅ C ⊕ C, the bicomplex numbers
Cl2(C) ≅ M2(C), the biquaternions
where Mn(C) denotes the algebra of n × n matrices over C.
Examples: constructing quaternions and dual quaternions
In this section, Hamilton's quaternions are constructed as the even subalgebra of the Clifford algebra Cl0,3(R).
Let the vector space V be real three-dimensional space R3, and the quadratic form Q be the negative of the usual Euclidean metric. Then, for v, w in R3 we have the bilinear form (or scalar product)
⟨v, w⟩ = v1w1 + v2w2 + v3w3.
Now introduce the Clifford product of vectors v and w given by
vw + wv = −2⟨v, w⟩.
This formulation uses the negative sign so the correspondence with quaternions is easily shown.
Denote a set of orthogonal unit vectors of R3 as e1, e2, and e3, then the Clifford product yields the relations
e2e3 = −e3e2, e3e1 = −e1e3, e1e2 = −e2e1,
and
e1² = e2² = e3² = −1.
The general element of the Clifford algebra Cl0,3(R) is given by
A = a0 + a1e1 + a2e2 + a3e3 + a4e2e3 + a5e3e1 + a6e1e2 + a7e1e2e3.
The linear combination of the even degree elements of Cl0,3(R) defines the even subalgebra Cl[0]0,3(R) with the general element
q = q0 + q1e2e3 + q2e3e1 + q3e1e2.
The basis elements can be identified with the quaternion basis elements i, j, k as
i = e2e3, j = e3e1, k = e1e2,
which shows that the even subalgebra Cl[0]0,3(R) is Hamilton's real quaternion algebra.
To see this, compute
i² = (e2e3)(e2e3) = −e2e2e3e3 = −1,
and
ij = (e2e3)(e3e1) = −e2e1 = e1e2 = k.
Similarly j² = k² = −1, jk = i, and ki = j.
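As a cross-check, the blade_mul sketch from the "Basis and dimension" section above reproduces these relations numerically (the wrapper mul and the (coefficient, bitmask) encoding are again my own; since e3e1 = −(e1e3), j carries an explicit −1 in front of the ascending blade).

```python
# Generators e1, e2, e3 on bits 0, 1, 2; reuses blade_mul from the earlier sketch.
squares = [-1, -1, -1]
i = (+1, 0b110)   # e2 e3
j = (-1, 0b101)   # e3 e1 = -(e1 e3)
k = (+1, 0b011)   # e1 e2

def mul(x, y):
    (cx, bx), (cy, by) = x, y
    c, b = blade_mul(bx, by, squares)
    return cx * cy * c, b

print(mul(i, i))          # (-1, 0): i^2 = -1
print(mul(i, j))          # (1, 3):  ij = k (blade 0b011)
print(mul(mul(i, j), k))  # (-1, 0): ijk = -1
```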
Dual quaternions
In this section, dual quaternions are constructed as the even Clifford algebra of real four-dimensional space with a degenerate quadratic form.[8][9]
Let the vector space V be real four-dimensional space R4, and let the quadratic form Q be a degenerate form derived from the Euclidean metric on R3. For v, w in R4 introduce the degenerate bilinear form
d(v, w) = v1w1 + v2w2 + v3w3.
This degenerate scalar product projects distance measurements in R4 onto the R3 hyperplane.
The Clifford product of vectors v and w is given by
vw + wv = −2 d(v, w).
Note the negative sign is introduced to simplify the correspondence with quaternions.
Denote a set of mutually orthogonal unit vectors of R4 as e1, e2, e3 and e4, then the Clifford product yields the relations
eiej = −ejei for i ≠ j,
and
e1² = e2² = e3² = −1, e4² = 0.
The general element of the Clifford algebra Cl(R4, d) has 16 components. The linear combination of the even degree elements defines the even subalgebra Cl[0](R4, d) with the general element
H = h0 + h1e2e3 + h2e3e1 + h3e1e2 + h4e4e1 + h5e4e2 + h6e4e3 + h7e1e2e3e4.
The basis elements can be identified with the quaternion basis elements i, j, k and the dual unit ε as
i = e2e3, j = e3e1, k = e1e2, ε = e1e2e3e4.
This provides the correspondence of Cl[0](R4, d) with the dual quaternion algebra.
To see this, compute
ε² = (e1e2e3e4)(e1e2e3e4) = 0,
because rearranging the factors only changes signs, and bringing the two copies of e4 together produces e4² = 0.
The exchanges of e1 and e4 alternate signs an even number of times, and show the dual unit ε commutes with the quaternion basis elements i, j, and k.
Examples: in small dimension
Let K be any field of characteristic not 2.
Dimension 1
For dim V = 1, if Q has diagonalization diag(a), that is there is a non-zero vector x such that Q(x) = a, then Cl(V, Q) is algebra-isomorphic to a K-algebra generated by an element x satisfying x² = a, the quadratic algebra K[X] / (X² − a).
In particular, if a = 0 (that is, Q is the zero quadratic form) then Cl(V, Q) is algebra-isomorphic to the dual numbers algebra over K.
If a is a non-zero square in K, then Cl(V, Q) ≃ K ⊕ K.
Otherwise, Cl(V, Q) is isomorphic to the quadratic field extension K(√a) of K.
Dimension 2
For dim V = 2, if Q has diagonalization diag(a, b) with non-zero a and b (which always exists if Q is non-degenerate), then Cl(V, Q) is isomorphic to a K-algebra generated by elements x and y satisfying x² = a, y² = b and xy = −yx.
Thus Cl(V, Q) is isomorphic to the (generalized) quaternion algebra (a, b)K. We retrieve Hamilton's quaternions when a = b = −1, since H = (−1, −1)R.
As a special case, if some x in V satisfies Q(x) = 1, then Cl(V, Q) ≃ M2(K).
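The blade_mul sketch above covers this case as well: set squares = [a, b] to realize generators x and y with x² = a, y² = b and xy = −yx. A quick illustration with a = 2, b = 3 (my own choice of constants):

```python
# Generalized quaternion algebra (a, b)_K via the earlier blade_mul sketch;
# x and y sit on bits 0 and 1.
squares = [2, 3]
print(blade_mul(0b01, 0b01, squares))  # (2, 0):  x^2 = 2
print(blade_mul(0b10, 0b10, squares))  # (3, 0):  y^2 = 3
print(blade_mul(0b10, 0b01, squares))  # (-1, 3): yx = -(xy)
```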
Relation to the exterior algebra
Given a vector space V one can construct the exterior algebra ⋀(V), whose definition is independent of any quadratic form on V. It turns out that if K does not have characteristic 2 then there is a natural isomorphism between ⋀(V) and Cl(V, Q) considered as vector spaces (and there exists an isomorphism in characteristic two, which may not be natural). This is an algebra isomorphism if and only if Q = 0. One can thus consider the Clifford algebra Cl(V, Q) as an enrichment (or more precisely, a quantization, cf. the Introduction) of the exterior algebra on V with a multiplication that depends on Q (one can still define the exterior product independently of Q).
The easiest way to establish the isomorphism is to choose an orthogonal basis {e1, …, en} for V and extend it to a basis for Cl(V, Q) as described above. The map Cl(V, Q) → ⋀(V) is determined by
ei1ei2 ⋯ eik ↦ ei1 ∧ ei2 ∧ ⋯ ∧ eik.
Note that this only works if the basis {e1, …, en} is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism.
If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions fk : V × ⋯ × V → Cl(V, Q) by
fk(v1, …, vk) = (1/k!) Σσ∈Sk sgn(σ) vσ(1)vσ(2) ⋯ vσ(k),
where the sum is taken over the symmetric group on k elements, Sk. Since fk is alternating it induces a unique linear map ⋀k(V) → Cl(V, Q). The direct sum of these maps gives a linear map between ⋀(V) and Cl(V, Q). This map can be shown to be a linear isomorphism, and it is natural.
A more sophisticated way to view the relationship is to construct a filtration on Cl(V, Q). Recall that the tensor algebra T(V) has a natural filtration: F0 ⊂ F1 ⊂ F2 ⊂ ⋯, where Fk contains sums of tensors with order ≤ k. Projecting this down to the Clifford algebra gives a filtration on Cl(V, Q). The associated graded algebra
Gr Cl(V, Q) = ⊕k Fk/Fk−1
is naturally isomorphic to the exterior algebra ⋀(V). Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing complements of Fk in Fk+1 for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two.
In the following, assume that the characteristic is not 2.[10]
Clifford algebras are Z2-graded algebras (also known as superalgebras). Indeed, the linear map on V defined by v ↦ −v (reflection through the origin) preserves the quadratic form Q and so by the universal property of Clifford algebras extends to an algebra automorphism
α : Cl(V, Q) → Cl(V, Q).
Since α is an involution (i.e. it squares to the identity) one can decompose Cl(V, Q) into positive and negative eigenspaces of α:
Cl(V, Q) = Cl[0](V, Q) ⊕ Cl[1](V, Q), where Cl[i](V, Q) = { x ∈ Cl(V, Q) | α(x) = (−1)^i x }.
Since α is an automorphism it follows that:
where the bracketed superscripts are read modulo 2. This gives Cl(V, Q) the structure of a Z2-graded algebra. The subspace Cl[0](V, Q) forms a subalgebra of Cl(V, Q), called the even subalgebra. The subspace Cl[1](V, Q) is called the odd part of Cl(V, Q) (it is not a subalgebra). This Z2-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main involution or grade involution. Elements that are pure in this Z2-grading are simply said to be even or odd.
Remark. In characteristic not 2 the underlying vector space of Cl(V, Q) inherits an N-grading and a Z-grading from the canonical isomorphism with the underlying vector space of the exterior algebra ⋀(V).[11] It is important to note, however, that this is a vector space grading only. That is, Clifford multiplication does not respect the N-grading or Z-grading, only the Z2-grading: for instance if Q(v) ≠ 0, then v ∈ Cl¹(V, Q), but v² ∈ Cl⁰(V, Q), not in Cl²(V, Q). Happily, the gradings are related in the natural way: Z2 ≅ N/2N ≅ Z/2Z. Further, the Clifford algebra is Z-filtered:
Cl^{≤i}(V, Q) ⋅ Cl^{≤j}(V, Q) ⊂ Cl^{≤i+j}(V, Q).
The degree of a Clifford number usually refers to the degree in the N-grading.
The even subalgebra Cl[0](V, Q) of a Clifford algebra is itself isomorphic to a Clifford algebra.[12][13] If V is the orthogonal direct sum of a vector a of nonzero norm Q(a) and a subspace U, then Cl[0](V, Q) is isomorphic to Cl(U, −Q(a)Q), where −Q(a)Q is the form Q restricted to U and multiplied by −Q(a). In particular over the reals this implies that:
Cl[0]_{p,q}(R) ≅ Cl_{p,q−1}(R), for q > 0,
Cl[0]_{p,q}(R) ≅ Cl_{q,p−1}(R), for p > 0.
In the negative-definite case this gives an inclusion Cl_{0,n−1}(R) ⊂ Cl_{0,n}(R), which extends the sequence
R ⊂ C ⊂ H ⊂ H ⊕ H ⊂ ⋯
Likewise, in the complex case, one can show that the even subalgebra of Cl_n(C) is isomorphic to Cl_{n−1}(C).
Antiautomorphisms
In addition to the automorphism α, there are two antiautomorphisms that play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an antiautomorphism that reverses the order in all products of vectors:
v₁ ⊗ v₂ ⊗ ⋯ ⊗ v_k ↦ v_k ⊗ ⋯ ⊗ v₂ ⊗ v₁.
Since the ideal I_Q is invariant under this reversal, this operation descends to an antiautomorphism of Cl(V, Q) called the transpose or reversal operation, denoted by xᵗ. The transpose is an antiautomorphism: (xy)ᵗ = yᵗ xᵗ. The transpose operation makes no use of the Z2-grading, so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford conjugation, denoted x̄:
x̄ = α(xᵗ) = α(x)ᵗ.
Of the two antiautomorphisms, the transpose is the more fundamental.[14]
Note that all of these operations are involutions. One can show that they act as ±1 on elements which are pure in the Z-grading. In fact, all three operations depend only on the degree modulo 4. That is, if x is pure with degree k then
α(x) = ±x,  xᵗ = ±x,  x̄ = ±x,
where the signs are given by the following table:
k mod 4   0   1   2   3
α(x)      +   −   +   −   (−1)ᵏ
xᵗ        +   +   −   −   (−1)^{k(k−1)/2}
x̄        +   −   −   +   (−1)^{k(k+1)/2}
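A few lines of Python (a throwaway sketch, not library code) reproduce this table and confirm that Clifford conjugation composes the grade involution with the transpose:

```python
def signs(k):
    alpha = (-1) ** k                        # grade involution on degree k
    transpose = (-1) ** (k * (k - 1) // 2)   # reversal of a product of k vectors
    conjugate = (-1) ** (k * (k + 1) // 2)   # Clifford conjugation
    return alpha, transpose, conjugate

for k in range(8):
    a, t, c = signs(k)
    assert c == a * t           # conjugation = alpha composed with transpose
    print(k % 4, a, t, c)       # the sign pattern repeats with period 4
```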
Clifford scalar product
When the characteristic is not 2, the quadratic form Q on V can be extended to a quadratic form on all of Cl(V, Q) (which we also denote by Q). A basis-independent definition of one such extension is
Q(x) = ⟨xᵗ x⟩₀,
where ⟨a⟩₀ denotes the scalar part of a (the degree 0 part in the Z-grading). One can show that
Q(v₁ v₂ ⋯ v_k) = Q(v₁) Q(v₂) ⋯ Q(v_k),
where the vᵢ are elements of V – this identity is not true for arbitrary elements of Cl(V, Q).
The associated symmetric bilinear form on Cl(V, Q) is given by
⟨x, y⟩ = ⟨xᵗ y⟩₀.
One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cl(V, Q) is nondegenerate if and only if it is nondegenerate on V.
The operator of left (respectively right) Clifford multiplication by the transpose aᵗ of an element a is the adjoint of left (respectively right) Clifford multiplication by a with respect to this inner product. That is,
⟨a x, y⟩ = ⟨x, aᵗ y⟩,  and  ⟨x a, y⟩ = ⟨x, y aᵗ⟩.
Structure of Clifford algebras
In this section we assume that the characteristic is not 2, the vector space V is finite-dimensional and that the associated symmetric bilinear form of Q is non-singular. A central simple algebra over K is a matrix algebra over a (finite-dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions.
• If V has even dimension then Cl(V, Q) is a central simple algebra over K.
• If V has even dimension then Cl[0](V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cl(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K.
• If V has odd dimension then Cl[0](V, Q) is a central simple algebra over K.
The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and (−1)^{dim(U)/2}d V, which is the space V with its quadratic form multiplied by (−1)^{dim(U)/2}d. Over the reals, this implies in particular that
Cl_{p+2,q}(R) = M₂(R) ⊗ Cl_{q,p}(R)
Cl_{p+1,q+1}(R) = M₂(R) ⊗ Cl_{p,q}(R)
Cl_{p,q+2}(R) = H ⊗ Cl_{q,p}(R)
These formulas can be used to find the structure of all real Clifford algebras and all complex Clifford algebras; see the classification of Clifford algebras.
Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends only on the signature (p − q) mod 8. This is an algebraic form of Bott periodicity.
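As an illustration, the mod-8 periodicity can be turned into a small lookup routine. The sketch below (assuming the v² = +Q(v) sign convention, with 'R', 'C', 'H' denoting matrix algebras over the reals, complexes, and quaternions) reads off the matrix-algebra form of Cl_{p,q}(R) from the standard table of Brauer classes:

```python
def clifford_real(p, q):
    """Matrix-algebra form of Cl_{p,q}(R), via the (p - q) mod 8 table.
    Entries: (division algebra, its R-dimension, number of summands)."""
    table = {0: ('R', 1, 1), 1: ('R', 1, 2), 2: ('R', 1, 1), 3: ('C', 2, 1),
             4: ('H', 4, 1), 5: ('H', 4, 2), 6: ('H', 4, 1), 7: ('C', 2, 1)}
    base, d, factors = table[(p - q) % 8]
    # total real dimension 2^(p+q) fixes the matrix size n
    n = int(round((2 ** (p + q) / (d * factors)) ** 0.5))
    return " + ".join([f"M{n}({base})"] * factors)

print(clifford_real(0, 2))   # M1(H): Hamilton's quaternions
print(clifford_real(1, 3))   # M2(H): the spacetime algebra Cl_{1,3}(R)
print(clifford_real(3, 1))   # M4(R): the other signature convention
print(clifford_real(0, 8) == clifford_real(0, 0).replace("M1", "M16"))  # periodicity
```

Note that the two metric-signature conventions give non-isomorphic real algebras, M₂(H) versus M₄(R), even though their complexifications agree.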
Lipschitz group
The class of Lipschitz groups (a.k.a.[15] Clifford groups or Clifford–Lipschitz groups) was discovered by Rudolf Lipschitz.[16]
In this section we assume that V is finite-dimensional and the quadratic form Q is nondegenerate.
An action on the elements of a Clifford algebra by its group of units may be defined in terms of a twisted conjugation: twisted conjugation by x maps y ↦ α(x) y x⁻¹, where α is the main involution defined above.
The Lipschitz group Γ is defined to be the set of invertible elements x that stabilize the set of vectors under this action,[17] meaning that for all v in V we have:
α(x) v x⁻¹ ∈ V.
This formula also defines an action of the Lipschitz group on the vector space V that preserves the quadratic form Q, and so gives a homomorphism from the Lipschitz group to the orthogonal group. The Lipschitz group contains all elements r of V for which Q(r) is invertible in K, and these act on V by the corresponding reflections that take v to v − (⟨r, v⟩ + ⟨v, r⟩)r/Q(r). (In characteristic 2 these are called orthogonal transvections rather than reflections.)
If V is a finite-dimensional real vector space with a non-degenerate quadratic form then the Lipschitz group maps onto the orthogonal group of V with respect to the form (by the Cartan–Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to exact sequences
1 → K× → Γ → O_V(K) → 1,
1 → K× → Γ⁰ → SO_V(K) → 1,
where Γ⁰ denotes the subgroup of even elements of Γ.
Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm.
Spinor norm
In arbitrary characteristic, the spinor norm Q is defined on the Lipschitz group by
Q(x) = xᵗ x.
It is a homomorphism from the Lipschitz group to the group K× of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra. Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ¹. The difference is not very important in characteristic other than 2.
The nonzero elements of K have spinor norm in the group (K×)² of squares of nonzero elements of the field K. So when V is finite-dimensional and non-singular we get an induced map from the orthogonal group of V to the group K×/(K×)², also called the spinor norm. The spinor norm of the reflection about r, for any vector r, has image Q(r) in K×/(K×)², and this property uniquely defines it on the orthogonal group. This gives exact sequences:
1 → {±1} → Pin_V(K) → O_V(K) → K×/(K×)²,
1 → {±1} → Spin_V(K) → SO_V(K) → K×/(K×)².
Note that in characteristic 2 the group {±1} has just one element.
From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ₂ for the algebraic group of square roots of 1 (over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence
1 → μ₂ → Pin_V → O_V → 1
yields a long exact sequence on cohomology, which begins
1 → H⁰(μ₂; K) → H⁰(Pin_V; K) → H⁰(O_V; K) → H¹(μ₂; K).
The 0th Galois cohomology group of an algebraic group with coefficients in K is just the group of K-valued points: H⁰(G; K) = G(K), and H¹(μ₂; K) ≅ K×/(K×)², which recovers the previous sequence
1 → {±1} → Pin_V(K) → O_V(K) → K×/(K×)²,
where the spinor norm is the connecting homomorphism H⁰(O_V; K) → H¹(μ₂; K).
Spin and Pin groups
In this section we assume that V is finite-dimensional and its bilinear form is non-singular.
The Pin group Pin_V(K) is the subgroup of the Lipschitz group Γ of elements of spinor norm 1, and similarly the Spin group Spin_V(K) is the subgroup of elements of Dickson invariant 0 in Pin_V(K). When the characteristic is not 2, these are the elements of determinant 1. The Spin group usually has index 2 in the Pin group.
Recall from the previous section that there is a homomorphism from the Lipschitz group onto the orthogonal group. We define the special orthogonal group to be the image of Γ⁰. If K does not have characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the special orthogonal group is the set of elements of Dickson invariant 0.
There is a homomorphism from the Pin group to the orthogonal group. The image consists of the elements of spinor norm 1 ∈ K×/(K×)2. The kernel consists of the elements +1 and −1, and has order 2 unless K has characteristic 2. Similarly there is a homomorphism from the Spin group to the special orthogonal group of V.
In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3. Further the kernel of this homomorphism consists of 1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Note, however, that the simple connectedness of the spin group is not true in general: if V is R^{p,q} for p and q both at least 2 then the spin group is not simply connected. In this case the algebraic group Spin_{p,q} is simply connected as an algebraic group, even though its group of real valued points Spin_{p,q}(R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups.
Clifford algebras Cl_{p,q}(C), with p + q = 2n even, are matrix algebras which have a complex representation of dimension 2ⁿ. By restricting to the group Pin_{p,q}(R) we get a complex representation of the Pin group of the same dimension, called the spin representation. If we restrict this to the spin group Spin_{p,q}(R) then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2ⁿ⁻¹.
If p + q = 2n + 1 is odd then the Clifford algebra Cl_{p,q}(C) is a sum of two matrix algebras, each of which has a representation of dimension 2ⁿ, and these are also both representations of the Pin group Pin_{p,q}(R). On restriction to the spin group Spin_{p,q}(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2ⁿ.
More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors.
Real spinors
To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The Pin group, Pin_{p,q}, is the set of invertible elements in Cl_{p,q} that can be written as a product of unit vectors:
Pin_{p,q} = {v₁ v₂ ⋯ v_r : Q(vᵢ) = ±1 for all i}.
Comparing with the above concrete realizations of the Clifford algebras, the Pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p, q). The Spin group consists of those elements of Pinp,q which are products of an even number of unit vectors. Thus by the Cartan–Dieudonné theorem Spin is a cover of the group of proper rotations SO(p, q).
Let α : Cl → Cl be the automorphism which is given by the mapping v ↦ −v acting on pure vectors. Then in particular, Spin_{p,q} is the subgroup of Pin_{p,q} whose elements are fixed by α. Let
Cl[0]_{p,q} = {x ∈ Cl_{p,q} : α(x) = x}.
(These are precisely the elements of even degree in Cl_{p,q}.) Then the spin group lies within Cl[0]_{p,q}.
The irreducible representations of Cl_{p,q} restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representations are induced in this manner. Thus the two representations coincide. For the same reasons, the irreducible representations of the spin group coincide with the irreducible representations of Cl[0]_{p,q}.
To classify the pin representations, one need only appeal to the classification of Clifford algebras. To find the spin representations (which are representations of the even subalgebra), one can first make use of either of the isomorphisms (see above)
Cl[0]_{p,q} ≅ Cl_{p,q−1}, for q > 0,
Cl[0]_{p,q} ≅ Cl_{q,p−1}, for p > 0,
and realize a spin representation in signature (p, q) as a pin representation in either signature (p, q − 1) or (q, p − 1).
Differential geometry
One of the principal applications of the exterior algebra is in differential geometry, where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric. Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spinᶜ manifolds.
Physics
Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra with a basis generated by the matrices γ₀, …, γ₃ called Dirac matrices, which have the property that
γᵢ γⱼ + γⱼ γᵢ = 2 ηᵢⱼ,
where η is the matrix of a quadratic form of signature (1, 3) (or (3, 1) corresponding to the two equivalent choices of metric signature). These are exactly the defining relations for the Clifford algebra Cl_{1,3}(R), whose complexification is Cl_{1,3}(R)_C, which, by the classification of Clifford algebras, is isomorphic to the algebra of 4 × 4 complex matrices Cl₄(C) ≈ M₄(C). However, it is best to retain the notation Cl_{1,3}(R)_C, since any transformation that takes the bilinear form to the canonical form is not a Lorentz transformation of the underlying spacetime.
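These relations are easy to verify numerically. The sketch below uses the standard Dirac representation of the gamma matrices (one choice among many; any matrices satisfying the relations generate the same algebra) and checks the anticommutators against η = diag(1, −1, −1, −1):

```python
import numpy as np

# standard Dirac representation, metric signature (+, -, -, -)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# check the defining relation gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_mu_nu
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Dirac matrices satisfy the Cl_{1,3} relations")
```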
The Clifford algebra of spacetime used in physics thus has more structure than Cl₄(C). It has in addition a set of preferred transformations – Lorentz transformations. Whether complexification is necessary to begin with depends in part on conventions used and in part on how much one wants to incorporate straightforwardly, but complexification is most often necessary in quantum mechanics where the spin representation of the Lie algebra so(1, 3) sitting inside the Clifford algebra conventionally requires a complex Clifford algebra. For reference, the spin Lie algebra is given by
σ^{μν} = −(i/4) [γ^μ, γ^ν].
This is in the (3, 1) convention, hence fits in Cl_{3,1}(R)_C.[18]
The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the Clifford algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears.
The use of Clifford algebras to describe quantum theory has been advanced among others by Mario Schönberg,[19] by David Hestenes in terms of geometric calculus, by David Bohm and Basil Hiley and co-workers in form of a hierarchy of Clifford algebras, and by Elio Conte et al.[20][21]
Computer vision
Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al.[22] propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume), and vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier Transform. Based on these vectors action filters are synthesized in the Clifford Fourier domain and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television.
Generalizations
• While this article focuses on a Clifford algebra of a vector space over a field, the definition extends without change to a module over any unital, associative, commutative ring.[3]
• Clifford algebras may be generalized to a form of degree higher than quadratic over a vector space.[23]
Conferences and journals
There is a vibrant and interdisciplinary community around Clifford and Geometric Algebras with a wide range of applications. The main conferences in this subject include the International Conference on Clifford Algebras and their Applications in Mathematical Physics (ICCA) and Applications of Geometric Algebra in Computer Science and Engineering (AGACSE) series. A main publication outlet is the Springer journal Advances in Applied Clifford Algebras.
See also
• Algebra of physical space, APS
• Cayley–Dickson construction
• Classification of Clifford algebras
• Clifford analysis
• Clifford module
• Complex spin structure
• Dirac operator
• Exterior algebra
• Fierz identity
• Gamma matrices
• Generalized Clifford algebra
• Geometric algebra
• Higher-dimensional gamma matrices
• Hypercomplex number
• Octonion
• Paravector
• Quaternion
• Spin group
• Spin structure
• Spinor
• Spinor bundle
1. Clifford, W.K. (1873). "Preliminary sketch of bi-quaternions". Proc. London Math. Soc. 4: 381–395.
2. Clifford, W.K. (1882). Tucker, R. (ed.). Mathematical Papers. London: Macmillan.
3. see for ex. Oziewicz, Z.; Sitarczyk, Sz. (1992). "Parallel treatment of Riemannian and symplectic Clifford algebras". In Micali, A.; Boudet, R.; Helmstetter, J. (eds.). Clifford Algebras and their Applications in Mathematical Physics. Kluwer. p. 83. ISBN 0-7923-1623-1.
4. Mathematicians who work with real Clifford algebras and prefer positive definite quadratic forms (especially those working in index theory) sometimes use a different choice of sign in the fundamental Clifford identity. That is, they take v2 = −Q(v). One must replace Q with −Q in going from one convention to the other.
5. (Vaz & da Rocha 2016) make it clear that the map i (γ in the quote here) is included in the structure of a Clifford algebra by defining it as "The pair (A, γ) is a Clifford algebra for the quadratic space (V, g) when A is generated as an algebra by { γ(v) | vV } and { a1A | aR }, and γ satisfies γ(v)γ(u) + γ(u)γ(v) = 2g(v, u) for all v, uV."
6. P. Lounesto (1996), "Counterexamples in Clifford algebras with CLICAL", Clifford Algebras with Numeric and Symbolic Computations: 3–30, doi:10.1007/978-1-4615-8157-4_1, ISBN 978-1-4615-8159-8 or abridged version
7. Lounesto 2001, §1.8.
8. McCarthy, J.M. (1990). An Introduction to Theoretical Kinematics. MIT Press. pp. 62–65. ISBN 978-0-262-13252-7.
9. Bottema, O.; Roth, B. (2012) [1979]. Theoretical Kinematics. Dover. ISBN 978-0-486-66346-3.
10. Thus the group algebra K[Z/2] is semisimple and the Clifford algebra splits into eigenspaces of the main involution.
11. The Z-grading is obtained from the N-grading by appending copies of the zero subspace indexed with the negative integers.
12. Technically, it does not have the full structure of a Clifford algebra without a designated vector subspace, and so is isomorphic as an algebra, but not as a Clifford algebra.
13. We are still assuming that the characteristic is not 2.
14. The opposite is true when using the alternate (−) sign convention for Clifford algebras: it is the conjugate which is more important. In general, the meanings of conjugation and transpose are interchanged when passing from one sign convention to the other. For example, in the convention used here the inverse of a vector is given by v−1 = vt / Q(v) while in the (−) convention it is given by v−1 = v / Q(v).
15. Vaz & da Rocha 2016, p. 126.
16. Lounesto 2001, §17.2.
17. Perwass, Christian (2009), Geometric Algebra with Applications in Engineering, Springer Science & Business Media, ISBN 978-3-540-89068-3, §3.3.1
18. Weinberg 2002
19. See the references to Schönberg's papers of 1956 and 1957 as described in section "The Grassmann–Schönberg algebra" of: A. O. Bolivar, Classical limit of fermions in phase space, J. Math. Phys. 42, 4020 (2001) doi:10.1063/1.1386411
20. Conte, Elio (14 Nov 2007). "A Quantum-Like Interpretation and Solution of Einstein, Podolsky, and Rosen Paradox in Quantum Mechanics". arXiv:0711.2260 [quant-ph].
21. Elio Conte: On some considerations of mathematical physics: May we identify Clifford algebra as a common algebraic structure for classical diffusion and Schrödinger equations? Adv. Studies Theor. Phys., vol. 6, no. 26 (2012), pp. 1289–1307
22. Rodriguez, Mikel; Shah, M (2008). "Action MACH: A Spatio-Temporal Maximum Average Correlation Height Filter for Action Classification". Computer Vision and Pattern Recognition (CVPR).
23. Darrell E. Haile (Dec 1984). "On the Clifford Algebra of a Binary Cubic Form". American Journal of Mathematics. The Johns Hopkins University Press. 106 (6): 1269–1280. doi:10.2307/2374394. JSTOR 2374394.
• Bourbaki, Nicolas (1988), Algebra, Springer-Verlag, ISBN 978-3-540-19373-9, section IX.9.
• Carnahan, S. Borcherds Seminar Notes, Uncut. Week 5, "Spinors and Clifford Algebras".
• Garling, D. J. H. (2011), Clifford algebras. An introduction, London Mathematical Society Student Texts, 78, Cambridge University Press, ISBN 978-1-107-09638-7, Zbl 1235.15025
• Jagannathan, R. (2010), On generalized Clifford algebras and their physical applications, arXiv:1005.4300, Bibcode:2010arXiv1005.4300J
• Lam, Tsit-Yuen (2005), Introduction to Quadratic Forms over Fields, Graduate Studies in Mathematics, 67, American Mathematical Society, ISBN 0-8218-1095-2, MR 2104929, Zbl 1068.11023
• Lawson, H. Blaine; Michelsohn, Marie-Louise (1989), Spin Geometry, Princeton University Press, ISBN 978-0-691-08542-5. An advanced textbook on Clifford algebras and their applications to differential geometry.
• Lounesto, Pertti (2001), Clifford algebras and spinors, Cambridge University Press, ISBN 978-0-521-00551-7
• Porteous, Ian R. (1995), Clifford algebras and the classical groups, Cambridge University Press, ISBN 978-0-521-55177-9
• Sylvester, J. J. (1882), A word on Nonions, Johns Hopkins University Circulars, I, pp. 241–2, hdl:1774.2/32845; ibid II (1883) 46; ibid III (1884) 7–9. Summarized in The Collected Mathematics Papers of James Joseph Sylvester (Cambridge University Press, 1909) v III. online and further.
• Vaz, J.; da Rocha, R. (2016), An Introduction to Clifford Algebras and Spinors, Oxford University Press, ISBN 978-0-19-878292-6
• Weinberg, S. (2002), The Quantum Theory of Fields, 1, Cambridge University Press, ISBN 0-521-55001-7
Further reading
|
7b8f5c5f7f3ee8c5 | Graduate School
From Physics
Graduate school in physics or related fields generally refers to a post-baccalaureate education sequence where a student earns their doctorate degree, or Ph.D. Unlike a baccalaureate, whose requirements primarily focus on classes, Ph.D. programs focus on an original research contribution in the form of a written thesis and, typically, a defense in front of a committee.
Because of the heavy research focus, you should only ever go to graduate school if you enjoy doing research. It is by no means the necessary "next step" (certainly not in comparison to the transition from high school to college, which is a more common and broadly applicable path). While it is true that, in many cases, Ph.D. holders make slightly more money in industry, it is also true that Ph.D. students forego ~5-6 or more years of working in private industry, and thus any promotions or experience associated therewith. That being said, grad school is the next step if you are planning a future career in academia.
Note that grad school does not have to follow directly after undergrad; many people first go to industry for a number of years before deciding to return to grad school, and many of these people go on to take up permanent positions in academia.
It is important to keep in mind that any Ph.D. program worth your while will pay you to go. While there is technically a tuition, the school will almost always cover this, and allot you a stipend anywhere from $13,000 to $34,000 or even more annually. This is because, as a salaried researcher, you are actually contributing value to the university in the form of original research and teaching responsibilities.
Preparing for the Application
Students who go to graduate school straight out of undergrad typically apply to grad school during their senior (fourth) year, first (fall) semester, though some students choose to take a gap year to either do external research, finish up projects, or travel.
While requirements vary, typically grad school in physics (and often related fields such as astrophysics) will require:
• One or more research/purpose/diversity statements
• A score on the Graduate Record Exam (GRE)
• A score on the GRE Physics Subject Test (PGRE)
• Three letters of recommendation
Writing a Statement of Purpose
When writing a statement of purpose to a graduate school, it helps to think about the following questions:
• Why am I applying to graduate school?
• What kind of research have I done in the past, and how does this research prepare me for graduate study?
• What about the specific program I am applying to is appealing to me?
• Who would I want to work with, and what kind of work would I want to do with them, if I went to this school?
All of these questions are immensely important to include in a statement of purpose. Successful grad school applicants typically communicate their enthusiasm for research, extensive past research experience (and deliverables such as papers, talks, and posters), and a number of faculty with whom they would like to work.
Believe it or not, graduate programs are immensely different, and it is extremely valid (and common) to choose a graduate school based on the faculty who are there. As you reach this point in your academic career, you will realize that, oftentimes, even top schools may lack researchers in your specific field, and you should take this into account. Moreover, it pays to verify directly with potential research advisors whether or not they plan on taking graduate students, and what kind of advising style they have (e.g., hands-off, hands-on). It is also beneficial to keep in mind other nearby institutions with which your prospective school frequently interacts.
Some examples are below:
• California Institute of Technology: Carnegie Observatories
• Columbia University: Flatiron Institute
• Princeton University: Institute for Advanced Study
• University of Chicago: Kavli Institute for Cosmological Physics
• University of California, Berkeley: Lawrence Berkeley National Laboratory
• Stanford University: SLAC National Accelerator Laboratory
• University of California, Santa Barbara: Kavli Institute for Theoretical Physics
• University of Hawaii at Manoa: Keck Observatory
One common mistake that students make when writing statements of purpose is subconsciously implementing self-deprecating language. This may take an explicit form: "Even though I don't know much about...," but can often take more subtle forms:
• "My research advisor assigned me to ..." - You want to sound directed in your research. While it may be true that you did work assigned by your advisor, the graduate admissions committee what to know what about what you, not your advisor, did.
• "I characterized growths and ran code to fit spectra." - This sort of language is a mistake because, rather than focusing on the day-to-day mechanical applications of what you are doing, you should show some perspective in why what you're doing is interesting. You should make big-picture statements about the motivation of your project, and what you specifically tried to learn and what the outcome was.
• "I learned a lot." - About what? Talking about what you have learned can be extremely helpful, but you should be concrete about this.
If you have a paper (not necessarily in first or even a high authorship position), poster, or presentation, you should absolutely mention this in your statement of purpose, even if it appears in your curriculum vitae (~resume). It will help convince people that you have had exposure to the actual process of doing research, and are thus likely to succeed in grad school.
Even students who go to top schools experience imposter syndrome, but bearing this fact in mind should convince you that writing confidently can only help you, and is not "dishonest," even if you feel like you still have a lot to learn.
Also, when talking about your undergraduate research, you should make sure to establish that you did a substantial amount of research that will actually prepare you to do the work you want to with the person you want to work with. For instance, if you have a lot of experience in particle physics but barely any experience in other fields, then you should express an interest in working with the particle physics faculty member there whom you find most appealing, even if your primary interest now is in condensed matter. You'll have plenty of time to explore other fields in grad school. In the meantime, you just need to make sure that when you talk about what you're interested in, you have enough experience in the field to know what you want to do in it and to do it.
Graduate Record Examination (GRE)
The Graduate Record Examination (GRE) is a standardized examination either conducted on paper or administered via a computer at a specialized facility. It has three parts, a "verbal reasoning" language section (scored 130-170), a "quantitative reasoning" mathematics section (scored 130-170), and an "analytical writing" essay section (scored 0-6). The GRE is administered by the ETS corporation and is taken by basically all students in the United States applying to graduate school in any subject. As such, it is extremely general knowledge which is comparable to the standard SAT/ACT in difficulty and content. This being said, you should be able to take the GRE basically as early in your college experience as you would like, though it is typical for students to take it during the same semester that they are applying for grad school.
Graduate Record Examination, Physics Subject Test (PGRE)
The Graduate Record Examination, Physics Subject Test (PGRE) is a standardized examination typically taken on paper on the contents of a typical physics undergraduate curriculum, and is graded from 200-990 in increments of 10 points. The score on the PGRE is slightly more important than that of the standard GRE. It is a 2 hour and 50 minute multiple choice exam with ~100 questions. According to ETS, the breakdown of the content of the exam is as follows:
• CLASSICAL MECHANICS — 20% (such as kinematics, Newton's laws, work and energy, oscillatory motion, rotational motion about a fixed axis, dynamics of systems of particles, central forces and celestial mechanics, three-dimensional particle dynamics, Lagrangian and Hamiltonian formalism, noninertial reference frames, elementary topics in fluid dynamics)
• ELECTROMAGNETISM — 18% (such as electrostatics, currents and DC circuits, magnetic fields in free space, Lorentz force, induction, Maxwell's equations and their applications, electromagnetic waves, AC circuits, magnetic and electric fields in matter)
• OPTICS AND WAVE PHENOMENA — 9% (such as wave properties, superposition, interference, diffraction, geometrical optics, polarization, Doppler effect)
• THERMODYNAMICS AND STATISTICAL MECHANICS — 10% (such as the laws of thermodynamics, thermodynamic processes, equations of state, ideal gases, kinetic theory, ensembles, statistical concepts and calculation of thermodynamic quantities, thermal expansion and heat transfer)
• QUANTUM MECHANICS — 12% (such as fundamental concepts, solutions of the Schrödinger equation (including square wells, harmonic oscillators, and hydrogenic atoms), spin, angular momentum, wave function symmetry, elementary perturbation theory)
• ATOMIC PHYSICS — 10% (such as properties of electrons, Bohr model, energy quantization, atomic structure, atomic spectra, selection rules, black-body radiation, x-rays, atoms in electric and magnetic fields)
• SPECIAL RELATIVITY — 6% (such as introductory concepts, time dilation, length contraction, simultaneity, energy and momentum, four-vectors and Lorentz transformation, velocity addition)
• LABORATORY METHODS — 6% (such as data and error analysis, electronics, instrumentation, radiation detection, counting statistics, interaction of charged particles with matter, lasers and optical interferometers, dimensional analysis, fundamental applications of probability and statistics)
• SPECIALIZED TOPICS — 9% Nuclear and Particle physics (e.g., nuclear properties, radioactive decay, fission and fusion, reactions, fundamental properties of elementary particles), Condensed Matter (e.g., crystal structure, x-ray diffraction, thermal properties, electron theory of metals, semiconductors, superconductors), Miscellaneous (e.g., astrophysics, mathematical methods, computer applications)
Note that some programs have been gradually phasing out GRE/PGRE requirements, but this is still a minority of programs and it is, in the vast majority of cases, better to take this exam if you have the ability. While there are ~5 publicly released PGRE exams with scoring guidelines, the exact scoring formula varies from session to session and is proprietary to ETS (and thus private). While there are other practice examinations on the internet, they do not have officially sanctioned scoring assignments. Hence, when studying for the PGRE, it is better to take the official exams only when you feel ready to get an accurate score estimate, because of the limited number available.
The current iteration of the PGRE does not penalize for wrong answers (which is not true of all of the practice exams), so it is to your advantage to fill out an answer on every question, even if it is a guess. It is not at all necessary to get all the questions right in order to get a good (or even perfect) score. It may feel at times like you are making barely educated guesses on most questions, but you should persevere, as this is a common feeling.
Registration and Study Tips
For both the physics GRE and the general GRE, you should register well in advance. For the physics GRE, for instance, the deadline is typically 5 weeks before the test date, or 4 weeks with a $25 late fee (though you should check the website of ETS, the company that makes the test). So registering a month and a half or even 2 or more months ahead of time is recommended.
The general GRE is offered year-round, so you can take it at any time. Just be sure to take it by October of the year before the year that you plan to enroll in grad school (assuming you enroll in the fall), to allow time for your scores to arrive at the universities you're applying to. Also keep in mind you can only take it once in any 21-day period and 5 times within a year.
The physics GRE is offered 3 times in a year: once in April, once in September, and once in October. Scores are reported about a month after you take the test. Thus, it's a good idea to take the PGRE in April. That way, your scores will arrive soon enough for you to take a second test if you did poorly, or skip the second test if you did well.
Some advice on preparation: for the general GRE it's pretty much the same as preparing for the SAT/ACT. One very helpful resource is the practice exams, 2 on-paper and 2 online, offered on the ETS website which you can order for free while registering for the exam. For the physics GRE, there's one practice exam on the ETS website, but there are other resources out there. One is this one from UW, and another example is the textbook Conquering the Physics GRE. I'm sure you can find many more examples by Googling, but this is a good starting point.
Letters of Recommendation
Most grad programs require three letters of recommendation. A letter of recommendation is a private statement made on behalf of somebody who knows you to an institution to which you are applying, stating your preparation for the program and their positive research/personal experiences with you. In order to optimize your chances of getting into a good graduate school, it is in your best interest to make as many of your three/four recommenders people with Ph.D's (professors, postdocs) who know you personally and have worked with you in a research context. In the event that this is not possible, you should ask for letters of recommendation from professors of classes in which you have done well, and who know you relatively well (e.g., from office hours, etc.). Do not get a letter of recommendation from somebody who does not like you.
In an optimal case, you should ask your recommenders ~1 month or more in advance for letters of recommendation, and it often pays to also provide excerpts of your statement of purpose. It is a good idea to share with them some kind of online spreadsheet of all places which require a letter of recommendation, complete with links/email addresses for submitting each letter, as well as deadlines, and clear indications of whether or not each task has been completed.
FERPA is a regulation in the United States giving you the right to see your letter of recommendation, and schools will typically ask you whether or not you would like to waive this right. You should always opt to waive your rights to see your letter of recommendation, as schools typically know whether or not you have done so. Schools may treat a letter for which you have not waived your FERPA rights with more skepticism, as they know that your letter writer may not have felt comfortable speaking completely honestly about their experiences with you.
Applying for External Fellowships
External fellowships are sources of funding outside of the graduate school which you go to. It doesn't cost anything to apply to a fellowship; even though it may feel like a lot of work, it is generally worth it for a number of reasons:
• Your graduate stipend may be increased (this is not always the case)
• Because the tuition burden on the department you are going to is lessened, the department is more eager to attract you (you can even reverse a rejection decision from a school by telling them you have obtained a fellowship)
• Since you come into graduate school with your own funding, you are not prevented from working with certain professors who may not have funding, and will not be bound to work for projects you find uninteresting because some grant is contingent upon it
• You will not be made to TA in order to earn your stipend
Fellowships all have their individualized purposes, and you should write your statements for these fellowships bearing in mind what the fellowship is actually for. Below is a non-exhaustive list of fellowships that you should consider applying to:
• National Science Foundation Graduate Research Fellowship (NSF-GRFP): Funded by the National Science Foundation, for students with demonstrable "intellectual merit" (research accomplishments) and "broader impact" (outreach). Applicants write a personal statement and a research proposal to which they are not bound (as the NSF-GRFP funds people mostly regardless of change in career plans).
• National Defense Science and Engineering Graduate Fellowship (NDSEG): Funded by the Department of Defense, and for projects which have some demonstrable significance, direct or indirect, to national defense
• Ford Foundation Fellowship: Aims to train future researchers devoted to education and diversity
• Computational Science Graduate Fellowship (CSGF): Funded by the Department of Energy, for researchers who plan to do computationally heavy work. Acceptance of this fellowship requires the school to agree not to require more than a certain number of TA semesters. It is also contingent upon the completion of a number of graduate courses in a number of fields for broad training.
• Hertz Foundation Fellowship: An extremely competitive private fellowship emphasizing creativity and patriotism; the lengthy process involves an initial application followed by two in-person interviews, selecting ~12 students per year across all fields.
If you have the fortune of obtaining multiple fellowships, you should take care to figure out how the fellowships interact, and how they interact with the funding at your home institution. Whether or not you can accept certain fellowships may also depend on the institution to which you are applying, and on whether or not it is abroad. US government fellowships are typically not available to international students.
One thing to note is: these organizations, especially government ones, are pretty strict about lateness. This is not like in classes or even applications to present at conferences and occasionally even summer research applications, where sometimes if you're a little late you may be able to figure something out, depending on the people running it. Here, if you're just one minute late, it will be sent back without review, no questions asked. So be sure to finish those applications extra early.
Visiting and Choosing Graduate Schools
If you are accepted to a graduate school, typically the graduate school will fund a visit on your behalf to their school in an open house. In the event that you cannot make the time that they scheduled, they are generally willing to accommodate a separate time, and will often construct personalized itineraries just for you.
During this visit, a couple of things will generally happen:
• You will be scheduled with a large number of short meetings with many faculty, including the faculty that you specified you wanted to work with on your application
• You will spend a day or a couple of days within the department, meeting current graduate students and other relevant people
• You will be given a lot of very nice food and put in a very nice hotel, and people will act unnaturally nice to you for about a month
Some things you should keep in mind. You should feel free to talk about these points openly, since people are generally understanding that you are trying to make the best decision for yourself, and are trying to get as much information as possible:
• Do not go to a place where you have nobody you are excited to work for, and be hesitant to go to a place where there is only one such person (as the advising fit may go bad for any number of reasons).
• What is the general approach/philosophy to research within the department there? Schools have fields that they are strong in and fields that they just don't have people in, but it goes further than that. How do they incorporate each field within the department? Who do they work with? Primarily others in the same field? People from all fields? Industry? And, most importantly, can you see yourself enjoying work that goes according to that approach?
• Talk to graduate students, both current ones and those who have dropped out, if you can locate them. They tend to be extremely honest about their experiences, and will generally not hesitate to tell you red flags about a school or potential advisor. If a school or advisor seems to be discouraging you from talking to someone, that is a huge red flag.
• Do you actually see yourself living in the location of the school? This sounds like it wouldn't be that important, but it totally is, because this is where you're spending the next half decade of some of the best years of your life.
• Is your advisor of interest actually taking students? Can they commit to taking you?
• What is the funding situation like? Do students tend to have enough travel funds to go to conferences, etc.?
Graduate visits are extremely exhausting since your schedule is often booked completely, so if you are accepted to many places, be honest about which schools you are actually considering. Graduate schools across the country have unanimously decided that April 15th is the deadline to accept or reject a graduate school decision, and you should not hold out later than this for any US grad school you are hoping will admit you off of a waitlist.
Graduate School Abroad
While graduate school abroad can often be very rewarding, there are a lot of significant differences between US grad schools and international grad schools. Some of the big points are below:
• Many fellowships (in particular, government fellowships) do not apply if you go to a school abroad
• In US grad schools, students apply directly to a Ph.D. program where they earn an ancillary master's degree along the way. However, in programs abroad, these are typically two separate programs, with Ph.D. candidates completing a masters program before their doctorate
• Ph.D. programs abroad (especially in Europe) tend to be shorter (~3 years) and are sometimes regarded as being less comprehensive |
008b3db9d715eed3 | Take the 2-minute tour ×
In mathematical derivations of physical identities, it is often more or less implicitly assumed that functions are well behaved. One example is the Maxwell identities in thermodynamics, which assume that the order of partial derivatives of the thermodynamic potentials is irrelevant, so one can write, e.g., $$\frac{\partial^2 F}{\partial T \, \partial V} = \frac{\partial^2 F}{\partial V \, \partial T}.$$
Also, it is often assumed that all interesting functions can be expanded in a Taylor series, which is important when one wants to define the function of an operator, for example $$e^{\hat A} = \sum_{n=0}^\infty \frac{(\hat A)^n}{n!}.$$
Are there some prominent examples where such assumptions of mathematically good behavior lead to wrong and surprising results? Such as... an operator $f(\hat A)$ where $f$ cannot be expanded in a power series?
I'm wikifying this since it's a list question without a single correct answer. – David Z Aug 6 '11 at 16:08
5 Answers
I think the most transparent example is a phase transition: by definition it is when some thermodynamic quantity does not behave well.
AFAIK when Fourier showed that a discontinuous function may be represented as an infinite sum of continuous ones, he had a hard time convincing people around him that he was not crazy. That story might partially answer your question: as long as any not-so-well-behaved function may be presented as a sum of smooth ones, there is not much difference, as long as the well-formulated laws are linear. Functions which are really badly behaved usually do not appear in real problems. If they do, there is some significant physics behind it (as with phase transitions, shock waves, etc.) and one cannot miss it.
For an operator it is better (for a physicist) to think of a function of an operator as a function acting on its eigenvalues (if the operator is not diagonalizable, that already counts as bad behaviour in physics). This is equivalent to the power series definition whenever the series applies, but works for any function.
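To illustrate the last point, here is a short NumPy sketch (illustrative only) comparing the power-series definition of exp(A) with the spectral definition for a symmetric matrix, and then applying the spectral definition to f(x) = |x|, which has no Taylor series around 0:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # a symmetric, hence diagonalizable, operator

# power-series definition: e^A = sum_n A^n / n!
series, term = np.eye(2), np.eye(2)
for n in range(1, 30):
    term = term @ A / n
    series = series + term

# spectral definition: apply f to the eigenvalues
w, V = np.linalg.eigh(A)
spectral = V @ np.diag(np.exp(w)) @ V.T
print(np.allclose(series, spectral))       # True: the two definitions agree

# the spectral definition still makes sense for f with no power series at 0
abs_A = V @ np.diag(np.abs(w)) @ V.T
print(abs_A)
```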
I have had a surprising result due to the wave function having different left and right derivatives at a point (see Chapter 2.1 and Appendix 3). Generally this article contains more surprising results due to implicit assumptions being wrong.
Well, I know that when one solves the 1D Schrödinger equation for a potential $-\gamma \delta(r-a)$, then the left- and right-derivatives of the wavefunction $\Psi(r)$ differ by $\gamma \Psi(a)$ at that point. Is that what you're referring to? – Lagerbaer Aug 7 '11 at 15:10
@Lagerbaer Yes, to some extent. My perturbation is like $\delta(z-z_1) \frac{d}{dz}$. – Vladimir Kalitvianski Aug 8 '11 at 13:11
Well, I don't know if you want to count that, but QFT is full of functions that have poles, which I'd call not well-behaved, and they do have lots of physical effects. If you're talking about observables only, you can approximate any discontinuous function to arbitrary precision with a continuous one, and you can push the difference below measurement precision. The reason one sometimes uses 'ill-behaved' functions (delta, Heaviside, etc.) is that they're easier to deal with.
But the poles in QFT are probably something people are fully aware of? I am thinking more about identities where the proof makes an assumption about well-behaved functions that can then get overlooked when one just plugs some function into it – Lagerbaer Aug 8 '11 at 15:34
This principle fails in the most startling way in second order phase transitions. This is a particularly clean example, because Landau predicted the critical exponents of second-order phase transitions using only the principle that the thermodynamic functions are analytic.
His argument is as follows: given a magnet going through the Curie point, where it loses its magnetization smoothly, the equilibrium magnetization should be the solution of some thermodynamic equation in which the derivative of some thermodynamic potential is set to zero.
At temperatures lower than $T_c$, the magnetization is nonzero, and at temperatures higher than $T_c$, the magnetization is 0, and it goes to 0 in a continuous way. How does it go to zero?
Note that the magnetization m and -m are related by rotational symmetry. Shifting T_c to 0 by translating $f(t,m)= F(T_c - t,m)$, you get a new thermodynamic function, which has the property that f has only the trivial solution m=0 for negative t, and has two small nontrivial solutions in m for positive t.
Because m=0 is a solution at t=0, the function $f$ has no constant term in a Taylor expansion. By the symmetry of $m\rightarrow -m$, only even powers of m contribute to its Taylor series.
$f(t,m) = At + Bm^2 + Ct^2 + Dt^3 + E t m^2 ...$
Assuming that $f(t,m)$ is generic, A and B are not exactly zero. So for small enough t, for temperatures close enough to the critical point, you get that
$m \propto \sqrt{t}$
Further, this scaling only fails if one of the coefficients is zero. If A=0,
$m \propto |t|$
But m is then nonzero on both sides of the transition. If B=0, you get
$m \propto t^{1\over 4}$
and m is zero on one side of the transition; or A, B, and C are all zero, in which case you get
$m \propto |t|^{3/4}$
And each of these cases requires fine tuning of parameters. So Landau predicted that the critical behavior of the magnetization will be as the square root of the temperature at the critical point, and that this behavior will be universal, it won't depend on the system, just on the existence of the phase transition. The Ising model should have the same critical exponent as the physical magnet, a square root dependence of the magnetization on the temperature, and the liquid gas transition will also have a bend in the curve of the density vs. temperature at the critical pressure which goes as the square root.
The exponent turned out to be universal, it was equal for the gas and liquid, and for the Ising model. But it wasn't 1/2; it was more like .308 in three dimensions, and .125 in two dimensions. It only turned into Landau's 1/2 in 4 dimensions or higher. This means that Landau's argument fails, and that the thermodynamic function is conspiring to be non-analytic at exactly the place where Landau was expanding. Understanding why it is non-analytic exactly at the phase transition led to the modern renormalization theory.
In mathematics, Rene Thom proposed that a version of Landau's argument is a complete theory of the types of allowed phase transitions in nature. He called the phase transitions "catastrophes", because they showed a sudden change in behavior, and he predicted, based on catastrophe theory, all sorts of scaling laws for natural transitions. This was the most ambitious attempt to exploit the observation that naturally occurring functions are nice. It fails for the same reason as Landau's argument: functions describing changes in the critical behavior of interesting systems at a transition point are rarely analytic at this point.
A nice example arises for the "rigorous coupled wave analysis" (RCWA) method (also called Fourier Modal Method), which is used as a Maxwell solver for diffraction gratings. The normal component of the electric field is discontinuous in the normal direction of a material interface. This leads to convergence problems of the RCWA method for TM polarization, because the discontinuous electric field component is expanded into a Fourier series and multiplied by another discontinuous function representing the grating geometry. Many modifications of the RCWA method to overcome this convergence problem were proposed, but the "correct" modification was only discovered in 1996 (by P. Lalanne and M. Morris?). Even though Lifeng Li didn't discover that "correct" modification, he wrote the famous paper "Use of Fourier series in the analysis of discontinuous periodic structures" (also in 1996), which analyzed mathematically what goes wrong (multiplication of "approximations of" discontinuous functions is dangerous) and why the latest proposed modification to the RCWA method finally solved the convergence problem.
Today, the Fourier Modal Methods are the most efficient and accurate for many types of grating problems.
|
9c2af0a48d109ed5 | Shielding effect
From Wikipedia, the free encyclopedia
The shielding effect describes the decrease in attraction between an electron and the nucleus in any atom with more than one electron shell. It is also referred to as the screening effect or atomic shielding.
In hydrogen-like atoms (those with only one electron), the net force on the electron is just as large as the electric attraction from the nucleus. However, when more electrons are involved, each electron (in the n-th shell) feels not only the electromagnetic attraction from the positive nucleus, but also repulsive forces from the other electrons in shells 1 through n. This causes the net force on electrons in outer shells to be significantly smaller in magnitude; therefore, these electrons are not as strongly bound to the nucleus as electrons closer to the nucleus. This phenomenon is often referred to as the orbital penetration effect. The shielding theory also explains why valence-shell electrons are more easily removed from the atom.
The size of the shielding effect is difficult to calculate precisely due to effects from quantum mechanics. As an approximation, we can estimate the effective nuclear charge on each electron by the following:
Z_\mathrm{eff} = Z - \sigma
where Z is the number of protons in the nucleus and \sigma is the average number of electrons between the nucleus and the electron in question. \sigma can be found by using quantum chemistry and the Schrödinger equation, or by using Slater's empirical formulas.
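As an illustration, here is a small Python sketch of a simplified version of Slater's rules for an s/p electron (ignoring, for example, the special 0.30 coefficient inside the 1s group), applied to the textbook example of the 3s valence electron of sodium (Z = 11):

```python
def slater_sigma(groups, idx):
    """Screening constant for an s/p electron in groups[idx].
    groups: list of (n, number_of_electrons), ordered inner shell -> outer.
    Simplified rules: 0.35 per other electron in the same group,
    0.85 per electron with principal quantum number n-1, 1.00 below that."""
    n = groups[idx][0]
    sigma = 0.35 * (groups[idx][1] - 1)
    for m, count in groups[:idx]:
        sigma += (0.85 if m == n - 1 else 1.00) * count
    return sigma

# sodium: 1s^2 (2s,2p)^8 3s^1, Z = 11
groups = [(1, 2), (2, 8), (3, 1)]
sigma = slater_sigma(groups, 2)
print(sigma, 11 - sigma)   # 8.8, so Z_eff = 2.2 for the 3s valence electron
```

The small effective charge seen by the 3s electron, compared with Z = 11, is exactly the shielding effect described above.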
In Rutherford backscattering spectroscopy the correction due to electron screening modifies the Coulomb repulsion between the incident ion and the target nucleus at large distances.
• Brown, Theodore L.; LeMay, H. Eugene, Jr.; Bursten, Bruce E.; Burdge, Julia R. (2003). Chemistry: The Central Science (8th ed.). US: Pearson Education. ISBN 0-13-061142-5.
• Dan Thomas, Shielding in Atoms, [1]
• Peter Atkins & Loretta Jones, Chemical principles: the quest for insight [Variation in shielding effect]
See also |
39c683df05b8aaa7 | Peregrine soliton
From Wikipedia, the free encyclopedia
3D view of the spatio-temporal evolution of a Peregrine soliton
The Peregrine soliton (or Peregrine breather) is an analytic solution of the nonlinear Schrödinger equation.[1] This solution was proposed in 1983 by Howell Peregrine, a researcher in the mathematics department of the University of Bristol.
Main properties
Contrary to the usual fundamental soliton, which can maintain its profile unchanged during propagation, the Peregrine soliton presents a double spatio-temporal localization. Starting from a weak oscillation on a continuous background, the Peregrine soliton develops, undergoing a progressive increase of its amplitude and a narrowing of its temporal duration. At the point of maximum compression, the amplitude is three times the level of the continuous background (and if one considers the intensity, as is relevant in optics, there is a factor of 9 between the peak intensity and the surrounding background). After this point of maximal compression, the wave's amplitude decreases and its width increases until it finally vanishes.
These features of the Peregrine soliton are fully consistent with the quantitative criteria usually used in order to qualify a wave as a rogue wave. Therefore, the Peregrine soliton is an attractive hypothesis to explain the formation of those waves which have a high amplitude and may appear from nowhere and disappear without a trace.[2]
Mathematical expression
In the spatio-temporal domain
Spatial and temporal profiles of a Peregrine soliton obtained at the point of maximum compression
The Peregrine soliton is a solution of the one-dimensional nonlinear Schrödinger equation that can be written in normalized units as follows:
i \frac{\partial \psi}{\partial \tau} + \frac{1}{2} \frac{\partial^2 \psi}{\partial \xi^2} + |\psi|^2 \psi = 0
where \xi is the spatial coordinate, \tau the temporal coordinate, and \psi(\xi, \tau) the envelope of a surface wave in deep water. The dispersion is anomalous and the nonlinearity is self-focusing (note that similar results could be obtained for a normally dispersive medium combined with a defocusing nonlinearity).
The Peregrine analytical expression is:
\psi (\xi, \tau) = \left[ 1-\frac{4 (1 + 2 i \tau)}{1+4 \xi^2 + 4 \tau^2} \right] e^{i \tau}
so that the temporal and spatial maxima are obtained at \xi = 0 and \tau = 0.
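A quick numerical check of these claims (assuming NumPy; the function name is an illustrative choice):

```python
import numpy as np

def peregrine(xi, tau):
    """Peregrine soliton in normalized units, unit continuous background."""
    return (1 - 4 * (1 + 2j * tau) / (1 + 4 * xi**2 + 4 * tau**2)) * np.exp(1j * tau)

print(abs(peregrine(0.0, 0.0)))        # 3.0: amplitude is 3x the background
print(abs(peregrine(0.0, 0.0))**2)     # 9.0: the factor 9 in intensity
print(abs(peregrine(50.0, 0.0)))       # ~1.0: far away, only the background
```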
In the spectral domain
Evolution of the spectrum of the Peregrine soliton [3]
It is also possible to mathematically express the Peregrine soliton according to the spatial frequency \eta:[3]
\tilde{\psi} (\eta, \tau) = \frac{1}{\sqrt{2 \pi}} \int{\psi (\xi, \tau) e^{i \eta \xi} d\xi} = \sqrt{2 \pi} e^{i \tau} \left[ \frac{1+2 i \tau}{\sqrt{1+4 \tau^2}} \exp \left( -\frac{|\eta|}{2} \sqrt{1+4 \tau^2} \right) - \delta(\eta) \right]
with \delta being the Dirac delta function.
This corresponds to a modulus (with the constant continuous background here omitted): |\tilde{\psi} (\eta, \tau)| = \sqrt{2 \pi} \exp \left( -\frac{|\eta|}{2} \sqrt{1+4 \tau^2} \right).
One can notice that for any given time \tau, the modulus of the spectrum exhibits a typical triangular shape when plotted on a logarithmic scale. The broadest spectrum is obtained for \tau = 0, which corresponds to the maximum compression of the spatio-temporal nonlinear structure.
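This is easy to verify directly from the formula above (NumPy assumed; the fitting range is an arbitrary choice): on a log scale the modulus is linear in |\eta| with slope -\sqrt{1+4\tau^2}/2, so the slope magnitude is smallest, and the spectrum broadest, at \tau = 0.

```python
import numpy as np

def log_spectrum(eta, tau):
    # log of the analytic spectral modulus, background term omitted
    return np.log(np.sqrt(2 * np.pi)) - 0.5 * np.abs(eta) * np.sqrt(1 + 4 * tau**2)

eta = np.linspace(0.1, 5.0, 50)
for tau in (0.0, 0.5, 1.0):
    slope = np.polyfit(eta, log_spectrum(eta, tau), 1)[0]
    print(f"tau={tau}: fitted slope {slope:.3f}, "
          f"expected {-0.5 * np.sqrt(1 + 4 * tau**2):.3f}")
```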
Different interpretations of the Peregrine soliton
Peregrine soliton and other nonlinear solutions
As a rational soliton
The Peregrine soliton is a first-order rational soliton.
As an Akhmediev breather
The Peregrine soliton can also be seen as the limiting case of the space-periodic Akhmediev breather when the period tends to infinity.[4]
As a Kuznetsov-Ma soliton
The Peregrine soliton can also be seen as the limiting case of the time-periodic Kuznetsov-Ma breather when the period tends to infinity.
Experimental demonstration
Peregrine's mathematical predictions had initially been established in the domain of hydrodynamics. The Peregrine soliton was, however, first experimentally generated and characterized in a very different field.
Generation in optics
Record of the temporal profile of a Peregrine soliton in optics [5]
In 2010, more than 25 years after the initial work of Peregrine, researchers took advantage of the analogy that can be drawn between hydrodynamics and optics in order to generate Peregrine solitons in optical fibers.[4][6] In fact, the evolution of light in fiber optics and the evolution of surface waves in deep water are both modelled by the nonlinear Schrödinger equation (note however that spatial and temporal variables have to be switched). Such an analogy has been exploited in the past in order to generate optical solitons in optical fibers.
More precisely, the nonlinear Schrödinger equation can be written in the context of optical fibers in the following dimensional form:
i \frac{\partial \psi}{\partial z} - \frac{\beta_2}{2} \frac{\partial^2 \psi}{\partial t^2 } + \gamma |\psi|^2 \psi = 0
with \beta_2 being the second-order dispersion (supposed to be anomalous, i.e. \beta_2 < 0) and \gamma being the nonlinear Kerr coefficient. z and t are the propagation distance and the temporal coordinate, respectively.
In this context, the Peregrine soliton has the following dimensional expression:[5]
\psi (z,t) = \sqrt{P_0} \left[ 1-\frac{4 \left( 1 + 2 i \dfrac{z}{L_{NL}} \right) }{1+4 \left( \dfrac{t}{T_0} \right) ^2 + 4 \left( \dfrac{z}{L_{NL}} \right) ^2} \right] e^{ \dfrac{i z}{L_{NL}}} .
L_{NL} is a nonlinear length defined as L_{NL} = \dfrac{1}{\gamma P_0} and T_0 is a duration defined as T_0 = \sqrt{|\beta_2| L_{NL}}. P_0 is the power of the continuous background.
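To get a feeling for the scales, here is a small conversion sketch (NumPy assumed; the fiber parameters are typical textbook values for standard fiber near 1550 nm, illustrative assumptions rather than those of the cited experiments):

```python
import numpy as np

beta2 = -21e-27    # s^2/m, anomalous second-order dispersion
gamma = 1.3e-3     # 1/(W m), Kerr nonlinear coefficient
P0    = 0.5        # W, power of the continuous background

L_NL = 1.0 / (gamma * P0)              # nonlinear length
T0   = np.sqrt(abs(beta2) * L_NL)      # characteristic duration

print(f"L_NL = {L_NL:.0f} m, T0 = {T0 * 1e12:.1f} ps")   # ~1538 m, ~5.7 ps
```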
By using exclusively standard optical communication components, it has been shown that even with an approximate initial condition (in the case of this work, an initial sinusoidal beating), a profile very close to the ideal Peregrine soliton can be generated.[5][7] However, the non-ideal input condition leads to substructures that appear after the point of maximum compression. These substructures also have a profile close to that of a Peregrine soliton,[5] which can be explained analytically using a Darboux transformation.[8]
The typical triangular spectral shape has also been experimentally confirmed.[4][5][9]
Generation in hydrodynamics
These results in optics were confirmed in 2011 in hydrodynamics[10][11] with experiments carried out in a 15-m long water wave tank. In 2013, complementary experiments using a scaled chemical tanker discussed the potentially devastating effects of such waves on ships.[12]
Generation in other fields of physics
Experiments carried out in plasma physics have also highlighted the emergence of Peregrine solitons in other fields governed by the nonlinear Schrödinger equation.[13]
Notes and references
1. ^ Peregrine, D. H. (1983). "Water waves, nonlinear Schrödinger equations and their solutions". J. Austral. Math. Soc. B 25: 16–43. doi:10.1017/S0334270000003891.
2. ^ Shrira, V.I.; Geogjaev, V.V. (2009). "What makes the Peregrine soliton so special as a prototype of freak waves?". J. Eng. Math.
3. ^ a b Akhmediev, N., Ankiewicz, A. , Soto-Crespo, J. M. and Dudley J. M. (2011). "Universal triangular spectra in parametrically-driven systems". Phys. Lett. A 375: 775–779. Bibcode:2011PhLA..375..775A. doi:10.1016/j.physleta.2010.11.044.
4. ^ a b c Kibler, B.; Fatome, J.; Finot, C.; Millot, G.; Dias, F.; Genty, G.; Akhmediev, N.; Dudley, J.M. (2010). "The Peregrine soliton in nonlinear fibre optics". Nature Physics 6 (10): 790–795. Bibcode:2010NatPh...6..790K. doi:10.1038/nphys1740.
5. ^ a b c d e Hammani, K.; Kibler, B.; Finot, C.; Morin, P.; Fatome, J.; Dudley, J.M.; Millot, G. (2011). "Peregrine soliton generation and breakup in standard telecommunications fiber". Optics Letters 36 (2): 112–114. Bibcode:2011OptL...36..112H. doi:10.1364/OL.36.000112. PMID 21263470.
6. ^ "Peregrine’s 'Soliton' observed at last". Retrieved 2010-08-24.
7. ^ Erkintalo, M.; Genty, G.; Wetzel, B.; Dudley, J. M. (2011). "Akhmediev breather evolution in optical fiber for realistic initial conditions". Phys. Lett. A 375 (19): 2029–2034. Bibcode:2011PhLA..375.2029E. doi:10.1016/j.physleta.2011.04.002.
8. ^ Erkintalo, M.; Kibler, B.; Hammani, K.; Finot, C.; Akhmediev, N.; Dudley, J.M.; Genty, G. (2011). "Higher-Order Modulation Instability in Nonlinear Fiber Optics". Physical Review Letters 107: 253901. Bibcode:2011PhRvL.107y3901E. doi:10.1103/PhysRevLett.107.253901.
9. ^ Hammani, K.; Wetzel, B.; Kibler, B.; Fatome, J.; Finot, C.; Millot, G.; Akhmediev, N.; Dudley, J. M. (2011). "Spectral dynamics of modulation instability described using Akhmediev breather theory". Opt. Lett. 36: 2140–2142. Bibcode:2011OptL...36.2140H. doi:10.1364/OL.36.002140.
10. ^ Chabchoub, A.; Hoffmann, N.P.; Akhmediev, N. (2011). "Rogue wave observation in a water wave tank". Phys. Rev. Lett. 106 (20). Bibcode:2011PhRvL.106t4502C. doi:10.1103/PhysRevLett.106.204502.
11. ^ "Rogue waves captured". Retrieved 2011-06-03.
12. ^ Onorato, M.; Proment, D.; Clauss, G.; Clauss, M. (2013). "Rogue Waves: From Nonlinear Schrödinger Breather Solutions to Sea-Keeping Test". Plos One 8. Bibcode:2013PLoSO...854629O. doi:10.1371/journal.pone.0054629.
13. ^ Bailung, H.; Sharma, S. K.; Nakamura, Y. (2011). "Observation of Peregrine solitons in a multicomponent plasma with negative ions". Phys. Rev. Lett. |
937d25daa79bdf9e | Psychology Wiki
Measurement problem
The measurement problem is the key set of questions that every interpretation of quantum mechanics must answer. The problem is that the wavefunction in quantum mechanics evolves according to the Schrödinger equation into a linear superposition of different states, but the actual measurements always find the physical system in a definite state, typically a position eigenstate. Any future evolution will be based on the system having the measured value at that point in time, meaning that the measurement "did something" to the process under examination. Whatever that "something" may be does not appear to be explained by the basic theory.
The best known example is the "paradox" of Schrödinger's cat: a cat apparently evolves into a linear superposition of states that can be characterized as an "alive cat" and states that can be described as a "dead cat". Each of these possibilities is associated with a specific nonzero probability amplitude; the cat seems to be in a "mixed" state. However, a single particular observation of the cat does not measure the probabilities: it always finds either an alive cat or a dead cat. After that measurement the cat stays alive or dead. The measurement problem is the question: how are the probabilities converted into an actual, sharply well-defined outcome?
Different interpretations of quantum mechanics propose different solutions of the measurement problem.
• The old Copenhagen interpretation was rooted in philosophical positivism. It claimed that the probabilities are the only quantities that should be discussed, and all other questions were considered unscientific. One could either imagine that the wavefunction collapses, or one could think of the wavefunction as an auxiliary mathematical tool with no direct physical interpretation whose only role is to calculate the probabilities.
While this viewpoint was sufficient to understand the outcome of all known experiments, it did not explain why it was legitimate to imagine that the cat's wavefunction collapses once the cat is observed, but it is not possible to collapse the wavefunction of the cat or the electron before it is measured. The collapse of the wavefunction used to be linked to one of two different properties of the measurement:
• The measurement is done by a conscious being. In this specific interpretation, it was the presence of a conscious being that caused the wavefunction to collapse. However, this interpretation depends on a definition of "consciousness". Because of its spiritual flavor, this interpretation was never fully accepted as a scientific explanation.
• The measurement apparatus is a macroscopic object. Perhaps it is the macroscopic character of the apparatus that allows us to replace the logic of quantum mechanics with the classical intuition in which positions are well-defined quantities.
The latter approach was put on firm ground in the 1980s when the phenomenon of quantum decoherence was understood. Calculations of quantum decoherence allow physicists to identify the fuzzy boundary between the quantum microworld and the world where classical intuition is applicable. Quantum decoherence was proposed in the context of the many-worlds interpretation, but it has also become an important part of the modern update of the Copenhagen interpretation based on consistent histories ("Copenhagen done right"). Quantum decoherence does not describe the actual process of wavefunction collapse, but it explains the conversion of the quantum probabilities (which are able to interfere) into ordinary classical probabilities.
Hugh Everett's relative state interpretation, also referred to as the many-worlds interpretation, attempts to avoid the problem by suggesting that it is an illusion. Under this system there is only one wavefunction, the superposition of the entire universe, and it never collapses, so there is no measurement problem. Instead, the act of measurement is actually an interaction between two quantum entities, which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements; this work was later extended by Bryce DeWitt and others and renamed the many-worlds interpretation. Everett/DeWitt's interpretation posits a single universal wavefunction, but with the added proviso that "reality" from the point of view of any single observer, "you", is defined as a single path in time through the superpositions. That is, "you" have a history made of the outcomes of measurements you made in the past, but there are many other "yous" with slight variations in history. Under this system our reality is one of many similar ones.
The Bohm interpretation tries to solve the measurement problem very differently: this interpretation contains not only the wavefunction, but also the information about the position of the particle(s). The role of the wavefunction is to create a "quantum potential" that influences the motion of the "real" particle in such a way that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to the Bohm interpretation combined with the von Neumann theory of measurement in quantum mechanics, once the particle is observed, other wave-function channels remain empty and thus ineffective, but there is no true wavefunction collapse. Decoherence provides that this ineffectiveness is stable and irreversible, which explains the apparent wavefunction collapse.
|
96a8c358c56fc9b7 | Stationary state
Main articles: quantum state and wavefunction
In quantum mechanics, a stationary state is an eigenvector of the Hamiltonian, implying the probability density associated with the wavefunction is independent of time.[1] This corresponds to a quantum state with a single definite energy (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below.
A harmonic oscillator in classical mechanics (A-B) and quantum mechanics (C-H). In (A-B), a ball, attached to a spring, oscillates back and forth. (C-H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. (C,D,E,F), but not (G,H), are stationary states, or standing waves. The standing-wave oscillation frequency, times Planck's constant, is the energy of the state.
A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc.[2] (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: It continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, times Planck's constant, is the energy of the state according to the de Broglie relation.
Stationary states are quantum states that are solutions to the time-independent Schrödinger Equation:
\hat H |\Psi\rangle=E_{\Psi} |\Psi\rangle,
where:
• |\Psi\rangle is a quantum state, which is a stationary state if it satisfies this equation;
• \hat H is the Hamiltonian operator;
• E_{\Psi} is a real number, and corresponds to the energy eigenvalue of the state |\Psi\rangle.
This is an eigenvalue equation: \hat H is a linear operator on a vector space, |\Psi\rangle is an eigenvector of \hat H, and E_{\Psi} is its eigenvalue.
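As a concrete illustration, the eigenvalue equation can be solved numerically by discretizing the Hamiltonian on a grid. The sketch below (assuming NumPy, with units \hbar = m = 1; the harmonic potential V(x) = x^2/2 and the grid parameters are illustrative choices matching the figure above) turns it into an ordinary matrix eigenproblem:

```python
import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# second derivative by central finite differences
D2 = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
      + np.diag(np.ones(n - 1), 1)) / dx**2

H = -0.5 * D2 + np.diag(0.5 * x**2)   # H = -1/2 d^2/dx^2 + x^2/2
E, psi = np.linalg.eigh(H)            # energies and stationary states

print(E[:4])   # approx [0.5, 1.5, 2.5, 3.5], i.e. E_n = n + 1/2
```

The columns of psi approximate the stationary states; their probability densities are unchanged under the time evolution derived next.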
If a stationary state |\Psi\rangle is plugged into the time-dependent Schrödinger Equation, the result is:[3]
i\hbar\frac{\partial}{\partial t} |\Psi\rangle = E_{\Psi}|\Psi\rangle
Assuming that \hat H is time-independent (unchanging in time), this equation holds for any time t. Therefore this is a differential equation describing how |\Psi\rangle varies in time. Its solution is:
|\Psi(t)\rangle = e^{-iE_{\Psi}t/\hbar}|\Psi(0)\rangle
Therefore a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by \hbar.
Stationary state properties
Three wavefunction solutions to the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wavefunction. Right: The probability of finding the particle at a certain position. The top two rows are two stationary states, and the bottom is the superposition state \psi_N \equiv (\psi_0+\psi_1)/\sqrt{2}, which is not a stationary state. The right column illustrates why stationary states are called "stationary".
As shown above, a stationary state is not mathematically constant; its overall complex phase continually rotates in time.
However, all observable properties of the state are in fact constant. For example, if |\Psi(t)\rangle represents a simple one-dimensional single-particle wavefunction \Psi(x,t), the probability that the particle is at location x is:
|\Psi(x,t)|^2 = \left| e^{-iE_{\Psi}t/\hbar}\Psi(x,0)\right|^2 = \left| e^{-iE_{\Psi}t/\hbar}\right|^2 \left| \Psi(x,0)\right|^2 = \left|\Psi(x,0)\right|^2
which is independent of the time t.
The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time.
As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed.
Spontaneous decay
Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state.[4] This seems to contradict the idea that stationary states should have unchanging properties.
The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian.
Comparison to "orbital" in chemistry[edit]
An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule.[5]
For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if we ignore the electron-electron repulsion terms in the Hamiltonian as a simplifying assumption, the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often, but not always, use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system.
In chemistry, calculations of molecular orbitals typically also assume the Born–Oppenheimer approximation.
References
1. ^ Quantum Mechanics Demystified, D. McMahon, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9
2. ^ Cohen-Tannoudji, Claude, Bernard Diu, and Franck Laloë. Quantum Mechanics: Volume One. Hermann, 1977. p. 32.
3. ^ Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1
5. ^ Physical chemistry, P.W. Atkins, Oxford University Press, 1978, ISBN 0-19-855148-7
|
d2a99e79c2f7d9b2 |
Does anybody know if there exists a mathematical explanation of the Mendeleev table in quantum mechanics? In some textbooks (for example in "F.A. Berezin, M.A. Shubin. The Schrödinger Equation") the authors present quantum mechanics as an axiomatic system, so one could expect that there is a deduction from the axioms to the main results of the discipline. I wonder if there is a mathematical proof of the Mendeleev table?
P.S. I hope the following will not be offensive for physicists: by a mathematical proof I mean a chain of logical implications from axioms of the theory to its theorem. :) This is the standard approach everywhere in mathematics. For instance, in Griffiths' book I do not see axioms at all, therefore I can't treat the reasoning on pages 186–193 as a proof of the Mendeleev table. By the way, that is why I did not want to ask this question at a physics forum: I do not think that people there will even understand my question. However, after Bill Cook's suggestion I made an experiment, and you can look at the results here: http://theoreticalphysics.stackexchange.com/questions/473/is-the-mendeleev-table-explained-in-quantum-mechanics
So I ask my colleagues-mathematicians to be tolerant.
P.P.S. After closing this topic and reopening it again I received a lot of suggestions to reformulate my question, since in its original form it might seem too vague for mathematicians. So I suppose it will be useful to add here that by the Mendeleev table I mean (not just a picture, as one can think, but) a system of propositions about the structure of atoms. For example, as I wrote here in comments, the Mendeleev table claims that the first electronic orbit (shell) can have only 2 electrons, the second 8, the third again 8, the fourth 18, and so on. Another regularity is the structure of subshells, etc. So my question is whether it is proved by now that these regularities (perhaps not all, but some of them) are corollaries of the system of axioms like those from the Berezin-Shubin book. Of course, this assumes that notions like atoms, shells, etc. must be properly defined, otherwise the corresponding statements could not be formulated. I consider this as a part of my question: if experts explain that reasonable definitions have not been found by now, this automatically will mean that the answer is 'no'.
The following reformulation of my question was suggested by Scott Carnahan at http://tea.mathoverflow.net/discussion/1202/should-a-mathematician-be-a-robot/#Item_0 : "Do we have the mathematical means to give a sufficiently precise description of the chemical properties of elements from quantum-mechanical first principles, such that the Mendeleev table becomes a natural organizational scheme?"
I hope, this makes the question more clear.
You might want to try posting this question on theoreticalphysics.stackexchange.com – Bill Cook Nov 5 '11 at 20:13
I would suggest rewriting your question in purely mathematical terms. As it stands, it is best asked in the theoretical physics stackexchange (as Bill Cook mentions). But there is indeed a mathematical question here -- but you did not ask it. – Jacques Carette Nov 5 '11 at 20:31
@Greg: 1) if you ask this "great chemistry-and-physics question" to chemists or physicists, they will treat you as an idiot (and this is what happened to me at theoreticalphysics.stackexchange.com/questions/473/… ). Because they do not understand what logic is. If this were not so, there would not be contradictions between what people write here and what they write there: "Yes, quantum mechanics... – fully, quantitatively, and comprehensively explains all of chemistry..." So I still think that I should address this to mathematicians. – Sergei Akbarov Nov 6 '11 at 0:14
@Mark: 1) if physicists did not convince me, like Luboš Motl at their physical forum, that everything is "fully, quantitatively, and comprehensively" explained, I would not ask this question here. But their arguments are always so definite, uncompromising, unequivocal (and I would say in some sense offensive, don't you find them so? :), that you begin to think that maybe you read wrong books, and if you ask mathematicians who are interested in tags like "quantum-mechanics", they will give an explanation, which could be verified (as this usually happens with mathematicians). – Sergei Akbarov Nov 6 '11 at 1:03
@Sergei: 1) There is no way to keep track of replies to your comment. You should contact Jacques personally if you want a reply. 2) “I already explained what I mean by "proof"”: No, you didn't. You can prove a theorem, but the periodic table is not a theorem. You have to state exactly what do you want to prove. “The Mendeleev table claims…”: You have to define in mathematical terms what do you mean by an “orbit”, “electron”, “subshell”, etc. Right now there is no mathematical content in your statements. “If you know what should be proved”: No, you don't, at least in mathematical terms. – Dmitri Pavlov Nov 7 '11 at 9:42
I doubt any answer will be satisfactory. My opinion is that we are still very far from a mathematical justification. If we accept the mathematical foundations of quantum mechanics, and if we make the approximation that the nucleus of the atom is just one heavy thing with $N$ positive charges, then the motion of the $N$ electrons is governed by a linear equation (Schrödinger) in ${\mathbb R}^{3N}$. The unknown is a function $\psi(r^1,\ldots,r^N,t)$ with the property (Pauli exclusion) that it has full skew-symmetry. For instance, $$\psi(r^2,r^1,\ldots,r^N,t)=-\psi(r^1,r^2,\ldots,r^N,t).$$ In practice, we look for steady states $e^{i\omega t}\phi(r^1,r^2,\ldots,r^N)$. Then $\omega$ is the energy level.
Because of the very large space dimension, one cannot perform reliable computer calculations when $N$ is larger than a few units. One attempt to simplify the problem has been to postulate that $\phi$ is a Slater determinant, which means that $$\phi(r^1,r^2,\ldots,r^N)=\|a_i(r^j)\|_{1\le i,j\le N}.$$ The unknown is then an $N$-tuple of functions $a_i$ over ${\mathbb R}^3$. Of course, we do not expect that steady states really are Slater determinants; after all, the Schrödinger equation does not preserve the class of Slater determinants. Thus there is a price to pay, which is to replace the Schrödinger equation by another one, obtained by an averaging process (the Hartree--Fock model). The drawback is that the new equation is non-linear. Such approximate states have been studied by P.-L. Lions & I. Catto in the 90's.
Update. Suppose $N=2$ only. If we think of $\phi$ as a finite-dimensional object instead of an $L^2$-function, then it is nothing but a skew-symmetric matrix $A$. Approximation à la Slater consists in writing $A\sim XY^T-YX^T$, where $X$ and $Y$ are vectors. In other words, one approximates $A$ by a rank-two skew-symmetric matrix. The approximation must be in terms of the Hilbert-Schmidt norm (also named Frobenius, Schur): this norm is natural because of the requirement $\|\phi\|_{L^2}=N$. If $\pm a_1,\ldots,\pm a_m$ are the pairs of eigenvalues of $A$, with $0\le a_1\le\ldots\le a_m$, then the best Slater approximation $B$ satisfies $\|B\|^2=2a_m^2$, $\|A-B\|^2=2(a_1^2+\cdots+a_{m-1}^2)$. Not that good. Imagine how much worse it can be if $N$ is larger than $2$.
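The norm identities in this update are easy to check numerically (NumPy assumed; the matrix size and random seed are arbitrary illustrative choices). For a real skew-symmetric $A$ the singular values come in pairs $a_1, a_1, \ldots, a_m, a_m$, and by Eckart-Young the truncated rank-two SVD is the best rank-two approximation in the Frobenius (Hilbert-Schmidt) norm, here again skew-symmetric, i.e. of the Slater form $XY^T-YX^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M - M.T                        # random real skew-symmetric matrix

U, s, Vt = np.linalg.svd(A)        # s = (a_m, a_m, ..., a_1, a_1), descending
B = s[0] * (np.outer(U[:, 0], Vt[0]) + np.outer(U[:, 1], Vt[1]))  # rank two

print(np.linalg.norm(B)**2,     2 * s[0]**2)        # ||B||^2 = 2 a_m^2
print(np.linalg.norm(A - B)**2, np.sum(s[2:]**2))   # = 2(a_1^2+...+a_{m-1}^2)
print(np.linalg.norm(B + B.T))                      # ~0: B is skew-symmetric
```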
Even if no answer will be satisfactory, I have found the answers to this question to be very interesting... – Tom Church Nov 9 '11 at 16:52
The OP asks two completely different questions: (1) "[...] if there exists a mathematical explanation of Mendeleev table in quantum mechanics? " (2) "if there is a mathematical proof of the Mendeleev table? [...] by a mathematical proof I mean a chain of logical implications from axioms of the theory [...]" This answer discusses the use of approximations, which is irrelevant to both 1 and 2. The answer to 1 is yes, and the necessity of making approximations doesn't affect the validity of the explanation. The answer to 2 is no, simply because physical theories aren't axiomatic systems. – Ben Crowell Jun 18 '13 at 14:13
Ben, you should explain yourself, this sounds strong: "The answer to 1 is yes". And this also: "physical theories aren't axiomatic systems". What about classical mechanics? Or probability theory? – Sergei Akbarov Jun 20 '13 at 11:38
There is some rigorous work by Goddard and Friesecke on this (see the slides referenced in the comments below).
My understanding is that even getting accurate numerics for the Schrodinger equation becomes very difficult once one has more than 10 or so electrons in play. The one regime where we do seem to have good asymptotics is when the atomic number is large but the number of electrons are small (i.e. extremely highly ionized heavy atoms).
At any rate, the foundations of the periodic table are pretty much uncontested (i.e. N-body fermionic Schrodinger equation with semi-classical Coulomb interactions as the only significant force). The main difficulty is being able to solve the resulting equations mathematically (or even numerically).
Terry, thank you very much for your answer, but your link is just a presentation, it does not contain even references... I would like to thank also the other people who wrote the answers. So, dear colleagues, as far as I understand, we came to a conclusion that from the point of view of logic the Mendeleev table is not explained in quantum mechanics? :) – Sergei Akbarov Nov 5 '11 at 21:59
References to the work given in the above slides can be found at www-m7.ma.tum.de/bin/view/Analysis/ElectronicStructure – Terry Tao Nov 6 '11 at 4:26
It's strange to call "rigorous" a treatment where the charge of the nucleus is much larger than the number of electrons. In the cases considered both are around 10 - some sort of large number! Also, in chemical reactions the atoms are weakly ionized, almost neutral, so in fact $N\approx Z$. The words "use the model of non-interacting fermions" clearly hide much of the work and make a huge leap of faith from the initial equation to the answer. – Anton Fetisov Nov 13 '11 at 11:29
Goddard-Friesecke's work is rigorous in the regime where N is at most 10 and Z is sufficiently large; however, the predictions of that paper agree quite well with the experimental data when N and Z are both near 10, which suggests that the limitation to sufficiently large Z is a technical one rather than a fundamental one. Also, their analysis does consider Coulomb interactions between the fermions (otherwise the problem would be very easy). As I understand it, the non-interacting model is only used as a base model from which one applies rigorous perturbation theory (see p.36 of slides). – Terry Tao Nov 13 '11 at 16:59
I am not offended by the suggestion that physicists should follow the standards of mathematical proof, but I think this suggestion and the phrasing of the question demonstrate a lack of understanding of how physicists think about such things and more importantly why they put such little emphasis on axioms.
In my view it is rarely useful to think of physics as an axiomatic system, and I think this question reflects the difficulty with thinking of it as such. A different question, which is much more in tune with a physicist's point of view, would be to ask what physical description is required to explain various features of the structure of atoms as reflected in the periodic table at a prescribed level of accuracy. Until you specify what features you want to understand, and at what level of accuracy, you don't even know what the correct starting point should be. If you want just the crudest structure of the periodic table, then indeed non-relativistic quantum mechanics along with the Pauli exclusion principle will give you the rough structure as described in any standard QM textbook. If you want to understand the detailed quantum numbers of large atoms then you have to start including relativistic effects. Spin-orbit coupling is one of the most important and its effects are often summarized by a set of Hund's rules which are described in many QM textbooks or physical chemistry textbooks. If you want very accurate numerical values for ionization energies or the detailed structure of wave functions then one must do hard numerical work which probably becomes impossibly difficult for large atoms. As you ask for greater and greater precision you should eventually use a fully relativistic description. This is even harder. The Dirac equation is not sufficient, one cannot restrict to a Hilbert space with a finite number of particles in a relativistic quantum theory, and bound state problems in Quantum Field Theory are notoriously difficult. So as one asks more detailed and more precise questions, one has to keep changing the mathematical framework used to formulate the theory. Of course this process could end and there could be an axiomatic formulation of some ultimate theory of physics, but even if this were the case this would undoubtedly not be the most useful formulation for most problems of practical interest.
I think the question does not show misunderstanding of the epistemological principles under wich physicists work and develop their theories. It simply asks if it's possible to mathematically derive (in the usual mathematically rigorous sense) the Mendeleev table from Quantum Mechanics, where the latter is understood simply as a mathematical theory, not as the collection of natural phenomena it is supposed to describe. – Qfwfq Nov 10 '11 at 15:18
The Mendeleev table is not a mathematical theorem. It is a method of organizing atoms into groups with similar chemical properties. Quantum Mechanics is a framework which includes the nonrelativistic Schrödinger equation as well as quantum field theory. If the OP wants to know if X can be proved in a mathematically rigorous way from Y then isn't it reasonable to ask for a mathematically precise definition of both X and Y? – Jeff Harvey Nov 11 '11 at 12:56
@Jeff Harvey: I think it’s quite reasonable to ask without making the statements precise. Finding the right formulation of an informal idea is often as hard as proving it, or even harder. Of course, many statements are too vague to make an interesting question; but what makes this one good, I think, is that while we don’t necessarily have a precise formal statement of it in mind, “we know it when we see it”… as in Terry Tao’s answer, we do start to see it. – Peter LeFanu Lumsdaine Nov 11 '11 at 23:41
@Jeff Harvey: Asking questions about things which are not precisely defined is a tradition in mathematics. For example, mathematicians discussed the problems of probability theory long before Kolmogorov (in 1933) gave his axioms of probability (only after that did probability get a precise definition). My question is just another example: in books on mathematical physics (e.g., in the Berezin-Shubin book) they speak very often about atoms, which, as far as I understand, are not defined by now. If they can discuss atoms, why can't they discuss the Mendeleev table, which describes properties of atoms? – Sergei Akbarov Nov 12 '11 at 21:13
It depends on what you mean by proof. Even the helium atom wavefunction cannot be obtained in closed form (the way the hydrogen atom wavefunction is), so any results about the periodic table will have some level of approximation or phenomenological assumptions in them. That said, there do exist references that explain the qualitative (and quantitative) features of the periodic table based on quantum mechanics principles. Griffiths' Quantum Mechanics, for instance, has a very quick discussion of the periodic table around pages 186-193. It's not very complete, and also mostly not quantitative, but it nicely illustrates how quantum mechanics gives rise to the structure of the periodic table.
I'm arriving after the war, but this is an interesting question, so I'm going to write up what I understand about it.
First of all, for a comprehensive mathematical understanding of the periodic table, you have to settle on a model. The relevant one here is quantum mechanics (for large atoms, relativistic effects start to become important, and that's a whole mess). It's entirely axiomatic, and requires no further tweaking. Then you basically have to solve an eigenvalue problem on a space of functions of $3N$ coordinates (ignoring spin). That gives you a "mathematical explanation" of the table, in the sense that knowledge of the solution $\psi(x_1,x_2,\dots,x_N)$ is all there is to know about the static structure of an atom. Notice that in this formulation all electrons are tied together inside one big wavefunction, so an "electronic state" has no meaning; the Mendeleev table is not even expressible in this formulation.
Of course, solving the full eigenproblem is not possible, so all you can do is mess around with approximations. A simplistic but illuminating approximation is to completely neglect electron repulsion. Great simplification occurs, and it turns out one can speak of "electronic states". Non-trivial behaviour occurs because of the Pauli exclusion principle. This is known as the "Aufbau" principle: one builds atoms by successively adding electrons. The first electron gets itself into the lowest energy shell, then the second one gets into the same state, but with opposite spin. The third begins to fill the second shell (which has four orbitals, times two because of spin; see the enumeration below), and so on. This is the basic idea behind the table, and provides a clue as to why it is organised the way it is. So this might be the theory you're looking for. It's explicitly solvable, and only requires the theory of hydrogenoid atoms.
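The degeneracy count behind this filling picture is a one-liner (plain Python): a hydrogenoid level n carries n^2 orbitals (l = 0..n-1, each with 2l+1 values of m), hence 2n^2 spin-states, giving shell capacities 2, 8, 18, 32, ...

```python
for n in range(1, 5):
    orbitals = sum(2 * l + 1 for l in range(n))   # l = 0 .. n-1
    print(n, orbitals, 2 * orbitals)              # 2, 8, 18, 32 with spin
```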
Of course, because of the approximations, the quantitative results are all wrong, but the organisation is still there. Except for larger elements, where the Mendeleev table is, from what I understand, an ad-hoc hack. You can improve the approximation using ideas like "screening", and this leads to the Hartree-Fock method, which still preserves the notion of shells.
Hope that helps. Then again, if you're looking for a completely logical approach to physics that'll readily explain real life, you're bound to be disappointed. Even simple theories such as the quantum mechanics of atoms are too hard to be solved exactly, which is why we have to compromise and make approximations.
Antoine, I am not against approximations. And I know that it's not necessary to solve equation for gathering information about its solutions (people in qualitative theory of differential equations demonstrate this all the way). But in any case there must be a system of axioms, definitions and propositions, otherwise it would be impossible to understand this theory. For example, Berezin and Shubin in their book speak very often about atoms, but they do not define atoms and do not make propositions about them (if not counting atoms in math. sense). Do you know a text, where this is not so? – Sergei Akbarov Nov 18 '11 at 7:38
Well, as I said, you can postulate the following: An atom (or molecule, for that matter) is a core of Z protons and M neutrons described by their position, plus N electrons, each described by a wave function $\phi_i$. The electrons arrange themselves as the N lowest eigenfunctions of the eigenproblem on $H^1(R^3,C^2)$: $-\Delta \phi_i + V(x) \phi_i = \lambda_i \phi_i$, where $V(x) = Z/|x|$ for an atom at $(0,0,0)$. Then, proposition: the eigenfunctions are organised in shells of same $\lambda_i$. The first eigenvalue has multiplicity 2, the second, multiplicity 8, etc. – Antoine Levitt Nov 18 '11 at 9:19
This is a toy model with little relevance for a quantitative (or even qualitative, for larger atoms) understanding of the true nature of elements. I don't think you have the correct approach, though. Physics is not a subset of mathematics, and cannot be understood as such. – Antoine Levitt Nov 18 '11 at 9:22
I doubt that mathematical physicists will agree with this: "Physics is not a subset of mathematics, and cannot be understood as such". But anyway if you say that everything is so simple, then there must be a reference. What you say (your definition, proposition) - where is this written? – Sergei Akbarov Nov 18 '11 at 12:40
I'm actually pretty sure most mathematical physicists will agree with the sentence. Physics aims to understand the physical world, which no model alone can do without (physical, ie approximate and not rigorous) analysis. For instance, the full Schrodinger equation for an atom is mathematically perfect, and physically useless, because of its sheer complexity. So is a description of a gas by its individual molecules, which is why physicists have invented thermodynamics. But that's philosophy, and I'm not about to get dragged into that debate. – Antoine Levitt Nov 18 '11 at 19:09
It clearly depends upon what you mean by the Periodic Table of elements. As usually stated, it is a vague and, strictly speaking, false, yet usually sufficient statement about the similarities of the chemical properties of different atoms. In any case they don't repeat exactly, only with a given degree of accuracy, and if you forget about some of the much more exotic behaviour, not common in reactions. If you really try to specify all of these, you'll be much better off with the common perturbation theory approach found in QM textbooks. Sure, in a sense it also defines what is being calculated, but there's also no other way to define these properties (at least I don't know any). Analysing the second-or-somewhat order of perturbation theory is a mathematically trivial, yet tedious task, but there can barely be a way to justify the order of PT rigorously, it just works. Or doesn't.
Anton, I did not understand this: "As usually stated, it is a vague and strictly speaking false". Vague and even false? – Sergei Akbarov Nov 13 '11 at 17:44
Trivially, it's not a mathematically precise statement, so by the standards of total rigour it's vague. It's more precise (very precise) in predicting the ground state electron configurations, but less precise in predictions of chemical properties. Exceptions are rare, especially for low atomic weights, but they exist and are well-known to chemists. I'm not a chemist myself, so I'll just give a few simple links: chemwiki.ucdavis.edu/Inorganic_Chemistry/… en.wikipedia.org/wiki/… – Anton Fetisov Nov 15 '11 at 23:30
As it turned out (and this was not obvious for me at the beginning), what the Mendeleev table states cannot be mathematically statements at all, since the notions like atoms, electrons, shells, etc., are not precisely defined. I think those participants of this discussion, who protest against the "vague formulation of the question", should write a collective letter to a journal like "Notices of the AMS" with protest against using the words like "atom", "electron", etc., in mathematical physics, since these notions have no precise definitions. – Sergei Akbarov Nov 16 '11 at 12:24
No, I don't get your sarcasm. As said before, if you're fine with the common amount of rigour in Mendeleev's table, then you should be just as fine with the common explanations in QM textbooks, so what's the point? – Anton Fetisov Nov 16 '11 at 18:06
Anton, in my opinion there cannot be some amount of rigor, enough amount, common amount, etc. Either there is rigor, or there isn't, that's what I used to think about it. My aim was to understand, if there were successful attempts to interpret in the mathematical language what physicists say about what I asked. Physicists themselves can't explain this, that's why I asked this here. From what people told me I deduce that the attempts were not successful. That's enough for me. – Sergei Akbarov Nov 16 '11 at 21:44
|
811909731cc7e5f1 | History of quantum mechanics
10 influential figures in the history of quantum mechanics.
The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began essentially with a number of different scientific discoveries: the 1838 discovery of cathode rays by Michael Faraday; the 1859–60 winter statement of the black-body radiation problem by Gustav Kirchhoff; the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete; the discovery of the photoelectric effect by Heinrich Hertz in 1887; and the 1900 quantum hypothesis by Max Planck that any energy-radiating atomic system can theoretically be divided into a number of discrete "energy elements" ε (epsilon) such that each of these energy elements is proportional to the frequency ν with which each of them individually radiates energy, as defined by the following formula:
\epsilon = h \nu \,
where h is a numerical value called Planck's constant.
Then, in 1905, in order to explain the photoelectric effect previously reported by Heinrich Hertz in 1887, Albert Einstein postulated, consistently with Max Planck's quantum hypothesis, that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface.
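The criterion is just a comparison of the photon energy hν with the work function W. A tiny illustration (plain Python; the sodium work function and the two wavelengths are illustrative values):

```python
h, c = 6.626e-34, 2.998e8       # J s, m/s
eV = 1.602e-19                  # J per electronvolt
W = 2.3 * eV                    # approximate work function of sodium

for lam in (650e-9, 400e-9):    # red vs. violet light
    E_photon = h * c / lam                        # photon energy h*nu
    ejected = E_photon > W                        # Einstein's criterion
    surplus = max(E_photon - W, 0.0) / eV         # max kinetic energy, eV
    print(f"{lam*1e9:.0f} nm: ejected={ejected}, surplus={surplus:.2f} eV")
```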
The phrase "quantum mechanics" was coined (in German, "quantenmechanik") by the group of physicists including Max Born, Werner Heisenberg, and Wolfgang Pauli, at the University of Göttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik".[1] In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.
Ludwig Boltzmann's diagram of the I2 molecule proposed in 1898 showing the atomic "sensitive region" (α, β) of overlap.
Ludwig Eduard Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete. He was a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck.
In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's Law, that included a Boltzmann distribution (applicable in the classical limit). Planck's law[2] can be stated as follows: I(\nu,T) =\frac{ 2 h\nu^{3}}{c^2}\frac{1}{ e^{\frac{h\nu}{kT}}-1}, where:
h is the Planck constant;
c is the speed of light in a vacuum;
k is the Boltzmann constant;
ν is the frequency of the electromagnetic radiation; and
T is the temperature of the body in kelvins.
The earlier Wien approximation may be derived from Planck's law by assuming h\nu \gg kT.
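A short numerical comparison makes the h\nu \gg kT regime visible (Python with NumPy; the temperature and frequencies are arbitrary illustrative values):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI values of h, c, k

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / (np.exp(h * nu / (k * T)) - 1)

def wien(nu, T):
    return (2 * h * nu**3 / c**2) * np.exp(-h * nu / (k * T))

T = 5000.0
for nu in (1e13, 1e14, 1e15):      # h*nu/kT roughly 0.1, 1, 10
    print(f"nu={nu:.0e}: Planck {planck(nu, T):.3e}, Wien {wien(nu, T):.3e}")
```

As expected, the two laws agree closely at the highest frequency and disagree badly at the lowest.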
Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, which was later called the "magneton"; similar quantum computations, but with numerically quite different values, were subsequently made possible for both the magnetic moments of the proton and the neutron that are three orders of magnitude smaller than that of the electron.
Photoelectric effect
The emission of electrons from a metal plate caused by light quanta (photons) with energy greater than the work function of the metal.
The photoelectric effect, reported by Heinrich Hertz in 1887 and explained by Albert Einstein in 1905.
Low-energy phenomena: Photoelectric effect
Mid-energy phenomena: Compton scattering
High-energy phenomena: Pair production
In 1905, Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states:
"According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole."
This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.[3] These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement; it effectively solved the problem of black-body radiation attaining infinite energy, which occurred in theory if light were to be explained only in terms of waves. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules.
These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta.[4][5] They are collectively known as the old quantum theory.
The phrase "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931).
With decreasing temperature, the peak of the blackbody radiation curve shifts to longer wavelengths and also has lower intensities. The blackbody radiation curves (1862) at left are also compared with the early, classical limit model of Rayleigh and Jeans (1900) shown at right. The short wavelength side of the curves was already approximated in 1896 by the Wien distribution law.
Niels Bohr's 1913 quantum model of the atom, which incorporated an explanation of Johannes Rydberg's 1888 formula, Max Planck's 1900 quantum hypothesis, i.e. that atomic energy radiators have discrete energy values (ε = hν), J. J. Thomson's 1904 plum pudding model, Albert Einstein's 1905 light quanta postulate, and Ernest Rutherford's 1911 discovery of the atomic nucleus. Note that the electron does not travel along the black line when emitting a photon. It jumps, disappearing from the outer orbit and appearing in the inner one; it cannot exist in the space between orbits 2 and 3.
In 1924, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. This theory was for a single particle and derived from special relativity theory. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan[6][7] developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation to the generalised case of de Broglie's theory.[8] Schrödinger subsequently showed that the two approaches were equivalent.
Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. The Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand, and remain widely used.
The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech, and John C. Slater, into various theories such as molecular orbital theory and valence bond theory.
Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles, resulting in quantum field theories. Early workers in this area include P.A.M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan. This area of research culminated in the formulation of quantum electrodynamics by R.P. Feynman, F. Dyson, J. Schwinger, and S.I. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent Quantum Field theories.[6][7][9]
Feynman diagram of gluon radiation in Quantum Chromodynamics
The theory of Quantum Chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975.
Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force, for which they received the 1979 Nobel Prize in Physics.
1. Max Born, My Life: Recollections of a Nobel Laureate, Taylor & Francis, London, 1978. ("We became more and more convinced that a radical change of the foundations of physics was necessary, i.e., a new kind of mechanics for which we used the term quantum mechanics. This word appears for the first time in physical literature in a paper of mine...")
2. M. Planck, The Theory of Heat Radiation, second edition, translated by M. Masius, Blakiston's Son & Co, Philadelphia, 1914, pp. 22, 26, 42, 43.
3. Fölsing, Albrecht (1997), Albert Einstein: A Biography, trans. Ewald Osers, Viking.
6. David Edwards, "The Mathematical Foundations of Quantum Mechanics", Synthese, Vol. 42, No. 1 (September 1979), pp. 1–70.
7. D. Edwards, "The Mathematical Foundations of Quantum Field Theory: Fermions, Gauge Fields, and Super-symmetry, Part I: Lattice Field Theories", International Journal of Theoretical Physics, Vol. 20, No. 7 (1981).
8. Hanle, P. A. (December 1977), "Erwin Schrödinger's Reaction to Louis de Broglie's Thesis on the Quantum Theory", Isis 68 (4): 606–609, doi:10.1086/351880.
9. S. Auyang, How Is Quantum Field Theory Possible?, Oxford University Press, 1995.
10. The Davisson–Germer experiment, which demonstrates the wave nature of the electron.
Further reading
• Bacciagaluppi, Guido; Valentini, Antony (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge, UK: Cambridge University Press, arXiv:quant-ph/0609184, Bibcode:2006quant.ph..9184B, ISBN 978-0-521-81421-8, OCLC 227191829
• Bernstein, Jeremy (2009), Quantum Leaps, Cambridge, Massachusetts: Belknap Press of Harvard University Press, ISBN 978-0-674-03541-6
• Jammer, Max (1966), The conceptual development of quantum mechanics, New York: McGraw-Hill, OCLC 534562
• Jammer, Max (1974), The philosophy of quantum mechanics: The interpretations of quantum mechanics in historical perspective, New York: Wiley, ISBN 0-471-43958-4, OCLC 969760
• F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, Deformation theory and quantization I and II, Ann. Phys. (N.Y.), 111 (1978), pp. 61–110 and 111–151.
• D. Cohen, An Introduction to Hilbert Space and Quantum Logic, Springer-Verlag, 1989. This is a thorough and well-illustrated introduction.
• Finkelstein, D., "Matter, Space and Logic", Boston Studies in the Philosophy of Science V: 1969, doi:10.1007/978-94-010-3381-7_4.
• A. Gleason. Measures on the Closed Subspaces of a Hilbert Space, Journal of Mathematics and Mechanics, 1957.
• R. Kadison. Isometries of Operator Algebras, Annals of Mathematics, Vol. 54, pp. 325–338, 1951
• G. Ludwig. Foundations of Quantum Mechanics, Springer-Verlag, 1983.
• G. Mackey. Mathematical Foundations of Quantum Mechanics, W. A. Benjamin, 1963 (paperback reprint by Dover 2004).
• R. Omnès. Understanding Quantum Mechanics, Princeton University Press, 1999. (Discusses logical and philosophical issues of quantum mechanics, with careful attention to the history of the subject).
• N. Papanikolaou. Reasoning Formally About Quantum Systems: An Overview, ACM SIGACT News, 36(3), pp. 51–66, 2005.
• C. Piron. Foundations of Quantum Physics, W. A. Benjamin, 1976.
• Hermann Weyl. The Theory of Groups and Quantum Mechanics, Dover Publications, 1950.
• A. Whitaker. The New Quantum Age: From Bell's Theorem to Quantum Computation and Teleportation, Oxford University Press, 2011, ISBN 978-0-19-958913-5
• Stephen Hawking. The Dreams That Stuff Is Made Of, Running Press, 2011, ISBN 978-0-7624-3434-3
In my notes, I have the time-independent Schrödinger equation for a free particle $$\frac{\partial^2 \psi}{\partial x^2}+\frac{p^2}{\hbar^2}\psi=0\tag1$$
The solution to this is given, in my notes, as $$\psi(x)=C e^{ipx/\hbar}\tag2$$
Now, since (1) is a second order homogeneous equation with constant coefficients, given the coefficients we have, we get a pair of complex roots:$$r_{1,2}=\pm \frac{ip}{\hbar}\tag3$$
Thus, the most general solution looks something like:$$\psi(x)=c_1 \cos \left(\frac{px}{\hbar}\right)+c_2 \sin \left(\frac{px}{\hbar}\right)\tag4$$
However, instead of writing the solution as a cosine plus a sine, the professor seems to have taken a special case of the general solution (with $c_1=1$ and $c_2=i$) and converted the resulting $$\psi(x)=\cos \left(\frac{px}{\hbar}\right)+ i\sin \left(\frac{px}{\hbar}\right)\tag5$$ into exponential form, using $$e^{i\theta}=\cos \theta + i\sin \theta \tag6$$ to get (2).
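For completeness, the two forms are related in general by (this identification of constants is my own working, not from my notes): $$c_1 \cos \left(\frac{px}{\hbar}\right)+c_2 \sin \left(\frac{px}{\hbar}\right)=A e^{ipx/\hbar}+B e^{-ipx/\hbar},\qquad A=\frac{c_1-ic_2}{2},\quad B=\frac{c_1+ic_2}{2}$$ so the choice $c_1=1$, $c_2=i$ gives $A=1$, $B=0$, i.e. exactly the single exponential in (2).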
The main question I have concerning this is: shouldn't we be going after real solutions and ignoring the complex ones for this particular situation? According to my understanding, $\Psi(x,t)$ is complex but $\psi(x)$ should be real. Thanks in advance.
The wavefunction needn't and shouldn't be real. – Mew Feb 8 '13 at 10:38
There are cases where you can get away with a real wavefunction, but the complex case is more general and fundamental. The free particle Hamiltonian $\hat{H}$ commutes with reflection $x\rightarrow -x$, $p\rightarrow -p$, so states with momenta $\pm p$ are both solutions. In equation (2) they have chosen the solution that is an eigenfunction of the momentum operator $\hat{p}$ with eigenvalue $+p$. The other sign is also a solution, representing a wave going in the opposite direction. Your real solution contains both left moving and right moving waves. – Michael Brown Feb 8 '13 at 11:05
If you look at the particle current $\vec{j}\propto \psi^\star \nabla \psi - \psi \nabla \psi^\star$ you'll see that real wavefunctions correspond to states where there is no net current, so you can only really expect them to turn up when you have bound states. If there is nothing to reflect a particle back the way it came then it is free to move off to infinity and the current can't vanish, so the wavefunction can't be real. – Michael Brown Feb 8 '13 at 11:10
Related: The book of Griffiths, Intro to QM, Problem 2.1b, p.24; and this Phys.SE post. – Qmechanic Feb 8 '13 at 15:54
1 Answer
There is no need for the solution $\psi(x)$ to be real. What must be real is the probability density that is "carried" by $\psi(x)$. In a loose, intuitive way, you may think of a TV image carried by electromagnetic waves: the signal that travels is not itself the image, but it carries the image, and you can recover it by decoding the signal properly.
Somewhat similarly, the complex wave function found by solving the Schrödinger equation carries the information of "where the particle is likely to be", but in an indirect manner. The probability density $P(x)$ of finding the particle is recovered from $\psi(x)$ simply by multiplying it by its complex conjugate:
$$\psi(x)^*\psi(x) = P(x)$$
which gives a real function as a result. Note that $P(x)$ is a density: what you eventually compute is the probability of finding the particle between $x=a$ and $x=b$, namely $\int_{a}^{b} P(x)\,dx$.
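As a minimal numerical sketch (my own illustration, not anything specific to your problem; it uses a normalised Gaussian wave packet with made-up parameters `p0` and `sigma`, in arbitrary units with $\hbar=1$), you can check that $P(x)$ integrates to $1$ and read off the probability for any interval:

```python
import numpy as np

# Arbitrary units, hbar = 1; a Gaussian wave packet with mean momentum p0.
x = np.linspace(-20.0, 20.0, 4001)
p0, sigma = 2.0, 1.0
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2/(4*sigma**2)) * np.exp(1j*p0*x)

P = np.abs(psi)**2                 # real, non-negative probability density
print(np.trapz(P, x))              # ~1.0: total probability
mask = (x >= -1.0) & (x <= 1.0)
print(np.trapz(P[mask], x[mask]))  # ~0.68: probability of finding the particle in [-1, 1]
```

Note that the complex phase factor $e^{ip_0 x}$ drops out of $P(x)$ entirely, which is exactly the point made next.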
As you know, when you multiply a complex number (or function) by its complex conjugate, the information on the phase is lost:
$$\rho e^{i \theta}\rho e^{-i \theta}=\rho^{2}$$
For that reason, in some places one can (not quite correctly) read that the phase has no physical meaning (see the comment at the end), and then you may wonder: "if I eventually get real numbers, why wasn't a theory invented that directly handles real functions?"
The answer is that, among other reasons, complex wave functions make life interesting because the Schrödinger equation is linear, so the superposition principle holds for its solutions. Wave functions add, and it is in that addition that the relative phases play the most important role.
The archetypical case happens in the double slit experiment. If $\psi_{1}$ and $\psi_{2}$ are the wave functions that represent the particle coming from the hole number $1$ and $2$ respectively, the final wave function is $$\psi_{1}+\psi_{2}$$ and thus the probability density of finding the particle after it has crossed the screen with two holes is found from $$P_{1+2}= (\psi_{1}+\psi_{2})^{*}(\psi_{1}+\psi_{2}) $$
That is, you must first add the wave functions representing the individual holes to obtain the combined complex wave function, and then compute the probability density. In that addition, the phase information carried by $\psi_{1}$ and $\psi_{2}$ plays the most important role, since it gives rise to interference patterns.
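To make this concrete, here is a small numerical sketch (my own, with made-up slit separation `d`, wavenumber `k`, and screen distance `L`) comparing $|\psi_1+\psi_2|^2$ with $|\psi_1|^2+|\psi_2|^2$ along a detection screen; only the former shows fringes:

```python
import numpy as np

# Screen coordinate y; slits at +/- d/2; illustrative cylindrical-wave amplitudes.
y = np.linspace(-0.02, 0.02, 2000)
k, d, L = 1.0e7, 1.0e-4, 1.0
r1 = np.sqrt(L**2 + (y - d/2)**2)   # path length from slit 1
r2 = np.sqrt(L**2 + (y + d/2)**2)   # path length from slit 2
psi1 = np.exp(1j*k*r1) / np.sqrt(r1)
psi2 = np.exp(1j*k*r2) / np.sqrt(r2)

P_quantum   = np.abs(psi1 + psi2)**2              # fringes: cross terms keep the relative phase
P_classical = np.abs(psi1)**2 + np.abs(psi2)**2   # no fringes: phases already discarded
print(P_quantum.max() / P_classical.max())        # ~2: constructive peaks reach twice the classical level
```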
Comment: Feynman is quoted as saying, "One of the miseries of life is that everybody names things a little bit wrong, and so it makes everything a little harder to understand in the world than it would be if it were named differently." It is quite similar here: every book says that the phase of the wave function has no physical meaning. That is not 100% correct, as you see.
Time Evolution
The Time Evolution node is one of the building blocks for observing the dynamics of a system over time. It is inserted inside a time loop and evolves an initial state by solving the time-dependent Schrödinger equation (TDSE).
The node has the following inputs:
• Initial State ($\psi_{0}$): The initial state that is evolved over time.
• Hamiltonian ($H(t)$): The Hamiltonian of the system, supplied by a Hamiltonian node; it may itself depend on time.
• Time step ($dt$): A scalar that sets the time increment used at each step of the evolution.
At each time step, the node numerically solves the TDSE and updates the state, giving the time-evolved state.
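For a sufficiently small $dt$, one common update rule is the short-time propagator (stated here as an assumption on our part; this page does not specify the node's exact integrator): $$\psi_{t+dt} \approx e^{-iH(t)\,dt/\hbar}\,\psi_{t}$$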
After the inputs are provided, the node gives the following output:
• Time-evolved state ($\psi_{t}$): The state evolved after each time step.
In the example below, the set-up shows the time evolution of a linear superposition state in a harmonic oscillator. The Time Evolution node is inserted inside the time loop which evolves the superposition state.
The time dynamics of the system can be seen in the plot while the simulation is running. If the Potential and Hamiltonian are also time-dependent, they must be placed inside the For Loop as well; the output $i$ from the For Loop boundary node can then be used as the time-variable input to these.
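For readers who want to see what such a loop does numerically, below is a minimal standalone sketch in Python (our own illustration under simplifying assumptions, not the node's actual implementation): it evolves an equal superposition of the two lowest harmonic-oscillator eigenstates on a position grid, with $\hbar=m=\omega=1$.

```python
import numpy as np
from scipy.linalg import expm

# Position grid and harmonic potential (hbar = m = omega = 1).
N, L = 400, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Hamiltonian: finite-difference kinetic term plus diagonal potential.
T = (-0.5 / dx**2) * (np.diag(np.ones(N-1), 1) - 2*np.eye(N) + np.diag(np.ones(N-1), -1))
V = np.diag(0.5 * x**2)
H = T + V

# Initial state: equal superposition of the ground and first excited states.
phi0 = np.exp(-x**2 / 2)
phi1 = x * np.exp(-x**2 / 2)
psi = phi0 / np.linalg.norm(phi0) + phi1 / np.linalg.norm(phi1)
psi /= np.linalg.norm(psi)

# Time step and propagator; recompute U inside the loop if H depends on time.
dt = 0.02
U = expm(-1j * H * dt)

for step in range(500):
    psi = U @ psi                  # one TDSE step: the role of the Time Evolution node
    # np.abs(psi)**2 / dx is the probability density |psi(x, t)|^2 at this step
```

The matrix exponential keeps the evolution unitary, so the norm of the state is conserved throughout the loop; a production implementation would typically use a cheaper integrator (e.g. split-step or Crank–Nicolson) instead of a dense `expm`.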