A Fibonacci-like Sequence of Composite Numbers
In 1964, Ronald Graham proved that there exist relatively prime natural numbers $a$ and $b$ such that the sequence $\{A_n\}$ defined by $$A_n = A_{n-1} + A_{n-2} \qquad (n \ge 2;\ A_0 = a,\ A_1 = b)$$
contains no prime numbers, and constructed a 34-digit pair satisfying this condition.
In 1990, Donald Knuth found a 17-digit pair satisfying the same conditions. That same year, noting an improvement to Knuth's computation, Herbert Wilf found a yet smaller 17-digit pair.
Here we improve Graham's construction and generalize Wilf's note, and show that the 12-digit pair $$(a,b)= (407389224418,76343678551)$$ also defines such a sequence.
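The recurrence and the coprimality condition are easy to check numerically; a minimal Python sketch using the pair quoted above (the function name fib_like is illustrative):

```python
from math import gcd

# Pair quoted in the abstract above.
a, b = 407389224418, 76343678551

def fib_like(a, b, n):
    """Return A_0 .. A_n for A_k = A_{k-1} + A_{k-2}, with A_0 = a, A_1 = b."""
    seq = [a, b]
    for _ in range(n - 1):
        seq.append(seq[-1] + seq[-2])
    return seq

print(gcd(a, b))          # 1: the pair is relatively prime
print(fib_like(a, b, 3))  # [407389224418, 76343678551, 483732902969, 560076581520]
```

Verifying that every term of the sequence is composite is the hard part, of course; that is what the covering-congruence argument in the paper establishes.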
Density of the Earth
The Physics Factbook™
Edited by Glenn Elert -- Written by his students
Bibliographic Entry | Result (w/surrounding text) | Standardized Result
Neff, Robert F. & Zitzewitz, Paul W. Physics, Principles and Problems. New York: Glencoe, 1995: 159. | "Mass of the Earth 5.979 × 10^24 kg; Radius of the Earth 6.3713 × 10^3 km" | 5.519 g/cm^3
Compton's Interactive Encyclopedia. Compton's, 1995. | "They divide the mass of the Earth by the volume, which gives the average density of the material in the earth as 3.2 ounces per cubic inch (5.5 g/cm^3)." | 5.5 g/cm^3
Orbits: Voyage Through The Solar System. Phoenix, AZ: Software Marketing, 1989. | "Mean Density: (water = 1) 5.52." | 5.52 g/cm^3
Morse, Joseph Laffan. Funk & Wagnalls Standard Reference Encyclopedia. New York: Standard Reference Works, 1967: 2934. | "The average density of the planet [Earth] is 5.52" | 5.52 g/cm^3
Hamilton, Calvin J. Earth Introduction. Views of the Solar System. | "Mean density (g/cm^3) 5.515." | 5.515 g/cm^3
The density of the Earth is higher than that of any other planet in our solar system. Sources vary on its exact value, but the numbers they give (5.5, 5.52, and 5.515 g/cm^3, for example) are so close to each other that each can be considered valid; rounding in the intermediate steps of a calculation accounts for the small differences.
Density is found by dividing mass by volume (ρ = m/V). The scientist Henry Cavendish is known for determining the mass (and from it the density) of the Earth. Cavendish assembled an apparatus consisting of a suspended metal rod with two lead balls hanging from it. He placed metal masses near these balls and measured the force of attraction between them. From this measured force he could infer the gravitational attraction exerted by a mass the size of the Earth and thus determine its density. This famous procedure is known as the Cavendish Experiment.
Finding the volume of the Earth requires more than the volume-of-a-sphere formula alone: the formula V = (4/3)πr^3 requires the radius of the Earth. The diameter of the Earth at the equator is 7926.68 miles (12,756.75 km); to find the radius, divide the diameter by 2 (any radius is exactly half the corresponding diameter).
The mass of the Earth is found to be 6 sextillion, 587 quintillion short tons (or 5.98 × 10^21 metric tons). Since the Earth is a sphere, the formula 4/3πr^3 is used to find the volume. The volume of
the Earth is considered to be 1.08 × 10^12 km^3 (or 2.5988 × 10^11 miles^3).
I also calculated the density of the Earth myself. The mass of the Earth is 5.979 × 10^27 g and its radius is 6.3713 × 10^8 cm. Plugging the radius into the (4/3)πr^3 formula gives a volume of 1.0834 × 10^27 cm^3. I then divided the mass by the volume and got 5.519 g/cm^3 as the density of the Earth.
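This calculation can be reproduced in a few lines; a sketch using the mass and radius figures quoted from Neff & Zitzewitz:

```python
import math

# Figures quoted in the text above.
mass_g = 5.979e27      # mass of the Earth in grams
radius_cm = 6.3713e8   # mean radius of the Earth in centimeters

volume_cm3 = (4 / 3) * math.pi * radius_cm ** 3
density = mass_g / volume_cm3

print(f"volume  = {volume_cm3:.4e} cm^3")   # ~1.0834e+27 cm^3
print(f"density = {density:.3f} g/cm^3")    # ~5.519 g/cm^3
```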
Because the Earth's mass is so great, its gravity compresses it more than any other inner planet is compressed. The Earth also lacks the huge gaseous envelopes of the gas giants, so almost all of the matter on our planet consists of heavy solids. This is why Earth has the highest average density of any planet in the solar system.
Katherine Malfucci -- 2000
How to convert kilowatts to kilojoules
How to convert electric power in kilowatts (kW) to energy in kilojoules (kJ).
You can calculate kilojoules from kilowatts and seconds, but you can't convert kilowatts to kilojoules directly, since kilowatts and kilojoules represent different quantities (power versus energy).
kW to kJ calculation formula
The energy E in kilojoules (kJ) is equal to the power P in kilowatts (kW), times the time period t in seconds (s):
E(kJ) = P(kW) × t(s)
kilojoules = kilowatts × seconds
kJ = kW×s
Example: What is the energy consumption of an electrical circuit with a power draw of 3 kilowatts over a duration of 3 seconds?
E(kJ) = 3 kW × 3 s = 9 kJ
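The formula translates directly into code; a minimal sketch (the function name kw_to_kj is illustrative):

```python
def kw_to_kj(power_kw: float, time_s: float) -> float:
    """Energy in kilojoules from power in kilowatts and time in seconds."""
    return power_kw * time_s

# The example above: 3 kW for 3 s.
print(kw_to_kj(3, 3))  # 9.0 kJ
```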
Prisoner's dilemma
The Prisoner's Dilemma constitutes a problem in game theory. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison
sentence payoffs and gave it the "Prisoner's Dilemma" name (Poundstone, 1992).
In its "classical" form, the prisoner's dilemma (PD) is presented as follows:
Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies
("defects") for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both
prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain
silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?
If we assume that each player prefers shorter sentences to longer ones, and that each gets no utility out of lowering the other player's sentence, and that there are no reputation effects from a
player's decision, then the prisoner's dilemma forms a non-zero-sum game in which two players may each "cooperate" with or "defect" from (i.e., betray) the other player. In this game, as in all game
theory, the only concern of each individual player ("prisoner") is maximizing his/her own payoff, without any concern for the other player's payoff. The unique equilibrium for this game is a
Pareto-suboptimal solution; that is, rational choice leads the two players to both play defect even though each player's individual reward would be greater if they both cooperated.
In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. In simpler terms, no matter what the
other player does, one player will always gain a greater payoff by playing defect. Since in any situation playing defect is more beneficial than cooperating, all rational players will play defect,
all things being equal.
In the iterated prisoner's dilemma the game is played repeatedly. Thus each player has an opportunity to "punish" the other player for previous non-cooperative play. Cooperation may then arise as an
equilibrium outcome. The incentive to defect is overcome by the threat of punishment, leading to the possibility of a cooperative outcome. So if the game is infinitely repeated, cooperation may be a
subgame perfect Nash equilibrium although both players defecting always remains an equilibrium and there are many other equilibrium outcomes.
In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games; for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve the jointly beneficial outcome.
Strategy for the classical prisoner's dilemma
The classical prisoner's dilemma can be summarized thus:
                         Prisoner B stays silent                        Prisoner B betrays
Prisoner A stays silent  Each serves 6 months                           Prisoner A: 10 years; Prisoner B: goes free
Prisoner A betrays       Prisoner A: goes free; Prisoner B: 10 years    Each serves 5 years
In this game, regardless of what the opponent chooses, each player always receives a higher payoff (lesser sentence) by betraying; that is to say that betraying is the strictly dominant strategy. For
instance, Prisoner A can accurately say, "No matter what Prisoner B does, I personally am better off betraying than staying silent. Therefore, for my own sake, I should betray." However, if the other
player acts similarly, then they both betray and both get a lower payoff than they would get by staying silent. Rational self-interested decisions result in each prisoner's being worse off than if
each chose to lessen the sentence of the accomplice at the cost of staying a little longer in jail himself. Hence a seeming dilemma. In game theory, this demonstrates very elegantly that in a
non-zero sum game a Nash Equilibrium need not be a Pareto optimum.
Generalized form
We can expose the skeleton of the game by stripping it of the prisoner framing device. The generalized form of the game has been used frequently in experimental economics. The following rules give a typical realization of the game.
There are two players and a banker. Each player holds a set of two cards: one printed with the word "Cooperate", the other printed with "Defect" (the standard terminology for the game). Each
player puts one card face-down in front of the banker. By laying them face down, the possibility of a player knowing the other player's selection in advance is eliminated (although revealing
one's move does not affect the dominance analysis). At the end of the turn, the banker turns over both cards and gives out the payments accordingly.
If player 1 (red) defects and player 2 (blue) cooperates, player 1 gets the Temptation to Defect payoff of 5 points while player 2 receives the Sucker's payoff of 0 points. If both cooperate they get
the Reward for Mutual Cooperation payoff of 3 points each, while if they both defect they get the Punishment for Mutual Defection payoff of 1 point. The checker board payoff matrix showing the
payoffs is given below.
Example PD payoff matrix
Cooperate Defect
Cooperate 3, 3 0, 5
Defect 5, 0 1, 1
In "win-lose" terminology the table looks like this:
Cooperate Defect
Cooperate win-win lose much-win much
Defect win much-lose much lose-lose
These point assignments are given arbitrarily for illustration. It is possible to generalize them, as follows:
Canonical PD payoff matrix
Cooperate Defect
Cooperate R, R S, T
Defect T, S P, P
Where T stands for Temptation to defect, R for Reward for mutual cooperation, P for Punishment for mutual defection and S for Sucker's payoff. To be defined as Prisoner's dilemma, the following
inequalities must hold:
T > R > P > S
This condition ensures that the equilibrium outcome is defection, but that cooperation Pareto dominates equilibrium play. In addition to the above condition, if the game is repeatedly played by two
players, the following condition should be added.
2 R > T + S
If that condition does not hold, then full cooperation is not necessarily Pareto optimal, as the players are collectively better off by having each player alternate between cooperate and defect.
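Both conditions are easy to verify for a given payoff quadruple; a small sketch (the function name is illustrative), using the example payoffs T = 5, R = 3, P = 1, S = 0:

```python
def is_prisoners_dilemma(T, R, P, S, iterated=False):
    """True if the payoffs satisfy the PD inequalities described above."""
    basic = T > R > P > S          # defection dominates, cooperation Pareto-dominates
    if iterated:
        return basic and 2 * R > T + S   # alternating C/D must not beat mutual cooperation
    return basic

# Example matrix from earlier: T=5, R=3, P=1, S=0.
print(is_prisoners_dilemma(5, 3, 1, 0, iterated=True))  # True
```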
These rules were established by cognitive scientist Douglas Hofstadter and form the formal canonical description of a typical game of Prisoner's Dilemma.
A simple special case occurs when the advantage of defection over cooperation is independent of what the co-player does and the cost of the co-player's defection is independent of one's own action, i.e., T + S = P + R.
Human behavior in the Prisoner's Dilemma
One experiment based on the simple dilemma found that approximately 40% of participants played "cooperate" (i.e., stayed silent).
The iterated prisoner's dilemma
If two players play Prisoner's Dilemma more than once in succession, having memory of at least one previous game, it is called the iterated Prisoner's Dilemma. Amongst results shown by Nobel Prize winner Robert Aumann in his 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome. Popular interest in the iterated prisoner's dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of Cooperation (1984). In it he reports on a tournament he organized in which participants had to choose their mutual strategy again and again, with memory of their previous encounters. Axelrod invited academic colleagues from all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.
Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run
while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially
purely selfish, by natural selection.
The best deterministic strategy was found to be "Tit for Tat," which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines
of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his opponent did on the previous move. Depending on the
situation, a slightly better strategy can be "Tit for Tat with forgiveness." When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around
1%-5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.
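The strategies above can be sketched in a short simulation, assuming the example payoffs from earlier (T = 5, R = 3, P = 1, S = 0); the helper names are illustrative:

```python
# Payoffs for (my move, opponent's move); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each strategy sees only the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): Tit for Tat loses only the first round
print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation throughout
```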
By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful:
• Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore a purely selfish strategy will not "cheat" on its opponent, for purely utilitarian reasons first.
• Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.
• Forgiving: Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.
• Non-envious: The last quality is being non-envious, that is, not striving to score more than the opponent (impossible for a 'nice' strategy; a 'nice' strategy can never score more than its opponent).
Therefore, Axelrod reached the seemingly paradoxical conclusion that selfish individuals, for their own selfish good, will tend to be nice, forgiving, and non-envious.
The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game
the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time,
except for a single individual following the Tit-for-Tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for
that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being Tit-for-Tat players, the optimal strategy for an individual depends on the
percentage, and on the length of the game.
A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the first iteration and whenever the player and co-player did the same thing at the previous iteration; Pavlov defects
when the player and co-player did different things at the previous iteration. For a certain range of parameters, Pavlov beats all other strategies by giving preferential treatment to co-players which
resemble Pavlov.
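The Pavlov rule as described can be sketched directly (the function name is illustrative): cooperate on the first move, and afterwards cooperate exactly when both players did the same thing in the previous round.

```python
def pavlov(my_history, opp_history):
    """Win-Stay, Lose-Switch: cooperate first, then cooperate iff last moves matched."""
    if not my_history:
        return "C"
    return "C" if my_history[-1] == opp_history[-1] else "D"

print(pavlov([], []))          # 'C' -- opening move
print(pavlov(["C"], ["D"]))    # 'D' -- switch after a mismatch
print(pavlov(["D"], ["D"]))    # 'C' -- stay cooperative-bound after matching moves
```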
Deriving the optimal strategy is generally done in two ways:
1. Bayesian Nash Equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit-for-tat, 50% always cooperate) an optimal counter-strategy can be derived
2. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The
mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the
initial population; empirical experiments with such systems tend to produce Tit-for-Tat players (see for instance Chess 1988), but there is no analytic proof that this will always occur.
Although Tit-for-Tat is considered to be the most robust basic strategy, a team from Southampton University in England (led by Professor Nicholas Jennings and consisting of Rajdeep Dash, Sarvapali
Ramchurn, Alex Rogers, Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary Iterated Prisoner's Dilemma competition, which proved to be more successful than Tit-for-Tat. This
strategy relied on cooperation between programs to achieve the highest number of points for a single program. The University submitted 60 programs to the competition, which were designed to recognize
each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of
points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result,
this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.
This strategy takes advantage of the fact that multiple entries were allowed in this particular competition, and that the performance of a team was measured by that of the highest-scoring player
(meaning that the use of self-sacrificing players was a form of minmaxing). In a competition where one has control of only a single player, Tit-for-Tat is certainly a better strategy. Because of this
new rule, this competition also has little theoretical significance when analysing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework for
analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book
The Selfish Gene pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that most probably Axelrod would not have allowed them if they had been
submitted. It also relies on circumventing rules about the prisoner's dilemma in that there is no communication allowed between the two players. When the Southampton programs engage in an opening
"ten move dance" to recognize one another, this only reinforces just how valuable communication can be in shifting the balance of the game.
If an iterated PD is going to be iterated exactly N times, for some known constant N, then it is always game theoretically optimal to defect in all rounds. The only possible Nash equilibrium is to
always defect. The proof goes like this: one might as well defect on the last turn, since the opponent will not have a chance to punish the player. Therefore, both will defect on the last turn. Thus,
the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. For cooperation to emerge between game theoretic rational
players, the total number of rounds must be random, or at least unknown to the players. However, even in this case always defect is no longer a strictly dominant strategy, only a Nash equilibrium.
The superrational strategy in this case is to cooperate against a superrational opponent, and in the limit of large fixed N, experimental results on strategies agree with the superrational version,
not the game-theoretic rational one.
Another odd case is "play forever" prisoner's dilemma. The game is repeated infinitely many times, and the player's score is the average (suitably computed).
The prisoner's dilemma game is fundamental to certain theories of human cooperation and trust. On the assumption that the PD can model transactions between two people requiring trust, cooperative
behaviour in populations may be modelled by a multi-player, iterated, version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count
of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma has also been referred to as the "Peace-War game".
Continuous Iterated Prisoner's Dilemma
Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some
researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd found that in such
situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. The basic intuition for this result is straightforward: in a continuous prisoner's dilemma, if a
population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a
discrete prisoner's dilemma, Tit-for-Tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since Nature arguably offers
more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of Tit-for-Tat-like
cooperation are extremely rare in Nature (e.g., Hammerstein) even though Tit-for-Tat seems robust in theoretical models.
Learning psychology and game theory
Where game players can learn to estimate the likelihood of other players defecting, their own behaviour is influenced by their experience of the others' behaviour. Simple statistics show that
inexperienced players are more likely to have had, overall, atypically good or bad interactions with other players. If they act on the basis of these experiences (by defecting or cooperating more
than they would otherwise) they are likely to suffer in future transactions. As more experience is accrued a truer impression of the likelihood of defection is gained and game playing becomes more
successful. The early transactions experienced by immature players are likely to have a greater effect on their future playing than would such transactions affect mature players. This principle goes
part way towards explaining why the formative experiences of young people are so influential and why, for example, those who are particularly vulnerable to bullying sometimes become bullies.
The likelihood of defection in a population may be reduced by the experience of cooperation in earlier games allowing trust to build up. Hence self-sacrificing behaviour may, in some instances,
strengthen the moral fibre of a group. If the group is small the positive behaviour is more likely to feed back in a mutually affirming way, encouraging individuals within that group to continue to
cooperate. This is allied to the twin dilemma of encouraging those people whom one would aid to indulge in behaviour that might put them at risk. Such processes are major concerns within the study of
reciprocal altruism, group selection, kin selection and moral philosophy.
Douglas Hofstadter in his Metamagical Themas proposed that the definition of "rational" that led "rational" players to defect is faulty. He proposed that there is another type of rational behavior,
which he called "superrational", where players take into account that the other person is presumably superrational, like them. Superrational players behave identically, and know that they will behave
identically. They take that into account before they maximize their payoffs, and they therefore cooperate.
This view of the one-shot PD leads to cooperation as follows:
• Any superrational strategy will be the same for both superrational players, since both players will think of it.
• Therefore, the superrational answer will lie on the diagonal of the payoff matrix.
• When you maximize return from solutions on the diagonal, you cooperate.
However, if a superrational player plays against a rational opponent, he will serve a 10-year sentence, and the rational player will go free.
One-shot cooperation is observed in human culture, wherever religious and ethical codes exist.
Superrationality is not studied by academic economists, as rationality excludes any superrational behavior.
While it is sometimes thought that morality must involve the constraint of self-interest, David Gauthier famously argues that co-operating in the prisoner's dilemma on moral principles is consistent with self-interest and the axioms of game theory. In his opinion, it is most prudent to give up straightforward maximizing and instead adopt a disposition of constrained maximization, according to which one resolves to cooperate in the belief that the opponent will respond with the same choice, while in the classical PD it is explicitly stipulated that the response of the opponent does not depend on the player's choice. This form of contractarianism claims that good moral thinking is just an elevated and subtly strategic version of basic means-end reasoning.
Douglas Hofstadter expresses a strong personal belief that the mathematical symmetry is reinforced by a moral symmetry, along the lines of the Kantian categorical imperative: defecting in the hope
that the other player cooperates is morally indefensible. If players treat each other as they would treat themselves, then they will cooperate.
Real-life examples
These particular examples, involving prisoners and bag switching and so forth, may seem contrived, but there are in fact many examples in human interaction, as well as interactions in nature, that have the same payoff matrix. The prisoner's dilemma is therefore of interest to the social sciences, as well as to the biological sciences such as evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of Prisoner's Dilemma (PD). This wide applicability of the PD gives the game its substantial importance.
In politics
In political science, for instance, the PD scenario is often used to illustrate the problem of two states engaged in an arms race. Both will reason that they have two options, either to increase
military expenditure or to make an agreement to reduce weapons. Neither state can be certain that the other one will keep to such an agreement; therefore, they both incline towards military
expansion. The paradox is that both states are acting rationally, but producing an apparently irrational result. This could be considered a corollary to deterrence theory.
In science
In sociology or criminology, the PD may be applied to an actual dilemma facing two inmates. The game theorist Marek Kaminski, a former political prisoner, analysed the factors contributing to payoffs
in the game set up by a prosecutor for arrested defendants (cf. References). He concluded that while the PD is the ideal game of a prosecutor, numerous factors may strongly affect the payoffs and
potentially change the properties of the game.
In environmental studies, the PD is evident in crises such as global climate change. All countries will benefit from a stable climate, but any single country is often hesitant to curb emissions. The
benefit to an individual country to maintain current behavior is greater than the benefit to all countries if behavior was changed, therefore explaining the current impasse concerning climate change.
In program management and technology development, the PD applies to the relationship between the customer and the developer. Capt Dan Ward, an officer in the US Air Force, examined The Program
Manager's Dilemma in an article published in Defense AT&L, a defense technology journal.
In sports
PD frequently occurs in cycling races, for instance in the Tour de France. Consider two cyclists halfway in a race, with the peloton (larger group) at great distance behind them. The two riders often
work together (mutual cooperation) by sharing the tough load of the front position, where there is no shelter from the wind. If neither of the riders makes an effort to stay ahead, the peloton will
soon catch up (mutual defection). An often-seen scenario is one rider doing the hard work alone (cooperating), keeping the two ahead of the peloton. Nearer to the finish (where the threat of the
peloton has disappeared), the game becomes a simple zero-sum game, with each rider trying to avoid at all costs giving a slipstream advantage to the other rider. If there was a (single) defecting
rider in the preceding prisoners' dilemma, it is usually he who will win this zero-sum game, having saved energy in the cooperating rider's slipstream. The cooperating rider's attitude may seem
extremely naive, but he often has no other choice when both riders have different physical profiles. The cooperating rider typically has an endurance profile, whereas the defecting rider will more
likely be a sprinter. When continuously taking the head position of the twosome, the 'cooperating' rider is merely trying to ride away from the defecting sprinter using his endurance advantage over
long distance, thus avoiding a sprint duel at the finish, which he would be bound to lose, even if the sprinting rider had cooperated. Just after the escape from the peloton, the endurance-sprinter
difference matters less, and it is therefore at this stage of the race that mutually cooperative PD can usually be observed. Arguably, it is this almost unavoidable presence of PD (and its
transition into a zero-sum game) that, perhaps unconsciously, makes cycling such an exciting sport to watch.
PD hardly applies to running sports, because of the negligible importance of air resistance (and shelter from it).
In high school wrestling, sometimes participants intentionally lose unnaturally large amounts of weight so as to compete against lighter opponents. In doing so, the participants are clearly not at
their top level of physical and athletic fitness and yet often end up competing against the same opponents anyway, who have also followed this practice (mutual defection). The result is a reduction
in the level of competition. Yet if a participant maintains their natural weight (cooperating), they will most likely compete against a stronger opponent who has lost considerable weight.
In economics
Advertising is sometimes cited as a real life example of the prisoner’s dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much
money to spend on advertising. The effectiveness of Firm A’s advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is
affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period the advertising cancels out, receipts remain constant, and expenses increase due to
the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the
optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses there is no dominant strategy and this
is not a prisoner's dilemma but rather is an example of a stag hunt. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium.
Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the creation of laws banning cigarette advertising, understanding that this would
reduce costs and increase profits across the industry. This analysis is likely to be pertinent in many other business situations involving advertising.
Members of a cartel are also involved in a (multi-player) prisoners' dilemma. 'Cooperating' typically means keeping prices at a pre-agreed minimum level. 'Defecting' means selling under this minimum
level, instantly stealing business (and profits) from other cartel members. Ironically, anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices
for consumers.
In law
The theoretical conclusion of PD is one reason why, in many countries, plea bargaining is forbidden. Often, precisely the PD scenario applies: it is in the interest of both suspects to confess and
testify against the other prisoner/suspect, even if each is innocent of the alleged crime. Arguably, the worst case is when only one party is guilty — here, the innocent one is unlikely to confess,
while the guilty one is likely to confess and testify against the innocent.
In the media
In the 2008 edition of Big Brother (UK), the dilemma was applied to two of the housemates. A prize fund of £50,000 was available. If housemates chose to share the prize fund, each would receive
£25,000. If one chose to share, and the other chose to take, the one who took it would receive the entire £50,000. If both chose to take, both housemates would receive nothing. The housemates had a
minute to discuss their decision, and were given the possibility to lie. Both housemates declared they would share the prize fund, but either could have potentially been lying. When asked to give
their final answers by big brother, both housemates did indeed choose to share, and so won £25,000 each.
Multiplayer dilemmas
Many real-life dilemmas involve multiple players. Although metaphorical, Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the PD: Each villager makes a
choice for personal gain or restraint. The collective reward for unanimous (or even frequent) defection is very low payoffs (representing the destruction of the "commons"). Such multi-player PDs are
not formal as they can always be decomposed into a set of classical two-player games. The commons are not always exploited: William Poundstone, in a book about the Prisoner's Dilemma (see References
below), describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for someone to take a paper without paying (defecting) but very few do, feeling that if they do
not pay then neither will others, destroying the system.
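The commons story can be given a toy payoff model. The numbers below are illustrative assumptions, not taken from Hardin or Poundstone: each defector keeps a private gain, while every defection degrades the commons for everyone.

```python
def commons_payoffs(defectors, private_gain=2.0, shared_cost=1.0):
    """Toy multi-player PD payoffs given k defectors (illustrative numbers):
    every player pays k * shared_cost for the degraded commons; each
    defector additionally pockets private_gain."""
    base = -defectors * shared_cost
    return base, base + private_gain  # (cooperator payoff, defector payoff)

# Defecting always pays individually: adding yourself to the defectors turns
# a payoff of -k into -(k+1) + private_gain, a net gain whenever
# private_gain > shared_cost. Yet universal defection is worse for everyone:
all_coop, _ = commons_payoffs(0)      # everyone restrains: 0.0 each
_, all_defect = commons_payoffs(10)   # all 10 players defect: -8.0 each
print(all_coop, all_defect)
```

With these parameters a unilateral switch to defection always gains 1, yet if all ten players defect each ends at -8 instead of 0, the multi-player analogue of mutual defection in the two-player game.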
Because there is no mechanism for personal choice to influence others' decisions, this type of thinking relies on correlations between behavior, not on causation. Because of this property, those who
do not understand superrationality often mistake it for magical thinking. Without superrationality, not only petty theft, but voluntary voting requires widespread magical thinking, since a non-voter
is a free rider on a democratic system.
Related games
Closed-bag exchange
Douglas Hofstadter once suggested that people often find problems such as the PD problem easier to understand when illustrated in the form of a simple game, or trade-off. One of several examples he
used was the "closed bag exchange":
Two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honour the deal by putting into
his bag what he agreed, or he can defect by handing over an empty bag.
In this game, defection is always the best course, implying that rational agents will never play. However, in this case both players cooperating and both players defecting actually give the same
result, so chances of mutual cooperation, even in repeated games, are few.
Friend or Foe?
Friend or Foe? is a game show that aired from 2002 to 2005 on the Game Show Network in the United States. It is an example of the prisoner's dilemma game tested by real people, but in an artificial
setting. On the game show, three pairs of people compete. As each pair is eliminated, they play a game of
Prisoner's Dilemma to determine how their winnings are split. If they both cooperate (Friend), they share the winnings 50-50. If one cooperates and the other defects (Foe), the defector gets all the
winnings and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the payoff matrix is slightly different from the standard one given above, as the payouts for the "both
defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's
dilemma. If you know your opponent is going to vote Foe, then your choice does not affect your winnings. In a certain sense, Friend or Foe has a payoff model between "Prisoner's Dilemma" and "Chicken".
The payoff matrix is:

            Cooperate   Defect
Cooperate    1, 1        0, 2
Defect       2, 0        0, 0
This payoff matrix was later used on the British television programmes Shafted and Golden Balls.
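The weak-equilibrium point above can be checked mechanically by enumerating unilateral deviations in this matrix. A small sketch, with "C" standing for cooperate ("Friend") and "D" for defect ("Foe"):

```python
# Payoffs read off the Friend or Foe matrix above, as (row, column) payoffs.
payoff = {
    ("C", "C"): (1, 1), ("C", "D"): (0, 2),
    ("D", "C"): (2, 0), ("D", "D"): (0, 0),
}

def is_nash(row, col):
    """A profile is a (weak) Nash equilibrium if no unilateral deviation
    strictly improves the deviating player's payoff."""
    r, c = payoff[(row, col)]
    row_ok = all(payoff[(alt, col)][0] <= r for alt in "CD")
    col_ok = all(payoff[(row, alt)][1] <= c for alt in "CD")
    return row_ok and col_ok

equilibria = [p for p in payoff if is_nash(*p)]
# ("C", "C") is never an equilibrium; ("D", "D") survives, but only weakly,
# since deviating against a Foe ties at 0 instead of losing.
print(equilibria)
```

Note that the zero tie also makes the asymmetric outcomes (one Friend, one Foe) weak equilibria, which is precisely what separates this payoff model from the standard prisoner's dilemma, where mutual defection is the unique strict equilibrium.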
References
• Robert Aumann (1959). “Acceptable points in general cooperative n-person games”, in R. D. Luce and A. W. Tucker (eds.), Contributions to the Theory of Games IV, Annals of Mathematics Study 40,
287–324, Princeton University Press, Princeton NJ.
• Axelrod, R. (1984). The Evolution of Cooperation. ISBN 0-465-02121-2
• Kenneth Binmore, Fun and Games.
• David M. Chess (1988). Simulating the evolution of behavior: the iterated prisoners' dilemma problem. Complex Systems, 2:663–670.
• Dresher, M. (1961). The Mathematics of Games of Strategy: Theory and Applications Prentice-Hall, Englewood Cliffs, NJ.
• Flood, M.M. (1952). Some experimental games. Research memorandum RM-789. RAND Corporation, Santa Monica, CA.
• Kaminski, Marek M. (2004) Games Prisoners Play Princeton University Press. ISBN 0-691-11721-7 http://webfiles.uci.edu/mkaminsk/www/book.html
• Poundstone, W. (1992) Prisoner's Dilemma Doubleday, NY NY.
• Greif, A. (2006). Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press, Cambridge, UK.
• Rapoport, Anatol and Albert M. Chammah (1965). Prisoner's Dilemma. University of Michigan Press.
• S. Le and R. Boyd (2007) "Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma" Journal of Theoretical Biology, Volume 245, 258–267. Full text
Further reading
• Plous, S. (1993). Prisoner's Dilemma or Perceptual Dilemma? Journal of Peace Research, Vol. 30, No. 2, 163-179.
the scalar field lagrangian
I have a question about a statement I've seen in many a Quantum Field Theory book (e.g. Zee). They say that the general form of the Lagrangian density for a scalar field, once two conditions are imposed,
(1) Lorentz invariance, and
(2) At most two time derivatives,
is L = (1/2)(d\phi)^2 - V(\phi)
where V(\phi) is a polynomial in \phi.
Why is this? I can understand how the conditions restrict the kinetic energy term to being what it is, but I don't understand why V has to be _polynomial_ in \phi. | {"url":"http://www.physicsforums.com/showthread.php?t=71448","timestamp":"2014-04-20T23:39:40Z","content_type":null,"content_length":"25102","record_id":"<urn:uuid:1f56866b-06ec-4e1f-a61b-1d270ab41bba>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asvab question
Hello all,
Got a quick question about the ASVAB. I'm trying to go CA ANG. I've been studying for the ASVAB; the only issue is that I'm not very good at math. I was wondering if it's possible to obtain a 50 by
passing the arithmetic reasoning (AR) portion on the asvab? I know the most important part of the test is the AFQT. Which is the four sections including word knowledge, arithmetic reasoning, math
knowledge ( algebra) and paragraph comprehension.
I've been reading the ASVAB for Dummies book, and they state, according to their research, that you only have about 1:12 min per question, so obviously you gotta 'move with a purpose'.
Tags: None
Re: Asvab question
If you have the ASVAB for Dummies book then you should do well. I too am terribly bad bad bad at math. No, seriously. I failed math several times in high school, and it took me three times to pass
College Algebra. My brain just does not function that way. Worry not! I took the computer ASVAB and found that the math questions were MUCH easier than what was in the Dummies book, and I had lots of
My advice to you is, just keep taking those practice tests over and over until you have a decent grasp. Be sure to study FOIL....lots of those questions. There were a couple questions on the ASVAB
that maybe I didn't work out properly in Algebra, but I was still able to figure out the answer. Remember, 2 of the 4 answer choices are usually totally wrong, so eliminating them should give you
50/50 there. Just breathe and relax and take your time!! Don't worry about the countdown....you get a total amount of time for the whole section, it's not like you only get 1:12 to answer and then you
must move on.
I was striving to get a 104 on my GT for 68W and ended up getting 120 and qualifying for 12Y. Good luck!
Re: Asvab question
Originally posted by Solrider View Post
If you have the ASVAB for Dummies book then you should do well. I too am terribly bad bad bad at math. No, seriously. I failed math several times in high school, and it took me three times to pass
College Algebra. My brain just does not function that way. Worry not! I took the computer ASVAB and found that the math questions were MUCH easier than what was in the Dummies book, and I had lots of
My advice to you is, just keep taking those practice tests over and over until you have a decent grasp. Be sure to study FOIL....lots of those questions. There were a couple questions on the ASVAB
that maybe I didn't work out properly in Algebra, but I was still able to figure out the answer. Remember 2 of the questions are totally wrong, so that should give you 50/50 there. Just breathe and
relax and take your time!! Don't worry about the countdown....you get a total amount of time per question, its not like you only get 1:12 to answer then you must move on.
I was striving to get a 104 on my GT for 68W and ended up getting 120 and qualifying for 12Y. Good luck!
Thanks Solrider,
I feel you on the math part also; when I say I'm bad at math, that's an understatement lol. I already have a hard enough time with the arithmetic reasoning, though some of the pre-algebra I understand. My
strong points are English, mechanical comprehension, and automotive. But.... I know those are used for the vocational part of the ASVAB; my main focus is getting a high enough score on the AFQT. I
guess what I am asking is: is it possible to nearly fail mathematics knowledge (MK), do decently in the arithmetic reasoning portion, and still achieve a 50?
Re: Asvab question
Originally posted by Tacit View Post
Thanks Solrider,
I feel you on the math part also, I think when I say I'm bad at math is a understatement lol. I already have a hard enough time with the arithmetic reasoning, some of the pre-algebra I understand. My
strong points are english, mechanical comprehension, and automotive. But.... I know those are used for the vocational part of the ASVB, my main focus is getting a high enough score on the (AFQT). I
guess what am I asking is it possible to nearly fail mathematics knowledge (MK) ? and do decently in the arithmetic reasoning portion and achieve a 50?
That I don't know. What I'm saying is, stop worrying about what you can squeak by with or what you can afford to fail. Instead spend your time studying your *** off and you will
be fine. =)
Re: Asvab question
Don't worry about the scores; they are hard to calculate and relative. If you practice you will do fine.
I got a 90+ on AFQT so let me give you some advice on the math.
You want to practice 2 ways. Get a practice book with full tests.
First sit down and take 1 of each type of math section like you would on the test, timed and check at the end.
Then take 10-20 practice questions from each math section and take your time make sure they are correct and you understand the answers.
Now take another round of timed math sections, your answers right should have improved noticeably.
Repeat until satisfied.
If your skills are really lacking simply practice using basic math with paper and pencil.
large number addition
long division
fraction manipulation
large number multiplication
You can find videos with tips for paper and pencil math on youtube.
I practiced and it paid off. If you don't put the work in, you're not going to do well.
Re: Asvab question
Originally posted by JR7775 View Post
Don't worry about the scores; they are hard to calculate and relative. If you practice you will do fine.
I got a 90+ on AFQT so let me give you some advice on the math.
You want to practice 2 ways. Get a practice book with full tests.
First sit down and take 1 of each type of math section like you would on the test, timed and check at the end.
Then take 10-20 practice questions from each math section and take your time make sure they are correct and you understand the answers.
Now take another round of timed math sections, your answers right should have improved noticeably.
Repeat until satisfied.
If your skills are really lacking simply practice using basic math with paper and pencil.
large number addition
long division
fraction manipulation
large number multiplication
You can find videos with tips for paper and pencil math on youtube.
I practiced and it payed off. If you don't put the work in your not going to do well.
I took everyone's advice and I'm studying every day. I went out to Barnes & Noble last night and bought this book:
I highly recommend this book for anybody else who is struggling in math! It gives you examples step by step in easy to understand format. Wish me luck guys!
Re: Asvab question
How are you doing, sir? I have a question: what site did you use to study for the math parts? Also I have a MEPS question: I have about an 8-inch metal rod in my left ankle along with 7 screws. I got it
put in 6 years ago and it has healed 100%. Do you know if that is a disqualifier?
Originally posted by Solrider View Post
If you have the ASVAB for Dummies book then you should do well. I too am terribly bad bad bad at math. No, seriously. I failed math several times in high school, and it took me three times to pass
College Algebra. My brain just does not function that way. Worry not! I took the computer ASVAB and found that the math questions were MUCH easier than what was in the Dummies book, and I had lots of
My advice to you is, just keep taking those practice tests over and over until you have a decent grasp. Be sure to study FOIL....lots of those questions. There were a couple questions on the ASVAB
that maybe I didn't work out properly in Algebra, but I was still able to figure out the answer. Remember 2 of the questions are totally wrong, so that should give you 50/50 there. Just breathe and
relax and take your time!! Don't worry about the countdown....you get a total amount of time per question, its not like you only get 1:12 to answer then you must move on.
I was striving to get a 104 on my GT for 68W and ended up getting 120 and qualifying for 12Y. Good luck!
Re: Asvab question
Originally posted by Tacit View Post
Got a quick question about the asvab. I'm trying to go CA ANG. I've been studying for the asvab, only issue is that I'm not very good at math. I was wondering if its possible to obtain a 50 by
passing the arithmetic reasoning (AR) portion on the asvab? I know the most important part of the test is the AFQT. Which is the four sections including word knowledge, arithmetic reasoning, math
knowledge ( algebra) and paragraph comprehension.
To be in the 50th percentile, you will need 204 points on your AFQT (2VE + AR + MK). VE is out of 62 points, but I can't figure out what the maximum AR or MK scores are -- and the internet seems to
be devoid of this information. Check this out.
Edit: I am somewhat skeptical of all the ASVAB stuff I have found online. I took the test in 2002 and my VE score was a 64, which contradicts the supposed maximum score of 62. But it is possible that
the test has changed since then.
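The scoring arithmetic quoted in this thread (AFQT raw composite = 2VE + AR + MK, with 204 claimed for the 50th percentile) can be written as a trivial, unofficial calculator. The example scores are hypothetical, the 204 threshold is the poster's figure, and the real percentile conversion tables are not public, so treat this purely as a sketch of the arithmetic:

```python
def afqt_raw(ve, ar, mk):
    """Raw AFQT composite as quoted in the thread: 2*VE + AR + MK
    (VE, AR, MK are standard scores, not counts of correct answers)."""
    return 2 * ve + ar + mk

# 204 is the threshold claimed in the post above, not an official figure.
THRESHOLD_50TH = 204
score = afqt_raw(ve=55, ar=48, mk=50)  # hypothetical standard scores
print(score, score >= THRESHOLD_50TH)  # 208 True
```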
Re: Asvab question
If you need more ASVAB practice, there are many free tests online, especially math ones; it seems like a lot of people are having the same issue. Check out www.free-asvab-test.com for
great free practice.
Uniformly continuous maps between ends of R-trees
Martinez Pérez, Álvaro and Morón, Manuel A. (2009) Uniformly continuous maps between ends of R-trees. Mathematische Zeitschrift, 263 (3). pp. 583-606. ISSN 0025-5874
Restricted to Repository staff only until 31 December 2020.
Official URL: http://www.springerlink.com/content/3711900186614502/fulltext.pdf
There is a well-known correspondence between infinite trees and ultrametric spaces which can be interpreted as an equivalence of categories and comes from considering the end space of the tree. In
this equivalence, uniformly continuous maps between the end spaces are translated to some classes of coarse maps (or even classes of metrically proper Lipschitz maps) between the trees.
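A standard concrete model of this correspondence (an illustrative choice of metric, not necessarily the one used in the paper) encodes an end of the rooted binary tree as a 0/1 sequence and sets d(x, y) = e^(-t), where t is the length of the longest common prefix. The defining ultrametric property, the strong triangle inequality d(x, z) <= max(d(x, y), d(y, z)), can then be checked numerically on truncated ends:

```python
import math
from itertools import product

def end_distance(x, y):
    """d(x, y) = e^(-t), where t is the length of the longest common prefix
    of the two ends (encoded as strings of edge labels); 0 if identical."""
    if x == y:
        return 0.0
    t = 0
    for a, b in zip(x, y):
        if a != b:
            break
        t += 1
    return math.exp(-t)

# All ends of the binary tree truncated to depth 5: a finite sample.
ends = ["".join(bits) for bits in product("01", repeat=5)]
ok = all(
    end_distance(x, z) <= max(end_distance(x, y), end_distance(y, z)) + 1e-12
    for x in ends for y in ends for z in ends
)
print(ok)  # True: e^(-t) satisfies the strong triangle inequality here
```

The inequality holds because the common prefix of x and z is at least as long as the shorter of the prefixes each shares with any intermediate y; this is the mechanism behind the tree/ultrametric equivalence the abstract describes.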
Item Type: Article
Uncontrolled Keywords: Tree; ultrametric; end space; coarse map; uniformly continuous; non-expansive map
Subjects: Sciences > Mathematics > Topology
ID Code: 15114
References: Baues, H.J., Quintero, A.: Infinite Homotopy Theory. K-Monographs in Mathematics, no. 6. Kluwer,Boston (2001)
Bestvina, M.: R-trees in Topology, Geometry and Group Theory. Handbook of geometric topology, pp.55–91. North-Holland, Amsterdam (2002)
Borsuk, K.: On some metrization of the hyperspace of compact sets. Fund. Math. 41, 168–202 (1954)
Hughes, B.: Trees and ultrametric spaces: a categorical equivalence. Adv. Math. 189, 148–191 (2004)
Hughes, B., Ranicki: Ends of Complexes. Cambridge Tracts in Mathematics, vol. 123. Cambridge University Press, Cambridge (1996)
Mac Lane, S.: Categories for the Working Mathematician. Springer, New York (1971)
Morgan, J.W.: Λ-trees and their applications. Bull. Am. Math. Soc. 26(1), 87–112 (1992)
Morón, M.A., Ruiz del Portal, F.R.: Shape as a Cantor completion process. Mathematische Zeitschrift 225, 67–86 (1997)
Robert, A.M.: A Course in p-adic Analysis. Grad. Text Math., vol. 198. Springer, New York (2000)
Roe, J.: Lectures on Coarse Geometry. University Lecture Series, vol. 31. American Mathematical Society, Providence (2003)
Roe, J.: Coarse Cohomology and Index Theory on Complete Riemannian Manifolds. Memoirs of the American Mathematical Society, vol. 104, no. 497 (1993)
Serre, J.P.: Trees. Springer, New York (1980)
Deposited On: 07 May 2012 08:48
Last Modified: 06 Feb 2014 10:16
[R] recursively divide a value to get a sequence
Jim Lemon jim at bitwrit.com.au
Wed Jul 9 12:33:39 CEST 2008
On Wed, 2008-07-09 at 11:40 +0200, Anne-Marie Ternes wrote:
> Hi,
> if given the value of, say, 15000, I would like to be able to divide
> that value recursively by, say, 5, and to get a vector of a determined
> length, say 9, the last value being (set to) zero- i.e. like this:
> 15000 3000 600 120 24 4.8 0.96 0.192 0
> These are in fact concentration values from an experiment. For my
> script, I get only the starting value (here 15000), and the factor by
> which concentration is divided for each well, the last one having, by
> definition, no antagonist at all.
> I have tried to use "seq", but it can "only" do positive or negative
> increment. I didn't either find a way with "rep", "sweep" etc. These
> function normally start from an existing vector, which is not the case
> here, I have only got a single value to start with.
> I suppose I could do something "loopy", but I'm sure there is a better
> way to do it.
Well, if you really want to do it recursively (and maybe loopy as well):

recursivdiv<-function(x,denom,lendiv,firstpass=TRUE) {
 if(firstpass) lendiv<-lendiv-1
 if(lendiv > 1) {
  divvec<-c(x/denom,recursivdiv(x/denom,denom,lendiv-1,FALSE))
 }
 else divvec<-0
 if(firstpass) divvec<-c(x,divvec)
 return(divvec)
}

# recursivdiv(15000,5,9) returns 15000 3000 600 120 24 4.8 0.96 0.192 0
Wave Intensity
Wave intensity is the average power that travels through a given area as the wave travels through space. The intensity of sound waves is measured using the decibel scale.
Let's talk about wave intensity; intensity is a word that we use to describe how much energy is associated with a periodic wave. Now it's not a direct correlation so let's kind of go through this. So
first off the energy carried by a wave can be considered analogous to the energy carried in simple harmonic motion. The energy of a mass in simple harmonic motion is equal to one half times spring
constant times the square of the amplitude. So that indicates to us that energy should be proportional to the square of the amplitude of a wave. But with a periodic wave we keep on getting assaulted
again and again and again, so it doesn't really make sense to talk about how much energy is carried, because it keeps on increasing as the wave keeps on coming. So what we really ought to ask is how
much power is being carried, how much energy divided by time. So it's the rate of energy flow, the power: change in energy over change in time. Now that's measured in watts, 1 watt is 1 joule per
second and of course we're all used to that from power.
Alright so that's all fine and good but let's say that I've got a sound source here and it's got a power of 100 watts alright so it's sending out these sound waves. But I'm not hearing all 100 watts,
I'm not surrounding the speaker taking in all the energy, right? I'm only hearing part of it. So what I'm really interested in is what part of that power am I really absorbing, and what this is associated
with is something called intensity. Intensity is power per unit area, so we can see here that a sound wave coming I'm only going to get a little bit of it. So if I want to know how much power I'm
absorbing from the sound wave then what I'm going to do is I'm going to take the area of my ear and multiply by the intensity of the sound wave. So then I've got power divided by area times the area
that I'm absorbing and it all makes perfect sense. Alright there was a couple of subtleties in these and they're associated with different shapes of waves.
And let's just go ahead and think about sound waves here because I think it's a pretty simple analogy that we can make to rock concerts, so let's say that we go to a rock concert and there's a wall
of speakers. So they've got speaker, speaker, speaker just rows of them on top of each other. Now when those speakers send out a sound wave, they send out what we call a plane wave. The wave is a
plane and it comes out so equal wave fronts, equal phase comes out in the shape of a plane. Now if I'm twice as far away it's still a plane I get the whole thing I get the exactly the same intensity.
So that means that the intensity of a plane wave is constant; it doesn't depend on how far I am from the speakers, as long as my distance from the speakers is small compared to how big the speaker
array is, alright.
What about a cylindrical wave? So in this case we're going to think about a tower of speakers right? So just one on top of the other but not a plane, so now they send out a sound wave and it goes out
in cylinders out like that. Now what's the difference, well Geometrically as it goes out in cylinders the surface area of the cylinder gets bigger as I get farther from the tower. And so what happens
is, if I'm farther away from the tower I don't get as much intensity, because the power is spread out over more area. So that means that power divided by area has got to be smaller. It turns out that,
because the circumference of a circle is 2 pi r, the intensity for a cylindrical wave is proportional to 1 over r, 1 over the distance from that tower. So that means that if I've got a friend that's
twice as far from the tower than I am, then he's only going to experience half the intensity because that 2 goes down stairs.
Alright, what about the last one, a spherical wave? So in this case we're thinking about just a single speaker sitting there all by himself. Now he sends out sound waves and they take the shape of
spheres. They just go out in all 3 dimensions, so what's the difference here? Well now the sound is spread out over the area of a sphere, but the area of a sphere is 4 pi r squared, so that means that in
this case the intensity is going to drop off like 1 over r squared. So now my friend who's 2 times as far away from the speaker as I am is only going to hear a quarter, 1 over 2 squared of the
intensity. If he was 3 times farther away then he'd only hear a ninth of the intensity. Alright so that's intensity and the Geometry of sound waves.
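The three geometries in this lesson can be collected into a few lines of code; the power value is arbitrary, and the cylindrical case is taken per unit height of the tower (an assumption about units the transcript leaves implicit):

```python
import math

def intensity(power, r, geometry):
    """Intensity = power / area of the wavefront at distance r.
    plane:    constant wavefront area, so I doesn't depend on r;
    cylinder: area per unit height is 2*pi*r, so I ~ 1/r;
    sphere:   area is 4*pi*r**2, so I ~ 1/r**2."""
    if geometry == "plane":
        return power          # taking the wavefront area as 1 unit
    if geometry == "cylinder":
        return power / (2 * math.pi * r)
    if geometry == "sphere":
        return power / (4 * math.pi * r ** 2)
    raise ValueError(geometry)

P = 100.0  # watts, like the speaker in the transcript
for geom in ("plane", "cylinder", "sphere"):
    ratio = intensity(P, 2.0, geom) / intensity(P, 1.0, geom)
    print(geom, ratio)  # a friend twice as far hears 1, 1/2, 1/4 as much
```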
Is the Universe a Computer?
"I am thinking about something much more important than bombs. I am thinking about computers." - John von Neumann, 1946
There is a currently fashionable idea in physics to treat the behaviour of the universe as if it were a vast, digital computer. The universe is described by information, and that information is
modified over time just like information is transformed in a conventional computer.
We could consider the position (and velocity) of each particle as representing one "bit" of information within the universe, just like a conventional computer contains binary "1"s and "0"s. The
universe can then be viewed as an "information processor", just like any conventional computer: the data within the computer (universe) is transformed as time progresses.
So if the universe is a computer, then what, precisely, is it computing? What is its output? Well, the output is the state of the universe: the sum total of all the positions and velocities of all
the particles within the universe. As Seth Lloyd says in his book "Programming the Universe": "What does the universe compute? It computes itself". He goes on to say: "The world is composed of
elementary particles - electrons, photons, quarks - and each elementary piece of a physical system registers a chunk of information: one particle, one bit. When these pieces interact, they transform
and process that information, bit by bit. Each collision between elementary particles acts as a simple logical operation".
It's as if we are stepping through a computer program, each step of the algorithm modifying the states of the computer variables and registers. As John Barrow says: "We now have an image of the
universe as a great computer program, whose software consists of the laws of nature which run on hardware composed of the elementary particles of nature."
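A minimal caricature of this picture, purely illustrative and not a claim about actual physics, is a toy "universe" whose entire state is a row of bits updated by one fixed local rule (here elementary cellular-automaton rule 90, where each new bit is the XOR of its two neighbours): the rule plays the part of the laws of nature, and the bits play the part of the particles.

```python
def step(state):
    """One tick of the toy universe: each cell becomes the XOR of its two
    neighbours (elementary CA rule 90, with wrap-around boundaries)."""
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

# A 16-cell "universe" whose entire physical state is this bit string;
# one particle-like excitation in the middle, evolved for 8 ticks.
state = [0] * 16
state[8] = 1
for _ in range(8):
    print("".join(".#"[b] for b in state))
    state = step(state)
```

Running it prints the familiar branching pattern of rule 90: the "output" of this computer is nothing other than its own successive states, which is exactly Lloyd's point that the universe computes itself.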
(Note that this is a subtly different idea to the claim that the universe is some sort of Matrix-style simulated reality created by an advanced simulation. That theory is considered on the Living in
the Matrix page. On this page we are just examining the idea that the physical universe can be thought of as behaving as though it was a form of digital computer, with the constituent physical parts
of the universe (the particles) behaving as though they were data bits of a computer.)
For further reading on this topic, Jürgen Schmidhuber has written a paper called A Computer Scientist's View of Life, the Universe, and Everything in which he analyzes some of the implications of
actually writing a computer program equivalent to our universe. There is also a four-page Wired article: God is the Machine. There is also a Wikipedia entry on this topic: Digital Physics. There is
also a video lecture online by Seth Lloyd recorded at the Perimeter Institute: Programming the Universe.
The Universe as Information
There is currently a fashion in physics to treat all of the physical world as though it was information. This certainly ties in with the idea of the universe behaving as though it was a computer:
basically we can ask questions about the physical world, but the only response we can ever receive will be in the form of information. The idea was originally proposed by John Wheeler: "About a
decade ago, John Archibald Wheeler urged that information should take centre stage. What we call reality, he thinks, arises from the questions we ask about it and the responses we receive. "Tomorrow,
we will have learned to understand and express all of physics in the language of information", he said. The atom of information is the bit: the quantity contained in the answer to a yes or no
question. If experiments are questions we ask of nature, then the simplest of them have yes or no answers: "Did the photon arrive here, or not?", "Did the counter click, or not?" We can also ask more
complex questions, but they can always be built up from simpler yes or no questions like these." (quote taken from here).
I feel the motivation for this approach to physics comes from the difficulty we have in defining what physical, tangible reality actually is: what, precisely, is a particle? In the absence of any suitable definition of physical reality, all we are left with is the yes/no answers to our questions about the environment around us. At the end of the day, all we really have is information, and that information is all we have to define our real, tangible universe. Ed Fredkin said: "I've come to the conclusion that the most concrete thing in the world is information" (quoted from here).
So that information effectively becomes our reality. John Wheeler referred to this process by which tangible reality emerges from pure information as It from Bit. The yes/no "bits" of information (in
our universe computer) then represent individual particles.
The MP3 Universe
This fashionable trend in physics moving from a physically-oriented view of the world to one of pure information could be considered as being similar to the transition in audio recording formats.
These have moved from a physical analogue approach (vinyl records), to a physical digital approach (CD), to a format of pure information (MP3) which is independent of any physical storage medium.
Information is then viewed as more fundamental than any "physical" entity: "Ask anybody what the physical world is made of, and you are likely to be told "matter and energy". Yet if we have learned
anything from engineering, biology and physics, information is just as crucial an ingredient. Indeed, a current trend, initiated by John A. Wheeler of Princeton University, is to regard the physical
world as made of information, with energy and matter as incidentals" (quote taken from here).
This view of information as being fundamental means information should never get lost (this is a corollary of Landauer's Principle). This has led to the infamous black hole information loss paradox:
"Physicists often talk about information rather than matter because information is thought to be more fundamental".
There is a video discussion on this subject recorded at the Perimeter Institute featuring Seth Lloyd, Anthony Leggett, and Leonard Susskind: The Physics of Information.
The Monkey Universe
In his book Programming the Universe, Seth Lloyd considers how the structure of the universe could have been produced by a million monkeys typing on typewriters for ten hours a day. Clearly, the
chances of one of them successfully typing the information which defines the universe is almost infinitely small. However, he then considers what would happen if the monkeys typed into computers
rather than typewriters. The computer then interprets the monkey's random output not as text but as a computer program, i.e., as a sequence of instructions in a particular computer language. It is
now possible for a short program (with a relatively high probability of being typed by a monkey) to produce the vast complexity of our universe. For example, a few lines of computer code can produce the infinite sequence of seemingly random digits of π.
Give a monkey a typewriter and you'll just get a load of rubbish ...
... but give him a laptop computer and he might write a (very) simple computer program which could produce something highly-complex.
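To make this concrete, here is a sketch in Python (my choice of language, purely for illustration) of Gibbons' "spigot" algorithm: a few lines of deterministic code which generate the digits of π one at a time, forever:

```python
def pi_digits():
    """Generate the decimal digits of pi one by one, forever
    (Gibbons' unbounded spigot algorithm, using only integer arithmetic)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is now certain: emit it and rescale the state.
            yield n
            q, r, t, n = 10 * q, 10 * (r - n * t), t, (10 * (3 * q + r)) // t - 10 * n
        else:
            # Consume another term of the series to narrow the interval.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

A tiny, fully deterministic program thus encodes an infinite, apparently patternless stream of digits, which is exactly the point of the monkey argument.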
Hence, this analogy of the universe being a computer proves valuable in providing an explanation as to how the complexity of the universe could arise from a short computer program. (For a technical
description of this, see Section 3 of Seth Lloyd's paper Universe as Quantum Computer).
This "Monkey Universe" idea is revisited on the Living in the Matrix page.
The Game of Life
Perhaps we can see the universe behaving like a computer when we consider the Game of Life computer program, which was developed by John Conway in 1970. The Game of Life is a form of artificial life
simulation in which cells on a square grid can evolve (live, die, or remain unchanged) according to simple mathematical rules:
1. A dead cell with exactly three live neighbours becomes a live cell (birth).
2. A live cell with two or three live neighbours stays alive (survival).
3. In all other cases, a cell dies or remains dead (overcrowding or loneliness).
The behaviour of the cells is clearly very similar to the development of living cells on a Petri dish.
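The three rules above can be sketched in a few lines of Python (a hypothetical helper of my own, not Conway's original implementation), representing the live cells as a set of coordinates:

```python
from collections import Counter

def step(live):
    """Advance the Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every candidate cell has.
    neighbours = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # Rule 1 (birth): a dead cell with exactly 3 live neighbours becomes live.
    # Rule 2 (survival): a live cell with 2 or 3 live neighbours stays alive.
    # Rule 3: in all other cases the cell dies or remains dead.
    return {cell for cell, count in neighbours.items()
            if count == 3 or (count == 2 and cell in live)}
```

For example, the three-cell "blinker" pattern oscillates between a horizontal and a vertical bar on alternate generations.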
The Game of Life is an example of cellular automata. The field is dominated by Stephen Wolfram - see this Forbes article. According to that article: "Conway himself, soon began to wonder if a giant
Game of Life played on an equally giant computer wouldn't create its own living, breathing universe. Perhaps, Conway and his acolytes mused, we are merely cells on God's great grid."
The Turing Machine and Non-Computability
In 1936, the British mathematician Alan Turing proposed a design for a generalised computer which is now known as a Turing Machine. The machine was composed of a tape on which symbols were
imprinted, and that tape could move backwards and forwards in the machine. The symbols on the tape represented instructions for the computer. The machine was capable of reading the instructions off
the tape, and writing the resultant output data back to the tape:
The remarkable and important feature of a Turing Machine is that it is universal: it can compute any computable function. In other words, if you can compute something on any computer, you can compute it on a Turing Machine (this discovery led to the general-purpose processor and modern computing). So a Turing Machine is effectively equivalent to one of today's common digital computers - a PC. Indeed, it is perhaps easier to think of what is possible on a PC when we consider the function of a Turing Machine.
The Turing Machine appears technologically primitive, but it was intended as a "thought experiment" rather than a practical implementation. Alan Turing intended it as a means to show that there are some problems which can never be solved by any computer, and it is these non-computable problems we will consider next (well, after the aside below).
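As an illustration of just how simple the machine is, here is a bare-bones Turing Machine simulator in Python (the rule format is my own invention, purely for illustration):

```python
def run_turing_machine(rules, tape_str, state="start", blank="_", max_steps=10_000):
    """Run a Turing Machine. `rules` maps (state, symbol) to
    (new_state, symbol_to_write, head_move), where head_move is -1 or +1.
    Stops when the state "halt" is reached (or after max_steps)."""
    tape = dict(enumerate(tape_str))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    # Read the visited portion of the tape back as a string.
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# A tiny example machine: sweep right, inverting every bit, then halt on blank.
invert_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
```

Despite its simplicity, tables of rules like this are enough, in principle, to compute anything a PC can compute.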
How Turing's Universal Computer applies to you
So we have discussed how Turing's Universal Computer is capable of computing anything which can be computed on any other computer. This is because the solution to any problem can be computed in a
step-by-step manner, each step being a very simple operation (the Turing Machine considers writing simple symbols to a tape).
Now, what is quite fascinating is that your brain is itself a Universal Computer, a form of Turing Machine, in that it is certainly capable of manipulating simple symbols and storing data. So if
your brain is a Universal Computer, then your brain is capable of computing (or understanding) anything which can be computed or understood by any other brain.
So you are actually capable of computing or understanding anything which anyone else can understand: even when considering the brain of a genius such as Einstein. As long as a problem is split-up
into a series of simple steps, you should be able to follow the steps and come to the same conclusions. (Though it might take you considerably longer than Einstein! The Turing Machine says
nothing about how long it takes to compute the solution to a problem, it just considers whether a solution is computable or not).
Of course, this principle is not going to help you win TV general knowledge quizzes (not like me!).
The Halting Problem
Gödel's Incompleteness Theorem in mathematics states that for any particular, specified Formal Axiomatic System (FAS) (i.e., a mathematical set of theorems built up from fundamental axioms) there will always be a theorem which is true but which you cannot prove (from those particular axioms). That theorem is then said to be undecidable (for that particular FAS). (For more details of Gödel's Incompleteness Theorem, see the Meta Maths section on the Mathematical Universe page.)
Likewise, Turing found a very similar result in computing. Let's imagine we have a computer program and we want to know if it will halt (i.e., return a result) or not (i.e., by getting stuck in an
infinite loop). Now, just running the program will not give us an answer for all cases because even if the program does not halt after 1,000 years that does not prove that it will never halt - it
might halt after 1,001 years. So there is no way to tell if a program will never halt just by running it. As Gregory Chaitin says in his book, "Meta Maths": "If it does halt, you can eventually
discover that. The problem is to decide when to give up and decide that it will never halt. But there is no way to do that."
So, with this in mind that it is impossible to get a definite answer by running the program, let's try to find an answer by writing a second really intelligent program (let's call it a halting tester
) which will tell if the first program will halt or not merely by examining the code, by doing some sort of pattern recognition on the code, but not by running the program.
Now, Turing's Halting Problem result says that for that particular halting tester there will always exist a program which it is unable to tell whether it halts or not: the problem is said to be
uncomputable. This will be the case if the algorithmic complexity of the program we are testing is greater than the algorithmic complexity of our halting tester program. Our halting tester will be
simply not up to the job - the question is too hard for it.
So there is a very close connection between an FAS in maths (and Gödel's Incompleteness Theorem) and a halting tester computer program (and Turing's Halting Problem). Given a specific FAS or halting
tester there will always be a theorem which it cannot prove to be true or a program which it cannot tell if it halts or not.
Proof of the Halting Problem
The proof of the halting problem is beautiful, and worth presenting here. An excellent, clear version appears in Understanding the Halting Problem, and it is
adapted into graphical form here:
The first step of this proof is to imagine we have managed to create our "halting tester" (see last section) which can solve the halting problem for all computer programs (we will later show that
this results in a contradiction, so this perfect halting tester could never actually exist). The halting tester program is called HaltTest and it takes two inputs:
1. A program, P.
2. An input, I, for the program P.
The output of HaltTest is "Loop" if the program P goes into an infinite loop when it is given I as input. Alternatively, HaltTest outputs "Halt" if the program P halts:
For the next step, it is important to realise that a computer program can be expressed as a sequence of bytes: binary data. So a computer program can be treated as input data, and we can input the
same program, P, to both inputs of HaltTest:
The next step is to construct a simple algorithm called StrangeProgram that takes the output of HaltTest and does the following:
1. If HaltTest outputs "Loop" then StrangeProgram halts.
2. If HaltTest outputs "Halt" then StrangeProgram goes into an infinite loop.
That is, StrangeProgram does the reverse of HaltTest's output. Here is the code for StrangeProgram:
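A minimal Python sketch of that logic (the names are illustrative; `halt_test` stands for the supposed perfect tester, passed in as a function):

```python
def strange_program(halt_test, p):
    """Do the opposite of whatever the halting tester predicts
    about program p when p is fed itself as input."""
    if halt_test(p, p) == "Halt":
        # The tester said p halts on itself... so loop forever.
        while True:
            pass
    else:
        # The tester said p loops forever... so halt immediately.
        return "halted"
```

The contradiction only bites when strange_program is fed itself as input, which is exactly the construction the proof turns to next.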
and here is the graphical representation of StrangeProgram:
The final stage is to feed the function StrangeProgram back into itself as input:
and the graphical representation of that is shown here:
Now let's consider this rather peculiar arrangement. There are two possible situations depending on the output of HaltTest:
1. If HaltTest says that StrangeProgram halts when fed itself as input, then StrangeProgram goes into an infinite loop.
2. If HaltTest says that StrangeProgram goes into an infinite loop when fed itself as input, then StrangeProgram halts.
In either case, HaltTest gives the wrong result for StrangeProgram. Therefore, our "perfect" HaltTest program does not work for all cases. It is therefore not possible to create a "halting tester"
algorithm which can test for all cases of halting programs.
Diophantine Equations
The halting problem (and non-computability in general) has serious implications for mathematics. Diophantine equations, for example, are a class of equations in which the unknowns must be
positive integers (natural numbers).
The most famous Diophantine problem is Fermat's Last Theorem, which states that there are no positive integers x, y, and z which satisfy the equation:
x^n + y^n = z^n for n > 2
To solve a Diophantine equation on a computer would involve progressively searching through all the integers until a set of integers is found which satisfies the equation. However, if you fail to
find a set of suitable integers on your progressive search, there is no way of knowing if there is not still a set of suitable larger integers out there waiting to be found. So you would be
unable to say for certain that the equation cannot be solved. Do you see the similarity with the halting problem, waiting to see if a computer program halts? Even if your program has not yet
halted, that is not to say it might not halt in the distant future. So if you try to use a computer to solve a Diophantine equation you will be unable to say that it cannot be solved for the same
reason that you cannot say for definite that a program will never halt. A solution might be found in the future, just like a computer program might halt in the future.
So just like in the halting problem example, we might be tempted to build the equivalent of a "halting tester" program which could analyse a Diophantine equation and say whether it has any
solutions. This would involve writing a program to loop through all possible integers, substituting them into the Diophantine equation. We wouldn't actually run that program - we would just pass
it to be analysed by our halting tester to determine if the program halts and outputs a solution. If it halts we would know that the Diophantine equation had a solution. However, this approach
would be doomed to failure because of the halting problem: the halting tester would never be able to tell if the program halted or not (see here).
Are there uncomputable functions in physical reality?
We now move on to consider the million-dollar question: are there any uncomputable functions in physical reality? This is such an important question because if it could be shown that there were
uncomputable functions in the way the universe works it would kill stone dead the idea that the universe behaves like a computer.
If we believe in Max Tegmark's idea that the universe is a purely mathematical construct (see the section "Max Tegmark's Mathematical Universe" on the Mathematical Universe page) then we would expect
to find uncomputable functions in the universe as there are certainly uncomputable functions in mathematics, and Tegmark's idea is that reality is mathematics. So if uncomputable functions do not
appear in physical reality then there must be some constraints imposed somehow, limiting the subset of mathematics which has relevance to our universe. Those constraints would be responsible for
"filtering-out" the uncomputable functions so they would not appear in physical reality.
But if uncomputable functions are to be somehow filtered-out then how might this be achieved? I think it is useful at this point to examine the proof of the halting problem (given in the section
above). In that proof, we initially proposed the idea that we could build a "halting tester" program capable of answering any question purely by running through its mechanised process. So the
functioning of that halting tester was unhindered by the presence of any uncomputable functions which it was incapable of solving. So we can think of the smooth functioning of that halting tester as
the smooth functioning of a "universe computer", if that universe contains no troublesome uncomputable functions. So let's start our analysis by thinking of our universe as that "halting tester" in
the earlier discussion about the halting problem.
What we discovered in that halting problem proof was that we were able to construct a peculiar function called StrangeProgram which basically broke our halting tester. StrangeProgram showed that our
HaltTest program which tested for halting programs did not work in all cases. So what was it about StrangeProgram that proved so destructive to the smooth operation of our halting tester? And could a
form of StrangeProgram exist in physical reality, breaking our "computer universe" model by proving that uncomputable functions existed in our universe?
StrangeProgram worked firstly by taking the output of the HaltTest function and inverting it (see the discussion in the section above for more details). The final step of the proof was to create a
form of feedback loop so that StrangeProgram fed back into itself in a recursive manner:
The result of this feedback loop was that whatever came out of HaltTest got inverted and fed back into HaltTest, leading to a form of unstable oscillator: if the output of HaltTest said "Halt" then
the input of HaltTest was switched to make the output say "Loop", and if the output of HaltTest said "Loop" then the input was switched to make the output say "Halt".
The Halting Problem proof
If this situation was in physical reality it would lead to a logical inconsistency: a paradox. The output of HaltTest could not be both "Loop" and "Halt". In physical reality, an object cannot be two
things. For example, an object cannot be both present and absent at the same time. So if the halting problem is to have relevance in physical reality (as opposed to a purely mathematical conceptual
model) we should consider if such a paradox could possibly occur in physical reality.
The Grandfather Paradox
It seems to me that it was the inclusion of the inverting function (StrangeProgram) together with the recursive feedback loop which causes all the problems. I wondered if there could possibly be a
similar situation in physical reality, and I think the grandfather paradox is equivalent. In the grandfather paradox a man travels back in time and kills his own grandfather before the latter met the
traveller's grandmother. According to Wikipedia: "As a result, one of the traveller's parents (and by extension, the traveller himself) would never have been conceived. This would imply that he could
not have travelled back in time after all, which in turn implies the grandfather would still be alive, and the traveller would have been conceived, allowing him to travel back in time and kill his
grandfather. Thus each possibility seems to imply its own negation."
So just as in the halting problem proof, we get the same sort of feedback loop and negation that proves so troublesome. So we end up with the same sort of oscillatory paradox that we found in the
halting problem proof. And the structure of the grandfather paradox is also very similar to the structure of the halting problem proof:
The Grandfather Paradox
Note that this "grandfather paradox" program is now running on the "universe computer" itself, not just any old PC as in the halting problem proof. So the input to the grandfather paradox structure
is no longer a simple computer program, but rather it is the software of the universe itself: the state of the universe.
The grandfather paradox structure (shown in the diagram immediately above) functions in the same way as the halting problem proof. The HaltTest of the halting problem (which took a computer program
as input) is replaced by a GrandfatherAliveTest (which takes the state of the universe as input and inspects to see if the grandfather is alive). The effect of the time travel backward in time is to
effectively feed the whole "situation" - the state of the universe - back into itself in a recursive manner:
Hence we finish with the same paradoxical situation as in the halting problem proof.
So the question of whether or not uncomputable functions could exist in physical reality would depend on whether we could get this form of recursive situation caused by backward time travel, or
whether it would be prohibited (filtered-out) by some law such as Stephen Hawking's chronology protection conjecture.
Admittedly, the halting problem is just one form of uncomputable problem, but I think it represents an excellent case study to analyse the sort of "filtering" which would be required if there were to
be no uncomputable functions in physical reality. I think it's especially interesting that this type of uncomputable problem would lead to logical inconsistencies in physical reality (the grandfather
being both dead and alive at the same time). As we know there are no inconsistencies in physical reality this leads me to believe that there are no uncomputable functions in physical reality.
However, this does not prevent us from discussing and imagining all mathematical structures, even uncomputable ones. As Schmidhuber suggests here, we can talk about time travel and the grandfather
paradox even if time travel is not possible in our universe: "Although we live in a computable universe, we occasionally chat about incomputable things, such as the halting probability of a universal
Turing machine (which is closely related to Gödel's incompleteness theorem). And we sometimes discuss inconsistent worlds in which, say, time travel is possible. Talk about such worlds, however, does
not violate the consistency of the processes underlying it."
The idea that there are no uncomputable functions in the universe is in line with Max Tegmark's proposal for a Mathematical Universe which includes a Computable Universe Hypothesis (CUH). The CUH
proposes that physical reality can be entirely described by computable functions (in order to avoid all the paradoxes and inconsistencies which would otherwise wreck the Mathematical Universe):
"According to the CUH, the mathematical structure that is our universe is computable and hence well-defined in the strong sense that all its relations can be computed. There are thus no physical
aspects of our universe that are uncomputable/undecidable, eliminating the concern that Gödel's work makes it somehow incomplete or inconsistent" (for more on this, see the section And now ... the
Ultiverse! on the Anthropic Principle page). Max Tegmark was forced to append the CUH to his Mathematical Universe model in order to retain the idea that the universe is created by mathematics, in
order to avoid destructive paradoxes (see the section "Max Tegmark's Mathematical Universe" on the Mathematical Universe page).
"The integers were made by God; all else is the work of man."
- Leopold Kronecker
Mathematical Constructivism
(or ... "Why are there apparently no non-computable functions in the universe?")
So, as just discussed in the previous section, if it could be shown that there were uncomputable functions in physical reality then it would kill stone dead the idea that the universe behaves like a
computer. However, fortunately, so far it would appear that every aspect of the universe's behaviour can be modelled on a computer. For example, we can plot the orbits of the planets around the Sun
using a computer. So if there are no non-computable functions in the universe, why should this be the case? In order to provide a possible answer to this question, we will examine a radical approach
to mathematics: constructivism.
Constructivism says that mathematics should only include statements which can be deduced by a finite sequence of step-by-step constructions, starting from the "natural" numbers (1, 2, 3, etc.). This
is a major departure from conventional maths because many of the more exotic mathematical structures (such as infinity, and irrational numbers) would not be eligible as part of maths under the strict
rules of constructivism. To quote Gregory Chaitin from his book MetaMaths (in an attack on the existence of irrational numbers which are generated from an infinite series): "Some mathematicians have
what is called a 'constructive' attitude. This means that they only believe in mathematical objects that can be constructed, that, given enough time, in theory one could actually calculate. They
think that there ought to be some way to calculate a real number, to calculate it digit by digit, otherwise in what sense can it be said to have some kind of mathematical existence?"
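In that digit-by-digit spirit, here is a small sketch (a hypothetical helper, written in Python) which "constructs" the decimal digits of √2 using nothing but finite integer arithmetic:

```python
from math import isqrt

def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2) as a string.
    Uses only finite integer arithmetic: isqrt(2 * 10**(2*(n-1)))
    is floor(sqrt(2) * 10**(n-1)), i.e. sqrt(2) scaled up by 10**(n-1)."""
    return str(isqrt(2 * 10 ** (2 * (n - 1))))
```

Each digit is reached in a finite number of integer steps, which is exactly the constructivist's demand: the "infinite" object √2 is replaced by a terminating procedure for any requested precision.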
So why is constructivism relevant to our current discussion? Well, it's because computers construct their results in precisely the same step-by-step method as constructivism. And if the structure of
the universe can really be thought of as the product of some form of computer program then mathematical constructivism would appear to be the perfect type of mathematics to model the universe.
Viewed this way, the absence of exotic mathematical structures - such as infinity - from mathematical constructivism now seems like less of a major omission: if there are no infinities in physical
reality then it does not matter if there are no infinities in a mathematics based on constructivism. And if the structure of spacetime is discrete at very small scales (rather than continuous) as
many physicists now believe then the integer natural numbers of constructivism are sufficient to describe our world, rather than requiring the infinite-precision irrational numbers (which are
prohibited under constructivism). In fact, constructivism might now be appearing to be a perfect tool for describing the "real world".
As discussed at the end of the previous section, Max Tegmark had to append a Computable Universe Hypothesis (CUH) to his Mathematical Universe paper in order to avoid destructive paradoxes. Tegmark
suggested the behaviour of the universe was restricted to computable functions, and went on to suggest that mathematical constructivism played a role in determining this behaviour: "According to the
finitist school of mathematicians (which included Kronecker, Weyl and Goodstein), representing an extreme form of so-called intuitionism and constructivism, a mathematical object does not exist
unless it can be constructed from natural numbers in a finite number of steps. This leads directly to a computable structure (a computable universe)."
To sum up, it appears that there might be no non-computable functions in the universe: a mathematics based on constructivism might be the best tool to describe the universe. So I think it's important to state something which might appear obvious but which I have never actually seen written down anywhere: if the universe DOES behave like a computer, then there would be NO non-computable functions in the universe. Like I say, pretty obvious, though it's important to state it as it is the most obvious reason why there might be no non-computable functions in the universe.
Referring to the diagram above, perhaps mathematical constructivism suggests a universe which is "built-up" out of nothing rather than "carved" out of everything. This diagram and idea is considered
in the section From Ultiverse ... To Nulltiverse! at the end of the Anthropic Principle page. Instead of a situation in which absolutely everything exists without need of a reason, we would now have
a situation in which absolutely nothing would exist without a good reason for its existence. We would then have to consider ex nihilo (creation out of nothing) scenarios for the universe.
Arguments against the universe behaving like a computer
Several physicists have raised objections to this idea of the universe behaving like a computer. Their objections have generally presented examples of non-computable behaviour in the universe (so the
universe could not possibly have computed those functions).
For example, if quantum behaviour is fundamentally, genuinely random (as opposed to just pseudo-random) then no computer could possibly produce those results (the output of a computer is always deterministic, never random). However, some physicists believe that quantum behaviour is actually fundamentally deterministic after all, governed by processes which are hidden from our analytic methods (we can only see the apparently random results). Hence, to quote Einstein, "God does not play dice". As Schmidhuber says in his paper A Computer Scientist's View of Life, the Universe, and Everything:
"Is there true randomness in our universe, in addition to the simple physical laws? True randomness essentially means that there is no short algorithm computing 'the precise collapse of the wave
function', and what is perceived as noise by today's physicists. Our fundamental inability to perceive our universe's state does not imply its true randomness, though. For instance, there may be a
very short algorithm computing the positions of electrons lightyears apart in a way that seems like noise to us but actually is highly regular." So there might very well not be true (uncomputable)
randomness in the universe - there might just be the appearance of randomness.
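A familiar example from computing illustrates the point: a pseudo-random number generator, such as a linear congruential generator (the constants below are the commonly used Numerical Recipes ones), is a one-line deterministic rule whose output nevertheless looks like noise:

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """A linear congruential generator: completely deterministic,
    yet its output passes casual inspection as 'random' noise."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scale into [0, 1)
```

Two copies started from the same seed reproduce exactly the same "noise", which is the sense in which apparent randomness can hide a very short algorithm.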
In his book New Theories of Everything, John Barrow suggests that if we have a law of physics which depends on some physical property being differentiated (e.g., the wave equation) then this could be
non-computable if the curve contained a "kink" or "crease". This would be the case if spacetime was actually discrete (i.e., the universe being a digital computer) rather than smooth and continuous:
(Figure: the absolute value function |x| is continuous, but fails to be differentiable at x = 0, as there is a "kink" in the curve at x = 0 - i.e., the curve is not smooth there. Diagram from the Wikipedia entry for Derivative.)
Hence, when we consider the derivative of the curve (the "rate of change" of the curve) we find it is not continuous: it is disjoint at the point of the "kink" at x = 0. The derivative (the "rate of
change" of the top curve) is -1 for negative values of x, and is +1 for positive values of x, but the derivative is not defined (i.e., non-computable) at x = 0.
John Barrow then goes on to argue that this implies that some aspects of the universe are non-computable. However, I feel it just shows how inadequate our formulae for the laws of nature would be if
the universe really was discrete. The Newtonian calculus provided mathematical methods for dealing with the infinitely small, and modelled the physical world as a continuum. As a result, our
equations based on calculus containing derivatives (such as the wave equation) have been developed on the basis that spacetime is smooth and continuous, not discrete. If spacetime is discrete then we
would need to develop new, more accurate analytical formulae rather than these approximations which are only capable of describing a continuous spacetime (just like Newton's approximate theory of
gravitation was replaced by Einstein's more accurate theory of general relativity). As the Wikipedia entry on Digital Physics notes: "Proponents of digital physics, however, reject the very notion of
the continuum, and claim that the existing continuous theories are just approximations of a true discrete theory".
To sum up, we certainly cannot use our current (possibly approximate) laws to infer that the universe is continuous and not discrete, as John Barrow suggests.
Comments are now closed on this page.
Interesting article. So the idea is that the current state of the universe is the output of all the previous operations, and the input for the next operation? What a cool idea. I wonder... if and when they prove this to be true and go about trying to hack the universe... I wonder what they will find? I bet the universe is running on Linux. - Bob, 9th November 2008
Hi Bob, yes, I'm still working on this article. It's definitely quite a fashionable approach at the moment. You think the universe would be open source? Maybe that would explain why we can understand
it! - Andrew Thomas, 10th November 2008
The Earth is a computer, and the answer is 42. - Glen, 12th November 2008
Excellent article, a really good read. I don't know how many physicists believe that quantum processes are deterministic - Bell's theorem did a number on that. Also, I always thought that radioactive
decay was one of the few examples of true randomness, but how do you test for randomness? Is there an RNG with an algorithm so complex and with such a large period that we could not tell it apart from true randomness? (Mersenne Twister?)
I think I just hurt my head.
And the answer is DEFINITELY 42. - Dave, 13th November 2008
Hi Dave and Glen, I'm glad you've found the answer already!
Dave, Bell's theorem is a test for quantum non-locality, not randomness - see the "Quantum Entanglement" page for more details. As far as testing for randomness, yes, you're right: we consider a
sequence to be random if it cannot be compressed, cannot be produced by a simpler algorithm. If an algorithm cannot be compressed to something simpler, it is said to be "elegant". Gregory Chaitin has
shown that we cannot prove an algorithm to be elegant if the program you are using to prove the elegance of the algorithm is simpler than the algorithm you are testing: "How can we be sure that we
have the best theory? How can we tell whether a program is elegant? The answer, surprisingly enough, is that we can't!"
So you're quite right: if the apparent quantum randomness is actually just produced by a really complex algorithm then it would be impossible to prove it was not random after all. (I hadn't heard of the Mersenne Twister - very interesting, thanks.)
However, if the answer really is 42 then that sounds like a very elegant solution. -
Andrew Thomas, 13th November 2008
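The compressibility criterion discussed in the reply above is easy to try for yourself. A rough sketch (my own, not part of the original discussion): compare how well a patterned byte sequence and Mersenne Twister output compress under zlib. Note this proves nothing about true randomness - it only shows the pseudo-random stream looks incompressible to this particular compressor.

```python
# A regular sequence compresses dramatically, while output from a good
# pseudo-random generator (Python's random module uses the Mersenne
# Twister) barely compresses at all -- even though the latter is, by
# construction, not truly random.
import random
import zlib

regular = bytes(i % 7 for i in range(10_000))          # obvious structure

rng = random.Random(42)                                # Mersenne Twister
pseudo = bytes(rng.randrange(256) for _ in range(10_000))

print(len(zlib.compress(regular)))   # small: the repeating pattern is exploited
print(len(zlib.compress(pseudo)))    # close to 10,000: looks incompressible
```

This is exactly Chaitin's point turned around: failing to compress a sequence never proves it is random, it only shows our compressor was too simple to find the underlying algorithm.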
Very good article. A great read - I just had one question though. You stated:
"The output of HaltTest could not be both "Loop" and "Halt". In physical reality, an object cannot be two things. For example, an object cannot be both present and absent at the same time."
But isn't that pretty similar to the basis of quantum theory? Of course if the universe were a quantum computer... - Alec, 14th November 2008
Thanks Alec. Very interesting. As you suggest, if the universe really does behave like a computer then it behaves like a quantum computer. This is because the fundamentals of the universe are based
on quantum mechanical foundations.
So the particles which compose the universe are in quantum superposition states (i.e., a mixture of both 0 and 1, or "in two places at the same time"). But, in quantum theory an object can only be
"in two places at the same time" (i.e., a particle in a superposition state) UNTIL we try to detect the particle. And then we only find the particle in one precise position. We DON'T see
superposition states in our physical reality. We don't see cats alive and dead at the same time. And when we observe a quantum qubit in quantum computing we only observe it to be 0 or 1 ("Eventually, however, observing the system would cause it to collapse into a single quantum state corresponding to a single answer"). So even a quantum computer only gives a single output.
Things in our physical reality - when we observe them - are not in superposition states, they have definite values. So, yes, the output of HaltTest COULD be a mix of "Loop" and "Halt" if it was a
particle in a quantum superposition state, but as soon as we observe it we find it has only one of those two values. The qubit is EITHER 1 or 0. The grandfather cannot be both alive and dead.
(I deliberately did not talk about quantum computers in the main article as they are basically just another form of computer: all the discussion of non-computability applies equally to quantum
computers as it does to conventional computers. As Gordon McCabe said: "Although quantum computers might be able to perform certain calculations faster than computers based upon the notion of a
Turing machine, the collection of uncomputable functions for a quantum computer is the same as the collection of uncomputable functions for a Turing machine".) - Andrew Thomas, 15th November 2008
Hi Andrew, I enjoyed your website very much. After reading this article one thing confuses me a little bit. You stated: "Are there any uncomputable functions in physical reality? This is such an
important question because if it could be shown that there were uncomputable functions in the way the universe works it would kill stone dead the idea that the universe behaves like a computer."
I don't think so. An uncomputable function will result in a closed loop, but will not loop endlessly because of the finiteness of the universe. And stating "TRUE=FALSE" will not harm any physical
computer or universe. (Otherwise our world would have dissolved in the very moment I wrote this comment?) - Donnerstilzchen, 19th November 2008
Hi Donnerstilzchen, thanks, that's some really interesting points.
Firstly, yes, I suppose there would be no such thing as an "infinite loop" in a finite universe. But I think if we did have a similar inverting loop then we would have things flashing on and off! For
example, the grandfather might be flashing alive, then dead, then alive, then dead!
As for your second statement, just writing "TRUE=FALSE" doesn't actually prove anything about mathematics. The famous example of this is the statement "This statement is false". If the statement is
true, then the statement is false. But if the statement is false, then it's true. So we appear to have an inconsistent paradox (so that's the kind of thing you were suggesting). But the resolution to
this paradox was provided by Alfred Tarski who said you are actually making the statement in a "meta-language", i.e., a language ABOUT mathematics, you are not making the statement in mathematics
itself. So that is allowed and there is no paradox.
From Wikipedia: "It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a
system from becoming self-referential."
Now, if you could *prove* that TRUE=FALSE using mathematics, then we really would be in trouble. -
Andrew Thomas, 19th November 2008
Thank you Andrew for pointing me to Alfred Tarski. Thinking about this, I came to the conclusion that there can be no self reference problems on the physical layer of reality. All the fuzz is on the
interpretation layer. Next time I see a traffic light flashing I can imagine that this is a mechanism in the process of evaluating a deep question with a self contradiction in it... -
Donnerstilzchen, 23rd November 2008
He he, quite. Thanks for your comments. - Andrew, 24th November 2008
Yes, we are in a computer. People have called this computer 'God'.
At one time, people thought there were many of these 'computers' fighting each other in the sky.
Then the one true computer started downloading messages to selected agents who searched for the one true computer.
Then the one true computer downloaded himself into the very matrix he created to straighten things out, but people killed him (he had the cheat code for 'god mode' but didn't want to use it, to prove that he could come back from death). - Noah Hornberger, 24th July 2009
I have been saying this for years. The earth can be considered like a magnetic hard drive: fill it up too much, or don't defrag, and it will crash and fail. It's all about magnetism. We are in the final days of filling the earth's 'disk', and something has to give soon. Get ready for the great defrag of the late great planet earth... - nerf, 25th September 2009
Nerf, I like that analogy. Maybe ideally we'd prefer a defragging, but a reformatting is more likely what we'll end up with. - Andrew Thomas, 25th September 2009
You state that a Turing Machine can compute anything which can be computed on any other computer. Can you prove that? - Robert Puttnam, 12th October 2009
Hi Robert. Yes, Paul Davies explains a nice proof in his book "The Mind Of God". Here it is: "Turing demonstrated that it was possible to construct a *universal* Turing Machine which is capable of
simulating all other Turing Machines. The reason why such a universal machine can exist is simple. Any machine can be specified by giving a systematic procedure for its construction: washing
machines, sewing machines, adding machines, Turing Machines. The fact that a Turing Machine is itself a machine to carry out a procedure is the key point. Hence, a universal Turing Machine can be
instructed to first read off the specification of any given Turing Machine, then reconstruct its internal logic, and finally execute its function. Clearly, then, the possibility exists of a
general-purpose machine capable of performing all mathematical tasks. One no longer needs an adding machine to add, a multiplying machine to multiply, and so on. A single machine could do it all." -
Andrew Thomas, 12th October 2009
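Davies's "universal machine" idea can be made concrete with a toy simulator (a sketch of my own, not Davies's): one general-purpose routine that first reads off the specification of a Turing machine - here a transition table - and then executes it. The binary-increment machine below is just one illustrative specification; any other table would run on the same simulator.

```python
# A minimal universal-machine sketch: run_tm() knows nothing about any
# particular task; the "machine" is data, supplied as a transition table
# {(state, symbol): (write, move, next_state)}.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine given as a transition table."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# One example specification -- binary increment (most-significant bit first):
# walk to the rightmost bit, then carry leftwards.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm(increment, "1011"))   # prints 1100 (binary 11 + 1 = 12)
```

The key point Davies makes survives in miniature: the simulator is itself just a procedure, so it can treat any machine's specification as input rather than needing a separate adding machine, multiplying machine, and so on.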
Hi Andrew,
Truly a nice piece outlining the current arguments, which essentially come down to asking whether mathematics is a discovery or an invention, and by extension whether it is capable of capturing all of logic or is merely a subset of a larger truth. Since Gödel has already made it quite evident that the latter is true, I would submit that the question of whether or not the universe is a computer cannot by itself serve as proof of whether it is a structure mandated by reason or not. I have long believed it was this realization that had Bertrand Russell abandon mathematics as his full-time concern in favour of philosophy, since he felt only through it could he explore beyond the limitations of mathematics in relation to reason.
- Phil Warnell, 26th November 2009
Hi Phil, thanks for that. Yes, John Barrow has used the results of Gödel and Gregory Chaitin to suggest that even if we think we have found a theory of everything to describe the universe, we could never be certain that it was the most elegant (i.e., simplest) description. Basically, if the only analytical tools you have at hand are the tools you can find WITHIN the universe then you're always going to have a problem describing the ENTIRE universe (as the complexity of the entire universe will be greater than that of your tools). Your tools will not be up to the job. - Andrew Thomas, 26th November 2009
Molecular Modelling: Principles and Applications, second edition
On this page:
Preface to the second edition
Preface to the first edition
Comprehensive contents listing
Colour figures
Appendices (Acronyms in Quantum Chemistry, Bioinformatics abbreviations and acronyms, Sequence/structure databases)
This book provides a detailed description of the techniques employed in molecular modelling and computational chemistry. The first part of the book covers the two major methods used to describe the
interactions within a system (quantum mechanics and molecular mechanics). The second part then deals with techniques that use such energy models, including energy minimisation, molecular dynamics,
Monte Carlo simulations and conformational analysis. The author also discusses the use of more advanced modelling techniques such as the calculation of free energies and the simulation of chemical
reactions. In addition he considers aspects of both chemoinformatics and bioinformatics and techniques that can be used to design new molecules with specific properties. Many of the topics are
treated in considerable depth but the reader is assumed to have but a basic knowledge of the relevant physical and chemical principles.
Most of the theoretical sections are accompanied by simple calculations together with examples drawn from the literature. The book is well illustrated and a colour plate section highlights the impact
of computer molecular graphics. The book will prove a valuable text for postgraduate students and professionals and many sections will be useful to final-year undergraduates taking courses in
molecular modelling or computational chemistry.
The impetus for this second edition is a desire to include some of the new techniques that have emerged in recent years and also extend the scope of the book to cover certain areas that were
under-represented (even neglected) in the first edition. In this second volume there are three topics that fall into the first category (density functional theory, bioinformatics/protein structure
analysis and chemoinformatics) and one main area in the second category (modelling of the solid-state). In addition, of course, a new edition provides an opportunity to take a critical view of the
text and to re-organise and update the material. Thus whilst much remains from the first edition, and this second book follows much the same path through the subject, readers familiar with the first
edition will find some changes which I hope they will agree are for the better.
As with the first edition we initially consider quantum mechanics, but this is now split into two chapters. Thus Chapter 2 provides an introduction to the ab initio and semi-empirical approaches
together with some examples of the uses of quantum mechanics. Chapter 3 covers more advanced aspects of the ab initio approach, density functional theory and the particular problems of the
solid-state. Molecular mechanics is the subject of chapter 4 and then in Chapter 5 we consider energy minimisation and other "static" techniques. Chapters 6, 7 and 8 deal with the two main simulation
methods (molecular dynamics and Monte Carlo). Chapter 9 is devoted to the conformational analysis of "small" molecules but also includes some topics (e.g. cluster analysis, principal components
analysis) that are widely used in informatics. In Chapter 10 the problems of protein structure prediction and protein folding are considered; this chapter also contains an introduction to some of the
more widely used methods in bioinformatics. In Chapter 11 we draw upon material from the previous chapters in a discussion of free energy calculations, continuum solvent models, methods for
simulating chemical reactions and defects in solids. Finally, Chapter 12 is concerned with modelling and chemoinformatics techniques for discovering and designing new molecules, including database
searching, docking, de novo design, quantitative structure-activity relationships and combinatorial library design.
As in the first edition, the inexorable pace of change means that what is currently considered "cutting edge" will soon become routine. The examples are thus chosen primarily because they illuminate
the underlying theory rather than because they are the first application of a particular technique or are the most recent available. In a similar vein, it is impossible in a volume such as this to
even attempt to cover everything and so there are undoubtedly areas which are under-represented. This is not intended to be a definitive historical account nor a review of the current state-
of-the-art. Thus, whilst I have tried to include many literature references, it is possible that the invention of some technique may appear to be incorrectly attributed or a "classic" application may be missing. A general guiding principle has been to focus on those techniques that are in widespread use rather than those which are the province of one particular research group.
Despite these caveats I hope that the coverage is sufficient to provide a solid introduction to the main areas and also that those readers who are "experts" will find something new to interest them.
Molecular modelling used to be restricted to a small number of scientists who had access to the necessary computer hardware and software. Its practitioners wrote their own programs, managed their own
computer systems and mended them when they broke down. Today's computer workstations are much more powerful than the mainframe computers of even a few years ago and can be purchased relatively
cheaply. It is no longer necessary for the modeller to write computer programs as software can be obtained from commercial software companies and academic laboratories. Molecular modelling can now be
performed in any laboratory or classroom.
This book is intended to provide an introduction to some of the techniques used in molecular modelling and computational chemistry, and to illustrate how these techniques can be used to study
physical, chemical and biological phenomena. A major objective is to provide, in one volume, some of the theoretical background to the vast array of methods available to the molecular modeller. I
also hope that the book will help the reader to select the most appropriate method for a problem and so make the most of his or her modelling hardware and software. Many modelling programs are
extremely simple to use and are often supplied with seductive graphical interfaces which obviously helps to make modelling techniques more accessible, but it can also be very easy to select a wholly
inappropriate technique or method.
Most molecular modelling studies involve three stages. In the first stage a model is selected to describe the intra- and inter- molecular interactions in the system. The two most common models that
are used in molecular modelling are quantum mechanics and molecular mechanics. These models enable the energy of any arrangement of the atoms and molecules in the system to be calculated, and allow
the modeller to determine how the energy of the system varies as the positions of the atoms and molecules change. The second stage of a molecular modelling study is the calculation itself, such as an
energy minimisation, a molecular dynamics or Monte Carlo simulation, or a conformational search. Finally, the calculation must be analysed, not only to calculate properties but also to check that it
has been performed properly.
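Those three stages can be illustrated in miniature (a sketch of my own, not an example from the book): a Lennard-Jones energy model for two argon atoms (stage one), a steepest-descent minimisation of their separation (stage two), and a check of the answer against the analytic minimum at r = 2^(1/6)σ (stage three). The argon parameters are approximate textbook values, quoted here for illustration only.

```python
# Rough argon Lennard-Jones parameters (Angstrom, kcal/mol) -- illustrative.
SIGMA, EPSILON = 3.405, 0.238

def lj_energy(r):
    """Stage 1: the energy model, E(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (SIGMA / r) ** 6
    return 4 * EPSILON * (sr6**2 - sr6)

def lj_force(r):
    """Force along r: the negative derivative of the energy."""
    sr6 = (SIGMA / r) ** 6
    return 24 * EPSILON * (2 * sr6**2 - sr6) / r

def minimise(r, step=0.01, tol=1e-8, max_iter=100_000):
    """Stage 2: steepest descent along the single coordinate r."""
    for _ in range(max_iter):
        f = lj_force(r)
        if abs(f) < tol:
            break
        r += step * f          # move downhill along the force
    return r

r_min = minimise(4.0)
print(r_min)                                        # ~ 3.822 = 2**(1/6) * sigma
print(abs(r_min - 2 ** (1 / 6) * SIGMA) < 1e-4)     # Stage 3: agrees with theory
```

Real calculations differ only in scale: the energy model has thousands of terms, the minimiser works in 3N dimensions, and the analysis involves far more than one comparison, but the three-stage shape of the study is the same.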
The book is organised so that some of the techniques discussed in later chapters refer to material discussed earlier, though I have tried to make each chapter as independent of the others as
possible. Some readers may therefore be pleased to know that it is not essential to completely digest the chapters on quantum mechanics and molecular mechanics in order to read about methods for
searching conformational space! Readers with experience in one or more areas may of course wish to be more selective.
I have tried to provide as much of the underlying theory as seems appropriate to enable the reader to understand the fundamentals of each method. In doing so I have assumed some background knowledge
of quantum mechanics, statistical mechanics, conformational analysis and mathematics. A reader with an undergraduate degree in chemistry should have covered this material, which should also be
familiar to many undergraduates in the final year of their degree course. Full discussions can be found in the suggestions for further reading at the end of each chapter. I have also attempted to
provide a reasonable selection of original references, though in a book of this scope it is obviously impossible to provide a comprehensive coverage of the literature. In this context, I apologise in
advance if any technique is incorrectly attributed.
In Chapter 1 we consider some of the historical background to molecular modelling and discuss a number of important general principles that are common to many modelling methods. We also examine the
use of computer graphics, the Internet and the World-Wide Web and the molecular modelling literature. Chapter 1 concludes with a brief summary of some relevant mathematical concepts. Chapters 2 and 3
describe quantum mechanics and molecular mechanics, which are the two major methods used to model the interactions within a molecular system. These methods can be used to calculate the energy of a
given arrangement of the atoms as well as certain other properties. In chapters 4-8 we examine energy minimisation, molecular dynamics, Monte Carlo simulations and conformational analysis. These
techniques use an appropriate energy model to determine a wide range of structural and thermodynamic properties. The final two chapters describe various techniques that combine concepts from previous
chapters. In Chapter 8 we discuss the calculation of free energies using computer simulation, continuum solvent models, and methods for simulating chemical reactions. Chapter 9 is concerned with computational methods for discovering and designing new molecules, such as database searching, de novo design and quantitative structure-activity relationships.
The range of systems that can be considered in molecular modelling is extremely broad, from isolated molecules through simple atomic and molecular liquids to polymers, biological macromolecules such
as proteins and DNA and solids. Many of the techniques are illustrated with examples chosen to reflect the breadth of applications. It is inevitable that for reasons of space some techniques must be
dealt with in a rudimentary fashion (or not at all), and that many interesting and important applications cannot be described. Molecular modelling is a rapidly developing discipline, and has
benefitted from the dramatic improvements in computer hardware and software of recent years. Calculations that were major undertakings only a few years ago can now be performed using personal
computing facilities. Thus, examples used to indicate the 'state of the art' at the time of writing will invariably be routine within a short time.
Preface to the second edition
Preface to the first edition
Symbols and physical constants
1. USEFUL CONCEPTS IN MOLECULAR MODELLING
1.1 Introduction
1.2 Coordinate systems
1.3 Potential energy surfaces
1.4 Molecular graphics
1.5 Surfaces
1.6 Computer hardware and software
1.7 Units of length and energy
1.8 The molecular modelling literature
1.9 The Internet
1.10 Mathematical concepts
1.10.1 Series expansions
1.10.2 Vectors
1.10.3 Matrices, eigenvectors and eigenvalues
1.10.4 Complex numbers
1.10.5 Lagrange multipliers
1.10.6 Multiple integrals
1.10.7 Some basic elements of statistics
1.10.8 The Fourier series, Fourier transform and fast-Fourier transform
2. AN INTRODUCTION TO COMPUTATIONAL QUANTUM MECHANICS
2.1 Introduction
2.1.1 Operators
2.1.2 Atomic units
2.1.3 Exact solutions to the Schrödinger equation
2.2 One-electron atoms
2.3 Polyelectronic atoms and molecules
2.3.1 The Born-Oppenheimer approximation
2.3.2 The helium atom
2.3.3 General polyelectronic systems and Slater determinants
2.4 Molecular orbital calculations
2.4.1 Calculating the energy from the wavefunction: the hydrogen molecule
2.4.2 The energy of a general polyelectronic system
2.4.3 Shorthand representations of the one- and two-electron integrals
2.4.4 The energy of a closed-shell system
2.5 The Hartree-Fock equations
2.5.1 Hartree-Fock calculations for atoms and Slater's rules
2.5.2 Linear combination of atomic orbitals (LCAO) in Hartree-Fock theory
2.5.3 Closed-shell systems and the Roothaan-Hall equations
2.5.4 Solving the Roothaan-Hall equations
2.5.5 A simple illustration of the Roothaan-Hall approach
2.5.6 Application of the Hartree-Fock equations to molecular systems
2.6 Basis sets
2.6.1 Creating a basis set
2.7 Calculating molecular properties using ab initio quantum mechanics
2.7.1 Setting up the calculation and the choice of coordinates
2.7.2 Energies, Koopmans' theorem and ionisation potentials
2.7.3 Calculation of electric multipoles
2.7.4 The total electron density distribution and molecular orbitals
2.7.5 Population analysis
2.7.6 Mulliken and Löwdin population analysis
2.7.7 Partitioning electron density: the theory of atoms in molecules
2.7.8 Bond orders
2.7.9 Electrostatic potentials
2.7.10 Thermodynamic and structural properties
2.8 Approximate molecular orbital theories
2.9 Semi-empirical methods
2.9.1 Zero-differential overlap
2.9.2 CNDO
2.9.3 INDO
2.9.4 NDDO
2.9.5 MINDO/3
2.9.6 MNDO
2.9.7 AM1
2.9.8 PM3
2.9.9 SAM1
2.9.10 Programs for semi-empirical quantum mechanical calculations
2.10 Hückel theory
2.10.1 Extended Hückel theory
2.11 Performance of semi-empirical methods
Appendix 2.1 Some Common Acronyms Used in Computational Quantum Chemistry
3. ADVANCED AB INITIO METHODS, DENSITY FUNCTIONAL THEORY AND SOLID-STATE QUANTUM MECHANICS
3.1 Introduction
3.2 Open-shell systems
3.3 Electron correlation
3.3.1 Configuration interaction
3.3.2 Many body perturbation theory
3.4 Practical considerations when performing ab initio calculations
3.4.1 Convergence of self-consistent field calculations
3.4.2 The direct SCF method
3.4.3 Calculating derivatives of the energy
3.4.4 Basis set superposition error
3.5 Energy component analysis
3.5.1 Morokuma analysis of the water dimer
3.6 Valence bond theories
3.7 Density functional theory
3.7.1 Spin-polarised density functional theory
3.7.2 The exchange-correlation functional
3.7.3 Beyond the local density approximation: gradient-corrected functionals
3.7.4 Hybrid Hartree-Fock/Density Functional Methods
3.7.5 Performance and applications of density functional theory
3.8 Quantum mechanical methods for studying the solid-state
3.8.1 Introduction
3.8.2 Band theory and orbital-based approaches
3.8.3 The periodic Hartree-Fock approach to studying the solid state
3.8.4 The nearly-free electron approximation
3.8.5 The Fermi surface and density of states
3.8.6 Density Functional Methods for studying the solid state: plane waves and pseudopotentials
3.8.7 Application of solid-state quantum mechanics to the group 14 elements
3.9 The future role of quantum mechanics: theory and experiment working together
Appendix 3.1 Alternative Expression for a Wavefunction Satisfying Bloch's Theorem
4. EMPIRICAL FORCE FIELD MODELS: MOLECULAR MECHANICS
4.1 Introduction
4.1.1 A simple molecular mechanics force field
4.2 Some general features of molecular mechanics force fields
4.3 Bond stretching
4.4 Angle bending
4.5 Torsional terms
4.6 Improper torsions and out-of-plane bending motions
4.7 Cross terms: Class 1, 2 and 3 force fields
4.8 Introduction to non-bonded interactions
4.9 Electrostatic interactions
4.9.1 The central multipole expansion
4.9.2 Point-charge electrostatic models
4.9.3 Calculating partial atomic charges
4.9.4 Charges derived from the molecular electrostatic potential
4.9.5 Deriving charge models for large systems
4.9.6 Rapid methods for calculating atomic charges
4.9.7 Beyond partial atomic charge models
4.9.8 Distributed multipole models
4.9.9 Using charge schemes to study aromatic-aromatic interactions
4.9.10 Polarisation
4.9.11 Solvent dielectric models
4.10 van der Waals interactions
4.10.1 Dispersive interactions
4.10.2 The repulsive contribution
4.10.3 Modelling van der Waals interactions
4.10.4 van der Waals interactions in polyatomic systems
4.10.5 Reduced units
4.11 Many-body effects in empirical potentials
4.12 Effective pair potentials
4.13 Hydrogen bonding in molecular mechanics
4.14 Force field models for the simulation of liquid water
4.14.1 Simple water models
4.14.2 Polarisable water models
4.14.3 Ab initio potentials for water
4.15 United atom force fields and reduced representations
4.15.1 Other simplified models
4.16 Derivatives of the molecular mechanics energy function
4.17 Calculating thermodynamic properties using a force field
4.18 Force field parametrisation
4.19 Transferability of force field parameters
4.20 The treatment of delocalised π-systems
4.21 Force fields for inorganic molecules
4.22 Force fields for solid-state systems
4.22.1 Covalent solids: zeolites
4.22.2 Ionic solids
4.23 Empirical potentials for metals and semiconductors
Appendix 4.1 The Interaction Between Two Drude Molecules
5. ENERGY MINIMISATION AND RELATED METHODS FOR EXPLORING THE ENERGY SURFACE
5.1 Introduction
5.1.1 Energy minimisation: statement of the problem
5.1.2 Derivatives
5.2 Non-derivative minimisation methods
5.2.1 The simplex method
5.2.2 The sequential univariate method
5.3 Introduction to derivative minimisation methods
5.4 First-order minimisation methods
5.4.1 The steepest descents method
5.4.2 Line search in one dimension
5.4.3 Arbitrary step approach
5.4.4 Conjugate gradients minimisation
5.5 Second derivative methods: the Newton-Raphson method
5.5.1 Variants on the Newton-Raphson method
5.6 Quasi-Newton methods
5.7 Which minimisation method should I use?
5.7.1 Distinguishing between minima, maxima and saddle points
5.7.2 Convergence criteria
5.8 Applications of energy minimisation
5.8.1 Normal mode analysis
5.8.2 The study of intermolecular processes
5.9 Determination of transition structures and reaction pathways
5.9.1 Methods to locate saddle points
5.9.2 Reaction path following
5.9.3 Transition structures and reaction pathways for large systems
5.9.4 The transition structures of pericyclic reactions
5.10 Solid-state systems: lattice statics and lattice dynamics
6. COMPUTER SIMULATION METHODS
6.1 Introduction
6.1.1 Time averages, ensemble averages and some historical background
6.1.2 A brief description of the molecular dynamics method
6.1.3 The basic elements of the Monte Carlo method
6.1.4 Differences between the molecular dynamics and Monte Carlo methods
6.2 Calculation of simple thermodynamic properties
6.2.1 Energy
6.2.2 Heat capacity
6.2.3 Pressure
6.2.4 Temperature
6.2.5 Radial distribution functions
6.3 Phase space
6.4 Practical aspects of computer simulation
6.4.1 Setting up and running a simulation
6.4.2 Choosing the initial configuration
6.5 Boundaries
6.5.1 Periodic boundary conditions
6.5.2 Non-periodic boundary methods
6.6 Monitoring the equilibration
6.7 Truncating the potential and the minimum image convention
6.7.1 Non-bonded neighbour lists
6.7.2 Group-based cutoffs
6.7.3 Problems with cutoffs and how to avoid them
6.8 Long-range forces
6.8.1 The Ewald summation method
6.8.2 The reaction field and image charge methods
6.8.3 The cell multipole method for non-bonded interactions
6.9 Analysing the results of a simulation and estimating errors
Appendix 6.1 Basic Statistical Mechanics
Appendix 6.2 Heat Capacity and Energy Fluctuations
Appendix 6.3 The Real Gas Contribution to the Virial
Appendix 6.4 Translating Particle Back into Central Box
7. MOLECULAR DYNAMICS SIMULATION METHODS
7.1 Introduction
7.2 Molecular dynamics using simple models
7.3 Molecular dynamics with continuous potentials
7.3.1 Finite difference methods
7.3.2 Predictor-corrector integration methods
7.3.3 Which integration algorithm is most appropriate?
7.3.4 Choosing the time step
7.3.5 Multiple time step dynamics
7.4 Setting up and running a molecular dynamics simulation
7.4.1 Calculating the temperature
7.5 Constraint dynamics
7.6 Time-dependent properties
7.6.1 Correlation functions
7.6.2 Orientational correlation functions
7.6.3 Transport properties
7.7 Molecular dynamics at constant temperature and pressure
7.7.1 Constant temperature dynamics
7.7.2 Constant pressure dynamics
7.8 Incorporating solvent effects into molecular dynamics: potentials of mean force and stochastic dynamics
7.8.1 Practical aspects of stochastic dynamics simulations
7.9 Conformational changes from molecular dynamics simulations
7.10 Molecular dynamics simulations of chain amphiphiles
7.10.1 Simulation of lipids
7.10.2 Simulations of Langmuir-Blodgett films
7.10.3 Mesoscale modelling: Dissipative Particle Dynamics
Appendix 7.1 Energy Conservation in Molecular Dynamics
8. MONTE CARLO SIMULATION METHODS
8.1 Introduction
8.2 Calculating properties by integration
8.3 Some theoretical background to the Metropolis method
8.4 Implementation of the Metropolis Monte Carlo method
8.4.1 Random number generators
8.5 Monte Carlo simulation of molecules
8.5.1 Rigid molecules
8.5.2 Monte Carlo simulations of flexible molecules
8.6 Models used in Monte Carlo simulations of polymers
8.6.1 Lattice models of polymers
8.6.2 'Continuous' polymer models
8.7 'Biased' Monte Carlo methods
8.8 Tackling the problem of quasi-ergodicity: J-walking and multicanonical Monte Carlo
8.8.1 J-walking
8.8.2 The multicanonical Monte Carlo method
8.9 Monte Carlo sampling from different ensembles
8.9.1 Grand canonical Monte Carlo simulations
8.9.2 Grand canonical Monte Carlo simulations of adsorption processes
8.10 Calculating the chemical potential
8.11 The configurational bias Monte Carlo method
8.11.1 Applications of the configurational bias Monte Carlo method
8.12 Simulating phase equilibria by the Gibbs ensemble Monte Carlo method
8.13 Monte Carlo or molecular dynamics?
Appendix 8.1 The Marsaglia Random Number Generator
9. CONFORMATIONAL ANALYSIS
9.1 Introduction
9.2 Systematic methods for exploring conformational space
9.3 Model-building approaches
9.4 Random search methods
9.5 Distance geometry
9.5.1 The use of distance geometry in NMR
9.6 Exploring conformational space using simulation methods
9.7 Which conformational search method should I use? A comparison of different approaches
9.8 Variations upon the standard methods
9.8.1 The systematic unbounded multiple minimum method (SUMM)
9.8.2 Low-Mode Search
9.9 Finding the global energy minimum: Evolutionary algorithms and simulated annealing
9.9.1 Genetic and evolutionary algorithms
9.9.2 Simulated annealing
9.10 Solving protein structures using restrained molecular dynamics and simulated annealing
9.10.1 X-ray crystallographic refinement
9.10.2 Molecular dynamics refinement of NMR data
9.10.3 Time-averaged NMR refinement
9.11 Structural databases
9.12 Molecular fitting
9.13 Clustering algorithms and pattern recognition techniques
9.14 Reducing the dimensionality of a data set
9.14.1 Principal components analysis
9.15 Covering conformational space: poling
9.16 A "classic" optimisation problem: predicting crystal structures
10. PROTEIN STRUCTURE PREDICTION, SEQUENCE ANALYSIS AND PROTEIN FOLDING
10.1 Introduction
10.2 Some basic principles of protein structure
10.2.1 The hydrophobic effect
10.3 First-principles methods for predicting protein structure
10.3.1 Lattice models for investigating protein structure
10.3.2 Rule-based approaches using secondary structure prediction
10.4 Introduction to comparative modelling
10.5 Sequence alignment
10.5.1 Dynamic programming and the Needleman-Wunsch algorithm
10.5.2 The Smith-Waterman algorithm
10.5.3 Heuristic search methods: FASTA and BLAST
10.5.4 Multiple sequence alignment
10.5.5 Protein structure alignment and structural databases
10.6 Constructing and evaluating a comparative model
10.7 Predicting protein structures by 'threading'
10.8 A comparison of protein structure prediction methods: CASP
10.8.1 Automated protein modelling
10.9 Protein folding and unfolding
Appendix 10.1 Some Common Abbreviations and Acronyms Used in Bioinformatics
Appendix 10.2 Some of the Most Common Sequence and Structural Databases Used in Bioinformatics
Appendix 10.3 Mutation Probability Matrix for 1 PAM
Appendix 10.4 Mutation Probability Matrix for 250 PAM
11. FOUR CHALLENGES IN MOLECULAR MODELLING: FREE ENERGIES, SOLVATION, REACTIONS AND SOLID-STATE DEFECTS
11.1 Free energy calculations
11.1.1 The difficulty of calculating free energies by computer
11.2 The calculation of free energy differences
11.2.1 Thermodynamic perturbation
11.2.2 Implementation of free energy perturbation
11.2.3 Thermodynamic integration
11.2.4 The 'slow growth' method
11.3 Applications of methods for calculating free energy differences
11.3.1 Thermodynamic cycles
11.3.2 Applications of the thermodynamic cycle perturbation method
11.3.3 The calculation of absolute free energies
11.4 The calculation of enthalpy and entropy differences
11.5 Partitioning the free energy
11.6 Potential pitfalls with free energy calculations
11.6.1 Implementation aspects
11.7 Potentials of mean force
11.7.1 Umbrella sampling
11.7.2 Calculating the potential of mean force for flexible molecules
11.8 Approximate/"rapid" free energy methods
11.9 Continuum representations of the solvent
11.9.1 Thermodynamic background
11.10 The electrostatic contribution to the free energy of solvation: the Born and Onsager models
11.10.1 Calculating the electrostatic contribution via quantum mechanics
11.10.2 Continuum models for molecular mechanics
11.10.3 The Langevin dipole model
11.10.4 Methods based upon the Poisson-Boltzmann equation
11.10.5 Applications of finite difference Poisson-Boltzmann calculations
11.11 Non-electrostatic contributions to the solvation free energy
11.12 Very simple solvation models
11.13 Modelling chemical reactions
11.13.1 Empirical approaches to simulating reactions
11.13.2 The potential of mean force of a reaction
11.13.3 Combined quantum mechanical/molecular mechanical approaches
11.13.4 Ab initio molecular dynamics and the Car-Parrinello method
11.13.5 Examples of ab initio molecular dynamics simulations
11.14 Modelling solid-state defects
11.14.1 Defect studies of the high-Tc superconductor YBa2Cu3O7-x
Appendix 11.1 Calculating Free Energy Differences Using Thermodynamic Integration
Appendix 11.2 Using the Slow Growth Method for Calculating Free Energy Differences
Appendix 11.3 Expansion of Zwanzig Expression for the Free Energy Difference for the Linear Response Method
12. THE USE OF MOLECULAR MODELLING AND CHEMOINFORMATICS TO DISCOVER AND DESIGN NEW MOLECULES
12.1 Molecular modelling in drug discovery
12.2 Computer representations of molecules, chemical databases and 2D substructure searching
12.3 3D database searching
12.4 Deriving and using three-dimensional pharmacophores
12.4.1 Constrained systematic search
12.4.2 Ensemble distance geometry, ensemble molecular dynamics and genetic algorithms
12.4.3 Clique detection methods for finding pharmacophores
12.4.4 Maximum likelihood method
12.4.5 Incorporating additional geometric features into a 3D pharmacophore
12.5 Sources of data for 3D databases
12.6 Molecular docking
12.6.1 Scoring functions for molecular docking
12.7 Applications of 3D database searching and docking
12.8 Molecular similarity and similarity searching
12.9 Molecular Descriptors
12.9.1 Partition coefficients
12.9.2 Molar refractivity
12.9.3 Topological indices
12.9.4 Pharmacophore keys
12.9.5 Calculating the similarity
12.9.6 Similarity based on 3D properties
12.10 Selecting "diverse" sets of compounds
12.10.1 Data manipulation
12.10.2 Selection of diverse sets using cluster analysis
8.10.3 Dissimilarity-based selection methods
12.10.4 Partition-based methods for compound selection
12.11 Structure-based de novo ligand design
12.11.1 Locating favourable positions of molecular fragments within a binding site
12.11.2 Connecting molecular fragments in a binding site
12.11.3 Structure-based design methods to design HIV-1 protease inhibitors
12.11.4 Structure-based design of templates for zeolite synthesis
12.12 Quantitative structure-activity relationships
12.12.1 Selecting the compounds for a QSAR analysis
12.12.2 Deriving the QSAR equation
12.12.3 Cross-validation
12.12.4 Interpreting a QSAR equation
12.12.5 Alternatives to multiple linear regression: discriminant analysis, neural networks and classification
12.12.6 Principal Components Regression
12.13 Partial least squares
12.13.1 Partial least squares and molecular field analysis
12.14 Combinatorial libraries
12.14.1 The design of "drug-like" libraries
12.14.2 Library enumeration
12.14.3 Combinatorial subset selection
12.14.4 The future
Click the number to view the image (gif format).
• 1.4 Some of the common molecular graphics representations of molecules, illustrated using the crystal structure of nicotinamide adenine dinucleotide phosphate (NADPH) [Reddy et al 1981].
Clockwise, from top left: stick, CPK/space filling, 'balls and stick' and 'tube'. Image generated using InsightII.
• 1.5 Graphical representations of proteins illustrated using the enzyme dihydrofolate reductase [Bolin et al 1982]. Clockwise from top left: stick, CPK, 'cartoon' and 'ribbon'. Image generated
using InsightII.
• 1.7 Graphical representations of the molecular surface of tryptophan. Clockwise from top left: dots, opaque solid, mesh, translucent solid. Image generated using InsightII.
• 2.11 Surface representation of electron density around formamide at a contour of 0.0001 au (electrons/bohr^3). Image generated using Spartan.
• 2.12 HOMO of formamide. The red contour indicates a negative part of the wavefunction and blue a positive part of the wavefunction. The formamide molecule is oriented with the oxygen atom on the
left pointing towards the viewer as in Figure 2.17. Image generated using Spartan.
• 2.13 LUMO of formamide. Image generated using Spartan.
• 2.18 Electrostatic potential mapped onto the electron density surface of formamide. The orientation of the molecule is as in Figure 2.17. Red indicates negative electrostatic potential and blue
is positive potential. Image generated using Spartan.
• 5.36 The zeolite NU-87. Image generated using InsightII.
• 7.21 Snapshot from molecular dynamics simulation of a solvated lipid bilayer [Robinson et al 1995]. The disorder of the alkyl chains can be clearly seen. Image generated using InsightII with data
from Alan Robinson.
• 7.24 Graphical representation of final configurations obtained from dissipative particle dynamics simulations on block copolymers. Figures redrawn from [Groot and Madden 1998]. Images generated
using Cerius2.
• 7.24(a) shows the lamellar phase obtained for the A5B5 system.
• 7.24(b) shows the hexagonal phase from A3B7
• 7.24(c) shows the body-centred-cubic phase for A2B8.
• 8.21 Final configuration obtained from a configurational bias Monte Carlo simulation of thioalkanes adsorbed on a gold surface [Siepmann and MacDonald 1993a]. The system contains 224 molecules
which are colour coded according to the number of gauche defects, with red chains being all trans, yellow chains containing three gauche bonds and green chains containing five gauche bonds. Data
and figure supplied by J. Ilja Siepmann.
• 9.18 Twelve conformations of the chemokine RANTES generated from NMR data using distance geometry. [Chung et al 1995]. Image generated using InsightII.
• 9.23 Fitting a polypeptide chain to the electron density when determining the structure of a protein using X-ray crystallography. The figure shows part of the structure of rat ADP-ribosylation
factor-1 (ARF-1) [Greasley et al 1995]. Image generated using Quanta.
• 9.27 Distribution of hydroxyl groups around thiazole ring systems as extracted from the Cambridge Structural Database, illustrating the greater propensity of the nitrogen atom to act as a
hydrogen-bond acceptor. Image generated using InsightII with data from Isostar/Cambridge Structural Database.
• 10.9 Trypsin [Turk et al 1991], chymotrypsin [Birktoft and Blow 1972] and thrombin [Turk et al 1992] have similar three-dimensional structures. Image generated using InsightII.
• 10.11 A superposition of the aspartic acid, histidine and serine amino acids in the active sites of trypsin (yellow), chymotrypsin (red) and thrombin (green) Image generated using InsightII.
• 11.29 3D Electrostatic isopotential contours around trypsin [Marquart et al 1983]. Contours are drawn at -1kT (red) and +1kT (blue). The trypsin inhibitor is also shown with its electrostatic
potential mapped onto the molecular surface. Figure generated using GRASP.
• 11.30 Electrostatic potential around Cu-Zn superoxide dismutase [McRee et al 1990]. Red contours indicate negative electrostatic potential and blue contours indicate positive electrostatic
potential. Two active sites are present in the dimer, and are located at the top left and bottom right of the Figure where there is a significant concentration of positive electrostatic
potential. Figure generated using GRASP.
• 11.12 Surface complementarity of streptavidin (purple) and biotin (white) [Freitag et al 1997]. Image generated using InsightII.
• 12.16 3D pharmacophore derived for a series of molecules with activity at the 5HT3 receptor. The spheres indicate location constraints where an appropriate pharmacophore group should be located
(red: positively ionisable, green: hydrogen-bond acceptor, blue: hydrophobic region).
• 12.16(a) A very active molecule, JMC-35-903-10 superimposed on the pharmacophore.
• 12.16(b) A much less potent molecule, 2Me-5HT. The inactive molecule is not able to match all of the points in the pharmacophore in a low-energy conformation. Images generated using catalyst.
• 12.32 The result of a GRID calculation using carboxylate and amidine probes in the binding site of neuraminidase. The regions of minimum energy are contoured (carboxylate red; amidine blue). Also
shown is the inhibitor 4-guanidino-Neu5Ac2en which contains these two functional groups [von Itzstein et al 1993]. Image generated using InsightII.
• 12.34 The HIV-1 protease with the inhibitor CGP53820 bound [Priestle et al 1995]. The water molecule that hydrogen bonds to the inhibitor and to the 'flaps' is drawn as a white sphere, and the
catalytic aspartate groups of the enzyme are also represented. Image generated using InsightII.
• 12.41 Contour representation of key features from a CoMFA analysis of a series of coumarin substrates and inhibitors of cytochrome P4502A5 [Poso et al 1995]. The red and blue regions indicate
positions where it would be favourable and unfavourable respectively to place a negative charge and the green/yellow regions where it would be favourable/unfavourable to locate steric bulk. Image
generated using Sybyl.
Appendix 2.1 Some Common Acronyms in Quantum Chemistry
┃ AM1 │ Austin Model 1 ┃
┃ AO │ Atomic Orbital ┃
┃ BSSE │ Basis-Set Superposition Error ┃
┃ CI │ Configuration Interaction ┃
┃ CIS │ Configuration Interaction Singles ┃
┃ CISD │ Configuration Interaction Singles and Doubles ┃
┃ CNDO │ Complete Neglect of Differential Overlap ┃
┃ DFT │ Density Functional Theory ┃
┃ DIIS │ Direct Inversion of Iterative Subspace ┃
┃ DZP │ Double Zeta with Polarisation ┃
┃ DZ │ Double Zeta ┃
┃ EHT │ Extended Huckel Theory ┃
┃ GVB │ Generalised Valence Bond model ┃
┃ HF │ Hartree-Fock ┃
┃ HOMO │ Highest Occupied Molecular Orbital ┃
┃ INDO │ Intermediate Neglect of Differential Overlap ┃
┃ LCAO │ Linear Combination of Atomic Orbitals ┃
┃ LUMO │ Lowest Unoccupied Molecular Orbital ┃
┃ MBPT │ Many-body Perturbation Theory ┃
┃ MINDO/3 │ Modified INDO version 3 ┃
┃ MNDO │ Modified Neglect of Diatomic Overlap ┃
┃ MO │ Molecular Orbital ┃
┃ MP2, MP3 etc │ Moller-Plesset theory at second order, third order etc. ┃
┃ NDDO │ Neglect of Diatomic Differential Overlap ┃
┃ PM3 │ Parameterisation 3 of MNDO ┃
┃ QCISD │ Quadratic Configuration Interaction Singles and Doubles ┃
┃ RHF │ Restricted Hartree Fock ┃
┃ SAM1 │ Semi-Ab-initio Model 1 ┃
┃ SCF │ Self-Consistent Field ┃
┃ STO │ Slater Type Orbital ┃
┃ STO-3G, STO-4G, etc. │ Minimal basis sets in which 3, 4 etc, Gaussian functions are used to represent the atomic orbitals on an atom ┃
┃ UHF │ Unrestricted Hartree Fock ┃
┃ ZDO │ Zero Differential Overlap ┃
┃ CASSCF │ Complete Active Space Self-Consistent Field ┃
┃ QCISD(T) │ Quadratic configuration interaction with single and double excitations and a perturbative estimate of triple excitations ┃
┃ LSDFT │ Local Spin Density Functional Theory ┃
┃ LDA │ Local Density Approximation ┃
┃ BLYP │ Becke-Lee-Yang-Parr gradient-corrected functional for use with density functional theory ┃
┃ VWN │ Correlation functional due to Vosko, Wilk and Nusair ┃
┃ B3LYP │ Scheme for hybrid Hartree-Fock/Density functional theory introduced by Becke ┃
Appendix 10.1 Some common abbreviations and acronyms used in bioinformatics
┃ A, G, C, T (U) │ Adenine, Guanine, Cytosine, Thymine - the four bases present in DNA. Uracil replaces thymine in RNA ┃
┃ Bp │ Base pair ┃
┃ cDNA │ Complementary DNA, synthesised from messenger RNA ┃
┃ Chromosome │ Discrete unit of the genome consisting of a single molecule of DNA that carries many genes. ┃
┃ Clone │ Genetically identical copy (of a gene, cell or organism) ┃
┃ Codon │ Sequence of three nucleotides that codes for a single amino acid (or a termination signal) ┃
┃ Contig │ A group of pieces of DNA, derived from a cloning experiment (often a series of ESTs, see below), that represent overlapping regions of a chromosome. ┃
┃ Deletion │ One or more nucleotides that are not copied during DNA replication ┃
┃ DNA │ Deoxyribose nucleic acid ┃
┃ Domain │ Sequence of polypeptide chain that can independently fold into a stable three-dimensional structure ┃
┃ Dynamic programming │ Technique widely used in sequence alignment ┃
┃ EST │ Expressed Sequence Tag. An EST is a partial sequence (typically less than 400 bases) selected from cDNA and used to identify genes expressed in a particular tissue. ┃
┃ Eukaryote │ Organism whose cells have a discrete nucleus and other subcellular compartments (cf. prokaryote) ┃
┃ Exon │ Translated sequence of DNA ┃
┃ Gap │ A break in DNA or protein sequence which enables two or more sequences to be aligned ┃
┃ Gene │ A sequence of DNA at a particular position on a specific chromosome that encodes a precise functional product (usually a protein) ┃
┃ Genome │ All of the genetic material in the chromosomes of an organism ┃
┃ Indel │ Insertion or deletion required to optimise sequence alignment ┃
┃ Intron │ Non-translated sequence of DNA ┃
┃ Kb │ Kilobase - one thousand nucleotide bases ┃
┃ ktup │ k-tuple. Parameter used in FASTA and FASTP sequence alignment methods ┃
┃ Mbp │ Megabase - one million nucleotide bases ┃
┃ mRNA │ Messenger RNA ┃
┃ Mutation │ A change in the DNA sequence ┃
┃ Nucleotide │ Three components that make up the basic building block in DNA and RNA: a nitrogenous base (A, T, G, C, U), a phosphate and a sugar ┃
┃ Oligonucleotide │ A molecule composed of a small number of nucleotides ┃
┃ Orthologue │ Homologous proteins that perform the same function within different organisms ┃
┃ ORF │ Open Reading Frame - region of DNA that is transcribed into RNA. Delineated by an initiator codon at one end and a stop codon at the other end. ┃
┃ PAM │ Point Accepted Mutation per 100 residues ┃
┃ Paralogue │ Homologous proteins that perform different but related functions within one organism ┃
┃ PCR │ Polymerase Chain Reaction. Widely used method for amplifying a DNA base sequence ┃
┃ Polymorphism │ Differences in DNA sequence among individuals ┃
┃ Prokaryote │ Organism lacking a nucleus and subcellular compartments (cf. eukaryote). Includes bacteria and viruses ┃
┃ RNA │ Ribonucleic acid ┃
┃ SNP │ Single Nucleotide Polymorphism - single base-pair variations in DNA ┃
┃ STS │ Sequence tagged site. A short DNA sequence that occurs just once in the human genome and whose location and base sequence are known. ┃
┃ Transcription │ First step in gene expression, corresponding to the generation of mRNA from the original DNA ┃
┃ Translation │ Second step in gene expression, the synthesis of proteins from mRNA ┃
┃ tRNA │ Transfer RNA ┃
Appendix 10.2. Some of the most common sequence and structural databases used in bioinformatics
┃ GenBank (NCBI, USA), EMBL Nucleotide Sequence Database (Europe), DDBJ (Japan) │ The three main nucleotide sequence databases, which are synchronised daily ┃
┃ PIR-International Protein Sequence Database │ Redundant protein sequence database ┃
┃ Swiss-Prot, TrEMBL │ Annotated non-redundant protein sequence database. TrEMBL is a computer-annotated supplement to Swiss-Prot containing the translations of all coding sequences present in the EMBL Nucleotide Sequence Database which are not yet integrated into Swiss-Prot ┃
┃ GenPept │ Compendium of amino acid translations derived from GenBank ┃
┃ PDB, NRL3D │ Protein Data Bank - protein structures (mostly from X-ray crystallography). NRL3D is a derived sequence database in PIR format. ┃
┃ SCOP │ Structural Classification of Proteins. Hierarchical protein structure database ┃
┃ CATH, FSSP │ Sequence-structure classification databases ┃
┃ Prosite │ Motif database. ┃
References for colour figures
Reddy B S, W Saenger, K Muehlegger and G Weimann 1981. Crystal and Molecular Structure of the Lithium Salt of Nicotinamide Adenine Dinucleotide Dihydrate (NAD, DPN, cozymase, codehydrase I). Journal of the American Chemical Society 103:907-914.
Bolin J T, D J Filman, D A Matthews, R C Hamlin and J Kraut 1982. Crystal Structures of Escherichia coli and Lactobacillus casei Dihydrofolate Reductase Refined at 1.7 Ångstroms Resolution. I. Features and Binding of Methotrexate. Journal of Biological Chemistry 257:13650-13662.
Robinson A J, W G Richards, P J Thomas and M M Hann 1994. Head Group and Chain Behaviour in Biological Membranes - A Molecular Dynamics Simulation. Biophysical Journal 67:2345-2354.
Groot R D and T J Madden 1998. Dynamics simulation of diblock copolymer microphase separation. The Journal of Chemical Physics 108:8713-8724.
Siepmann J I and I R McDonald 1993b. Monte Carlo Study of the Properties of Self-Assembled Monolayers Formed by Adsorption of CH3(CH2)15SH on the (111) Surface of Gold. Molecular Physics 79:457-473.
Chung C-W, R M Cooke, A E I Proudfoot and T N C Wells 1995. The Three-Dimensional Structure of RANTES. Biochemistry 34:9307-9314.
Greasley S E, H Jhoti, C Teahan, R Solari, A Fensom, G M H Thomas, S Cockroft and B Bax 1995. The Structure of Rat ADP-Ribosylation Factor-1 (ARF-1) Complexed to GDP Determined from Two Different Crystal Forms. Nature Structural Biology 2:797-806.
Turk D, J Sturzebecher and W Bode 1991. Geometry of Binding of the N-Alpha-Tosylated Piperidides of meta-Amidino-Phenylalanine, para-Amidino-Phenylalanine and para-Guanidino-Phenylalanine to Thrombin and Trypsin - X-ray Crystal Structures of Their Trypsin Complexes and Modeling of Their Thrombin Complexes. FEBS Letters 287:133-138.
Birktoft J J and D M Blow 1972. The Structure of Crystalline Alpha-Chymotrypsin V. The Atomic Structure of Tosyl-Alpha-Chymotrypsin at 2 Ångstroms Resolution. Journal of Molecular Biology 68:187-240.
Turk D, H W Hoeffken, D Grosse, J Stuerzebecher, P D Martin, B F P Edwards and W Bode 1992. Refined 2.3 Ångstroms X-Ray Crystal Structure of Bovine Thrombin Complexes Formed with the 3 Benzamidine- and Arginine-Based Thrombin Inhibitors NAPAP, 4-TAPAP and MQPA: A Starting Point for Improving Antithrombotics. Journal of Molecular Biology 226:1085-1099.
Bruno I J, J C Cole, J P M Lommerse, R S Rowland, R Taylor and M L Verdonk 1997. Isostar: A Library of Information About Nonbonded Interactions. The Journal of Computer-Aided Molecular Design
Marquart M, J Walter, J Deisenhofer, W Bode and R Huber 1983. The Geometry of the Reactive Site and of the Peptide Groups in Trypsin, Trypsinogen and its Complexes with Inhibitors. Acta Crystallographica B39:480-490.
McRee D E, S M Redford, E D Getzoff, J R Lepock, R A Hallewell and J A Tainer 1990. Changes in Crystallographic Structure and Thermostability of a Cu, Zn Superoxide Dismutase Mutant Resulting from the Removal of Buried Cysteine. Journal of Biological Chemistry 265:14234-14241.
Freitag S, I Le Trong, P S Stayton and R E Stenkamp 1997. Structural Studies of the Streptavidin Binding Loop. Protein Science 6:1157-
Priestle J P, A Fassler, J Rosel, M Tintelnog-Blomley, P Strop and M G Gruetter 1995. Comparative Analysis of the X-Ray Structures of HIV-1 and HIV-2 Proteases in Complex with a Novel Pseudosymmetric Inhibitor. Structure (London) 3:381-389.
Poso A, R Juvonen and J Gynther 1995. Comparative molecular field analysis of compounds with CYP2A5 binding affinity. Quantitative Structure-Activity Relationships 14:507-511.
Von Itzstein M, W Y Wu, G B Kok, M S Pegg, J C Dyason, B Jin, T V Phan, M L Smythe, H F Whites, S W Oliver, P M Colman, J N Varghese, D M Ryan, J M Woods, R C Bethell, V J Hotham, J M Cameron and C R
Penn 1993. Rational Design of Potent Sialidase-Based Inhibitors of Influenza-Virus Replication. Nature 363:418-423.
Comments, questions, corrections?
Click here to send email (if your browser supports the "mailto" command) | {"url":"http://www.booksites.net/leach2/molecular/molecular_modelling_2.html","timestamp":"2014-04-18T23:15:59Z","content_type":null,"content_length":"58228","record_id":"<urn:uuid:158665fc-6d82-43fb-ae5a-0850e910b4f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Integrating Acceleration for Distance
Thanks for the reply, but the idea of the centrifuge has confused me even more.
Wouldn't we get the same result by summing before integrating rather than after? (Comparing the two methods I outlined in the first post.)
You spoke of taking the euclidean distance and subtracting 9.81 from that. That's not a vector subtraction. That's a scalar subtraction. That means you would not be integrating a vector. You would be
integrating a scalar, completely ignoring the direction of the acceleration.
The point I was trying to make is that you need to subtract the 9.81 as a vector, leaving a residual vector acceleration and integrate that.
The centrifuge example would have a high scalar acceleration. Integrate that and you get a huge number. But integrate the vector and the directions would tend to cancel out over the long run giving a
much lower number.
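A toy numeric sketch of that difference (in Python, with a made-up accelerometer reading and assuming the device's z-axis points up):

```python
import math

g = (0.0, 0.0, 9.81)               # gravity vector (assumed orientation)
a = (1.2, -0.5, 10.3)              # hypothetical accelerometer reading

# Vector subtraction: remove gravity component-wise, then take the magnitude.
residual = tuple(ai - gi for ai, gi in zip(a, g))
vec_mag = math.sqrt(sum(c * c for c in residual))   # ~1.39 m/s^2

# Scalar subtraction: take the magnitude first, then subtract 9.81.
scalar = math.sqrt(sum(c * c for c in a)) - 9.81    # ~0.57 m/s^2
```

The two numbers disagree because the scalar version throws away direction; only the vector residual is the thing you can integrate for velocity and distance.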
Did I make sense that time or am I still losing you? | {"url":"http://www.physicsforums.com/showpost.php?p=4232547&postcount=5","timestamp":"2014-04-20T05:50:35Z","content_type":null,"content_length":"8362","record_id":"<urn:uuid:cda7225c-0c09-417b-b66a-2378082aa92f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roots of complex numbers
July 29th 2008, 10:56 AM #1
Dec 2007
I just want to see if the book may have made a typo. When asking for the three cube roots of z = -2 + 2i, we get sqrt(8) for r. The next step shows the 6th root of 8 * (cos((135 + 360k)/3) + i sin((135 + 360k)/3)). I understand how to get everything here except for the 6. Shouldn't the nth root of r be the 3rd root of sqrt(8)?
Yes, and 3rd root of $\sqrt{8}$ is 6th root of 8.
$8^{\frac{1}{2}\cdot \frac{1}{3}} = 8^{\frac{1}{6}} = \sqrt[6]{8}$
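Both points are easy to check numerically, using Python's `cmath` module (here theta = 135 degrees = 3π/4 radians):

```python
import cmath

z = -2 + 2j
r = abs(z)                      # sqrt(8)
theta = cmath.phase(z)          # 135 degrees, i.e. 3*pi/4 radians

# The three cube roots: r^(1/3) * exp(i*(theta + 2*pi*k)/3), k = 0, 1, 2
roots = [r ** (1 / 3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3)
         for k in range(3)]

# r^(1/3) = (8^(1/2))^(1/3) = 8^(1/6): the modulus of each root is the 6th root of 8
assert abs(r ** (1 / 3) - 8 ** (1 / 6)) < 1e-12
# and each one really is a cube root of z
assert all(abs(w ** 3 - z) < 1e-9 for w in roots)
```

The k = 0 root comes out as 1 + i, which indeed satisfies (1 + i)^3 = -2 + 2i.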
I see now, thank you for the help.
Dec 2007 | {"url":"http://mathhelpforum.com/trigonometry/44801-roots-complex-numbers.html","timestamp":"2014-04-16T17:42:23Z","content_type":null,"content_length":"34371","record_id":"<urn:uuid:2d238482-bfd4-4640-b8b8-b355d4848361>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
constant of proportionality
Definitions for constant of proportionality
This page provides all possible meanings and translations of the word constant of proportionality
Princeton's WordNet
1. factor of proportionality, constant of proportionality (noun)
the constant value of the ratio of two proportional quantities x and y; usually written y = kx, where k is the factor of proportionality
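For instance (with made-up figures), the constant can be recovered from any pair of proportional measurements, since every ratio y/x gives the same k:

```python
x = [2.0, 5.0, 8.0]
y = [6.0, 15.0, 24.0]          # hypothetical data satisfying y = kx

ratios = [yi / xi for xi, yi in zip(x, y)]
# every ratio equals the constant of proportionality, here k = 3
assert all(abs(k - 3.0) < 1e-12 for k in ratios)
```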
Are we missing a good definition for constant of proportionality? | {"url":"http://www.definitions.net/definition/constant+of+proportionality","timestamp":"2014-04-19T23:52:32Z","content_type":null,"content_length":"24836","record_id":"<urn:uuid:14c8d5f3-44d2-4e3a-aba0-3f49e66c6898>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
MAGMA Forum
Hi Mark,
1. could you please add Fortran interfaces for
• magmaf_dgetri
• magmaf_dstedc
• magmaf_zhegvx
• magmaf_zheevx
2. could you please fix zhegvx.cpp and comment/remove Mymagma_ztrmm routines
3. could you please add magma_dstedc function (see attachment)
my small updates to MAGMA-1.2.1
(18.91 KiB) Downloaded 75 times
Re: wishlist
These seem reasonable, so hopefully we can get to them for the next release.
Re: wishlist
Hi Mark!
I need the zstedc function, which is absent from MAGMA.
Is this implementation correct?
Code: Select all
#include "common_magma.h"

extern "C" magma_int_t
magma_zstedc(char range, magma_int_t n, double* d, double* e, cuDoubleComplex* z, magma_int_t ldz,
             cuDoubleComplex* work, magma_int_t lwork, double *rwork, magma_int_t lrwork,
             magma_int_t* iwork, magma_int_t liwork, magma_int_t* info)
{
    double *dwork;

    /* device workspace for magma_zstedx */
    if (MAGMA_SUCCESS != magma_dmalloc( &dwork, 3*n*(n/2 + 1) )) {
        *info = -15;
        return MAGMA_ERR_DEVICE_ALLOC;
    }

    /* zstedc computes all eigenpairs, so map range 'I' to 'A'
       before delegating to the expert driver */
    char range_t = ' ';
    if (range == 'I') range_t = 'A';

    magma_zstedx(range_t, n, 0., 0., 0, 0, d, e, z, ldz,
                 rwork, lrwork, iwork, liwork, dwork, info);

    magma_free( dwork );
    return *info;
}
Please, add it to the next release.
BTW, when will the next release be announced?
Re: wishlist
Is magma_zstedx even accelerated? It looks like a complete pass through to the host LAPACK.
Re: wishlist
This routine becomes more compute intensive when eigenvectors are needed. In that case most of the flops are in gemm and this is what is GPU accelerated. | {"url":"http://icl.cs.utk.edu/magma/forum/viewtopic.php?p=1789","timestamp":"2014-04-20T23:57:57Z","content_type":null,"content_length":"19453","record_id":"<urn:uuid:967dfbc6-aee9-48a8-99a2-6707fe20bf58>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
User: which numbers are divisible by 6? 62, 248, 930, 124, 310, 558, 155, 7812, 93, 31, 465, 279
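A one-line check settles it: a number is divisible by 6 exactly when it is divisible by both 2 and 3, and only three of the listed numbers qualify.

```python
numbers = [62, 248, 930, 124, 310, 558, 155, 7812, 93, 31, 465, 279]

# divisible by 6 <=> even AND digit sum divisible by 3
by_6 = [n for n in numbers if n % 6 == 0]
print(by_6)   # [930, 558, 7812]
```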
• 11 months ago
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5181bdd8e4b0163f43707f26","timestamp":"2014-04-21T00:04:21Z","content_type":null,"content_length":"46664","record_id":"<urn:uuid:a08fa1c9-286e-452f-9abc-da8471d03644>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lombard Trigonometry Tutor
...Learning is personal, so my goal is to connect with each and every student in whatever way is most helpful to them. I look forward to working with you and your children!I was an advanced math
student, completing the equivalent of Algebra 1 before high school. I continued applying algebraic skills in high school, where I was a straight A student and completed calculus as a junior.
12 Subjects: including trigonometry, geometry, statistics, algebra 2
...If you need help with standardized testing, my GRE scores were a 680 verbal and 800 quantitative. During my master's degree I was a TA for the intro to computer science course. For three semesters I taught C++ and Matlab to freshman and sophomore mechanical engineering students. The course began ...
17 Subjects: including trigonometry, physics, calculus, GRE
...I would then have the child fill out a questionnaire to see if what he or she believes is true. I would then use that information to decide how I would help the child. If the child were an
auditory learner, then I would lecture most of the time, and have the child repeat what I say.
19 Subjects: including trigonometry, reading, calculus, geometry
...I taught trigonometry and algebra 2 to high school juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree in
applied statistics. I have tutored students of varying levels and ages for more than six years.
19 Subjects: including trigonometry, calculus, statistics, algebra 1
...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices
almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra.
23 Subjects: including trigonometry, calculus, physics, statistics | {"url":"http://www.purplemath.com/Lombard_trigonometry_tutors.php","timestamp":"2014-04-18T05:49:07Z","content_type":null,"content_length":"24247","record_id":"<urn:uuid:0dd5102f-dbd3-4956-83a2-1a45d29e326d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Y-systems and generalized associahedra
Results 11 - 20 of 51
- J. Combin. Theory Ser. A
"... Abstract. We give a unified explanation of the geometric and algebraic properties of two well-known maps, one from permutations to triangulations, and another from permutations to subsets.
Furthermore we give a broad generalization of the maps. Specifically, for any lattice congruence of the weak or ..."
Cited by 17 (8 self)
Add to MetaCart
Abstract. We give a unified explanation of the geometric and algebraic properties of two well-known maps, one from permutations to triangulations, and another from permutations to subsets.
Furthermore we give a broad generalization of the maps. Specifically, for any lattice congruence of the weak order on a Coxeter group we construct a complete fan of convex cones with strong
properties relative to the corresponding lattice quotient of the weak order. We show that if a family of lattice congruences on the symmetric groups satisfies certain compatibility conditions then
the family defines a sub Hopf algebra of the Malvenuto-Reutenauer Hopf algebra of permutations. Such a sub Hopf algebra has a basis which is described by a type of pattern-avoidance. Applying these
results, we build the Malvenuto-Reutenauer algebra as the limit of an infinite sequence of smaller algebras, where the second algebra in the sequence is the Hopf algebra of non-commutative symmetric
functions. We also associate both a fan and a Hopf algebra to a set of permutations which appears to be equinumerous with the Baxter permutations. 1.
"... We prove that certain linear operators preserve the Pólya frequency property and real-rootedness, and apply our results to settle some conjectures and open problems in combinatorics proposed by
Bóna, Brenti and Reiner-Welker. ..."
Cited by 14 (4 self)
Add to MetaCart
We prove that certain linear operators preserve the Pólya frequency property and real-rootedness, and apply our results to settle some conjectures and open problems in combinatorics proposed by Bóna,
Brenti and Reiner-Welker.
"... Abstract. We prove an instance of the cyclic sieving phenomenon, occurring in the context of noncrossing parititions for well-generated complex reflection groups. 1. ..."
Cited by 14 (1 self)
Add to MetaCart
Abstract. We prove an instance of the cyclic sieving phenomenon, occurring in the context of noncrossing partitions for well-generated complex reflection groups. 1.
"... Abstract. We study the dependence of a cluster algebra on the choice of coefficients. We write general formulas expressing the cluster variables in any cluster algebra in terms of the initial
data; these formulas involve a family of polynomials associated with a particular choice of “principal ” coe ..."
Cited by 14 (1 self)
Add to MetaCart
Abstract. We study the dependence of a cluster algebra on the choice of coefficients. We write general formulas expressing the cluster variables in any cluster algebra in terms of the initial data;
these formulas involve a family of polynomials associated with a particular choice of “principal ” coefficients. We show that the exchange graph of a cluster algebra with principal coefficients
covers the exchange graph of any cluster algebra with the same exchange matrix. We investigate two families of parametrizations of cluster monomials by lattice points, determined, respectively, by
the denominators of their Laurent expansions and by certain multi-gradings in cluster algebras with principal coefficients. The properties of these parametrizations, some proven and some conjectural,
suggest links to duality conjectures of V. Fock and A. Goncharov. The coefficient dynamics leads to a natural generalization of Al. Zamolodchikov’s Y-systems. We establish a Laurent phenomenon for
such Y-systems, previously known in finite type only, and sharpen the periodicity result from an earlier paper. For cluster algebras of finite type, we identify a canonical “universal ” choice
- Pure and Applied Mathematics Quarterly
"... This note which can be viewed as a complement to [9], presents a self-contained overview of basic properties of nested complexes and their two dual polyhedral realizations: as complete
simplicial fans, and as simple polytopes. Most of the results are not new; our aim is to bring into focus a strikin ..."
Cited by 14 (0 self)
Add to MetaCart
This note which can be viewed as a complement to [9], presents a self-contained overview of basic properties of nested complexes and their two dual polyhedral realizations: as complete simplicial
fans, and as simple polytopes. Most of the results are not new; our aim is to bring into focus a striking similarity between nested complexes
, 2005
"... Abstract. We show that a certain orbit category considered by Keller encodes the combinatorics of the m-clusters of Fomin and Reading in a fashion similar to the way the cluster category of
Buan, Marsh, Reineke, Reiten, and Todorov encodes the combinatorics of the clusters of Fomin and Zelevinsky. T ..."
Cited by 13 (2 self)
Add to MetaCart
Abstract. We show that a certain orbit category considered by Keller encodes the combinatorics of the m-clusters of Fomin and Reading in a fashion similar to the way the cluster category of Buan, Marsh, Reineke, Reiten, and Todorov encodes the combinatorics of the clusters of Fomin and Zelevinsky. This allows us to give type-uniform proofs of certain results of Fomin and Reading in the simply laced cases. For Φ any root system, Fomin and Zelevinsky [FZ] define a cluster complex ∆(Φ), a simplicial complex on Φ_{≥−1}, the almost positive roots of Φ. Its facets (maximal faces) are called clusters. In [BM+], starting in the more general context of a finite dimensional hereditary algebra H over a field K, Buan et al. define a cluster category C(H) = D^b(H)/τ^{−1}[1]. (D^b(H) is the bounded derived category of representations of H; more will be said below about it, its shift functor [1], and its Auslander-Reiten translate τ.) The cluster category C(H) is a triangulated Krull-Schmidt category. We will be mainly interested in the case where H is a path algebra associated to the simply laced root system Φ, in which case we write C(Φ) for C(H). There is a bijection V taking Φ_{≥−1} to the indecomposables of C(Φ). A (cluster)-tilting set
"... Abstract. A case-free proof is given that the entries of the h-vector of the cluster complex ∆(Φ), associated by S. Fomin and A. Zelevinsky to a finite root system Φ, count elements of the
lattice L of noncrossing partitions of corresponding type by rank. Similar interpretations for the h-vector of ..."
Cited by 12 (5 self)
Add to MetaCart
Abstract. A case-free proof is given that the entries of the h-vector of the cluster complex ∆(Φ), associated by S. Fomin and A. Zelevinsky to a finite root system Φ, count elements of the lattice L
of noncrossing partitions of corresponding type by rank. Similar interpretations for the h-vector of the positive part of ∆(Φ) are provided. The proof utilizes the appearance of the complex ∆(Φ) in
the context of the lattice L, in recent work of two of the authors, as well as an explicit shelling of ∆(Φ). 1.
, 2005
"... Abstract. Let W be a Weyl group corresponding to the root system An−1 or Bn. We define a simplicial complex ∆m W in terms of polygon dissections for such a group and any positive integer m. For
m = 1, ∆m W is isomorphic to the cluster complex corresponding to W, defined in [8]. We enumerate the face ..."
Cited by 11 (3 self)
Add to MetaCart
Abstract. Let W be a Weyl group corresponding to the root system A_{n−1} or B_n. We define a simplicial complex ∆^m_W in terms of polygon dissections for such a group and any positive integer m. For m = 1, ∆^m_W is isomorphic to the cluster complex corresponding to W, defined in [8]. We enumerate the faces of ∆^m_W and show that the entries of its h-vector are given by the generalized Narayana numbers N^m_W(i), defined in [3]. We also prove that for any m ≥ 1 the complex ∆^m_W is shellable and hence Cohen-Macaulay. 1. Introduction and
"... Abstract. We show that the Coxeter-sortable elements in a finite Coxeter group W are the minimal congruence-class representatives of a lattice congruence of the weak order on W. We identify this
congruence as the Cambrian congruence on W, so that the Cambrian lattice is the weak order on Coxetersort ..."
Cited by 10 (5 self)
Add to MetaCart
Abstract. We show that the Coxeter-sortable elements in a finite Coxeter group W are the minimal congruence-class representatives of a lattice congruence of the weak order on W. We identify this
congruence as the Cambrian congruence on W, so that the Cambrian lattice is the weak order on Coxetersortable elements. These results exhibit W-Catalan combinatorics arising in the context of the
lattice theory of the weak order on W. Contents | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=346133&sort=cite&start=10","timestamp":"2014-04-18T21:27:13Z","content_type":null,"content_length":"34261","record_id":"<urn:uuid:9676197c-0be7-4bfb-8eb6-0d02ef7c0a7e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Digit Frequencies in the Columns of Cellular Automata
This Demonstration shows a pie chart that accumulates the digits in a vertical column of a given elementary cellular automaton. The slices cut the pie chart into parts, indicating the statistical
frequency of ones and zeros in the column. For instance, you can see that most columns for rule 30 seem to be statistically random. An interesting case is rule 110, which does not look statistically
random even when the number of steps is large; its columns alternate between having more black cells (ones) and having more white cells (zeros), but they are never in equilibrium. | {"url":"http://demonstrations.wolfram.com/DigitFrequenciesInTheColumnsOfCellularAutomata/","timestamp":"2014-04-19T01:50:18Z","content_type":null,"content_length":"42241","record_id":"<urn:uuid:63b0085f-c54f-4add-8087-aed597f78f89>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
"method involving two objects" is that possible in Python?
bruno.desthuilliers at gmail.com bruno.desthuilliers at gmail.com
Sat Jun 28 00:15:21 CEST 2008
On 27 juin, 23:41, Kurda Yon <kurda... at yahoo.com> wrote:
> Hi,
> I just started to learn Python. I understood how one can create a
> class and define a method related with that class. According to my
> understanding every call of a methods is related with a specific
> object. For example, we have a method "length", than we call this
> method as the following "x.length()" or "y.length()" or "z.length()",
> where z, y, and z are objects of the class.
> I am wondering if it is possible to create a method which is related
> not with a single object (as in the previous example) but with a pair
> of objects. For example, I want to have a class for vectors, and I
> want to have a methods which calculate a dot product of two vectors.
> One of the possibilities is to use __mul__ and that I calculated dot
> product of "x" and "y" just as "x * y". However, I am wondering if I
> can declare a method in a way that allows me to calculate dot product
> as "dot(x,y)".
No problem. This is actually called a function. It has the same syntax
as a method, except that:
1/ it's defined outside a class
2/ it doesn't take the instance as first argument.
Here's a simple example applied to multiplication:
def multiply(x, y):
return x * y
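For the dot product the original poster asked about, the same pattern applies; a minimal sketch (assuming the vectors are plain sequences of numbers):

```python
def dot(x, y):
    # pairwise products summed up: x1*y1 + x2*y2 + ...
    return sum(a * b for a, b in zip(x, y))

print(dot([1, 2, 3], [4, 5, 6]))  # prints 32
```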
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2008-June/510522.html","timestamp":"2014-04-17T02:30:45Z","content_type":null,"content_length":"4466","record_id":"<urn:uuid:dbe41e87-333d-48fc-8de6-bf404720817a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rash thoughts about .NET, C#, F# and Dynamics NAV.
Yesterday I was talking about F# at the .NET Developer Group Braunschweig. It was my first talk completely without PowerPoint (just live coding and a flip chart) and I have to admit this is not that easy. But the event was really great fun and we covered a lot of topics like FP fundamentals, concurrency and domain-specific languages (of course I showed "FAKE – F# Make").
Now I have a bit of time before I go to the next BootCamp in Leipzig. Today Christian Weyer will show us exciting new stuff about WCF and Azure.
In the meantime I will write here about another important question (see first article) from the F# BootCamp in Leipzig:
Question 4 – Try to explain “Currying” and “Partial Application”. Hint: Please show a sample and use the pipe operator |>.
Obviously this was a tricky question for FP beginners. There are a lot of websites that give a formal mathematical definition but don't show the practical application.
“Currying … is the technique of transforming a function that takes multiple arguments (or more accurately an n-tuple as argument) in such a way that it can be called as a chain of functions each
with a single argument”
I want to show how my pragmatic view of the terms here, so let’s consider this small C# function:
public int Add(int x, int y)
return x + y;
Of course the corresponding F# version looks nearly the same:
let add(x,y) = x + y
But let’s look at the signature: val add : int * int –> int. The F# compiler is telling us add wants a tuple of ints and returns an int. We could rewrite the function with one blank to understand
this better:
let add (x,y) = x + y
As you can see the add function actually needs only one argument – a tuple:
let t = (3,4) // val t : int * int
printfn "%d" (add t) // prints 7 – like add(3,4)
Now we want to curry this function. If you asked a mathematician, this would be a complex operation, but from a pragmatic view it couldn't be easier. Just remove the brackets and the comma – that's all:
let add x y = x + y
Now the signature looks different: val add : int -> int –> int
But what’s the meaning of this new arrow? Basically we can say that if we give one int parameter to our add function, we will get back a function that takes only one int parameter and returns an int.
let increment = add 1 // val increment : (int -> int)
printfn "%d" (increment 2) // prints 3
Here “increment” is a new function obtained by partial application of the curried add function. This means we are fixing one of the parameters of add to get a new function with one parameter less.
But why are we doing something like this? Wouldn’t it be enough to use the following increment function?
let add(x,y) = x + y // val add : int * int -> int
let increment x = add(x,1) // val increment : int -> int
printfn "%d" (increment 2) // prints 3
Of course we are getting (nearly) the same signature for increment. But the difference is that we cannot use the forward pipe operator |> here. The pipe operator will help us to express things in the way we are thinking about it.
Let’s say we want to filter all even elements in a list, then calculate the sum and finally square this sum and print it to the console. The C# code would look like this:
var list = new List<int> {4, 2, 6, 5, 9, 3, 8, 1, 3, 0};
Console.WriteLine(Math.Pow(list.Where(x => x % 2 == 0).Sum(), 2));

If we don't want to store intermediate results we have to write our algorithm in reverse order and with heavy use of brackets. The function we want to apply last has to be written first. This is not the way we think about it.
With the help of curried functions, partial application and the pipe operator we can write the same thing in F#:
let list = [4; 2; 6; 5; 9; 3; 8; 1; 3; 0]
let square x = x * x
|> List.filter (fun x -> x % 2 = 0) // partial application
|> List.sum
|> square
|> printfn "%A" // partial application
We describe the data flow in exactly the same order we talked about it. Basically the pipe operator take the result of a function and puts it as the last parameter into the next function.
What should we learn from this sample?
1. Currying has nothing to do with spicy chicken.
2. The |> operator makes life easier and code better to understand.
3. If we want to use |> we need curried functions.
4. Defining curried functions is easy – just remove brackets and comma.
5. We don’t need the complete mathematical theory to use currying.
6. Be careful with the order of the parameters in a curried function. Don’t forget the pipe operator puts the parameter from the right-hand side into your function – all other parameters have to be fixed with partial application.
partial application | {"url":"http://www.navision-blog.de/tag/pipe/","timestamp":"2014-04-17T15:27:42Z","content_type":null,"content_length":"34350","record_id":"<urn:uuid:3276784a-27bd-4553-bdc4-19b05b416e7b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume of a Tetrahedron
Date: 01/23/2002 at 22:17:19
From: Andrew
Subject: Geometry of a Tetrahedron
The volume of a tetrahedron is one-third the distance from a vertex
to the opposite face, times the area of that face. Find a formula for
the volume of a tetrahedron in terms of the coordinates of its
vertices P, Q, R, and S.
I'm not even sure where to begin. I think it may have something to do
with cross product multiplication of vectors.
Date: 01/24/2002 at 22:24:44
From: Doctor Pete
Subject: Re: Geometry of a Tetrahedron
Thanks for writing to Dr Math. Don't despair - you do know where to
begin, because you mentioned vectors. So I'll begin by setting up some
vectors, which will be denoted by capital letters, in terms of their
coordinates, which will be lowercase letters. Suppose you have
P = (x1, y1, z1),
Q = (x2, y2, z2),
R = (x3, y3, z3),
S = (x4, y4, z4).
Furthermore, we will write
A = Q - P,
B = R - P,
C = S - P.
In essence, we translated vector P to the origin, and moved Q, R, S
accordingly, to obtain A, B, and C; this will simplify our work.
Now recall that the dot product of two vectors
M = (m1, m2, m3), N = (n1, n2, n3)
satisfies the following properties:
[D1] M . N = m1*n1 + m2*n2 + m3*n3,
[D2] M . N = |M||N|Cos[t].
Here |M| signifies the magnitude (length) of M, and t is the angle
between vectors M and N. As for the cross product, we have
             | i  j  k  |
[C1] M x N = | m1 m2 m3 |,
             | n1 n2 n3 |
[C2] M x N = |M||N|Sin[t]*U,
where in [C1], i, j, k are the unit x-, y-, and z-vectors, and in
[C2], U is the unit vector that is orthogonal to M and N and points in
the direction as specified by the right-hand rule. (In particular, we
have i x j = k, j x k = i, k x i = j.) A proof of these facts is given
in all textbooks dealing with linear algebra.
The second property about cross products is the main connection to the
geometry of the problem, because geometrically it says that the cross
product of two vectors is a third vector orthogonal to the other two,
with magnitude equivalent to the area of the parallelogram defined by
the two vectors. Perhaps a picture will show this:
    N _____________
     /|           /
    / |h         /
   /  |_        /
  0------------M
In the above diagram we are looking at the plane containing the
vectors M and N. The height h of the parallelogram is simply
|N|Sin[t], where t is the angle M0N, the angle between M and N.
Thus the area of the parallelogram is |M||N|Sin[t]. The vector M x N
is pointing in a direction perpendicular to this plane (straight at
you), by the right-hand rule.
The curious thing about the cross product, then, is that the area of
the triangle determined by points M, N, and 0, is simply half the
magnitude of the cross product, because the parallelogram consists of
two congruent copies of triangle M0N. Thus, in the case of our vectors
A, B, C, we may choose any two of these to show that the area of the
triangular face determined by, say, vectors B and C, is simply
|B x C|/2.
But wait - there's more. We observe that the vector B x C is parallel
to the height from the vertex at A to the opposite face. If we draw
another picture in the plane that contains the vectors B x C, A, and
the length from A to the plane containing B and C, as follows,
  B x C
    |       A
    |      /|
    |     / |
    |    /  |d
    |   /   |
    0-------G
we see that B and C are now projected onto this plane and appear as a
single line. The important thing to realize is that in this picture,
vector A is in the same plane as B x C and the line segment AG. If we
let s = angle 0AG, then we may write the distance d of AG as simply
d = |A|Cos[s].
Therefore, the volume of the tetrahedron is
V = (1/3)d|B x C|/2
= (1/6)|A||B x C|Cos[s].
But s is also the angle between A and B x C, and if we recall the
formula [D2] for the dot product of two vectors, we find that
V = (1/6)|A . (B x C)|.
The product A . (B x C) is more commonly called the (scalar) triple
product, because (with some slight details omitted) the symmetry of
our argument reveals that
A . (B x C) = B . (C x A) = C . (A x B).
Now we may write A . (B x C) in terms of the coordinates using the
formulas [D1, C1]. We have
        |  i      j      k   |
B x C = | x3-x1  y3-y1  z3-z1 |,
        | x4-x1  y4-y1  z4-z1 |
and since A = (x2-x1, y2-y1, z2-z1), we immediately see that
              | x2-x1, y2-y1, z2-z1 |
A . (B x C) = | x3-x1, y3-y1, z3-z1 |.
              | x4-x1, y4-y1, z4-z1 |
Thus we have a formula for V in terms of the coordinates of P, Q, R,
and S. But shouldn't this formula be symmetric in the coordinates? It
is - it's just that it isn't obvious from looking at it. I leave it to
you to show that the above determinant is equivalent to
|x1 y1 z1 1|
|x2 y2 z2 1|
|x3 y3 z3 1|
|x4 y4 z4 1| .
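A quick numerical check of the triple-product formula above (an editorial illustration, not part of Doctor Pete's original answer; a minimal Python sketch):

```python
def tet_volume(p, q, r, s):
    """Volume via V = |A . (B x C)| / 6, with A, B, C the edge vectors from p."""
    a = [q[i] - p[i] for i in range(3)]
    b = [r[i] - p[i] for i in range(3)]
    c = [s[i] - p[i] for i in range(3)]
    # scalar triple product A . (B x C), expanded as a 3x3 determinant
    triple = (a[0] * (b[1] * c[2] - b[2] * c[1])
              - a[1] * (b[0] * c[2] - b[2] * c[0])
              + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(triple) / 6

# unit right tetrahedron: base area 1/2, height 1, so V = 1/6
print(tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

Because the determinant only changes sign under a swap of vertices, the absolute value makes the result independent of the vertex ordering.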
- Doctor Pete, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/51837.html","timestamp":"2014-04-19T12:46:46Z","content_type":null,"content_length":"10410","record_id":"<urn:uuid:138a8da5-c6fe-4fea-a259-1311c2636f64>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Newton's Integrability Proof
This figure is based on Newton's proof of the integrability of monotonic functions found in his Principia Mathematica (Book I, Lemma III). The error between the lower and upper sums, represented by
the yellow rectangles, slides over and fits in a rectangle whose height is the height of the graph and width is that of the broadest yellow rectangle. As the partition is subdivided, the error
approaches zero. In other words, the upper and lower sums approach the same value, the value of the integral of the function.
Click to add a point to the partition or click the subdivide button to divide each interval in half. The "slide" slider shows that the difference between the upper and lower sums fits inside a
rectangle whose height is the difference in the values of at the ends of the interval and whose base is the width of the widest interval of the partition.
Snapshots 1 and 2: the decreasing function with the partition subdivided a couple of times, with the (yellow) difference rectangles in their original position and slid over inside the outlined
rectangle, respectively
Snapshot 3: the increasing function with the (yellow) difference rectangles slid over inside the outlined rectangle
Snapshot 4: the discontinuous function with the (yellow) difference rectangles slid over inside the outlined rectangle | {"url":"http://demonstrations.wolfram.com/NewtonsIntegrabilityProof/","timestamp":"2014-04-16T04:34:23Z","content_type":null,"content_length":"43867","record_id":"<urn:uuid:82cec572-a228-40bc-be2b-db02c8cb036b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Passaic Park, NJ Math Tutor
Find a Passaic Park, NJ Math Tutor
My experience in tutoring spans a wide variety of subjects and disciplines. I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and
math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between.
22 Subjects: including calculus, SAT math, English, reading
...I’ve taught both high school and college including remedial courses at both levels. I have 30 years' experience in teaching, working with students from a wide variety of ethnic backgrounds as
well as adult learners. I have extensive experience in test preparation.
9 Subjects: including precalculus, trigonometry, statistics, algebra 1
...I would like to say last that I have several certifications and I've been teaching in challenging environments for all my professional career. Challenge is what I live for.I am certified to
teach all subjects K-8. I love teaching this age group, because they haven't had time to develop mental barriers to hinder learning.
13 Subjects: including algebra 1, algebra 2, prealgebra, reading
...I succeed as a tutor because I have a great rapport with students of all ages. I am high energy, and do not believe that tutor has to be boring to work. In fact, the more fun for them, the
more likely my students are to do what I am asking of them.
42 Subjects: including algebra 1, algebra 2, LSAT, biology
...As a student I performed very well on the SAT's (missing only one question), and I have also had much success tutoring other students and dramatically increasing their scores. I have had much
success both as test-taker (missing only one question in high school) and as a tutor dramatically increa...
34 Subjects: including calculus, chemistry, grammar, phonics
Related Passaic Park, NJ Tutors
Passaic Park, NJ Accounting Tutors
Passaic Park, NJ ACT Tutors
Passaic Park, NJ Algebra Tutors
Passaic Park, NJ Algebra 2 Tutors
Passaic Park, NJ Calculus Tutors
Passaic Park, NJ Geometry Tutors
Passaic Park, NJ Math Tutors
Passaic Park, NJ Prealgebra Tutors
Passaic Park, NJ Precalculus Tutors
Passaic Park, NJ SAT Tutors
Passaic Park, NJ SAT Math Tutors
Passaic Park, NJ Science Tutors
Passaic Park, NJ Statistics Tutors
Passaic Park, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/passaic_park_nj_math_tutors.php","timestamp":"2014-04-20T16:38:18Z","content_type":null,"content_length":"23996","record_id":"<urn:uuid:2f4860b8-9c3e-4d4e-858f-44055f67c70d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
Data Evaluation and Comparisons
→ Presentation of data comparison techniques, and the steps for evaluating a set of data
→ Definition of statistical hypotheses about datasets
→ t-tests for comparing the means of different datasets
→ Testing whether a mean is greater than, less than, or not equal to, another mean
→ Testing differences between standard deviations of datasets, for comparing precision
You have now seen how to generate a calibration curve for an instrument from a set of linear data, and then use the curve to determine the concentration of an unknown sample from a measured signal.
Let's say you have just taken a number of concentration readings from a sample of unknown concentration, and you want to determine whether the difference between your measured value and the stated value is statistically significant, or simply due to random error. Or that you measured the same sample with two different methods, but got two different concentration readings, and you want to determine whether the difference is due to random error, or if your methods are not equivalent. Statistical tests of significance, which are covered in this section, can be used to answer this sort of question.
In general, statistical tests are used for comparing two means or two standard deviations to see if they are significantly different. You can also compare a mean from measured data to an accepted
value to see if your sample measurements match the literature values.
There are a few steps for evaluating a dataset or comparing multiple sets of data. These steps are summarized in the following list:
1. Decide which test to perform - t-test for comparing means, and F-test for comparing standard deviations.
2. Choose a confidence level P, decide if the test should be 1- or 2-tailed, and determine the number of degrees of freedom.
3. Define the hypothesis as to whether your means or standard deviations are significantly different. You should define a null hypothesis and an alternate hypothesis.
4. Compute the test statistic using the appropriate formula, for either the t-test or the F-test.
5. Compare the test statistic to the tabulated value. Depending on whether the calculated value is greater than or less than the tabulated value, you accept or reject your hypothesis, and can
thereby conclude whether your data is significantly different or not.
Some of these steps have been covered in previous sections. If you need a refresher, just follow the appropriate link above. Others, such as hypotheses, one- and two-tailed tests, t-test and F-test,
are described in this section.
Finally, at the end of this section, there is a flow-chart that describes the steps and options that you would go through when analyzing data. You may find this chart very useful as a visual
reference for solving statistical and data analysis problems. | {"url":"http://www.chem.utoronto.ca/coursenotes/analsci/StatsTutorial/AdvStats.html","timestamp":"2014-04-20T23:39:36Z","content_type":null,"content_length":"5846","record_id":"<urn:uuid:980688ea-b023-49dc-9cf1-ec65cb5d50a6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Keil Is Going To Make 14 Pounds Of Mixed Nuts For ... | Chegg.com
Image text transcribed for accessibility: Keil is going to make 14 pounds of mixed nuts for a party. If Keil can spend $67.00 on nuts and peanuts cost $2.50 per pound and fancy nuts cost $6.50 per pound, how many pounds of each should he buy? The amount of peanuts is ___ pounds. The amount of fancy nuts is ___ pounds.
Elizzz answered 6 minutes later | {"url":"http://www.chegg.com/homework-help/questions-and-answers/keil-going-make-14-pounds-mixed-nuts-party-keil-spend-6700-nuts-peanuts-cost-250-per-pound-q3663362","timestamp":"2014-04-20T20:44:00Z","content_type":null,"content_length":"20568","record_id":"<urn:uuid:5d8a8cf1-112c-44fa-9aa5-65beb609b40b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fall 2013
Peter B Gilkey
202 Deady Hall,1-541-346-4717 (office phone) 1-541-346-0987 (fax) email: gilkey@uoregon.edu
Mathematics Department, University of Oregon, Eugene Oregon 97403 USA
TENTATIVE SYLLABUS - The reading and homework assignments are SUBJECT TO CHANGE
Math 315 Elementary Analysis Fall 2013 CRN 14660
Syllabus Version 2 as of 2 September 2013
MATH 315 CRN 14660. Meets Monday, Tuesday, Wednesday, Friday in 306 Deady from 10:00 to 10:50
Office Hours: Monday, Wednesday, Friday 09:00-10:00 or by appointment.
Text: Ross, Elementary Analysis: The Theory of Calculus, any edition.
Organization. Homework is probably the most important activity in the course in terms of helping you internalize the material. Homework will be due each Tuesday on the material of the previous week.
The Monday class period will be a discussion section for the homework to be due the subsequent day. The last 20 minutes will be devoted to a quiz.
100 points Homework and Quiz Average (The 2 lowest scores from the combined list of homework and quiz scores will be dropped)
100 points Exam #1 Wednesday 23 October 2013 (Week 4)
100 points Exam #2 Wednesday 20 November 2013 (Week 8)
200 points Final Exam 10:15 Monday, December 9, 2013.
An incomplete can be assigned when the quality of work is satisfactory but a minor yet essential requirement of the course has not been completed for reasons acceptable to the instructor (NOTE:
this grade requires a contract to be completed). According to faculty legislation, final exams may not be given early under any circumstances. Your final grade will be assigned on the basis of
the total point score of 500 points. Any student getting at least a B on the final will receive at least a C- in the course; no student can pass the course unless they receive a grade of D or
better on the final exam. You must bring your photo ID to all exams. You may bring a 3x5 inch index card with any formulas on it to any exam or quiz if you wish. Similarly, you may bring with you
a hand held graphing calculator to any exam or quiz if you wish.
Teaching Associate: Ekaterina Puffini. See Academic Calendar.
Assignments (Tentative and subject to change. Page numbers are from the first edition. The problems are uniquely identified by their number (e.g. 1.12))
□ Do page 5: 1.1, 1.2, 1.3, 1.12;
□ Do Page 12: 2.1, 2.2, 2.3, 2.4;
□ Do Page 18: 3.1, 3.2, 3.3, 3.6, 3.7
□ Do Page 25: 4.1(a-e,k-n,s-w), 4.2 (a-e,k-n,s-w), 4.3 (a-e,k-n,s-w), 4.7, 4.14;
□ Do Page 28: 5.1, 5.2, 5.3, 5.6.
□ Do Page 36: 7.1, 7.2, 7.3 (a,b,c,m,n,o,s,t), 7.5;
□ Do Page 42: 8.1, 8.2, 8.7, 8.8;
□ Do Page 52: 9.1, 9.2, 9.6, 9.8
□ Do Page 62: 10.1, 10.3, 10.6, 10.7, 10.8, 10.9, 10.10
□ Do Page 73 11.1, 11.3, 11.4, 11.5.
□ Do Page 77: 12.1, 12.3, 12.4, 12.5, 12.12, 12.14
□ Do Page 99: 14.1, 14.2, 14.3, 14.4, 14.6, 14.14.
□ Do Page 104: 15.1, 15.2, 15.3,15.4
□ Do Page 113: 16.1, 16.4, 16.6, 16.7.
□ Do Page 123: 17.1, 17.2, 17.3, 17.5, 17.6, 17.10, 17.14.
□ Do Page 131: 18.1, 18.2, 18.3, 18.6, 18.7, 18.12.
□ Do Page 176: 23.1, 23.2, 23.3, 23.5, 23.9;
□ Do Page 182: 24.1, 24.2, 24.3, 24.4, 24.5, 24.6, 24.7, 24.8
□ Do Page 190 but don't hand in: 25.2, 25.3, 25.4, 25.8, 25.12
Course objective The course serves as a transition between the computationally oriented calculus sequences (Math 251/2/3 and Math 281/2) and some of the more theoretically oriented 400 level courses
(the analysis sequence Math 413/4/5 and the complex variables sequence Math 412/3 come to mind as exemplars). More importantly, it serves as an entry into proof based mathematics supplementing the
course on proof theory (Math 307). The course will begin with an introduction to the basics - natural numbers, rational numbers, real numbers. A rigorous treatment of limits (sequential limits,
monotone sequences, Cauchy sequences, subsequences, limit points, lim sup, lim inf, etc.) will be given. A brief introduction to metric spaces will be given (compactness, connectedness, etc.).
Alternating series and integral tests will be discussed. Continuity, compactness, uniform continuity, and limits of functions will be discussed. If time permits, power series and L'Hospital's rule
will be treated. At this stage in their mathematical education, students should be familiar with the mechanics of calculus. What this course will stress are the rigorous foundations of the subject -
there will be lots of epsilon-delta proofs.
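To illustrate the flavor of such arguments (an example added here for illustration; it is not part of the official syllabus), a typical epsilon-style limit proof runs:

```latex
\textbf{Claim.} $\lim_{n\to\infty} \tfrac{1}{n} = 0$.

\textbf{Proof.} Let $\varepsilon > 0$. By the Archimedean property there is an
$N \in \mathbb{N}$ with $N > 1/\varepsilon$. Then for every $n > N$,
\[
  \Bigl|\tfrac{1}{n} - 0\Bigr| = \tfrac{1}{n} < \tfrac{1}{N} < \varepsilon ,
\]
so the sequence $(1/n)$ converges to $0$. \qed
```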
Learning Outcomes Students must be able to demonstrate an understanding of the nature of mathematical proof by proving various assertions concerning limits. They should be able to not only calculate
but prove their answer for various limits (sequential limits, monotone sequences, Cauchy sequences, subsequences, limit points, lim sup, lim inf, etc.). They should be able to give proofs related to
compactness, connectedness, etc. as well as to compute and prove the correctness of the calculations using the alternating series test, the integral test, and other tests. They should be able to give
proofs that deal with continuity, compactness, uniform continuity, and limits of functions. What is crucial is the ability to give rigorous proofs of the epsilon-delta sort.
Mathematics Department Undergraduate Grading Standards November 2011 There are two important issues that this grading policy recognizes.
• (1) Mathematics is hierarchical. A student who is given a grade of C or higher in a course must have mastery of that material that allows the possibility of succeeding in courses for which that
course is a prerequisite.
• (2) Some mathematics courses are primarily concerned with techniques and applications. In such courses student success is measured by the student's ability to model , successfully apply the
relevant technique, and bring the calculation to a correct conclusion. The department's 100-level courses and most calculus courses are examples in this category although these are not the only
examples. Other courses are primarily concerned with theoretical structures and proof. In such courses student success is measured by the student's ability to apply the theorems and definitions
in the subject, and to create proofs on his or her own using the models and ideas taught during the course. Many courses are partly hybrids incorporating both techniques and applications, and
some element of theory. Some lean more toward applications, others more toward theory. This course has both applications and theory.
Rubric for applied courses:
• A: Consistently chooses appropriate models, uses correct techniques, and carries calculations through to a correct answer. Able to estimate error when appropriate, and able to recognize
conditions needed to apply models as appropriate.
• B: Usually chooses appropriate models and uses correct techniques, and makes few calculational errors. Able to estimate error when prompted, and able to recognize conditions needed to apply
models when prompted.
• C: Makes calculations correctly or substantially correctly, but requires guidance on choosing models and technique. Able to estimate error when prompted and able to recognize conditions needed to
apply models when prompted.
• D: Makes calculations correctly or substantially correctly, but unable to do modeling.
• F: Can neither choose appropriate models, or techniques, nor carry through calculations.
Modeling, in mathematical education parlance, means the process of taking a problem which is not expressed mathematically and expressing it mathematically (typically as an equation or a set of
equations). This is usually followed by solving the relevant equation or equations and interpreting the answer in terms of the original problem.
Rubric for pure courses:
• A: Applies the important theorems from the course. Constructs counterexamples when hypotheses are weakened. Constructs complete and coherent proofs using the definitions, ideas and theorems from
the course. Applies ideas from the course to construct proofs that the student has not seen before.
• B: Applies the important theorems from the course. Constructs counterexamples when hypotheses are weakened. Constructs complete and coherent proofs using the definitions, ideas and theorems from
the course.
• C: Applies the important theorems from the course when the application is direct. Constructs simple proofs using the definitions when there are very few steps between the definitions and the conclusions. Explains most important counterexamples.
• D: Can do some single step proofs and explain some counterexamples.
• F: Unable to do even single step proofs or correctly use definitions.
Many courses combine pure and applied elements and the rubrics for those courses will have some combination of elements from the two rubrics above. Detailed interpretation of the rubrics depends on
the content and level of the course and will be at the discretion of instructors. Whether to award grades of A+ is at the discretion of instructors.
Academic dishonesty
Academic Misconduct: The University Student Conduct Code (available at conduct.uoregon.edu) defines academic misconduct. Students are prohibited from committing or attempting to commit any act that
constitutes academic misconduct. By way of example, students should not give or receive (or attempt to give or receive) unauthorized help on assignments or examinations without express permission
from the instructor. Students should properly acknowledge and document all sources of information (e.g. quotations, paraphrases, ideas) and use only the sources and resources authorized by the
instructor. If there is any question about whether an act constitutes academic misconduct, it is the students' obligation to clarify the question with the instructor before committing or attempting to commit the act. Additional information about a common form of academic misconduct, plagiarism, is available at http://library.uoregon.edu/guides/plagiarism/students/index.html
To rest on the blue of the day, like an eagle rests on the wind, over the cold range, confident on its wings and its breadth. | {"url":"http://pages.uoregon.edu/gilkey/dirCourse/M315-F13.html","timestamp":"2014-04-20T08:16:06Z","content_type":null,"content_length":"14053","record_id":"<urn:uuid:24bd6cc0-1367-4b50-8e5b-e67e0088ab31>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computation
Computation is a general term for any type of information processing. This includes phenomena ranging from human thinking to calculations with a more narrow meaning. Computation is a process following a well-defined model that is understood and can be expressed in an algorithm, protocol, network topology, etc.
Classes of computation
Computation can be classified by at least three orthogonal criteria: digital vs analog, sequential vs parallel vs concurrent, batch vs interactive.
In practice, digital computation is often used to simulate natural processes (for example, Evolutionary computation), including those that are more naturally described by analog models of computation
(for example, Artificial neural network). In this situation, it is important to distinguish between the mechanism of computation and the simulated model.
Computations as a physical phenomenon
A computation can be seen as a purely physical phenomenon occurring inside a closed physical system called a computer. Examples of such physical systems include digital computers, quantum computers,
DNA computers, molecular computers, analog computers or wetware computers. This point of view is the one adopted by the branch of theoretical physics called the physics of computation.
An even more radical point of view is the postulate of digital physics that the evolution of the universe itself is a computation - Pancomputationalism.
Mathematical models of computation
In the theory of computation, a diversity of mathematical models of computers and their software are defined. A computation is considered as the evolution over discrete time epochs of such a model.
Typical mathematical models of computers are the following:
Typical mathematical models of computer software are the following:
Different mathematical models of computers (as well as programming languages) can be classified according to their expressive power, see, for example, the Chomsky hierarchy. There are also other
classifications of computations than the Chomsky hierarchy.
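As a concrete illustration (a sketch added here; the example language is chosen arbitrarily), the lowest level of the Chomsky hierarchy, the regular languages, corresponds to finite state automata, which are easy to simulate:

```python
# Deterministic finite automaton accepting binary strings with an even number
# of 1s (a regular language, the weakest class in the Chomsky hierarchy).
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word: str) -> bool:
    state = "even"                        # start state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"                # single accepting state

print(accepts("1001"))  # True  (two 1s)
print(accepts("10"))    # False (one 1)
```

More expressive models in the hierarchy (pushdown automata, Turing machines) extend this state-transition picture with unbounded memory.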
The word computation has an archaic meaning (from its Latin etymological roots), but the word has come back into use with the rise of a new scientific discipline: computer science.
See also | {"url":"http://www.reference.com/browse/computation","timestamp":"2014-04-17T04:49:14Z","content_type":null,"content_length":"83911","record_id":"<urn:uuid:c9dfbb4a-96e8-4865-9743-40c389678beb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: December 2007 [00606]
[Date Index] [Thread Index] [Author Index]
Re: ClearAll[f]; f[x_] := x^2; f[y_] :=y^4; (*What is:*) f[2]
• To: mathgroup at smc.vnet.net
• Subject: [mg84397] Re: ClearAll[f]; f[x_] := x^2; f[y_] :=y^4; (*What is:*) f[2]
• From: Albert Retey <awnl at arcor.net>
• Date: Fri, 21 Dec 2007 03:22:25 -0500 (EST)
• References: <fkcueh$5d9$1@smc.vnet.net>
> ClearAll[f]; f[x_] := x^2; f[y_] :=y^4; (*What is:*) f[2]
> Evaluating this line in Mathematica 5.2 or Mathematica 6 returns 16. This makes sense, because the second definition replaces the first, as we can see when ?f returns:
> Global`f
> f[y_]:=y^4
> But in _A_Physicist's_Guide_to_Mathematica_ on p.314, Patrick Tam shows an example like this returning the other answer, 4, defined in the first definition. He then demonstrates that ?f returns:
> Global`f
> f[x_] := x^2
> f[y_]:= y^4
> He says his book was developed with Mathematica 2.2 and a prerelease of Mathematica 3 and is compatible with both.
> He goes on to explain:
> "Contrary to expectation, Mathematica used the first definition. The ? operator reveals that Mathematica stores both definitions in the global rule base, giving higher priority to the first definition. (This problem cannot, perhaps, be called a bug because developers of Mathematica are well aware of this design flaw, which is quite difficult to mend....)"
> What is he talking about? Did Mathematica 2.2 and 3 treat this differently? If earlier versions worked in this surprising way, there must have been a reason - what was it? Was it changed to prevent surprises like this example? Did changing it create other unfortunate consequences? Was Tam just wrong? Or do I misunderstand?
I think the statement is still true. Usually mathematica orders the
definitions so that more specific definitions are tested before more
general ones. When there is a real ambiguity concerning which pattern is
more specific the definitions are tested in the order they were entered.
For this very simple case (where just the name of the pattern is
different) it is not really an ambiguity and thus obviously can and has
been overcome in newer versions. Look at the following for an example
which shows that the behavior still exists for ambiguities which cannot be resolved as easily as the above:
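The Mathematica input referred to here did not survive in the archive. A reconstruction consistent with the surrounding discussion (the function name and the exact definitions are guesses) would be:

```mathematica
ClearAll[g];
g[x_?NumericQ] := x^2;
g[x_?IntegerQ] := x^4;
g[2]    (* both patterns match an integer, so the first-entered definition wins: 4 *)
g[2.5]  (* only the NumericQ definition matches: 6.25 *)
```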
Note that for an integer both definitions match and the one which is
entered first will win which you can check by changing the order of the
two definitions. While in this case in principle it could be argued that
the IntegerQ test is more specific than the NumericQ test, this would be
much harder to detect and there is no way for mathematica to decide
which of two such definitions is more specific if you would replace
IntegerQ and NumericQ with two user defined functions. | {"url":"http://forums.wolfram.com/mathgroup/archive/2007/Dec/msg00606.html","timestamp":"2014-04-16T22:30:36Z","content_type":null,"content_length":"27678","record_id":"<urn:uuid:afc91bb4-7f49-4d75-b8fb-57d54320fd9f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematics
The Department of Mathematics at the College of Staten Island (a college in the City University of New York) offers over 50 courses for math, engineering, and science students; as well as service
courses for the wider student body. We offer degrees with an emphasis on graduate school preparation, high-school teaching, and a joint degree with computer science.
Many of our majors become involved in undergraduate research, and our full-time faculty instruct and conduct research in a wide variety of areas of pure and applied mathematics [Math Sci-Net link].
Research areas include Applied Math, Discrete Math, Knot Theory, Logic, Number Theory, Probability, geometric analysis, differential geometry, nonlinear analysis, and Topology. The college is
conveniently located at the nexus of the enormous New York City area research community. | {"url":"http://www.math.csi.cuny.edu/","timestamp":"2014-04-17T00:58:48Z","content_type":null,"content_length":"13384","record_id":"<urn:uuid:e837ad4f-799f-401b-988d-881b8b0ef87d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
The non-simplicity of $SO(4)$ and $A_4$
It is well known that the alternating group $A_n$ is simple unless $n=4$. It is likewise well known that the special orthogonal group $SO(n)$ is essentially simple unless $n=4$ (specifically, the
group $SO(n)$ is simple for odd $n$ and the group $SO(n)/\{\pm I\}$ is simple for even $n\neq 4$).
My question is: are these two facts equivalent? The non-simplicity of $SO(4)$ can be proved by observing that the double-cover of $SO(4)$ is $SU(2)\times SU(2)$ which, being a direct product, is very
much not simple. This double-cover is closely related to properties of the quaternions (see Stillwell's Naive Lie Theory). Is there an analogous proof of the non-simplicity of $A_4$ based on a
geometric structure related to the quaternions?
P.S. This relationship is an example of the "field of one element" heuristic. Can that be formalized?
gr.group-theory lie-groups f-1
1 Lie algebra is simple iff Weyl group modulo center does not split as a direct product. Is it what you are after? – Misha Oct 16 '13 at 20:32
1 $SO(3)$ is simple but $A_3$ is not simple. – Yves Cornulier Oct 17 '13 at 11:54
@Misha: what is the Weyl group of an arbitrary Lie algebra? or do you mean semisimple (finite dimensional in characteristic zero [over algebraically closed field?]) – Yves Cornulier Oct 17 '13 at
@Yves, I was only discussing the semisimple case. – Misha Oct 17 '13 at 15:06
1 I added the f-1 tag. I wasn't sure how specifically the question "Can that be formalized?" was intended, but browsing through the other f-1 questions may be of interest. – Hugh Thomas Oct 17 '13
at 20:53
3 Answers
First, the non-simplicity of $A_4$ has a very beautiful proof, which I heard summarized by Gromov as : $2+2=4$, or rather $4=2+2$.
Or rather, there are 3 ways to pair 4 objects 2 by 2. The action of $S_4$ on the 4 objects therefore induces an action on the 3 pairings, hence a non-trivial morphism $S_4\to S_3$, whose kernel intersects $A_4$ in a non-trivial normal subgroup.
The sequel is a bit rough, but when looking at $SO(4)$, you can try to reinterpret the same proof, using the elements of a basis instead of the permuted objects. $SO(4)$ naturally acts on the set of direct orthonormal bases of $\mathbb{R}^4$; with each basis comes 3 decompositions of $\mathbb{R}^4$ into pairs of orthogonal planes (which should correspond in some sense to 3 complex structures satisfying the quaternionic relations, probably using that the planes are endowed with particular bases). So $SO(4)$ acts on such triples of complex structures, which if I remember well identifies with $SO(3)$, the unit tangent bundle of $S^2$ (choosing the first complex structure $I$ is picking a point on the unit sphere in purely quaternionic numbers, then you have left to choose an orthogonal pure unit quaternion). Considering dimension you get a relatively large non-simple subgroup.
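The pairing argument can also be checked mechanically. The following script (added for illustration, not part of the original answer) enumerates $S_4$ acting on the three pairings and verifies that the kernel is the Klein four-group sitting inside $A_4$:

```python
from itertools import permutations

# The three ways to split {0,1,2,3} into two pairs.
PAIRINGS = [
    frozenset({frozenset({0, 1}), frozenset({2, 3})}),
    frozenset({frozenset({0, 2}), frozenset({1, 3})}),
    frozenset({frozenset({0, 3}), frozenset({1, 2})}),
]

def act(perm, pairing):
    """Apply a permutation (tuple of images) to a pairing."""
    return frozenset(frozenset(perm[i] for i in pair) for pair in pairing)

def sign(perm):
    """Sign of a permutation: +1 for even, -1 for odd."""
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                     if perm[i] > perm[j])
    return (-1) ** inversions

# Kernel of S_4 -> S_3: permutations fixing every pairing.
kernel = [p for p in permutations(range(4))
          if all(act(p, q) == q for q in PAIRINGS)]

print(len(kernel))                         # 4: the Klein four-group
print(all(sign(p) == 1 for p in kernel))   # True: the kernel lies in A_4
```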
Here is an argument using the finite quaternion group.
Let $Q_8$ be the quaternion group of order $8$, namely $$Q_8 = \{\pm 1, \, \pm i, \, \pm j, \, \pm k\}.$$ It is well known that $\textrm{Aut}(Q_8)=S_4$, and this is usually proven by
constructing an explicit isomorphism between $\textrm{Aut}(Q_8)$ and the symmetry group of the cube, see for instance here.
On the other hand, since $\textrm{Z}(Q_8)=\{\pm 1\}$, one also has $$\textrm{Inn}(Q_8)=Q_8/\textrm{Z}(Q_8)=V_4,$$ where $V_4$ denotes the Klein group of order $4$, which is isomorphic to
$C_2 \times C_2$.
Finally, the inner automorphism group is always a normal subgroup of the full automorphism group, so the argument above shows that $S_4$ contains a normal subgroup isomorphic to $V_4$.
Using again the identification of $\textrm{Aut}(Q_8)$ with the symmetry group of the cube, it is not difficult to show that such a normal $V_4$ consists of even permutations, in other
words it is contained in $A_4$.
This shows that $A_4$ is not simple.
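This, too, can be checked by direct computation. A small script (illustrative, not part of the original answer) multiplies out $Q_8$, finds its center, and counts the distinct inner automorphisms:

```python
# Q_8 elements are (sign, letter) with letter in "1ijk"; products of the basic
# units follow the quaternion relations i^2 = j^2 = k^2 = ijk = -1.
BASE = {
    ("1", "1"): (1, "1"), ("1", "i"): (1, "i"), ("1", "j"): (1, "j"), ("1", "k"): (1, "k"),
    ("i", "1"): (1, "i"), ("i", "i"): (-1, "1"), ("i", "j"): (1, "k"), ("i", "k"): (-1, "j"),
    ("j", "1"): (1, "j"), ("j", "i"): (-1, "k"), ("j", "j"): (-1, "1"), ("j", "k"): (1, "i"),
    ("k", "1"): (1, "k"), ("k", "i"): (1, "j"), ("k", "j"): (-1, "i"), ("k", "k"): (-1, "1"),
}

def mul(a, b):
    s, letter = BASE[(a[1], b[1])]
    return (a[0] * b[0] * s, letter)

Q8 = [(s, l) for s in (1, -1) for l in "1ijk"]

def inv(x):
    return mul(x, mul(x, x))   # every element of Q_8 satisfies x^4 = 1

center = [z for z in Q8 if all(mul(z, g) == mul(g, z) for g in Q8)]
print(len(center))   # 2: the center is {+1, -1}

# Each x gives the inner automorphism g -> x g x^{-1}; collect distinct maps.
inner = {tuple(mul(mul(x, g), inv(x)) for g in Q8) for x in Q8}
print(len(inner))    # 4 = |Q_8| / |Z(Q_8)|, the Klein four-group V_4
```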
Here are some general remarks which hold for arbitrary semisimple Lie algebras ${\mathfrak g}$ relating its algebraic properties to that of its Weyl group $W$.
1. ${\mathfrak g}$ is simple if and only if the standard linear action of its Weyl group $W$ is irreducible. However, $W$ itself might still split nontrivially as a direct product (this
happens in few cases); let's call such $W$ "reducible". This was analyzed by Luis Paris in http://arxiv.org/pdf/math/0412214v2.pdf. The main focus of his paper was on infinite Coxeter
groups, but he also classified reducible finite Coxeter groups whose root system is irreducible (section 7). In all cases, the factor of $W$ by its center is irreducible, i.e., does not
split nontrivially as a direct product. Therefore, the statement is: ${\mathfrak g}$ is simple iff $W/Z(W)$ is irreducible.
2. As for simplicity of $W$ itself, in the case of "classical" root systems, $W$ always has the form of a semidirect product of a permutation group and a finite abelian group. Thus, if one is
willing to divide by the abelian normal subgroup and then pass to the alternating group, then in the classical case one does obtains that simplicity of ${\mathfrak g}$ is equivalent to
simplicity of a certain "subquotient" of $W$. I do not know enough about exceptional Coxeter groups to make a similar conclusion in general.
I guess you work over an algebraically closed field of characteristic zero? Or how do you define the Weyl group? – Yves Cornulier Oct 17 '13 at 23:12
@Yves: Yes, I do. I did not think through the argument to conclude what happens over the reals. However, one can define root system and Weyl groups for real semisimple Lie groups. – Misha
Oct 17 '13 at 23:43
Creative grading schemes
August 6, 2010
Posted by Noah Snyder in Uncategorized.
This fall I’ll be teaching my first regular college class (I’d only taught sections at Berkeley, though I suppose the summer sophomore tutorial I taught at Harvard might count). It’s on group
representation theory, which is my favorite subject, so I’m excited about it. I was just thinking about some possible homework problems, and I got to thinking about creative and unusual grading
schemes I’ve seen in previous classes I’d taken, and figured that might make a fun blog discussion topic. (Since this is my first time teaching I won’t be experimenting with any unusual grading this
time around, though I think it might be interesting to try one of these in the future.)
At the Ross summer math program if you don’t answer a problem satisfactorily then you get a REDO. This means you’re expected to go back and redo the problem and get it right. I’ve never seen this
tried in a regular class, but I think it could be a good idea for an “intro to proof writing” class. The point being that in such a class the material itself isn’t super important, and so if you do
fewer homework problems total but learn how to do them right that’s a good tradeoff.
Grading out of many points:
When I took group representation theory from Richard Taylor, the exams were graded out of a ridiculous number of points. A 5 question midterm would be out of 600 or so points. At first glance this
seems silly (and it certainly would be a bad idea for a class with multiple graders where you want consistency between graders), but it actually works very well. Here’s the point: if someone does
something you don’t like no matter how small it is you can take off points! Unclear sentence? Minus 1. Used the wrong terminology? Minus 3 points. This way the grader can effectively communicate
relatively small shortcomings in your write-ups, which wouldn’t be possible if you were grading out of a smaller number of points.
Perfection bonus
This idea comes from a class that I didn't take our first year of grad school with Givental, so perhaps someone who took the class can correct me on the details. The basic idea was that for the final, in addition to the points you got for each problem, there was a pool of extra points which you got if you never wrote anything false on the exam. But as soon as you wrote something that was wrong you
lost those points. This is good training for graduate students who soon won’t have graders telling them when they made a mistake, and it’s a good way to keep people from spewing nonsense in an
attempt to get partial credit. If I remember correctly the perfection bonus was quite substantial (I want to say it was worth as much as a full question on an exam where you need a little more than 2
correct solutions.)
What do people think of these ideas? Any other interesting grading schemes you’ve heard of?
I like the perfection bonus. A corollary that I've heard of is to assign at least a 20% fraction of the points if you merely write "I don't know" instead of blathering on.
I’m interested in hearing more details about the redo policy at Ross. What sort of feedback was given with each redo request, and how stringent was the criteria with respect to, e.g., writing
style? After I heard you talk about the redo idea in grad school, I mentioned it to a friend who implemented some version of it when teaching some undergraduates. Unfortunately, the students only
fixed the specific problems that were explicitly pointed out in each iteration, rather than (say) polishing the presentation in general.
hi noah,
many points: will you be grading, or will there be a separate grader? i’ve done this and it works great when i grade — but when i’ve had graders, they *hated* it, and did a lousy job, so the
students hated it too.
i’ll also be experimenting with redo’s in my quantum class this year, though for a slightly different reason: I want to force students to go through the solution sets *before* the night before
the final exam. so each week, in addition to the weekly problem set, the students will turn in a corrected version of their previous week’s problem set, allowing them to recoup some fraction of
the point previously lost. i’ve never heard of anyone doing this but surely it’s been tried — would love any feedback or advice on such a strategy.
wouldn’t it be nice if there were data on how various strategies panned out?
good luck!
So such things at Ross certainly varied from counselor to counselor. I typically met with students to go over their problems and explain what I was unhappy about with the problems that got redos,
and also wrote short comments on the sets so that they’d be reminded about the issues when they went back through. I don’t think I very often gave redos over issues like writing style, but if the
student was generally unclear in their writing I’d pick some particularly bad example and make them redo that one until it was clearly explained.
It’s true that things don’t necessarily work so well with students who are dedicated to putting in the minimal amount of effort. If people are happy to copy their old solution word for word and
then make small changes then certainly there’s less value in the technique.
For exams, I write minimal comments on the students’ papers, and then I give them about a week to fix any mistakes; they can earn 1/3 of a point for each point they missed originally. And yes, I
do get students who got 49 out of 50 who turn in revisions to boost their score to 49.33… You have to grade twice, but the first time is sort of quick because you don’t have to write much, and
the second time is really quick because usually they have figured out how to do it the second time through.
For homework, this isn’t really a grading scheme, but I’ve been using “portfolios” for a while for intermediate to advanced classes: for maybe a half dozen of the homework problems for the term,
the students write several drafts. The first draft of every problem gets graded by the grader if I have a grader, or by me if not. The second draft (for only one or two of the problems) gets
looked at in peer groups — they pass out copies for their colleagues to take home and evaluate, and then in the next class they spend some time going over their comments. I may let them turn in a
draft (of one or two problems) to me for further comments. Then at the end of the quarter they turn in final drafts of everything, with a cover sheet explaining which are their strongest
problems. Then I grade the portfolio, focusing on the strongest problems, evaluating both mathematical content and exposition.
This is perhaps not so uncommon – I once had a professor who wrote gigantic exams that had a total of 400-600 points available. The goal of the student was to answer 100 points worth of questions
(of the student’s choosing) correctly. The questions varied widely in point value. The student could state a and prove a big result for 60 points, or write a definition for 5 points. Many of the
questions on the exam were taken directly from assignments.
From my perspective as a student, these exams were beautiful. I knew that every ounce of studying I did would translate directly into a better exam score, as the tests were so big that every last
topic would appear on each exam. The downside: We (the students) all knew that it was okay to skip some topics while preparing for the exam, because we could skip 3/4 of the questions on the
My impression was that the professor just wanted us to show him that we learned *something*, in some detail. This exam style is probably not well-suited for a professor that wants to see if the
students have mastered the basics of all of the topics in the course.
I’ve definitely reduced the number of points for each exam problem over the years. A typical hour exam now usually has only 30-40 points. I find this makes it quicker to grade.
I also like having the total number of points be out of something other than 100 — it lets you control how students interpret their grade as they can’t easily apply the usual A = 90, B = 80,
> I also like having the total number of points be out of something other than 100 — it lets you control how students interpret their grade as they can't easily apply the usual A = 90, B = 80,
That is a _great_ idea.
I was going to mention something I do that I think of as the Caltech system — A = 60, B = 50, approximately. It means that when people say “What do I need to do to get a B in this class?” I can
say “Get a 65 on the final” not “Drop it and take it again”. Similarly, “Yeah, you got a 30, but that’s not going to totally ruin you”.
It means that the grades are, even more than usual, determined by the tests and not the HW. It has the good side that people don’t sweat the difference between 90 and 93, believing it’s the
difference between B and A. On the other hand, surprisingly many people act as “He can’t possibly mean what he said, that my terrible 55 was a B+” and are depressed by what is just a number.
The redo system is wonderful in a proofs class. This was utilized in my “Intro to Higher Mathematics” class, which was essentially Proof Writing 101. Sometimes the only way to figure out proofs
is to write a whole bunch of them. I highly recommend this method for your future classes.
I would probably enjoy having a test out of 600 points. The points you make about writing style and nitpicking would actually be helpful–if annoying. But in reality, I feel that you could still
mark those on tests and just not take points off.
I use the redo scheme for my graduate courses. As with all these
things, the devil’s in the details. I hand out a long list of problems
and the students are required to hand in some minimum number
during the course; they can do more (and often do).
I give 5 marks for a correct solution, 4 for something basically correct (where they'd learn nothing by rewriting it); otherwise it's redo and they earn at most four marks. It's amazing how it
improves the quality of the submitted work.
You need to set up some system of due dates, or at the end of the course they’re still working on the early material.
I do not give a final exam in these courses. I reserve the right to impose one, but this is so students with no time management
skills can earn an acceptable grade.
Here’s a measure (not the only one) that is used to compare students taking final exams at Oxford: the sum of the squares of the scores on each question. It has a similar effect to the perfection
bonus, but isn’t all-or-nothing.
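For comparison, here is a tiny numeric sketch (hypothetical per-question scores) of how a sum-of-squares measure rewards depth over breadth, unlike a plain total:

```python
# Two candidates with equal totals: one spreads marks thinly, one nails
# a few questions completely.
even  = [5, 5, 5, 5]
spiky = [10, 10, 0, 0]

total       = lambda scores: sum(scores)
sum_squares = lambda scores: sum(s * s for s in scores)

assert total(even) == total(spiky) == 20
assert sum_squares(spiky) > sum_squares(even)   # 200 vs 100: depth wins
```

Like the perfection bonus, deep solutions are rewarded, but the reward grows smoothly rather than all-or-nothing.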
Here are a couple related grading tips used in some courses at Harvey Mudd; I think they were developed by Michael Orrison for courses in algebra and representation theory.
Redos: problem sets can be redone, due two weeks after the original due date. The grade given is simply the highest grade earned. The original is resubmitted with rewrites.
This only works if you have lots of graders. It seems to really reinforce the importance of understanding HW problems and writing solutions up clearly.
Nonstandard scale: 95 and up is A+, 85-95 is A, 80-85 is A-, etc. If you use only integers when grading, this gives more room for variance, instead of leaving over half the scale for different
flavors of FAIL. And it has the positive effect on perception Allen mentioned above.
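A sketch of that mapping, exactly as far as the comment spells it out (the cutoffs below A- are not given, so they are deliberately left unspecified here):

```python
def letter(score):
    # 95 and up is A+, 85-95 is A, 80-85 is A-; the scale's lower grades
    # are not specified in the comment, so everything else is lumped.
    for cutoff, grade in [(95, "A+"), (85, "A"), (80, "A-")]:
        if score >= cutoff:
            return grade
    return "below A-"

assert letter(96) == "A+" and letter(90) == "A" and letter(82) == "A-"
```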
Greatest component: say the course has 2 “midterms”, homework, and a final exam, and they are weighted equally in the course grade. Instead of assigning each 25%, make each worth 20% with the
final fifth being the greatest of the components. So, if a student kicks ass on their HW, it may be worth 40% of the course grade. Similarly for the final.
This helps give the message that it’s never too late, and that HW is important, etc. And of course you can augment for different numbers of components or unequal weight etc.
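As a sketch, the scheme just described (equal components, plus the greatest one counted a second time) might be computed like this; the component names and scores are made up:

```python
def course_grade(components):
    """Each component weighted equally, plus the best one counted twice:
    with four components each is worth 20%, and the strongest 40%."""
    scores = list(components.values())
    return (sum(scores) + max(scores)) / (len(scores) + 1)

g = course_grade({"midterm1": 70, "midterm2": 75, "homework": 95, "final": 80})
assert abs(g - 83.0) < 1e-9   # homework, the best component, counts double
```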
I have some experience with the “redo” policy both from a grader and from a student point of view. You need to be very careful and deadline strict (for student and grader) for that to work. As an
example, I took a class where we had:
week 1: exercise class (1)
week 2: exercise class (2) + hand in homework assignment from week 1.
week 3: exercise class (3) + hand in homework assignment from week 2 + get back assignment week 1.
week 4: exercise class (4) + hand in homework assignment from week 3 + get back assignment week 2 + hand in redo from week 1.
week 5: exercise class (5) + hand in homework assignment from week 4 + get back assignment week 3 + hand in redo from week 2 + get back redo from week 1.
So there was a 5 week delay between the exercise class and the final homework grading, if there were no delays anywhere. If you do this for 20 weeks, chaos is sure to ensue without very strict deadlines.
At my university we’ve been using the redo-grading for a numerical methods course for quite a while. This really forces the students to learn the intimate details of some selected methods, but it
is a bit more painful for the lecturers who have to correct the same assignments over and over again. No one is allowed to present themselves for examination until they get all assignments right,
so there is a practical limit to how many times a student can redo.
I have also seen this scheme in a quite different course, when I studied molecular modeling at the chemistry department. What I found attractive about this scheme as a student, was that I could
apply a different strategy to the topics I was unsure about. Instead of utilizing rhetorical skills to hide my shortcomings (like I might do if I knew that my mistakes would be reflected in the final grades), I could do the opposite and be particularly elaborate on matters I knew less well in order to get problems pointed out.
I had a first proofs class (Intro to Analysis) where we could do redos for problem sets. Problems were originally graded out of ten points, and you could submit them the next week for a maximum
of eight points.
As a student, I really liked it, because it gave me an incentive to go back to problems I had never figured out, or didn’t fully understand, which otherwise fell on the endless list of “things
that seem like a good idea but are never going to happen.” It was especially nice in a first proofs class because I think people newly exposed to real mathematics are a little more prone to going
into mental shock at some idea or other, and forcing/giving them an opportunity to process it for another week made it a little easier to stay afloat.
One downside was that it made procrastinating more tempting. (And now that I’m on the other side, I can certainly recognise that it could cause an oppressive amount of grading to pile up.)
When I taught an undergraduate topology class, I used a redo system for the first half of the course (point-set) but not for the second half (algebraic topology & classification of surfaces). I
had only 7 students in the course, but it worked really well since it leveled the playing field at the beginning of the course between students about to graduate and students straight out of
“Intro to Abstract Math”. During the point-set portion of the course, I would assign weekly problem sets which I would comment on and assign a “tentative grade” (if the problem set was good
enough to earn one). The students then had to turn in all of their point-set homework one week after we concluded the point-set section. They also had to turn in their original drafts so in
regrading I simply had to focus on the errors and poor communication I had pointed out in the original draft.
Not using the redo system in the second half of the semester gave the students the feeling that they had "graduated" and were now capable of meeting my high expectations for correctness and communication.
In the fall, I will be teaching “Intro to Abstract Math” for the first time and am pondering the best way to use a redo system in that class (for 30 people). In particular, I want to be sure not
to provide any incentives for students to slack-off the first time through a problem set. A colleague uses a system whereby students are only allowed to redo a problem set if the original is of
sufficiently high quality — and he keeps the standards for reaching that threshold a secret. He has success with that method, but I prefer to be more upfront about what students exactly need to
do to meet my expectations. (Not that I have no subjectivity in grading …)
Hi Noah!
Just thought I’d add, like others, that the ‘Intro to Proofs’ class at Brandeis is always graded on the redo system for problem sets (and maybe for tests as well.) The actual grade is based
entirely on the revised version.
Tim Riley beat me to the mention of the Oxford system. It’s worth saying that this was for the final exams, and was used in conjunction with three other measures. One of which was the ordinary
sum of all the marks. The other two started by assigning each question a letter grade (α, β, γ) and then one was the number of αs and the other was twice the number of αs plus the number of βs.
To get a first (top class degree), you had to be above a certain level on all *four* of the systems.
However, grading an exam is very different to grading homeworks. An exam is a test to see if a student has achieved enough to pass the course. Homeworks are part of the teaching of that course.
I’m probably not alone here in thinking that mathematics is best learnt by actually doing it (it’s not a spectator sport!), from which one could deduce that the homeworks are the most important
part of the course.
I would say that you’re in danger of going about this the wrong way. You should start with your aims and objectives for the homeworks (with apologies for using the A&O phrase!). What do you
intend the students to achieve by doing the homeworks? Then you need to think about the following:
1. How do I encourage my students to actually do the homeworks? This is what, for unmotivated students (ie most of ‘em), schemes like “Homeworks count for X% of the exam” are aimed at. However,
I’d prefer “You must do N homeworks to be allowed to take the exam”.
2. How do I encourage my students to think about the homeworks after they’ve done them? This is where “redos” can help. This obviously needs tying in with whatever scheme is used in part 1.
3. How do I make life easy for my graders? Since the graders are involved in giving the feedback, to make it useful feedback, it needs to be easy for them to do (otherwise they’ll not do it well,
or at least, there’s no guarantee that they will).
So a grading scheme for homeworks has to do two jobs: encourage the students to do the homeworks, and give feedback to encourage the students to look at them again. These can actually be in
conflict at times, so I would recommend trying to have a scheme that has two components so that the two roles can be put into the two components and they won’t conflict.
(time for me to go and teach now, so I don’t have time to develop this further – which is probably just as well as I really only wanted to make the point that the grading policy should *follow*
the homework policy, not the other way around)
I had a professor at Caltech who did two-dimensional grading. He plotted exam scores versus homework scores, and gave the same grade to clusters.
Effectively a non-linear mapping between exam scores, homework grades, and the final grade.
He also thought that the final shouldn’t be weighted the same as the midterm, but he also thought that weighting it twice as much was too severe, so he took the geometric mean of the two.
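One reading of "took the geometric mean of the two" (an assumption; the comment does not spell it out) is that the final's weight is the geometric mean of the two candidate weightings, 1x and 2x the midterm's:

```python
import math

final_weight = math.sqrt(1 * 2)   # geometric mean of "equal" and "double"

def course_score(midterm, final):
    return (midterm + final_weight * final) / (1 + final_weight)

equal  = (60 + 90) / 2            # final weighted the same as the midterm
double = (60 + 2 * 90) / 3        # final weighted twice as much
geo    = course_score(60, 90)
assert equal < geo < double       # a compromise between the two schemes
```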
Elsevier maths journ… on Mathematics Literature Project… | {"url":"http://sbseminar.wordpress.com/2010/08/06/creative-grading-schemes/","timestamp":"2014-04-18T23:19:54Z","content_type":null,"content_length":"91901","record_id":"<urn:uuid:c63a8083-c3bb-43cd-8c8e-3a9b9ec9b9c4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
identify the vertex and the axis of symmetry of the graph of the function y = 3(x + 2)² – 3. (1 point)
vertex: (2, –3); axis of symmetry: x = 2
vertex: (–2, –3); axis of symmetry: x = –2
vertex: (2, 3); axis of symmetry: x = 2
vertex: (–2, 3); axis of symmetry: x = –2
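Since the function is already in vertex form, no algebra is needed; a quick sketch verifying which option is correct:

```python
# Vertex form: y = a*(x - h)**2 + k has vertex (h, k) and axis of symmetry
# x = h. Here y = 3*(x + 2)**2 - 3, so a = 3, h = -2, k = -3.
f = lambda x: 3 * (x + 2) ** 2 - 3

assert f(-2) == -3                 # the vertex (-2, -3) lies on the curve
assert f(0) == f(-4) == 9          # symmetry about the axis x = -2
# Correct choice: vertex (-2, -3); axis of symmetry x = -2.
```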
• one year ago
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5097223ee4b02ec0829c095a","timestamp":"2014-04-17T00:58:15Z","content_type":null,"content_length":"60995","record_id":"<urn:uuid:14fa6c46-17a8-4287-ac6a-d2cb7608a171>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conversion of a number from string to vector<int>
"Juha Nieminen" <(E-Mail Removed)> wrote in message
news:4dfd93f0$0$2848$(E-Mail Removed)...
> Paul <(E-Mail Removed)> wrote:
>>> Does anyone want to write an efficient function for converting a
>>> non-negative arbitrary-precision number in base 10 from string to
>>> std::vector<int>. The vector must represent the number in base B, where
>>> B
>>> is int and arbitrary. And each element in the vector represents the
>>> digit
>>> of the number in base B. The most significative digit must be on the top
>>> of the vector. The code must be portable and must not rely on types
>>> greater than int. Only the std library is allowed.
>>> For example:
>>> std::vector<int> v = f("253", 127);
>>> would give
>>> v[0] = 126
>>> v[1] = 1
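A minimal sketch of one way to meet the stated constraints: long division of the decimal digit string by B, so no intermediate value ever exceeds about 10*B and no type wider than int is needed. Python rather than C++ for brevity, but the loop ports directly to a std::vector<int>:

```python
def to_base(dec_str, base):
    """Digits of the base-10 number `dec_str` expressed in `base`, least
    significant digit first, so the most significant digit sits "on top"
    of the vector (matching v[0] = 126, v[1] = 1 for ("253", 127))."""
    digits = [ord(c) - ord('0') for c in dec_str]
    out = []
    while any(digits):
        rem, quot = 0, []
        for d in digits:            # long division of the digit string by base
            cur = rem * 10 + d      # cur < 10 * base, plain int arithmetic
            quot.append(cur // base)
            rem = cur % base
        out.append(rem)             # next digit of the result
        while len(quot) > 1 and quot[0] == 0:
            quot.pop(0)             # drop leading zeros of the quotient
        digits = [] if quot == [0] else quot
    return out or [0]

assert to_base("253", 127) == [126, 1]
```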
>> There is a function called atoi that may help you.
> Your incompetence and comprehension capabilities never cease to amuse.
> Care to actually give us actual code on how atoi() can be used for this
> task? (Hint: It can't.)
He seems to be trying to convert a string to an int, this is what atoi does.
What is incompetent about trying to provide a helpfull suggestion? | {"url":"http://www.velocityreviews.com/forums/t750131-conversion-of-a-number-from-string-to-vector-int.html","timestamp":"2014-04-20T14:22:52Z","content_type":null,"content_length":"65737","record_id":"<urn:uuid:fd4b8808-cc79-4322-b01b-005c222e9ac9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics Tutors
Arlington, VA 22203
Ivy League tutor for Exam Prep, Writing, and much more!
...My tutoring specialties include: Exam Prep (SAT, ACT, SAT II, AP, GRE) Writing (Particularly advanced essay/paper writing) History, Politics, Social Studies Literature Chemistry
- through Algebra II Basic Algebra skills stand as an important foundation...
Offering 10+ subjects including algebra 1, algebra 2 and geometry | {"url":"http://www.wyzant.com/Burke_VA_mathematics_tutors.aspx","timestamp":"2014-04-20T21:25:03Z","content_type":null,"content_length":"61147","record_id":"<urn:uuid:1cb512a4-9ca6-4d38-ab29-31c60105f5c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Java Operators
An operator is a symbol that operates on one or more arguments to produce a result. The Hello World program is so simple it doesn't use any operators, but almost all other programs you write will.
Operator Purpose
+ addition of numbers, concatenation of Strings
+= add and assign numbers, concatenate and assign Strings
- subtraction
-= subtract and assign
* multiplication
*= multiply and assign
/ division
/= divide and assign
% take remainder
%= take remainder and assign
++ increment by one
-- decrement by one
> greater than
>= greater than or equal to
< less than
<= less than or equal to
! boolean NOT
!= not equal to
&& boolean AND
|| boolean OR
== boolean equals
= assignment
~ bitwise NOT
?: conditional
instanceof type checking
| bitwise OR
|= bitwise OR and assign
^ bitwise XOR
^= bitwise XOR and assign
& bitwise AND
&= bitwise AND and assign
>> shift bits right with sign extension
>>= shift bits right with sign extension and assign
<< shift bits left
<<= shift bits left and assign
>>> unsigned bit shift right
>>>= unsigned bit shift right and assign | {"url":"http://www.cafeaulait.org/course/week2/03.html","timestamp":"2014-04-19T09:26:54Z","content_type":null,"content_length":"7211","record_id":"<urn:uuid:c8033301-89c6-4b62-975b-2c5cf8550642>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 2. PL excitation density dependence.
(a, b) Integrated intensities of the PL spectra versus excitation density at different temperatures for (a) sample S650 and (b) sample S850. Different symbols indicate the measurement temperatures: 15 K (squares), 70 K (circles), 130 K (upward triangles), 170 K (downward triangles), and 210 K (diamonds). The value of P is 70 W/cm². The dashed lines are fits of the experimental data. (c) Dependence of the fit parameter on temperature for samples S650 (full diamonds), S700 (triangles), S750 (open diamonds), S800 (circles) and S850 (squares).
Bietti and Sanguinetti, Nanoscale Research Letters 2012, 7:551, doi:10.1186/1556-276X-7-551
Download authors' original image | {"url":"http://www.nanoscalereslett.com/content/7/1/551/figure/F2?highres=y","timestamp":"2014-04-19T14:53:13Z","content_type":null,"content_length":"12647","record_id":"<urn:uuid:9521d5c3-898e-48e8-a0bf-7faeb3f706de>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
- User Profile for: amo_@_erf.net
User Profile for: amo_@_erf.net
UserID: 36676
Name: Amos Newcombe
Registered: 12/6/04
Total Posts: 6
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=36676","timestamp":"2014-04-19T12:24:29Z","content_type":null,"content_length":"12464","record_id":"<urn:uuid:657c3a57-69c7-4307-842f-c21625f399aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Hardball Times: Micah Owings the Pitcher vs. Micah Owings the Batter
05-14-2009, 12:11 PM #1
Join Date
Feb 2006
The Hardball Times: Micah Owings the Pitcher vs. Micah Owings the Batter
I came across this article this morning and thought I'd post it here for others to see.
Thou shalt sow, but thou shalt not reap
by Colin Wyers
May 14, 2009
On May 10, pitcher Micah Owings bought the Reds a shot at extra innings with a pinch hit home run. This brings up the perpetual question: Is Micah Owings best utilized as a hitter or a pitcher?
We should begin by talking a common language; it's possible to compare hitters to hitters and pitchers to pitchers with metrics like OPS and ERA respectively (albeit with the recognition that
there are better alternatives to those as well). But to compare apples to oranges we need a pan-fruit metric. As longtime readers might expect, my metric of choice for this purpose is Wins Above
Replacement. For those who aren't longtime readers, WAR is essentially an expression of how many wins a player is worth compared to the generally available replacements available, such as
minor-league journeymen or free agents willing to work for the league minimum.
[For a longer discussion of WAR, see my previous series "How to measure a player's value," parts one, two and three.]
So let's look at Micah Owings in two parts: the pitcher who is and the position player who could be.
Owings the pitcher
There are other good hitting pitchers in the league, like Carlos Zambrano, Jake Peavy and Yovani Gallardo. This doesn't ignite controversy the same way that Micah Owings does when he swings the
stick. This is because they have all established themselves as pitchers. Owings hasn't.
Owings owns a career 4.89 ERA and a career 4.80 FIP. These are not spectacular numbers, but they aren't terrible. Going into this season, Owings had accumulated 2.6 WAR in 250 innings, according
to Fangraphs. The average baseball player (hitter or pitcher) will accumulate 2.25 WAR in a full season (about 120 innings for a starting pitcher).
Again courtesy of Fangraphs, his projected FIP-ERA going forward is 4.59; that works out to almost five runs per game. In a league where the average is 4.65, and in a hitter's park like the Great
American Ballpark, that works out to a projected 1.33 WAR in 120 innings for Owings. (Probably a little better than that, since he would still accumulate some value from his hitting even as a
pitcher.) So a below average pitcher, to be sure, but it's possible that the Reds don't have a better starting pitcher than Owings available.
Owings seems to be comfortably nested in the grey area where he's neither a great asset or a great liability as a pitcher. But what about as a hitter?
Owings the hitter
The short answer here is that we don't know how well he'd perform as a hitter. For the sake of intellectual curiosity, we can go ahead and step through some calculations that seem reasonable and
should at least put us in the ballpark, but I want to start off right here with the caveat that the error bars on what we're doing here are huge.
Or to be more plain, this is basically a wild guess. An educated wild guess, maybe—is there such a thing? Anyway, if there is an educated wild guess, this is one.
And let's start off with the first of the two big, unsubstantiated assumptions that are required to make this guesstimate work out. Projection systems are largely based upon the premise of
regression to the mean, which is that over time, extreme observations become less extreme. Given a small number of observations, we would assume that a guy who has hit poorly will hit better than
he has so far, and a guy who has hit well will hit worse than he has so far. The more observations we have, the more we can have confidence in a player being above or below average as a matter of
true talent, rather than it simply being due to random chance.
The issue with regression to the mean here is determining the mean. For the majority of pitchers, regressing their hitting performance to the mean results in simply laughable numbers, because
pitcher hitting is really a separate thing from position player hitting, and that almost every pitcher is a well below average hitter. So simply feeding a pitcher's batting line into the typical
projection system will not provide usable results.
What we are going to assume here is that, if Owings were to devote himself to being a hitter full-time, that he would regress to the league mean of position players hitting, not the mean of
pitchers hitting. This has the benefit of making sense, at least. But it's a really unvalidated assumption, with few data points to go off of (outside of Babe Ruth and Rick Ankiel there's not a
lot of evidence to go around, and those two are both special cases indeed). If I had a giant robot following me around helping me perform sabermetrics, he would be flashing warning lights and
screaming "DANGER COLIN WYERS, DANGER!" But it's the best guess I have for the time being.
Using Sal Baxamusa's Marcels spreadsheet and Owings' career hitting stats thus far, we get a forecasted batting line of .285/.343/.478, or about a .350 wOBA. That works out to roughly six runs (or
a half-win) above average with the bat in 650 plate appearances.
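The "runs above average" step is the standard linear-weights conversion: subtract the league wOBA, divide by the wOBA scale, multiply by plate appearances. A sketch with placeholder league constants (the season constants the article used are not stated, and the result is sensitive to them):

```python
def batting_runs_above_avg(woba, pa, lg_woba=0.330, woba_scale=1.15):
    # lg_woba and woba_scale are season-dependent league constants;
    # the values here are assumed placeholders, not the article's.
    return (woba - lg_woba) / woba_scale * pa

runs = batting_runs_above_avg(0.350, 650)
```

With these placeholder constants a .350 wOBA over 650 PA comes out near +11 runs; the article's "roughly six runs" implies different constants.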
Now, as it just so happens, the positional adjustment for a corner outfield spot in 650 plate appearances is right around seven runs. So if we're correct, Micah Owings could be about a league
average player in the outfield, assuming he plays defense about as well as a typical corner outfielder. Can he do that?
Honestly, this is even more of a guess than his hitting projection. He's a young guy and apparently athletic (and he obviously has the arm to play somewhere like right field). So he could be. Or
he could not be. I know I'm being vague here, but that's all I can be with the data at hand.
So, using a set of favorable assumptions, it looks like we could make a case for Micah Owings being a league-average player in the outfield. Again, very large error bars surround this entire
endeavor and it's possible that he's an utter trainwreck as an outfielder and hits like a utility infielder. We really don't know for sure. But let's assume for a second that this WAG is
essentially correct. Should the Reds try to convert him to an outfielder right now?
Transactional costs
There are real costs to trying to convert Owings to a full-time position player, such as figuring out who to displace from the outfield to make room for him and figuring out who to take his
innings in the rotation. There's a lot of chaining going on here, and a lot of things to figure for the Reds.
There's also a lot of risk here. It's real easy to run some numbers and come up with "about 2-2.5 WAR," and another thing entirely to teach a pitcher to play the outfield, to have him abandon his
craft and devote himself to something else entirely. There's the chance that in taking Micah the pitcher and making him Micah the hitter you're left with neither Micah at all.
And the Reds just moved into a three-way tie for first place in the NL Central this evening, which says they're probably not in a mood to take those kinds of risks right now. So Owings will
probably stay in the rotation, and probably should stay in the rotation.
But boy, it's fun to dream about, isn't it.
Last edited by nmculbreth; 05-14-2009 at 12:20 PM. | {"url":"http://www.redszone.com/forums/showthread.php?75708-The-Hardball-Times-Micah-Owings-the-Pitcher-vs-Micah-Owings-the-Batter&s=9a8f3aefedc2c210e9075daf7a36e03c&p=1867694","timestamp":"2014-04-16T14:04:00Z","content_type":null,"content_length":"63456","record_id":"<urn:uuid:f31d4c57-26c2-4cb1-b78f-360fe810f54c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
» Topic: FFT in Max/Msp
FFT in Max/Msp
Jul 15, 2013 at 7:16am
FFT in Max/Msp
bt96: Hey, so I was looking at the tutorial #25 in MSP, and I had several questions about using fft. When the sample analyzed is no longer a sinusoid or periodic, it seems that you have to window it to eliminate erroneous signals, but would the artifacts created by the FFT object and the windowing somehow tamper with and mishandle the original data? I know you can use pfft~ also, but how would that affect the original data? Also, it suggests splitting data of samples that are longer than 1024 into different sections, but I am hoping to directly perform FFT on at least 10000 samples at a time, so how would the larger amount of samples affect analysis? I also don't understand why 512 samples is so important (well, in the tutorial) and, while I know the concept of fundamental frequency, how is 512 samples the fundamental frequency in this case? And in order for FFT to process it, does it have to be a signal first? Would I use "*~" or "cycle~" or both? And if the number of samples is not a power of 2, which is needed for the algorithm of FFT, how will that affect how things run?
Jul 15, 2013 at 8:12am
I will try to give you some answers, but English is not my native language and this is a complicated topic, so I don't know how much I will be able to help you.
would the artifacts created by the FFT object and the windowing somehow tamper with and mishandle the original data?
The reason why you have to window the original signal is that the fourier transform assumes that the signal is periodic. If you take little slices of a longer signal (sample), for example you have a sample that has a length of 20 seconds and you slice it into parts that are 1024 samples long, the fft will assume that each of these 1024 samples long slices is a periodic waveform and that after the end of each slice (at sample 1023) the waveform jumps back (loops) to the beginning (sample 0).
The problem is that most of the time the signal value at the end of each slice is different from the signal value at the beginning, because it isn't a periodic waveform but a small portion of a much longer signal. These "jumps" in the signal create harmonics that aren't there in the original series. That's why you have to "fade" each slice in and out, so that at the beginning and the
end of each slice the signal value is zero and there are no jumps.
You’re right, this has some effects on the signal, but think about this: if you let the slices overlap in the right way (fading one out while the next already fades in) you could reconstruct
the original sample by simply adding the slices again. So windowing doesn’t give you much trouble in practice.
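That reconstruction property is easy to demonstrate numerically. A sketch (plain NumPy, not Max/MSP) using a periodic Hann window at 50% overlap, where the overlapped windows sum to exactly one, so overlap-adding the windowed slices returns the original signal:

```python
import numpy as np

N = 1024                 # slice (FFT frame) length
hop = N // 2             # 50% overlap between successive slices
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann window

# Fade ("window") each slice in and out, then overlap-add the slices back.
rng = np.random.default_rng(0)
sig = rng.standard_normal(8 * N)          # stand-in for any sampled signal
out = np.zeros_like(sig)
for start in range(0, len(sig) - N + 1, hop):
    piece = sig[start:start + N] * w      # analysis: window one slice
    # ... an fft of `piece`, processing, inverse fft would go here ...
    out[start:start + N] += piece         # synthesis: overlap-add

assert np.allclose(w[:hop] + w[hop:], 1.0)   # windows sum to a constant
assert np.allclose(out[N:-N], sig[N:-N])     # perfect reconstruction inside
```

Away from the very ends (which only one window covers), the signal comes back unchanged.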
Jul 15, 2013 at 8:27am
I am hoping to directly perform FFT on at least 10000 samples at a time, so how would the larger amount of samples affect analysis?
If you do a fft on a 1024 samples long signal, it will give you the frequency contents of that 1024 samples long section; if you take the fft of an over 10000 samples long signal, it will give you the frequency contents of that over 10000 samples long section. So how long you make the section depends on what you want to do, what you want to analyze. It also has an effect on the "resolution" of the fft (I will explain that later) and of course on the latency, because you have to "collect" the more than 10000 samples before you can perform a fft on it.
I also don't understand why 512 samples is so important (well in the tutorial) and while I know the concept of fundamental frequency, how is 512 samples the fundamental frequency in this case?
I think you're mistaking the fundamental frequency of your input signal for the fundamental frequency of the fft. The fourier transform tells you how to reconstruct a periodic signal by using sinusoid waves that have integer frequency ratios. So for example, if you do a fft on a 512 samples long signal, the fft will "deconstruct" it into sine waves whose periods fit a whole number of times into those 512 samples (512 samples, 256, 512/3, 128… and so on) plus a dc offset. The fft will give you the amplitude and phase of each of these sine waves (at least after some more calculations).
If you then take a number of sinewave-generators, tune them so that they have periods of 512 samples, 256, 512/3, 128… give them the right phase offset and amplitude and turn them on and add them all together, you will get a 512 samples long periodic waveform that is exactly the original 512 samples long slice.
The fft doesn’t know anything about the fundamental frequency of some tone that the original signal might consist of. So if you’re playing a flute at 440Hz, that’s not the fundamental frequency
of the fft, but the frequency of the sine wave that “fits” into the 512 samples is.
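The bin spacing follows directly from the slice length. A small NumPy sketch, with the sample rate assumed to be 44.1 kHz for illustration:

```python
import numpy as np

fs = 44100.0    # sample rate (assumed for illustration)
N = 512         # slice / FFT length

# Every analysis frequency is an integer multiple of fs/N: the sine wave
# whose single period exactly "fits" into the N-sample slice.
bins = np.fft.rfftfreq(N, d=1.0 / fs)
assert np.isclose(bins[1], fs / N)          # FFT "fundamental", about 86.1 Hz
assert np.allclose(np.diff(bins), fs / N)   # equally spaced bins

# A 440 Hz flute tone is not a bin frequency: 440 / (fs/N) is about 5.11,
# not an integer, so its energy leaks into neighbouring bins.
print(440.0 / (fs / N))
```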
Jul 15, 2013 at 8:37am
And in order for FFT to process it, does it have to be a signal first? would I use “*~” or “cycle~” or both?
Well yes, the fft analyzes signals. But then again, a digital signal is just a stream of numbers. I don’t know what “*~” or “cycle~” have to do with this. What are you trying to do?
And if the number of samples is not a power of 2, which is needed for the algorithm of FFT, how will that affect how things run?
The number of samples has to be a power of 2 because of the fft algorithm, which is much, much, much faster than a "normal" fourier transform. You could take 16384 (2^14) for example.
The number of samples you choose simply affects the fundamental frequency of the fft (the lowest freq. is the one that fits into that number of samples) and the "resolution", as all higher frequencies into which the original signal is "deconstructed" are integer multiples of the fundamental frequency.
On http://www.dspguide.com you can find a free online book that also explains the Fourier transform with as little math as possible.
3-Way Active Crossover
A simple 3-way crossover, intended for triamping Hi-Fi systems. This is a conventional 12dB / Octave unit, and cannot be expected to have the same performance as a Linkwitz-Riley aligned filter
network. It will still be a vast improvement over nearly any passive crossover, and is ideal for beginners or those who want to experiment further with multi-amping, but without the complexity of a
major project. The retuning (to (sub)-Bessel / Linkwitz-Riley alignment) is recommended, as the performance will be more in line with modern standards – see information below.
Please Note: This is a contributed article, and ESP is not responsible for errors or omissions
The crossover is based on 2nd-order Butterworth filters. The resistors Ra and Rf set the gain of each filter to 1.582, which is slightly less than the required value of 1.586. This value of gain (A0) follows from the formula …
k = 3 − A0, where k = 1 / (Q-factor of the filter).
For a 2nd-order filter, the value of k can be obtained from the Butterworth circle for n = 2. It turns out that k = 1 / cos(x), where x = π / (2 * n), which is π / 4 in this case. Thus for a Butterworth response, the Q-factor turns out to be 0.707. Please see references (1) for more details.
Increasing Q beyond this results in peaking at the cut-off frequency for each individual filter. Likewise, reducing Q makes the filter response more and more gradual. Thus a value of 0.707 for the
Q-factor gives the flattest pass-band gain & sharpest roll-off at cut-off frequency. Note that the gain of each individual filter must be less than 3, otherwise the circuit will oscillate. To get an
even sharper roll-off the order of the filters must be increased (keeping Q = 0.707).
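A quick numerical check of the figures quoted above (a Python sketch, not part of the original article):

```python
import math

n = 2                                   # filter order
k = 1 / math.cos(math.pi / (2 * n))     # k = 1/cos(pi/4) from the Butterworth circle
q = 1 / k                               # Q-factor of the stage
a0 = 3 - k                              # required stage gain

assert abs(q - 0.70711) < 1e-4          # Q = 0.707 as stated
assert abs(a0 - 1.58579) < 1e-4         # so 1.586; the Ra/Rf values give 1.582, slightly low
```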
Figure 1 – Crossover Schematic
In the above schematic are shown the low-mid range, mid range mid-high range filters. The x’over frequencies chosen are 300Hz and 3000Hz. Thus the low range filter has a cut-off frequency of 300 Hz,
the mid range has a lower cut-off at 300 Hz and an upper cut-off at 3000Hz, and the high range has a cut-off frequency of 3000 Hz. Please see references (2) for the x’over frequencies. The
calculations for the x’over are as follows:
Low-mid range: Low-pass filter, fh = 300 Hz.
fh = 1 / (2 * π * R1 * C1), assuming R1 = R2 and C1 = C2 = 10nF. This yields R1 = R2 = 53K (used 56K).
Mid range: Low-pass filter, fh = 3000 Hz, followed by a high-pass filter, fl = 300 Hz.
Assuming R1 = R2, R3 = R4, C1 = C2 = C3 = C4 = 10nF.
For the low pass, fh = 1 / (2 * π * R1 * C1) yields R1 = R2 = 5.3K (used 5.6K).
For the high pass, fl = 1 / (2 * π * R3 * C3) yields R3 = R4 = 53K (used 56K).
Mid-high range: High-pass filter, fl = 3000 Hz.
fl = 1 / (2 * π * R3 * C3), assuming R3 = R4 and C3 = C4 = 10nF. This yields R3 = R4 = 5.3K (used 5.6K).
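All of the resistor values above come from the single relation R = 1 / (2 * π * f * C); a small Python helper (illustrative only, not from the article) reproduces them:

```python
import math

def filter_r(f_cutoff_hz, c_farads):
    # equal-R, equal-C stage: f = 1/(2*pi*R*C)  =>  R = 1/(2*pi*f*C)
    return 1 / (2 * math.pi * f_cutoff_hz * c_farads)

C = 10e-9                               # 10 nF, as used throughout the article
r_300 = filter_r(300, C)                # 300 Hz crossover point
r_3000 = filter_r(3000, C)              # 3000 Hz crossover point
assert abs(r_300 - 53e3) < 0.2e3        # ~53k (nearest standard value 56k used)
assert abs(r_3000 - 5.3e3) < 0.02e3     # ~5.3k (nearest standard value 5.6k used)
```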
Note that the mid-range filter is preceded by an inverting amplifier. This is needed for 2 reasons – Firstly, the gain of the mid-range is (1.582 * 1.582) which must be brought back to the level of
the low-mid & mid-high ranges (1.582). Secondly (and more importantly), the 2nd-order Butterworth filter has an inherent property of shifting the phase of any signal passing through it depending on
the signal’s frequency so that at cut-off the signal is 90° out of phase with the input (direction of shift depends on whether the filter is high-pass or low-pass).
Thus at 300 Hz the low-mid range filter has shifted the signal by 90° and the mid range has also done the same (but in an opposite direction). Hence, at 300 Hz, the signals appearing at the low-mid
range & mid range outputs are going to be 180° out of phase with each other & will cancel out (electrically or acoustically). The same happens to the mid range & mid-high range filters at 3000 Hz.
The inverter, with a gain of -0.63 serves to solve both the problems.
Using a 4th-order filter (assuming it to be a cascade of two 2nd-order Butterworth types) in place of the ones shown will not have the phase-reversal at x’over problem, but you will still need to
bring down the mid range filter’s gain (this equates to 2 inverters).
The 0.1µF capacitor (Cin) is used with R5 to obtain a lower 3dB frequency of about 15 Hz. With the arrangement shown, the overall magnitude response exhibits peaks at the x’over frequencies when the
3 outputs are combined, either electrically or acoustically. This ordinarily does not pose a problem, since the speaker deficiencies themselves will tend to hide (rather veil) the peaks, but with
really good speaker systems, the peaking could become evident.
The op-amps used should preferably have a high slew rate and all resistors must be of 1/4 W, 1% metal film type. For the x’over that I have made, the op-amps used are TL074 quad devices. Any other
op-amps of your choice can be substituted in place of these. The 100 Ohm resistors at the filter outputs are required if the x’over is going to be connected to the power stages via connectors or any
length of shielded lead.
All inputs & outputs must use fully shielded cables (as short as possible). If the circuit is to be assembled on a general purpose board, then try to keep all component leads and wiring as short as
possible to avoid pick up (and playback) of radio signals (mine did, but only on touching certain resistor leads).
That concludes the description of the x’over. I hope that the reader will find the material presented here to be of some help in understanding electronics.
Linkwitz-Riley Alignment
I visited the Linkwitz web-site (http://www.linkwitzlab.com) and found that the 2nd order unity gain (sub) Bessel crossover is indeed a 12db/octave L-R aligned unit. Further to this, I indicate the
possibility to convert the current design to an L-R alignment by –
1. shorting out all resistors named “Rf” & removing all resistors named “Ra”
2. replacing “Rf1” (56K + 6.8K) with 100K resistors, and
3. using time-aligned drivers to really be able to appreciate the benefits of an L-R x’over.
This modification will create a Linkwitz-Riley aligned crossover, which will be superior to the Butterworth version in almost all cases.
1. Integrated Electronics by Millman & Halkias, McGraw-Hill (ISBN 0-07-Y85493-9)
2. Bi-Amping (not quite magic, but close) – ESP
Ron Goldman
Work address
Department of Computer Science
6100 South Main
Rice University
Houston, Tx 77251-1892
Duncan Hall 3116
Research Interests
My current research interests lie in the mathematical representation, manipulation, and analysis of shape using computers. I am particularly interested in algorithms for polynomial and piecewise
polynomial curves and surfaces, and I have investigated both parametrically and implicitly represented geometry. My current work includes research in computer aided geometric design, solid modeling,
computer graphics, and splines. Click here for a short biography.
Recent Publications
Here are some of my recent publications:
• Modeling Projections in 3-Dimensions by Rotations in 4-Dimensions, accepted to appear in Graphical Models.
• Quantum B-splines, with P.Simeonov, accepted to appear in BIT Numerical Mathematics.
• Formulas and Algorithms for Quantum Differention of Quantum Bernstein Bases and Quantum Bezier Curves Based on Quantum Blossoming, with P. Simeonov, Graphical Models (2012), Special Issue of
selected papers from the 8th Dagstuhl Seminar on Geometric Modeling 2011, Vol. 74, pp. 326-334.
• Using Mu-Bases to Implicitize Rational Surfaces with a Pair of Orthogonal Directorices, with X. Shi and X. Wang, Computer Aided Geometric Design (2012), Special Issue on GMP 2012, Vol. 29, pp.
• Implicitizing Rational Surfaces of Revolution using Mu-Bses, with X. Shi, Computer Aided Geometric Design (2012), Vol. 29, pp. 348-362.
• Using Smith Normal Forms and Mu-bases to Compute All the Singularities of Rational Planar Curves, with X. Jia, Computer Aided Geometric Design (2012), Vol. 29, pp. 296-314.
• q-Blossoming: A New Approach to Algorithms and Identities for q-Bernstein Bases and q-Bezier Curves, with P. Simeonov and V. Zafiris, Journal of Approximation Theory (2012), Vol. 29, pp. 296-314.
• h-Blossoming: A New Approach to Algorithms and Identities for h-Bernstein Bases and h-Bezier Curves, with P. Simeonov and V. Zafiris, Computer Aided Geometric Design (2011), Vol. 28, No. 9, pp. 549-565.
• Bezier and B-spline Curves with Knots in the Complex Plane, with K. Tsianos, Fractals Complex Geometry, Patterns, and Scaling in Nature and Society (2011), Vol. 28, No. 9, pp. 549-565.
• Rethinking Quaternions: Theory and Computation,, Synthesis Lectures on Computer Graphics and Animation, ed. Brian A. Barsky, No. 13, San Rafael: Morgan & Claypool Publishers, 2010.
• An Integrated Introduction to Computer Graphics and Geometric Modeling, CRC Press, Taylor and Francis, New York, 2009.
• Geometric Modeling, Dagstuhl 2002, S. Hahmann, G. Brunnett, G. Farin, and R. Goldman (eds.), Springer-Verlag, 2004.
• Topics in Algebraic Geometry and Geometric Modeling, co-edited with R. Krasauskas, AMS Contemporary Mathematics, Vol. 334, 2003.
• Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling, Morgan Kaufmann Publishers, Academic Press, San Diego, 2002.
• Knot Insertion and Deletion Algorithms for B-spline Curves and Surfaces, co-edited with Tom Lyche, SIAM, 1993.
Algebra Tutors
Saint Louis, MO 63124
Math, Physics and Business Tutor with PHD and MBA degrees
...The subject areas were Physics, Algebra, Geometry, Precalculus and Calculus, Statistics, Operations Management, Economics, Marketing, Finance, and New Product Development. I earned a bachelor's degree in industrial engineering from the University of Portland, a doctorate...
Offering 10+ subjects including algebra 1 and algebra 2
Time dependent rotational flow of a viscous fluid over an infinite porous disk with a magnetic field
Chandrasekar, A and Nath, G (1986) Time dependent rotational flow of a viscous fluid over an infinite porous disk with a magnetic field. In: International Journal of Engineering Science, 24 (10). pp.
Both the semi-similar and self-similar flows due to a viscous fluid rotating with time-dependent angular velocity over a porous disk of large radius at rest, with or without a magnetic field, are investigated. For the self-similar case, the resulting equations for the suction and no-mass-transfer cases are solved numerically by the quasilinearization method, whereas for the semi-similar case, and for injection in the self-similar case, an implicit finite-difference method with Newton's linearization is employed. For rapid deceleration of the fluid, and for moderate suction in the case of self-similar flow, there exists a layer of fluid close to the disk surface where the sense of rotation is opposite to that of the fluid rotating far away. The velocity profiles in the absence of a magnetic field are found to be oscillatory, except for suction. For the accelerating free stream (semi-similar flow), the effect of time is to reduce the amplitude of the oscillations of the velocity components. On the other hand, the effect of time for the oscillating case is just the opposite.
Oakview, PA SAT Math Tutor
Find an Oakview, PA SAT Math Tutor
...Able to help students improve their math skills and also learn many valuable test-related shortcuts and strategies. Scored 770/800 on SAT Reading in high school and 790/800 on the January 26, 2013 test. Routinely score 800/800 on practice tests.
19 Subjects: including SAT math, calculus, statistics, geometry
...I possess clean FBI/criminal history and Child Abuse clearances. I am able to tutor at flexible times and locations. I am able to provide references, documentation, etc. upon request.
58 Subjects: including SAT math, chemistry, reading, biology
...My passion for math and science has given me more than expertise in those subjects, but also a strong analytical mind that is essential to success and growth in life. I am finding more and
more as I get older that critical thinking is rarely taught and greatly needed. I feel that getting experience teaching students one on one is the best way for me to have an immediate impact.
16 Subjects: including SAT math, Spanish, calculus, physics
I graduated from Jacksonville University with a bachelors degree in mathematics. I have spent 2 years as a tutor at Jacksonville University. I am currently a graduate mathematics student at
Villanova University.
13 Subjects: including SAT math, calculus, algebra 2, geometry
...My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. I have also
worked with accelerated groups in Camden with students that have gone on to receive scholarships and success at highly accredited local high schools.
8 Subjects: including SAT math, geometry, algebra 1, algebra 2
Related Oakview, PA Tutors
Oakview, PA Accounting Tutors
Oakview, PA ACT Tutors
Oakview, PA Algebra Tutors
Oakview, PA Algebra 2 Tutors
Oakview, PA Calculus Tutors
Oakview, PA Geometry Tutors
Oakview, PA Math Tutors
Oakview, PA Prealgebra Tutors
Oakview, PA Precalculus Tutors
Oakview, PA SAT Tutors
Oakview, PA SAT Math Tutors
Oakview, PA Science Tutors
Oakview, PA Statistics Tutors
Oakview, PA Trigonometry Tutors
Nearby Cities With SAT math Tutor
Bala, PA SAT math Tutors
Bywood, PA SAT math Tutors
Carroll Park, PA SAT math Tutors
Clifton Heights SAT math Tutors
Cynwyd, PA SAT math Tutors
Drexelbrook, PA SAT math Tutors
Fernwood, PA SAT math Tutors
Lester, PA SAT math Tutors
Moylan, PA SAT math Tutors
Pilgrim Gardens, PA SAT math Tutors
Pilgrim Gdns, PA SAT math Tutors
Primos Secane, PA SAT math Tutors
Primos, PA SAT math Tutors
Secane, PA SAT math Tutors
Westbrook Park, PA SAT math Tutors
Angela Kopp
Publications (8)
ABSTRACT: We study the entanglement between a qubit and its environment from the spin-boson model with Ohmic dissipation. Through a mapping to the anisotropic Kondo model, we derive the entropy of entanglement of the spin, E(α, Δ, h), where α is the dissipation strength, Δ is the tunneling amplitude between qubit states, and h is the level asymmetry. For 1 − α > Δ/ω_c and (Δ, h) < ω_c, we show that the Kondo energy scale T_K controls the entanglement between the qubit and the bosonic environment (ω_c is a high-energy cutoff). For h < T_K, the disentanglement proceeds as (h/T_K)^2; for h > T_K, E vanishes as (T_K/h)^(2−2α), up to a logarithmic correction. For a given h, the maximum entanglement occurs at a value of α which lies in the crossover regime h ≈ T_K. We emphasize the possibility of measuring this entanglement using charge qubits subject to electromagnetic noise.
Physical Review Letters 06/2007; 98(22):220401.
ABSTRACT: The extreme variability of observables across the phase diagram of the cuprate high-temperature superconductors has remained a profound mystery, with no convincing explanation for the
superconducting dome. Although much attention has been paid to the underdoped regime of the hole-doped cuprates because of its proximity to a complex Mott insulating phase, little attention has
been paid to the overdoped regime. Experiments are beginning to reveal that the phenomenology of the overdoped regime is just as puzzling. For example, the electrons appear to form a Landau Fermi
liquid, but this interpretation is problematic; any trace of Mott phenomena, as signified by incommensurate antiferromagnetic fluctuations, is absent, and the uniform spin susceptibility shows a
ferromagnetic upturn. Here, we show and justify that many of these puzzles can be resolved if we assume that competing ferromagnetic fluctuations are simultaneously present with
superconductivity, and the termination of the superconducting dome in the overdoped regime marks a quantum critical point beyond which there should be a genuine ferromagnetic phase at zero
temperature. We propose experiments and make predictions to test our theory and suggest that an effort must be mounted to elucidate the nature of the overdoped regime, if the problem of
high-temperature superconductivity is to be solved. Our approach places competing order as the root of the complexity of the cuprate phase diagram.
Proceedings of the National Academy of Sciences 04/2007; 104(15):6123-7.
ABSTRACT: We study the entanglement between a qubit and its environment by calculating the von Neumann entropy of the spin in the delocalized phase of the spin-boson model. Using a well-known mapping between the spin-boson model with Ohmic dissipation and the anisotropic Kondo model, we obtain exact results for the entanglement entropy E at arbitrary dissipation strength α and level asymmetry h. We show that the Kondo energy scale T_K controls the entanglement between the qubit and the bosonic environment. For h << T_K, we find that E = E(h=0) − 2 e^(b/(2−2α)) Γ[1 + 1/(2−2α)] / (π^2 Γ[1 + α/(2−2α)]) (h/T_K)^2, where b = α ln α + (1−α) ln(1−α). The universal (h/T_K)^2 scaling reflects the Fermi-liquid nature of the Kondo ground state. In the limit h >> T_K, E vanishes as (T_K/h)^(2−2α), up to a logarithmic correction. We thoroughly explore the phase space (α, h); for a given h, the maximal entanglement occurs in the crossover regime h ~ T_K. We also emphasize the possibility of measuring this entanglement using charge qubits subject to electromagnetic noise.
ABSTRACT: We propose that quantum phase transitions are generally accompanied by non-analyticities of the von Neumann (entanglement) entropy. In particular, the entropy is non-analytic at the
Anderson transition, where it exhibits unusual fractal scaling. We also examine two dissipative quantum systems of considerable interest to the study of decoherence and find that
non-analyticities occur if and only if the system undergoes a quantum phase transition.
Annals of Physics 01/2007;
ABSTRACT: We revisit the interlayer tunneling theory of high temperature superconductors and formulate it as a mechanism by which the striking systematics of the transition temperature within a
given homologous series can be understood. We pay attention not only to the enhancement of pairing, as was originally suggested, but also to the role of competing order parameters that tend to
suppress superconductivity, and to the charge imbalance between inequivalent outer and inner CuO2 planes in a unit cell. Calculations based on a generalized Ginzburg-Landau theory yield results
that bear robust and remarkable resemblance to experimental observations.
Proc SPIE 08/2005;
ABSTRACT: At quantum critical points (QCPs) (Pfeuty 1971; Young 1975; Hertz 1976; Chakravarty 1989; Millis 1993; Chubukov 1994; Coleman 2005) there are quantum fluctuations on all length scales, from microscopic to macroscopic lengths, which, remarkably, can be observed at finite temperatures, the regime to which all experiments are necessarily confined. A fundamental question is how high in temperature can the effects of quantum criticality persist? That is, can physical observables be described in terms of universal scaling functions originating from the QCPs? Here we answer these questions by examining exact solutions of models of correlated systems and find that the temperature can be surprisingly high. As a powerful illustration of quantum criticality, we predict that the zero-temperature superfluid density, ρ_s(0), and the transition temperature, T_c, of the cuprates are related by T_c ∝ ρ_s(0)^y, where the exponent y is different at the two edges of the superconducting dome, signifying the respective QCPs. This relationship can be tested in high-quality crystals.
Nature Physics 04/2005;
ABSTRACT: The transition temperature (Tc) of multi-layer cuprate superconductors has an unusual dependence on the number of layers (n) per unit cell: it forms a bell-shaped curve peaked at n=3.
An explanation of this behavior is due to the combined effects of interlayer tunneling and a competing order, the latter effect being enhanced for n >=3 by a charge imbalance between the layers.
We explore this proposal further by examining the mean-field theory of a superconducting order parameter and a competing d-density wave (DDW) order parameter. We focus on three effects:
interlayer DDW coupling, increased charge imbalance in the five-layer system, and fluctuations of the superconducting order parameter. We find that (1) the DDW order parameters in neighboring
layers prefer to couple "anti-ferromagnetically" (and, surprisingly, the coupling vanishes identically for two layers with order parameters that are "ferromagnetically" aligned); (2) both the
interlayer DDW coupling and the increased charge imbalance bring the calculation into better agreement with the experimental results; and (3) fluctuations can have a more pronounced effect when
they occur in the presence of a competing order parameter.
ABSTRACT: The low-temperature scanning tunneling microscopy spectra in the underdoped regime are analyzed from the perspective of coexisting d-density wave and d-wave superconducting states. The calculations are carried out in the presence of a low concentration of unitary impurities and within the framework of the fully self-consistent Bogoliubov-de Gennes theory, which allows local modulations of the magnitude of the order parameters in response to the impurities. Our theory captures the essential aspects of the experiments in underdoped BSCCO at very low temperatures.
Physical review. B, Condensed matter 01/2005;
• 2007: Rutgers, The State University of New Jersey, New Brunswick, New Jersey, United States
• 2005–2007: University of California, Los Angeles, Department of Physics and Astronomy, Los Angeles, CA, United States
1x2 Solution
The number of ways to tile an MxN rectangle with 1x2 dominos is 2^(M*N/2) times the product of
{ cos^2(m*pi/(M+1)) + cos^2(n*pi/(N+1)) } ^ (1/4)
over all m,n in the range 0<m<M+1, 0<n<N+1.
0) Why does this work for M*N odd?
1) When M<3 the count can be determined directly; check that it agrees with the above formula.
2) Prove directly this formula gives an integer for all M,N, and further show that if M=N it is a perfect square when 4|N and twice a square otherwise.
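As a sanity check, the formula is easy to evaluate numerically; here is a short Python transcription (the final round just clears floating-point dust):

```python
import math

def domino_tilings(m, n):
    # 2^(M*N/2) times the product of (cos^2 + cos^2)^(1/4) over 0<a<M+1, 0<b<N+1
    prod = 1.0
    for a in range(1, m + 1):
        for b in range(1, n + 1):
            prod *= (math.cos(a * math.pi / (m + 1)) ** 2 +
                     math.cos(b * math.pi / (n + 1)) ** 2) ** 0.25
    return round(2 ** (m * n / 2) * prod)

assert domino_tilings(2, 2) == 2
assert domino_tilings(2, 3) == 3
assert domino_tilings(3, 3) == 0          # exercise 0: odd area, one factor vanishes
assert domino_tilings(8, 8) == 12988816   # the classic 8x8 chessboard count
```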
Where does this come from? For starters, note that, with the usual checkerboard coloring, each domino must cover one light and one dark square. Assume that M*N is even (but, as it happens, our formula will also work when both M and N are odd; see exercise 0 above). Form a square matrix of size M*N/2 whose rows and columns are indexed by the light and dark squares, and whose (j,k) entry is 1 if the
j-th light and k-th dark square are adjacent and zero otherwise. There are now three key ideas:
First, the number of tilings is the number of ways to match each light square with an adjacent dark square; thus it is the permanent of our matrix (recall that the permanent of a rxr matrix is a sum
of the same r! terms that occur in its determinant, except without the usual +1/-1 sign factors).
Second, that by modifying this matrix slightly we can convert the permanent to a determinant; this is nice because determinants are generally much easier to evaluate than permanents. One way to do
this is to replace all the 1's that correspond to vertical adjacency to i's, and multiply the whole thing by a suitable power of i (which will disappear when we raise it to a fourth power).
Exercise 3): check that this transformation actually works as advertised!
Third, that we can diagonalize the resulting matrix A (or, more conveniently, the square matrix A' of order M*N whose order-(M*N/2) blocks are [0, A; A-transpose, 0]), whence det(A') = ±(det(A))^2.
Then the rows and columns of A' are indexed by squares of either hue on our generalized checkerboard, and its entries are 1 for horizontally adjacent squares, i for vertically adjacent ones, and 0
for nonadjacent (including coincident) squares. This A' can be diagonalized by using the trigonometric basis of vectors v_ab (a,b as in the formula above) whose coordinate at the (m,n)-th square is
sin(a*m*pi/(M+1)) * sin(b*n*pi/(N+1)).
Exercise 4): verify that these are in fact orthogonal eigenvectors of A', determine their eigenvalues, and complete the proof of the above formula.
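For those who want to see the permanent-to-determinant trick in action, here is a sketch in Python (plain Gaussian elimination, no libraries assumed): build the light-by-dark adjacency matrix with the vertical 1's replaced by i's, and take the absolute value of its determinant. That this agrees with the tiling count is exactly the content of exercise 3.

```python
def det(mat):
    # determinant by Gaussian elimination with partial pivoting (complex entries)
    a = [row[:] for row in mat]
    n, d = len(a), 1 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0j
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return d

def tilings_by_determinant(m, n):
    # rows = light squares, columns = dark squares;
    # entry 1 for horizontal adjacency, i for vertical adjacency, 0 otherwise
    light = [(r, c) for r in range(m) for c in range(n) if (r + c) % 2 == 0]
    dark = [(r, c) for r in range(m) for c in range(n) if (r + c) % 2 == 1]
    if len(light) != len(dark):
        return 0            # odd area: no tilings at all
    A = [[1 if lr == dr and abs(lc - dc) == 1 else
          1j if lc == dc and abs(lr - dr) == 1 else 0
          for (dr, dc) in dark] for (lr, lc) in light]
    return round(abs(det(A)))

assert tilings_by_determinant(2, 3) == 3
assert tilings_by_determinant(8, 8) == 12988816   # the 8x8 chessboard again
```

With floating-point arithmetic the determinant comes back within rounding error of an integer, as exercise 2 predicts.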
(None of this is new, but it does not seem to be well-known: indeed each of the above steps seems to have been discovered independently several times, and I'm not sure whom to credit with the first
discovery of this particular application of the method. For different approaches to exactly solvable problems involving the enumeration of domino tilings, see the two papers of G.Kuperberg, Larsen,
Propp and myself on "Alternating-Sign Matrices and Domino Tilings" in the first volume of the Journal of Algebraic Combinatorics.)
--Noam D. Elkies (elkies@zariski.harvard.edu) Dept. of Mathematics, Harvard University
Hopkinton, MA ACT Tutor
Find a Hopkinton, MA ACT Tutor
...I have a PhD in Applied Math from UC Berkeley and have been tutoring students part time for the last four years. I enjoy working with students who are motivated but need a little help to
understand the subject at hand. I'm very good at explaining hard concepts or problems using easy to understand and every day examples.
11 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have completed biostatistics as an undergraduate, I work as a process engineer for an international biopharma company, and have also taken graduate-level statistics as part of my MBA. I have
completed graduate courses in marketing as part of my MBA, receiving As in those courses. I also have ...
66 Subjects: including ACT Math, reading, chemistry, writing
...Prepare students for college visits and interviews 6. Tutoring the SAT and ACT I write a blog for a local newspaper and have been a news correspondent. In addition, I have good computer
teaching skills.
88 Subjects: including ACT Math, chemistry, reading, physics
...Many concepts learned in Algebra 1 are used. A SOLID FOUNDATION in Geometry will ensure success in Algebra II and Precalculus. Prealgebra makes the transition from concrete arithmetic to the
abstract concepts of Algebra I/II and Geometry.
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned. I prefer to focus on
examples from each section, rather than each specific problem, to make sure they understand all of the concepts.
17 Subjects: including ACT Math, statistics, geometry, algebra 1
Related Hopkinton, MA Tutors
Hopkinton, MA Accounting Tutors
Hopkinton, MA ACT Tutors
Hopkinton, MA Algebra Tutors
Hopkinton, MA Algebra 2 Tutors
Hopkinton, MA Calculus Tutors
Hopkinton, MA Geometry Tutors
Hopkinton, MA Math Tutors
Hopkinton, MA Prealgebra Tutors
Hopkinton, MA Precalculus Tutors
Hopkinton, MA SAT Tutors
Hopkinton, MA SAT Math Tutors
Hopkinton, MA Science Tutors
Hopkinton, MA Statistics Tutors
Hopkinton, MA Trigonometry Tutors
The Prime Glossary: amicable numbers
The pair of numbers 220 and 284 has the curious property that each "contains" the other. In what way? In the sense that the sum of the proper positive divisors of each equals the other.
For 220: 1+2+4+5+10+11+20+22+44+55+110 = 284
For 284: 1+2+4+71+142 = 220
Such pairs of numbers are called amicable numbers (amicable means friendly; note that a different set of numbers is actually called the friendly numbers).
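A few lines of Python (purely illustrative) confirm the divisor sums above, and also the Paganini pair mentioned later on this page:

```python
def aliquot_sum(n):
    # sum of the proper positive divisors of n
    return sum(d for d in range(1, n) if n % d == 0)

def amicable(a, b):
    return a != b and aliquot_sum(a) == b and aliquot_sum(b) == a

assert aliquot_sum(220) == 284
assert aliquot_sum(284) == 220
assert amicable(220, 284)
assert amicable(1184, 1210)   # Paganini's pair, found in 1866
```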
Amicable numbers have a long history in magic and astrology, making love potions and talismans. As an example, some ancient Jewish commentators thought that Jacob gave his brother 220 sheep (200
female and 20 male) when he was afraid his brother was going to kill him (Genesis 32:14). The philosopher Iamblichus of Chalcis (ca. 250-330 A.D.) writes that the Pythagoreans knew of these numbers:
They call certain numbers amicable numbers, adopting virtues and social qualities to numbers, such as 284 and 220; for the parts of each have the power to generate the other.
Pythagoras is reported to have said that a friend is "one who is the other I, such as are 220 and 284." Now amicable numbers are most often (and most properly!) relegated to the exercise sections of
elementary number theory texts.
There is no formula or method known to list all of the amicable numbers, but formulas for certain special types have been discovered throughout the years. Thabit ibn Kurrah (ca. A.D. 850) noted that if n > 1 and each of p = 3·2^(n-1) - 1, q = 3·2^n - 1, and r = 9·2^(2n-1) - 1 is prime, then 2^n·p·q and 2^n·r are amicable numbers.
It was centuries before this formula produced the second and third pair of amicable numbers! Fermat announced the pair 17,296 and 18,416 (n=4) in a letter to Mersenne in 1636. Descartes wrote to
Mersenne in 1638 with the pair 9,363,584 and 9,437,056 (n=7). Euler then topped them both by adding a list of sixty-four new amicable pairs, however he made two errors. In 1909 one of his pairs was
found to be not amicable, and in 1914 the same fate took a second pair. In 1866 a sixteen-year-old boy, Nicolo Paganini, discovered the pair (1184,1210), which was previously unknown.
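These classical pairs are easy to verify by direct computation. The sketch below (Python, using a naive proper-divisor sum, so only suitable for small numbers) checks the pairs mentioned above and derives the first two from Thabit's rule; it is an illustration added here, not part of the original glossary entry.

```python
def s(n):
    """Sum of the proper positive divisors of n (naive trial division)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def amicable(a, b):
    """True when a and b form an amicable pair."""
    return a != b and s(a) == b and s(b) == a

def thabit(n):
    """Thabit ibn Kurrah's construction; valid when p, q, r are all prime."""
    p = 3 * 2 ** (n - 1) - 1
    q = 3 * 2 ** n - 1
    r = 9 * 2 ** (2 * n - 1) - 1
    return 2 ** n * p * q, 2 ** n * r

print(amicable(220, 284))      # True: the Pythagorean pair
print(amicable(1184, 1210))    # True: Paganini's pair
print(thabit(2))               # (220, 284)
print(thabit(4))               # (17296, 18416), Fermat's pair
```

Note that `thabit(3)` does not yield an amicable pair, since r = 287 = 7·41 is composite; the rule only applies when all three of p, q, r are prime.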
Now extensive computer searches have found all such numbers with 10 or fewer digits and numerous larger examples, for a total of over 7500 amicable pairs. It is unknown if there are infinitely many
pairs of amicable numbers. It is also unknown if there is a relatively prime pair of amicable numbers. If there is such a pair, they must be more than twenty-five digits long, and their product must
be divisible by at least 22 distinct primes.
See Also: PerfectNumber, AbundantNumber, DeficientNumber, SigmaFunction
algcurves tools for studying one-dimensional algebraic curves defined by multi-variate polynomials
Algebraic commands for performing computations with algebraic numbers
ArrayTools tools used for low level manipulation of Matrices, Vectors, and Arrays
AudioTools commands for audio file I/O and manipulation
Bits commands for performing bit-wise operations efficiently
Cache commands for cache table manipulation
CAD tools to connect with CAD applications
codegen tools for translating Maple procedures to other languages
CodeGeneration tools for translating Maple code to other languages
CodeTools commands for analyzing and profiling Maple code
ColorTools commands for working with and converting colors
combinat combinatorial functions, including commands for calculating permutations and combinations of lists, and partitions of integers
combstruct commands for generating and counting combinatorial structures
ContextMenu tools for building and modifying context-sensitive menus
CUDA use CUDA(R) technology to accelerate certain LinearAlgebra routines
CurveFitting commands that support curve-fitting
Database commands and Maplet applications for using databases
DEtools tools for manipulating, solving, and plotting systems of differential equations
DifferentialAlgebra commands that are key for simplifying and decoupling systems of polynomial differential equations and computing formal power series solutions for them
DifferentialGeometry commands for differential geometry, Lie algebras, and tensors
difforms commands for handling differential forms
DiscreteTransforms commands for computing transforms of discrete data
DocumentTools commands that allow programmatic access to Maple documents and components
Domains commands for creating domains of computation
DynamicSystems commands for creating, manipulating, simulating, and plotting linear systems objects
eBookTools tools to convert a collection of Maple worksheets into a book using DocBook
EssayTools commands for analyzing and grading essays
ExcelTools commands that allow access to stored data in Microsoft Excel format
ExternalCalling tools for calling external functions from Maple
FileTools commands for file manipulation and processing
Finance commands for financial modeling and computations
Fractals commands to generate and explore fractals
GaussInt commands for working with Gaussian integers
genfunc commands for manipulating rational generating functions
geom3d commands for three-dimensional Euclidean geometry
geometry commands for two-dimensional Euclidean geometry
gfun commands for generating function manipulation
Grading tools for grading plots of functions
GraphTheory collection of routines for creating, drawing, manipulating, and testing graphs
Grid a package for multi-process parallel computation
Groebner commands for Groebner basis calculations in skew algebras
group commands for working with permutation and finitely-presented groups
GroupTheory collection of routines for working with groups
hashmset commands for multisets
heap commands on heaps
HTTP tools for fetching data from the web
ImageTools tools for image processing
InertForm tools for obtaining and working with inert-form expressions
InstallerBuilder create an installer for a Maple toolbox
IntegerRelations commands for approximating floating numbers by integer linear combinations of symbolic constants
IntegrationTools tools used for manipulation of integrals
inttrans commands for working with integral transforms and their inverses
LargeExpressions tools for managing creation of computation sequences
LibraryTools commands for library manipulation and processing
liesymm commands for characterizing the contact symmetries of systems of partial differential equations
LinearAlgebra commands for manipulating Matrices and Vectors as rtable data structures
LinearFunctionalSystems commands for constructing solutions of linear functional systems of equations
LinearOperators tools for solving linear functional equations, building annihilators and minimal annihilators, and performing accurate integration
ListTools tools for manipulating lists
Logic commands for manipulating expressions by using Boolean logic
LREtools commands for manipulating, plotting, and solving linear recurrence equations
Magma collection of routines for manipulating small magmas
MapleTA builtin commands from MapleTA available for use in Maple
Maplets tools to create graphical user interfaces for Maple
MathematicalFunctions tools providing information about mathematical functions
MathML commands for importing and exporting Maple expressions as MathML
Matlab commands to facilitate a Matlab Link
MatrixPolynomialAlgebra tools for symbolic manipulation of polynomial matrices
MmaTranslator tools for translating from Mathematica to Maple, expressions, command operations and notebooks
MTM collection of commands to support the Maple Toolbox
MultiSeries commands for performing asymptotic and series expansions in general asymptotic scales
numapprox commands for calculating polynomial approximations to functions on a given interval
numtheory commands for classic number theory
Optimization commands for numerically solving optimization theory problems
Ore_algebra routines for basic calculations in algebras of linear operators
OreTools tools for performing basic arithmetic in pseudo-linear (ore) algebra
OrthogonalSeries tools for series of classical orthogonal polynomials
orthopoly commands for generating various types of orthogonal polynomials
padic commands for computing p-adic approximations to real numbers
PDEtools tools for solving partial differential equations
Physics a package implementing the standard mathematical physics computational objects and their operations
plots commands for displaying graphical representations
plottools commands for generating and manipulating graphical objects
PolynomialIdeals commands for computing with polynomial ideals
PolynomialTools commands for manipulating polynomial objects
powseries commands for creating and manipulating formal power series represented in general form
priqueue functions on priority queues
ProcessControl commands for computing and visualizing statistical process control
QDifferenceEquations commands for constructing solutions of linear q-difference equations
queue commands on queue data structures
RandomTools tools for working with random objects
RationalNormalForms tools for using rational normal forms as a basis for constructing minimal representations and decomposing hypergeometric terms
RealDomain provides a real number context
RegularChains tools for solving systems of algebraic equations symbolically
RootFinding advanced commands for finding roots numerically
ScientificConstants commands for accessing physical constants and Periodic Table Element properties
ScientificErrorAnalysis commands for representation and construction of numerical quantities with a value and error
Security tools for Maple engine security
SignalProcessing commands for manipulating signals
simplex commands for linear optimization using the simplex algorithm
Slode commands for finding formal power series solutions of linear ODEs
SNAP symbolic-numeric algorithms for polynomial arithmetic
Sockets tools for network communication in Maple
SoftwareMetrics functions for quantifying code complexity
SolveTools commands for solving systems of algebraic equations
Spread tools for working with spreadsheets in Maple
stack commands on stack data structures
Statistics tools for mathematical statistics and data analysis
StringTools optimized commands for string manipulation
Student collection of packages covering undergraduate mathematics courses
Student[Basics] commands for learning foundational mathematics
Student[Calculus1] commands to assist with the teaching and learning of single-variable calculus
Student[LinearAlgebra] commands to assist with the teaching and learning of basic linear algebra
Student[MultivariateCalculus] commands to assist with the teaching and learning of multivariate calculus
Student[NumericalAnalysis] commands to assist with the teaching and learning of basic numerical analysis
Student[Precalculus] commands to assist with the teaching and learning of precalculus
Student[Statistics] commands to assist with the teaching and learning of statistics
Student[VectorCalculus] commands to assist with the teaching and learning of vector calculus
SumTools tools for finding closed forms of indefinite and definite sums
sumtools commands for computing indefinite and definite sums
Threads tools for parallel programming
TimeSeriesAnalysis commands used for working with data that varies with time
Tolerances provides computations with tolerances
Typesetting tools for programmatic access to Standard Worksheet Typeset and 2-D equation Parsing options
TypeTools commands for extending the set of recognized types in the type command
Units commands for converting values between units, and environments for performing calculations with units
URL tools for fetching data from the web
VariationalCalculus tools for Calculus of Variations computations
VectorCalculus commands for performing multivariate and vector calculus operations
Worksheet tools for generating and manipulating Maple worksheets
XMLTools tools for using XML documents
The sign test, more examples
This is a continuation of the previous post The sign test. Examples 1 and 2 are presented in the previous post. In this post we present three more examples. Example 3 is a matched pairs problem and
is an example demonstrating that the sign test may not as powerful as the t-test when the population is close to normal. Example 4 is a one-sample location problem. Example 5 is an example of an
application of the sign test when the outcomes of the study or experiment are not numerical. For more information about distribution-free inferences, see [Hollander & Wolfe].
Example 3
Courses in introductory statistics are increasingly popular at community colleges across the United States. These are statistics courses that teach basic concepts of descriptive statistics,
probability notions and basic inferential statistical procedures such as one and two-sample t procedures. A certain teacher of statistics at a local community college believes that taking such a
course improves students’ quantitative skills. At the beginning of one semester, this professor administered a quantitative diagnostic test to a group of 15 students taking an introductory statistics
course. At the end of the semester, the professor administered a second quantitative diagnostic test. The maximum possible score on each test is 50. Though the second test was at a similar level of
difficulty as the first test, the questions in the second test were different and the contexts of the problems were different. Thus simply taking the first test should not improve the second test.
The following matrices show the scores before and after taking the statistics course:
$\displaystyle \begin{pmatrix} \text{Student}&\text{Pre-Statistics}&\text{Post-Statistics}&\text{Diff} \\{1}&17&21&4 \\{2}&26&26&0 \\{3}&16&19&3 \\{4}&28&26&-2 \\{5}&23&30&7 \\{6}&35&40&5 \\{7}&41&43
&2 \\{8}&18&15&-3 \\{9}&30&29&-1 \\{10}&29&31&2 \\{11}&45&46&1 \\{12}&8&7&-1 \\{13}&38&43&5 \\{14}&31&31&0 \\{15}&36&37&1 \end{pmatrix}$
Is there evidence that taking introductory statistics course at community colleges improves students’ quantitative skills? Do the analysis using the sign test.
For a given student, let $X$ be the post-statistics score on the diagnostic test and let $Y$ be the pre-statistics score on the diagnostic test. Let $p=P[X>Y]$. This is the probability that the
student has an improvement on the quantitative test after taking a one-semester introductory statistics course. The test hypotheses are as follows:
$\displaystyle H_0:p=\frac{1}{2} \ \ \ \ H_1:p>\frac{1}{2}$
Another interpretation of the above alternative hypothesis is that the median of the post-statistics quantitative scores has moved upward. Let $W$ be the number of students with an improvement
between the post and pre scores. Since there are two students with a zero difference, under $H_0$, $W \sim \text{binomial}(13,0.5)$. The observed value of $W$ is $w=9$. The following is the P-value:
$\displaystyle \text{P-value}=P[W \ge 9]=\sum \limits_{k=9}^{13} \binom{13}{k} \biggl(\frac{1}{2}\biggr)^{13}=0.1334$
If we want to set the probability of a type I error at 0.10, we would not reject the null hypothesis $H_0$. Thus based on the sign test, it appears that merely taking an introductory statistics
course may not improve a student’s quantitative skills.
The data set for the differences in scores appears symmetric and has no strong skewness and no obvious outliers. So it should be safe to use the t-test. With $\mu_d$ being the mean of $X-Y$, the
hypotheses for the t-test are:
$\displaystyle H_0:\mu_d=0 \ \ \ \ H_1:\mu_d>0$
We obtain: t-score=2.08 and the P-value=0.028. Thus with the t-test, we would reject the null hypothesis and have the opposite conclusion. Because the sign test does not use all the available
information in the data, it is not as powerful as the t-test.
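Both numbers in Example 3 are easy to reproduce. The following Python sketch (a check added here, not part of the original post) recomputes the sign-test P-value as a binomial(13, 0.5) tail sum and the paired t-score from the same differences.

```python
import math

def sign_test_p(w, n):
    """One-sided P-value P[W >= w] for W ~ binomial(n, 1/2)."""
    return sum(math.comb(n, k) for k in range(w, n + 1)) / 2 ** n

diffs = [4, 0, 3, -2, 7, 5, 2, -3, -1, 2, 1, -1, 5, 0, 1]

# Sign test: drop the zero differences, then count plus signs.
nonzero = [d for d in diffs if d != 0]
w = sum(1 for d in nonzero if d > 0)
print(round(sign_test_p(w, len(nonzero)), 4))    # 0.1334

# Paired t-score on all 15 differences.
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
t = mean / (sd / math.sqrt(n))
print(round(t, 2))                               # 2.08
```

The same tail sum reproduces the later examples as well: `sign_test_p(11, 16)` ≈ 0.1051 (Example 4) and `sign_test_p(11, 15)` ≈ 0.0592 (Example 5).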
Example 4
Acid rain is an environmental challenge in many places around the world. It refers to rain or any other form of precipitation that is unusually acidic, i.e. rainwater having elevated levels of
hydrogen ions (low pH). pH is a measure of the acidity or basicity of a solution, on a scale ranging from 0 to 14. Distilled water, with carbon dioxide removed, has a neutral pH level of 7. Liquids with a pH less than 7 are acidic. However, even unpolluted rainwater is slightly acidic, with pH varying between 5.2 and 6.0, because carbon dioxide and water in the air react together to form carbonic acid. Thus, rainwater is only considered acidic if the pH level is less than 5.2.
In a remote region in Washington state, an enviromental biologist measured the pH levels of rainwater and obtained the following data for 16 rainwater samples on 16 different dates:
$\displaystyle \begin{pmatrix} 4.73&4.79&4.87&4.88 \\{5.04}&5.06&5.07&5.09 \\{5.11}&5.16&5.18&5.21 \\{5.23}&5.24&5.25&5.25 \end{pmatrix}$
Is there reason to believe that the rainwater from this region is considered acidic (less than 5.2)? Use the sign test to perform the analysis.
Let $X$ be the pH level of a sample of rainwater in this region of Washington state. Let $p=P[5.2>X]=P[5.2-X>0]$. Thus $p$ is the probability of a plus sign when comparing each data measurement with 5.2. The hypotheses to be tested are:
$\displaystyle H_0:p=\frac{1}{2} \ \ \ \ H_1:p>\frac{1}{2}$
The null hypothesis $H_0$ is equivalent to the statement that the median pH level is 5.2. If the median pH level is less than 5.2, then a data measurement will be more likely to have a plus sign.
Thus the above alternative hypothesis is the statement that the median pH level is less than 5.2.
Let $W$ be the number of plus signs (i.e. $5.2-X>0$). Then $W \sim \text{binomial}(16,0.5)$. There are 11 data measurements with plus signs ($w=11$). Thus the P-value is:
$\displaystyle \text{P-value}=P[W \ge 11]=\sum \limits_{k=11}^{16} \binom{16}{k} \biggl(\frac{1}{2}\biggr)^{16}=0.1051$
At the level of significance $\alpha=0.05$, the null hypothesis is not rejected. We still believe that the rainwater in this region is not acidic.
Example 5
There are two statistics instructors who are both sought after by students in a local college. Let’s call them instructor A and instructor B. The math department conducted a survey to find out who is
more popular with the students. In surveying 15 students, the department found that 11 of the students prefer instructor B over instructor A. Use the sign test to test the hypothesis of no difference
in popularity against the alternative hypothesis that instructor B is more popular.
More than $\frac{2}{3}$ of the students in the sample prefer instructor B over A. This seems like convincing evidence that B is indeed more popular. Let's perform some calculation to confirm this. Let
$W$ be the number of students in the sample who prefer B over A. The null hypothesis is that A and B are equally popular. The alternative hypothesis is that B is more popular. If the null hypothesis
is true, then $W \sim \text{binomial}(15,0.5)$. Then the P-value is:
$\displaystyle \text{P-value}=P[W \ge 11]=\sum \limits_{k=11}^{15} \binom{15}{k} \biggl(\frac{1}{2}\biggr)^{15}=0.05923$
This P-value suggests that we have strong evidence that instructor B is more popular among the students.
Myles Hollander and Douglas A. Wolfe, Non-parametric Statistical Methods, Second Edition, Wiley (1999)
Generators of sections of free groups
Given a free group $F$ on $d$ generators and a normal subgroup $H$ of $F$ whose index is finite of prime power order, is there a systematic way to find the numbers of generators of $H/[H,F]$ and of
The answer is yes, in the sense that there are algorithms to solve these problems, and it would not be particularly difficult to write programs in a language like GAP or Magma to do so. Is this what you are looking for, or is this more of a theoretical question? – Derek Holt May 9 '13 at 12:15
Thank you Professor Derek Holt, I would like to know if there is an explicit formula (perhaps depending on d and the index of H) to calculate this number of generators. – Yassine Guerboussa May 9 '13 at 12:44
These numbers can certainly depend on the isomorphism type of $F/H$ (and not just on its order). – Derek Holt May 9 '13 at 15:11
1 Answer
This is really a cohomological question and has a simple cohomological answer. Recall that if a group $G$ acts on an abelian group $M$, then $M_G$ denotes the coinvariants of the action, that is, the quotient of $M$ by the subgroup generated by $\{\, m-g(m) \mid m \in M,\ g \in G \,\}$. The group $F$ acts on $H$ by conjugation, and thus there is an induced action of $F$ on $H_1(H)$.
Key Observation : $H/[H,F] \cong (H_1(H;\mathbb{Z}))_F$ and $H/[H,F]H^p \cong (H_1(H;\mathbb{Z}/p))_F$.
Indeed, we have $H/[H,H] \cong H_1(H;\mathbb{Z})$ and $H/[H,H]H^p \cong H_1(H;\mathbb{Z}/p)$ by definition, and quotienting by $[H,F]$ just kills off the $F$-action.
The other needed ingredient is the 5-term exact sequence in group homology. Given a short exact sequence
$$1 \longrightarrow A \longrightarrow B \longrightarrow C \longrightarrow 1$$
of groups and a ring of coefficients $R$, this 5-term exact sequence takes the form
$$H_2(B;R) \longrightarrow H_2(C;R) \longrightarrow (H_1(A;R))_B \longrightarrow H_1(B;R) \longrightarrow H_1(C;R) \longrightarrow 0.$$

Letting $Q = F/H$, we will apply this to the short exact sequence
$$1 \longrightarrow H \longrightarrow F \longrightarrow Q \longrightarrow 1.$$
The key simplification that occurs is that $H_2(F;R) = 0$ since $F$ is free. We thus get exact sequences
$$0 \longrightarrow H_2(Q;\mathbb{Z}) \longrightarrow H/[F,H] \longrightarrow H_1(F;\mathbb{Z}) \longrightarrow H_1(Q;\mathbb{Z}) \longrightarrow 0$$
$$0 \longrightarrow H_2(Q;\mathbb{Z}/p) \longrightarrow H/[F,H]H^p \longrightarrow H_1(F;\mathbb{Z}/p) \longrightarrow H_1(Q;\mathbb{Z}/p) \longrightarrow 0.$$
If you understand $Q$ enough to calculate its first and second homologies, these short exact sequences allow you to determine $H/[F,H]$ and $H/[F,H]H^p$.
Thank you dear professor Andy Putman. Clearly we can start with a minimally d-generated group Q (I'm interested in the case when Q is a finite p-group); in that case $H_1(Q,Z/p)$ is isomorphic to the Frattini quotient of Q, so it is elementary abelian of rank d. – Yassine Guerboussa May 10 '13 at 10:53
Also it is not difficult to prove that $F/[F,F]F^p$ is elementary abelian of rank d, and so is $H_1(F,Z/p)$. It follows from your last exact sequence that $H/[F,H]H^p$ and $H_2(Q,Z/p)$ are isomorphic. So we have only to compute $H_2(Q,Z/p)$. May I ask how much harder it is to do this? – Yassine Guerboussa May 10 '13 at 11:00
It's nontrivial to compute it, but there is a huge literature on group cohomology, so there are many tools available. To help your search, you should be aware that $H_2(G;\mathbb{Z})$
is also known as the Schur multiplier of $G$. For a particular finite group of reasonable size, by the way, you should be able to compute $H_2$ using GAP. The relevant packages are
cohomolo (see gap-system.org/Packages/cohomolo.html) and hap (see gap-system.org/Packages/hap.html). – Andy Putman May 10 '13 at 16:43
Many thanks dear Andy. – Yassine Guerboussa May 11 '13 at 14:06
Gravitational Wave Rocket
Here's the paper I'm talking about
It's times like this I wish I'd gone to class...
I've been trying to find a method of propulsion that falls within the realms of accepted physics but is convenient in terms of *storytelling*. I want my characters to reach star systems within
subjective weeks, which can't be done if you accelerate them at 1G (or anything remotely close to that).
The basic concept of the gravitational wave rocket is that it radiates gravitational waves asymmetrically, losing mass as it does so, causing it to accelerate. I would assume this would accelerate
the rocket without generating any FELT acceleration, although I can't tell from the paper. I'm having a very difficult time understanding what is being discussed in this paper. At some points it says
there is a vibration, at other times they say a rotation. I can't see what they are talking about. Even the meanings of most of the variables are completely beyond me. Can anybody help clarify this?
[e-lang] An attack on a mint
David Wagner daw at cs.berkeley.edu
Mon Mar 3 12:44:15 EST 2008
Mark Miller writes:
>However, because JavaScript doesn't have integers, I pulled the
>addition out into a separate statement to ensure it didn't lose
Oh, gosh, so my fixing this correctly is more subtle than it appears.
Here was my proposed "fix":
deposit: function(amount,src) {
var box = src.getDecr();
balance = caja.enforceNat(balance+amount);
There's a subtle detail here. If balance+amount overflows (i.e., exceeds
MAXINT, where MAXINT+1 is the smallest non-precisely-representable natural
number), then enforceNat() will throw an exception -- *after* src.balance
has been decremented but before this.balance has been incremented.
Thus this might look like it can violate conservation of currency.
Fortunately, I think this is not exploitable as long as the total amount
of money in circulation is representable in a natural number, and I
believe that assumption is enforced by the Mint. However this shows
that my fix has made the code less robust against integer overflow;
it happens to be safe, but the reasoning needed to show that requires
more subtle, non-local invariants.
I seem to recall discussing the integer overflow issue during the
Waterken review as we reviewed Tyler's Mint. His code was also safe,
for the same reason, but I think it took us a moment to work through
the chain of reasoning.
>The enforceNat function is
> /**
> * Enforces that <tt>specimen</tt> is a non-negative integer within
> * the range of exactly representable consecutive integers, in which
> * case <tt>specimen</tt> is returned.
> * <p>
> * "Nat" is short for "Natural number".
> */
> function enforceNat(specimen) {
> enforceType(specimen, 'number');
> if (Math.floor(specimen) !== specimen) {
> fail('Must be integral: ', specimen);
> }
> if (specimen < 0) {
> fail('Must not be negative: ', specimen);
> }
> // Could pre-compute precision limit, but probably not faster
> // enough to be worth it.
> if (Math.floor(specimen-1) !== specimen-1) {
> fail('Beyond precision limit: ', specimen);
> }
> if (Math.floor(specimen-1) >= specimen) {
> fail('Must not be infinite: ', specimen);
> }
> return specimen;
> }
Interesting. Does this suffice? One property we want to ensure is
that if
z = enforceNat(x+y);
succeeds, then as integers z is the sum of x and y. But what if x is
MAXINT, and y=1. Is it possible that x+1 rounds down to x? Similarly
is it possible that there exists some sum s such that s > MAXINT but
Math.floor(s-1) === s-1 < s? Are we guaranteed by properties of floating
point arithmetic that if x > MAXINT, then enforceNat() will throw an
exception? I just don't know floating point arithmetic well enough to
reason this through.
What about
z = enforceNat(x*y);
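These questions can be probed empirically. JavaScript numbers are IEEE-754 doubles, the same representation as Python floats, so the rough experiment below (added here, not part of the original thread) checks the behavior at the precision boundary MAXINT = 2^53.

```python
import math

MAXINT = 2.0 ** 53  # MAXINT + 1 is the smallest natural number with no
                    # exact IEEE-754 double representation

# Q1: can x + 1 round down to x?  Yes, at the boundary: the true sum
# 2**53 + 1 lies halfway between representable neighbors, and
# round-to-nearest-even resolves the tie back down to 2**53.
x = MAXINT
print(x + 1.0 == x)                       # True

# Q2: is there an s > MAXINT with floor(s - 1) == s - 1?  Yes: for
# s = 2**53 + 2 the subtraction itself rounds to the integer 2**53,
# so a floor-based precision test compares equal and does not throw.
s = MAXINT + 2.0
print(math.floor(s - 1.0) == s - 1.0)     # True
```

So, at least in this stand-in arithmetic, the floor test alone is not guaranteed to reject every value beyond MAXINT — which is exactly the doubt the questions above are raising.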
>I do remember a different plan interference vulnerability that both
>Tyler & I independently fell into in our respective implementations of
>the IOU Mint (in Waterken/Joe-E and E respectively). By aborting a
>plan with a thrown exception, an attacker could violate conservation
>of currency. This bug was fixed well before the Waterken security
>review. But it corroborates the suspicion that we may all be
>underestimating plan interference vulnerabilities.
This seems to be shaping up to make an interesting case study in
identifying several kinds of common hazards:
- exceptions can cause plan interference
- re-entrancy can cause plan interference
(or: calling untrusted code while object invariants are
temporarily broken can introduce security holes)
- integer overflow can violate expected invariants
- aliasing can violate expected invariants
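The exception hazard is easy to reproduce in miniature. Below is a simplified Python stand-in for the purse pattern discussed above (hypothetical names, not the actual E, Joe-E, or Caja code): debiting the source before a check that can throw destroys currency, while validating before any mutation preserves the invariant. As noted above, the real Mint escapes this because total circulation is capped below MAXINT, but the sketch shows why that reasoning has to be non-local.

```python
MAXINT = 2 ** 53

class Overflow(Exception):
    pass

def enforce_nat(n):
    """Stand-in for enforceNat: accept exactly-representable naturals only."""
    if not isinstance(n, int) or n < 0 or n > MAXINT:
        raise Overflow(n)
    return n

class Purse:
    def __init__(self, balance):
        self.balance = enforce_nat(balance)

    def deposit_unsafe(self, amount, src):
        src.balance -= amount                               # debit first...
        self.balance = enforce_nat(self.balance + amount)   # ...then maybe throw

    def deposit_safe(self, amount, src):
        new_balance = enforce_nat(self.balance + amount)    # validate first,
        src.balance -= amount                               # mutate only after
        self.balance = new_balance

a, b = Purse(MAXINT), Purse(5)
try:
    a.deposit_unsafe(5, b)               # overflows after b was already debited
except Overflow:
    pass
print(a.balance + b.balance == MAXINT)   # True: 5 units vanished mid-plan
```

With `deposit_safe` the same aborted deposit leaves both balances untouched, so conservation of currency holds locally rather than by appeal to a global cap.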
Explain the steps you would follow to evaluate an expression. Here is an example: 40 + 7h - 5, where h = 5.
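The usual steps are: (1) substitute the given value for the variable, (2) simplify using the order of operations — multiplication before addition and subtraction. A small Python sketch of the worked example (added here for illustration):

```python
def evaluate(h):
    # Substitute h = 5 into 40 + 7h - 5, then simplify:
    # 40 + 7*5 - 5  ->  40 + 35 - 5  ->  70
    return 40 + 7 * h - 5

print(evaluate(5))   # 70
```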
ShareMe - free Arithmetic Average Definition download
Arithmetic Average Definition
From Title
1. Average CPU Cycles - Utilities
... The program is ideal for measuring the cpu usage for specific programs, to find out the total average cpu usage of a program for its entire uptime or to start measuring at a point in time you
are interested in. The latter is for example useful to find out how much cpu a program uses when idle. The Vista/7 version uses the kernel32.dll function QueryProcessCycleTime which makes it
possible to measure smaller cpu bursts than with other more commonly used methods. It also makes the measurements ...
2. Grade Point Average Calculator - Utilities/Mac Utilities
... The Grade Point Average Calculator is a web-based script that will calculate a grade point average based on the American system. Currently including calculation options for high school; middle
shool and college versions are in development. ...
3. PERT Weighted average and STD - Utilities/Other Utilities
... Another java app. for calculating the PERT Weighted average (Expected time - te) and standard deviation (STD). ...
4. Free average Interest Rate Calculator - Business & Productivity Tools/Accounting & Finance
... Calculate average Interest Rates with this Free average Interest Rate Calculator for Windows. Enter the balance and interest rate of your loans. Knowing the average interest rate of loans
helps determine when consolidation may be a good idea. This calculator offers integrated help, provides automatic hints and offers free, online updates. ...
5. EMSolution arithmetic - Educational/Mathematics
... This bilingual problem-solving mathematics software allows you to work through 36319 arithmetic and pre-algebra problems with guided solutions, and encourages to learn through in-depth
understanding of each solution step and repetition rather than through rote memorization. Each solution step is provided with its objective, related definition, rule and underlying math formula or
theorem. A translation option offers a way to learn math lexicon in a foreign language. Test preparation options ...
6. Animated arithmetic - Educational/Mathematics
... The "Animated arithmetic CD" for Windows and Win 95 teaches addition, subtraction, multiplication and division for children from 1st through 4th grades. It provides exercises in addition and
subtraction with and without regrouping. Problems can involve up to 9 digits. More than just a drill program, progressive help is given as needed to instruct the child to solve the problems. The
multiplication and division problems are based on the multiplication table from 1 to 10, it teaches "mental math" ...
7. arithmetic Game - Games/Kids
... Fancy some interesting arithmetic training? Come and search for the numeric components and complete the equations in this game! You will be given a large grid of numerous numbers, while a
questions will be presented at the bottom of the screen. Click to choose the correct numbers on the grid to complete the equation, then click the Submit button to check if the equation is valid.
If the equation is correct, the numbers you picked will be removed from the grid, and you can proceed to the next ...
8. arithmetic Matrix Calculator - Utilities/Other Utilities
... With this application is possible make: Sum of Matrices, Subtraction of Matrices, Multiply a Matrix by a real number and Multiplication of Matrices ...
9. arithmetic Simulation Routines - Utilities/Other Utilities
... Aim is to develop a library of utility functions to efficiently simulate division, multiplication, mod, finding first 1/0 bit etc. (using shift and add/subtract) for software development on
low cost dsp systems lacking these. ...
10. C++ Multiprecision arithmetic Library - Utilities/Other Utilities
... This is a library of classes and functions to be used to abstract arbitrarily large integer and floating-point numbers in C++.All standard operators are overloaded so the user is able to
substitute "mpi" for "int" and "mpf" for "double" to use. ...
Arithmetic Average Definition
From Short Description
1. AvancedLoadAvgerage - Utilities/Other Utilities
... This program is a little daemon, written in Python, splitting the load average into its two major components: the CPU average and the I/O average. With these values, the administrator knows what to improve. ...
2. RekenTest - Educational/Mathematics
... RekenTest is freeware educational software to practice arithmetic skills. It supports basic arithmetic operations like addition and subtraction, the multiplication tables and so on, as well as
more advanced arithmetic operations like decimals, money problems, percentages and fractions. The software can be used for classes or as a homework helper. It has lots of options to create the
lessons you want and lets you organize your classroom with tasks and groups. Available in the languages English, ...
3. DigiMode My Notes - Utilities/System Utilities
... It takes an average of 30 seconds and 10 mouse clicks to get to Windows Notepad. This is not very good when the average PC user needs to write down a new note every few minutes. We have devised a simple solution for this, and further added a lot of handy and useful features for the notepad user. No need to save anything, everything is saved automatically and recalled in one single mouse
click. Further, we added minimize to tray function and ability to Make on top instantly, plus ability to run the ...
4. Gish - Games/Adventure & RPG
... Gish isn't your average hero, in fact he's not your average anything...see Gish is a ball of tar. A Sunday stroll with his lady friend Brea goes awry when a shadowy figure emerges from an open
man hole and pulls Brea into the ground below. Following Brea's calls for help Gish suddenly finds himself in the subterranean sewers of Dross, a long forgotten city filled with twisting
corridors, evil traps and some of the most demented creatures imaginable. With his gelatinous structure as his only ...
5. ESBPCS-Dates for VCL - Trial - Programming/Components & Libraries
... ESBPCS-Dates is a subset of ESBPCS containing Components and Routines for Calendars and Date/Time Manipuation in Borland Delphi and C++ Builder - also covering Duration, TimeZones, Month
arithmetic, Week arithmetic, different standards. Subset includes a good collection of Edits, SpinEdits, ComboBoxes, Memos, CheckBoxes, RadioGroups, CheckGroups as well as a huge collection of
routines. Also Includes Data Aware Components, Help and full source. ...
6. AVCWare iPhone Video Converter for Mac - Multimedia & Design/Rippers & Converters
... Convert, edit, transfer and share faster than ever with AVCWare iPhone Video Converter for Mac-your complete video-editing software for making iPhone High definition and Standard definition
videos. Optimized with the latest processing technology, AVCWare iPhone Video Converter for Mac lets you see results on screen, make movies and music and then share it on iPhone immediately.
Functions: 1.1-2-3 step to convert iPhone videos from popular video formats 2.High definition movies for ...
7. OpenCms Page definition Module - Utilities/Other Utilities
... The Page definition Module for OpenCms extends the core functionality of OpenCms by a page definition layer. This permits to create, edit, delete and position heterogeneous content elements
freely from within the page preview. ...
8. Video Converter Factory Pro - Multimedia & Design/Video
... Video Converter Factory Pro is capable of converting almost all frequently used video files. The input video file formats supported by this video converter software include both HD (High
definition) and SD (Standard definition) videos. Video Converter Factory Pro provides powerful video editing function and video effect. Video Converter Factory Pro is a real all-in-one video
converting tool. As the advanced version of Free Video Converter Factory, Video Converter Factory Pro with more powerful ...
9. DVDVideoMedia Free DVD to HD Converter - Multimedia & Design/CD/DVD Burners/Rippers
... DVDVideoMedia Free DVD to HD Converter can convert DVD to various HD formats with no limitation. High-definition video, opposed to standard definition video, refers to any video which display
resolutions of 1280×720 pixels (720p) or 1920×1080 pixels (1080i/1080p). Besides supporting versatile HD formats, DVDVideoMedia Free DVD to HD Converter also allows you editing function and
customize advanced parameters is supported as well. Capture pictures and preview are also allowed with ideal DVD to HD ...
10. AVCWare Video Converter Platinum - Multimedia & Design/Rippers & Converters
... AVCWare Video Converter Platinum integrates all functions of HD video converter and general video converter software, which converts almost all High-definition and Standard-definition video
formats (AVI, MPEG, WMV, DivX, MP4, H.264/AVC, AVCHD, MKV, RM, MOV, XviD, 3GP, etc.) from one to another. Functions: 1.Supports nearly all video formats, in particular HD formats 2.Supports all
Popular Multimedia Devices, works as an iPod/iPhone/iPad/Zune/PSP/MP4 Video Converter 3.Create videos from ...
Arithmetic Average Definition
From Long Description
1. XNum integer arithmetic library - Utilities/Mac Utilities
... XNum is an integer arithmetic library written in C++. The difference between XNum and other libraries such as GMP is that the former tries to imitate the practical method that humans use to do the arithmetic themselves. Current ...
2. MobileCubeAverage - Utilities/Other Utilities
... Written in J2ME, this is a simplistic mobile app which computes Rubik's cube session averages. Computes speedcubing session average; accepts SS.HH and MM:SS.HH input format; also accepts DNF (Did Not Finish) input for DNF solves; shows the fastest and slowest time and a summary of solve times; average of 5; average of ...
3. Random Intelligence Test - Educational/Science
... Free random intelligence test based on Color Lines game. There are complexity adjustment by number of colors and initial balls, precision setting, saving results through Internet and settings
in the registry. It calculates average appraisal of intellect, i.e. arithmetic mean of total scores gotten in every round of test. With lesser complexity the test can be used for intellect
development. ...
4. Color Lines Test - Games/Puzzle & Word
... Free random intellect test based on Color Lines game. There are complexity adjustment by number of colors and initial balls, precision setting, saving results through Internet and settings in
the registry. It calculates average appraisal of intellect, i.e. arithmetic mean of total scores gotten in every round of test. With lesser complexity the test can be used for intellect
development. ...
5. AverageTime - Business & Productivity Tools/Accounting & Finance
... AverageTime is a quick and simple application that will average together a series of times, entered as HH:MM. Very useful in call center applications, such as finding the average handle time
for a series of calls, or for finding the average wait time on a series of queues. Also useful for exercise tracking, such as entering in lap times for a series of days to get an average.
AverageTime is FREEWARE. ...
6. Snap for hyperbolic 3-manifolds - Utilities/Other Utilities
... Snap (snap-pari) is a computer program for studying arithmetic invariants of hyperbolic 3-manifolds. See: Computing arithmetic invariants of 3-manifolds by Coulson, Goodman, Hodgson and
Neumann, Experimental Mathematics Vol.9 (2000) 1. ...
7. ScienCalc - Educational/Mathematics
... ScienCalc is a convenient and powerful scientific calculator. ScienCalc calculates mathematical expression. It supports the common arithmetic operations (+, -, *, /) and parentheses. The
program contains high-performance arithmetic, trigonometric, hyperbolic and transcendental calculation routines. All the function routines therein map directly to Intel 80387 FPU floating-point
machine instructions. ...
8. Safety In Numbers - Interval arithmetic - Utilities/Other Utilities
... John has between 2 and 5 apples. Mary gives him between 3 and 7 apples. How many apples does he have now? If you even understand this question, you understand interval arithmetic. This Java
library implements interval arithmetic for all operators. ...
9. Gaol: NOT Just Another Interval Library - Utilities/Other Utilities
... Gaol: NOT Just Another Interval Library Gaol is a C++ library for interval arithmetic. It is supposed to be a fast and easy to use/modify library for anyone interested in assessing interval
arithmetic merits or using it on a regular basis. ...
10. Stock Screener Lite - Business & Productivity Tools/Accounting & Finance
... Screen, Scan and Filter Stocks, covers over 30 stock exchange, Technical Analysis. MACD, RSI, Moving average, CCI, Williams %R, MFI. 4 build in filters and FREE EOD Data for 38 stock exchange
worldwide. Filter include Moving average 1 Crossover Moving average 2, MACD Buy Signal in last few days, weeks, months, average Volume Filter and RSI Above, Below, Crossover certain value in last
days, weeks, months. ...
Arithmetic Average Definition
Related Searches: | {"url":"http://shareme.com/programs/arithmetic/average-definition","timestamp":"2014-04-18T20:54:42Z","content_type":null,"content_length":"54026","record_id":"<urn:uuid:abf88135-e80c-4440-887a-df09280a7807>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solution to Puzzle No. 2 - carnivorous beetles
September 1997
For the question see " Puzzle No. 2 - carnivorous beetles", in issue 2.
At first sight it may appear necessary to write down the equations of motion of the beetles as a set of differential equations and to solve these to find the paths or trajectories. However, as with
our first puzzle there is a trick (see "Solution to Puzzle No. 1 - the ring").
The problem is symmetrical so we can see straight away that whatever paths the beetles take they will always be at the four vertices of a square whose origin remains fixed.
We know that A's path must curve because B is also moving. From the direction of B's motion we can also see that A's path must curve towards the interior of the square. So the square is shrinking (as
well as rotating clockwise).
Notice that the component of B's velocity in the direction of AB is always zero so the length of the side from A to B is shrinking at the speed at which A is moving towards B, 1cm/s. After 4s the
square has shrunk to a point with all the beetles having spiralled into the centre.
To summarise: A takes 4s to catch B and travels 4cm in this time. As to what happens, that is left to your imagination!
Solution using calculus:
Writing down the equations of motion for the beetles and solving them is another way of working out what happens. It also tells us something that is not obvious from a simple plot of the beetles' paths.
Take origin O as the centre of the square, and consider the motion of the beetle at corner A. Work with polar coordinates as shown in the diagram with r as the distance OA. A's velocity of 1 cm/s (directed at B) makes an angle of 45° with the inward direction AO, so the radial equation is dr/dt = -cos 45° = -1/√2. With r(0) = 4/√2 = 2√2 (half the diagonal of the square), this gives r(t) = (4 - t)/√2.
From this we see that the beetles must eventually collide at t = 4 which gives us our answer.
It is also instructive to solve the equation for theta. The component of A's velocity perpendicular to OA is sin 45° = 1/√2, so r dθ/dt = 1/√2, giving dθ/dt = 1/(4 - t) and hence θ(t) = θ(0) + ln(4/(4 - t)).
Notice that as t gets closer to 4, theta gets larger and larger without bound. In other words the beetles spin around each other an infinite number of times before colliding.
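The calculus result can also be checked numerically. The sketch below (a rough Euler integration; the step size dt and stopping threshold eps are arbitrary choices, not part of the original article) moves each beetle a small step towards the next one until they essentially meet:

```python
import math

def chase(side=4.0, speed=1.0, dt=1e-4, eps=1e-3):
    """Four beetles on the corners of a square, each walking towards
    the next one at constant speed; returns (elapsed time, path length)."""
    pts = [(0.0, 0.0), (side, 0.0), (side, side), (0.0, side)]
    t = path = 0.0
    while math.dist(pts[0], pts[1]) > eps:
        nxt = []
        for k in range(4):
            x, y = pts[k]
            tx, ty = pts[(k + 1) % 4]
            d = math.dist((x, y), (tx, ty))
            # step of length speed*dt in the direction of the target beetle
            nxt.append((x + speed * dt * (tx - x) / d,
                        y + speed * dt * (ty - y) / d))
        pts = nxt
        t += dt
        path += speed * dt
    return t, path
```

Both the elapsed time and the distance each beetle walks come out at about 4, matching the answer above.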
Submitted by Anonymous on September 9, 2012.
(1) 4 seconds
(2) 4 cm
(3) All Beatles will die - John Lennon, Paul McCartney, George Harrison, and Lingo Star. By eating one another.
The square (A,B,C,D) continues to shrink by 1 cm / second, toward the center. Since the original size of the square is 4 cm (height & width), it takes 4 seconds. The length of each locus is 4 cm (= 4
seconds * (1 cm / second)).
Sent from: takushi.itadani@gmail.com | {"url":"http://plus.maths.org/content/os/issue3/puzzle/solution","timestamp":"2014-04-20T00:48:56Z","content_type":null,"content_length":"21472","record_id":"<urn:uuid:41afa3e5-41ba-45fe-81cd-0456fe7390a6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
The first order parchment
- Recent Trends in Data Type Specifications. 11th Workshop on Specification of Abstract Data Types, volume 1130 of Lecture Notes in Computer Science , 1996
"... this paper, so we leave them out here. Thus we can apply the idea of combining things via colimits to institutions themselves, with the special point that we have to take limits here instead of
colimits. Taking limits in CAT results in categories of "amalgamated objects", i. e. we put signatures an ..."
Cited by 15 (5 self)
Add to MetaCart
this paper, so we leave them out here. Thus we can apply the idea of combining things via colimits to institutions themselves, with the special point that we have to take limits here instead of
colimits. Taking limits in CAT results in categories of "amalgamated objects", i. e. we put signatures and models together at the level of single objects. In contrast to this, sentences are combined
with colimits in Set (due to the contravariant direction of the sentence component). That is, sets of sentences are combined. To show how this works, we introduce some well-known institutions and
morphisms between them.
- In Recent Trends in Algebraic Development Techniques, volume 1376 of LNCS , 1997
"... . The paper addresses important problems of building complex logical systems and their representations in universal logics in a systematic way. We adopt the model-theoretic view of logic as
captured in the notions of institution and of parchment (an algebraic way of presenting institutions). We prop ..."
Cited by 15 (4 self)
Add to MetaCart
. The paper addresses important problems of building complex logical systems and their representations in universal logics in a systematic way. We adopt the model-theoretic view of logic as captured
in the notions of institution and of parchment (an algebraic way of presenting institutions). We propose a new, modified notion of parchment together with parchment morphisms and representations. In
contrast to the original parchment definition and our earlier work, in model-theoretic parchments introduced here the universal semantic structure is distributed over individual signatures and
models. We lift formal properties of the categories of institutions and their representations to this level: the category of model-theoretic parchments is complete, and their representations may be
put together using categorical limits as well. However, model-theoretic parchments provide a more adequate framework for systematic combination of logical systems than institutions. We indicate how
the necessar...
, 1996
"... For the specification of abstract data types, quite a number of logical systems have been developed. In this work, we will try to give an overview over this variety. As a prerequisite, we first study notions of representation and embedding between logical systems, which are formalized as ..."
Cited by 5 (4 self)
Add to MetaCart
For the specification of abstract data types, quite a number of logical systems have been developed. In this work, we will try to give an overview over this variety. As a prerequisite, we first study
notions of representation and embedding between logical systems, which are formalized as institutions here. Different kinds of representations will lead to a looser or tighter connection of the institutions, with more or less good possibilities of faithfully embedding the semantics and of re-using proof support. In the second part, we then perform a detailed "empirical" study of the relations among various well-known institutions of total, order-sorted and partial algebras and first-order structures (all with Horn style, i.e. universally quantified conditional, axioms). We thus obtain a graph of institutions, with different kinds of edges according to the different kinds of representations between institutions studied in the first part. We also prove some separation results, leading to a hierarchy of institutions, which in turn naturally leads to five subgraphs of the above graph of institutions. They correspond to five different levels of expressiveness in the hierarchy, which can be characterized by different kinds of conditional generation principles. We introduce a systematic notation for institutions of total, order-sorted and partial algebras and first-order structures. The notation closely follows the combination of features that are present in the respective institution. This raises the question whether these combinations of features can be made mathematically precise in some way. In the third part, we therefore study the combination of institutions with the help of so-called parchments (which are certain algebraic presentations of institutions) and parchment morphisms. The present book is a revised version of the author's thesis, where a number of mathematical problems (pointed out by Andrzej Tarlecki) and a number of misuses of the English language (pointed out by Bernd Krieg-Brückner) have been corrected. Also, the syntax of specifications has been adapted to that of the recently developed Common Algebraic Specification Language CASL [CASL/Summary, Mosses97TAPSOFT]. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2261052","timestamp":"2014-04-21T11:27:20Z","content_type":null,"content_length":"20128","record_id":"<urn:uuid:c1a21b5a-364f-44b7-a7d4-f72fb8812ff5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of possible triangle.
November 29th 2011, 08:19 AM #1
Nov 2011
Number of possible triangle.
Q)triangle has n dots on each side. (as shown in fig) How many triangles can be formed (any size).
1st image (image002.jpg)
My incomplete solution
If there are n dots then total no. of dots = n(n+1)/2
No. of combinations using 3 at a time = C(n(n+1)/2, 3)
but not every set of 3 pts will be a ∆
some will be collinear points
image 2 (1111.jpg)
like x1, x2, x3 are collinear ON line AB, similarly on CD, EF and so on
and similarly on the other sides of the ∆: BT, GH and so on & AT, SJ and so on
combinations of 3 pts on AB = C(n, 3)
on CD = C(n-1, 3)
on EF = C(n-2, 3)
so total = C(n, 3) + C(n-1, 3) + C(n-2, 3) + ... + C(3, 3)
as there are 3 sides, therefore:
3 × (C(n, 3) + C(n-1, 3) + ... + C(3, 3)) = 3 · Σ_{i=3}^{n} C(i, 3)
so no. of possible triangles = C(n(n+1)/2, 3) − 3 · Σ_{i=3}^{n} C(i, 3)
but x1,x4,x6,x7 are also collinear...
dont know how to proceed
plz help!!!
Last edited by mr fantastic; November 29th 2011 at 02:47 PM. Reason: Title.
Re: number of possible triangle .... plz give a look once
A recursive definition should solve the problem.
When n = 1, you have 0 triangles.
When n = 2, you have 1 triangle.
If n = k gives you x triangles, what does n = (k+1) give you?
If you can answer that, you can get the answer in general.
Re: number of possible triangle .... plz give a look once
Also...it matters if you only want equilateral triangles, I wasn't clear on that exactly. If you allow ANY triangle, then all you should have to do is calculate the number of points and eliminate
cases where all three are colinear.
I'd build the cases up to maybe n = 3 or 4, then try to figure out the recursive portion which follows.
Re: number of possible triangle .... plz give a look once
Er, one more note. This is actually pretty tricky given the colinear issue...I see your problem now. This is significantly non-trivial, as you have to take all angles into account. It's possible,
but it would be hard...you would have to get a formula to determine for a triangle of height h, how many angles produce 3 in a row, how many produce 4, 5, 6... I don't think this is
straightforward at all. Can you post the verbatim question?
Re: number of possible triangle .... plz give a look once
1st of all .. i appreciate u gave time for this
when i read ur post i feel i m taken back in time where i thought of these things
spent 2 days now... :|
anyways this is my own question... not frm anywhere
and yes it can be any triangle.. not necessarily equilateral triangles
I did it till n =18 but still... many new colinear points were coming up..
but I do think there can be a general expression for this...
we are overlooking something ......
if u can show a direction.. i might do something on it
{i ll be sleeping now... its late .. might be a delay in my reply}
Re: number of possible triangle .... plz give a look once
Oh! That's a relief. If this is your own question then that explains a lot.
Congratulations are in order!
Take a look at Fermat's Last Theorem sometime. Or for a bigger headache, the Collatz Conjecture (which may well be one of those impossible-to-solve problems for all we know). What you're asking
for here is simple but realize that colinearity extends, as the triangle grows, toward an infinite number of angles. I don't know what the best method would be for attacking this problem, maybe
geometry or even topology. I'm not about to try it, though.
Re: Number of possible triangle.
If $n\ge 3$ and $T(n)=\frac{n(n+1)}{2}$(i.e. the total number of point in the diagram) then the number of triangles is
$\binom{T(n)}{3}-\left[3 \cdot \sum\limits_{k = 3}^n \binom{k}{3} \right]$.
Re: Number of possible triangle.
Plato, how on Earth does that equation take the colinearity of arbitrary angles into account?
Example: if n = 5 or more, then colinear points appear at ~ 23.4 degrees (two levels, each level sqrt(3)/2, by four dots). The further you go, the more angles add colinearity. The poster is not
asking for equilateral triangles or right triangles; they want all triangles, but no lines.
I can't believe that your recurrence is sufficient to handle that. For one thing, once you hit n = 8, you start generating 4 points colinear on each 23.4 degree angle, which is four triangles you
have to eliminate from consideration for each line which contains 4 points... etc.
Re: Number of possible triangle.
Plato, how on Earth does that equation take the colinearity of arbitrary angles into account?
Example: if n = 5 or more, then colinear points appear at ~ 23.4 degrees (two levels, each level sqrt(3)/2, by four dots). The further you go, the more angles add colinearity. The poster is not
asking for equilateral triangles or right triangles; they want all triangles, but no lines.
I admit that I am assuming that the larger triangle is equilateral where the points are equally spaced, given the posted diagram. Thus if $n=5$ then there are three sets of five colinear points,
there are three sets of four colinear points, and there are three sets of three colinear points. So we remove any selection of three points that are colinear. So there are no angles to be considered.
Again the analysis is based on a grid of n points on each side of an equilateral triangle where the points are equally spaced.
There is a well-known problem in graph theory asking for the number of "triangles" in a graph. The quotes are for the fact that colinearity is not an issue there.
Now if we have n randomly placed points on each side of the triangle, then of course there is no easy solution.
Re: Number of possible triangle.
You're ignoring some colinearities. Here, look:
    *
   * *
  @ * *
 * * @ *
* * * * @
The larger the triangle becomes, the more of these there are at various angles. How are you accounting for them?
Re: Number of possible triangle.
@ Plato, this is an equilateral triangle with points equally spaced..
but still ...u r ignoring many co linear points...
if u see I gave an expression: no. of possible triangles = C(n(n+1)/2, 3) − 3 · Σ_{i=3}^{n} C(i, 3)
which u also got...
if you proceed n=8,9,10 etc...
many other different cases of co linear points will appear...
u mentioned "There is a well-know problem in graph theory asking for the number of "triangles" is a graph"
will u please share a link of such problem... it might give me some idea...
@ Annatala
ur post "there are many problems that are extremely easy to phrase, but difficult (and in some cases, though probably not this one, impossible) to solve"
made me laugh
that may be the case......I think its difficult to solve this 1 but may b someone has a simple solution for this
I have many other questions like this for which I dont have answer... (all my own q
Re: Number of possible triangle.
I'll take a look at it when I next have free time, which might be a week or two from now. I think there may be a reasonably simple formula for capturing what you want, but it will probably be a
recursively dependent formula that takes a long while to calculate.
Re: Number of possible triangle.
@ Plato, this is an equilateral triangle with points equally spaced.. but still ...u r ignoring many co linear points...
if u see I gave an expression: no. of possible triangles = C(n(n+1)/2, 3) − 3 · Σ_{i=3}^{n} C(i, 3)
which u also got...
if you proceed n=8,9,10 etc...
many other different cases of co linear points will appear...
u mentioned "There is a well-know problem in graph theory asking for the number of "triangles" is a graph"
will u please share a link of such problem... it might give me some idea...
Both of you are of course correct. I was clearly working with a model which was too small. However, I do think the key is to find out for each k from 3 to n how many "colinear sets" consist of
exactly k points.
Re: Number of possible triangle.
If this were a square (or rhombus) instead of a triangle (half of a square, topologically speaking wrt the points lining up), it would be much easier to get the answer; as with the triangle, you
can calculate the next point in a line from the previous, but there would also be an equal amount of room in all directions. If I solve this I'll probably start there.
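For small n the count being debated here can simply be brute-forced, which gives a concrete check on any proposed formula. The sketch below uses one arbitrary choice of coordinates: the triangular array is embedded as the integer points (i, j) with i + j ≤ n − 1, an affine image of the equilateral grid. Affine maps preserve collinearity, so an exact integer cross-product test can discard the collinear triples:

```python
from itertools import combinations

def triangles(n):
    # integer embedding of the triangular array with n points per side
    pts = [(i, j) for i in range(n) for j in range(n - i)]
    count = 0
    for (ax, ay), (bx, by), (cx, cy) in combinations(pts, 3):
        # cross product of AB and AC is zero exactly when A, B, C are collinear
        if (bx - ax) * (cy - ay) != (by - ay) * (cx - ax):
            count += 1
    return count
```

For n = 2, 3, 4 this gives 1, 17 and 105, agreeing with C(T(n), 3) − 3·Σ C(k, 3); at n = 5 it gives 407 rather than that formula's 410, because three extra collinear triples (such as the one through (0,0), (1,1), (2,2) in this embedding) appear, which is exactly the effect discussed above.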
Aug 2011 | {"url":"http://mathhelpforum.com/discrete-math/192986-number-possible-triangle.html","timestamp":"2014-04-17T01:47:39Z","content_type":null,"content_length":"74848","record_id":"<urn:uuid:4195c871-42ee-4eee-8e79-71b9bfbbf909>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
C++ Notes: Binary Tree Traversal
Typical binary tree node
Assume this definition for the following examples.
struct tree_node {
tree_node *left; // left subtree has smaller elements
tree_node *right; // right subtree has larger elements
  int data;
};
Traversing a binary tree
Traversing (visiting all the nodes) a tree starting at node is often done in one of three orders
• Preorder - node, left subtree, right subtree.
• Inorder - left subtree, node, right subtree. This could be used to print a binary search tree in sorted order.
• Postorder - left subtree, right subtree, node. This could be used to print an expression tree in reverse polish notation (postfix).
Recursive code to print a binary search tree
By far the easiest way to print (or otherwise process) a binary tree is with a recursive function. This is one of the first uses of recursion that makes an algorithm much easier to code.
void print_inorder(tree_node *p) {
if (p != NULL) {
print_inorder(p->left); // print left subtree
cout << p->data << endl; // print this node
    print_inorder(p->right); // print right subtree
  }
}
1. Rewrite this for preorder and postorder traversals.
2. To save the cost of a call, this could test each pointer for non-NULL before making the call to process that subtree. Make this small improvement.
3. The example above prints each element of the tree. Rewrite the function to add (ie, push_back) data elements onto a global vector<int> v. The resulting vector will have all elements in sorted
order as a consequence of the inorder traversal. | {"url":"http://www.fredosaurus.com/notes-cpp/ds-trees/binarytreetraversal.html","timestamp":"2014-04-19T19:40:50Z","content_type":null,"content_length":"2619","record_id":"<urn:uuid:4932637b-5fba-40ff-8a8d-cd6d5e281621>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Data Manipulation Functions
angle Angle Corresponding to a Complex Value [O-Matrix Function]
revolve Shift Matrix Rows in Circular Fashion
shift Shift Matrix Rows with Zero-Fill
expand Expand Matrix Rows by Sample-Repetition
zeropad Zeropad a Matrix in Row Direction
interpfft Matrix Column Interpolation Via DFT
resample Resample a Matrix by Ratio of Integers
delavg Delete Average from Matrix Columns | {"url":"http://www.omatrix.com/sptmanual/data%20manipulation%20functions.htm","timestamp":"2014-04-19T06:51:34Z","content_type":null,"content_length":"4209","record_id":"<urn:uuid:4b3cfcc5-e56d-47aa-adb7-10fdbd2715ee>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
GADTs for dummies
From HaskellWiki
For a long time, I didn't understand what GADTs are and how they can be used. It was a sort of conspiracy of silence - people who understand GADTs think it is all obvious and don't need any explanation, but I still couldn't understand.

Now I have an idea of how it works, and I think it is really obvious :) so I want to share my understanding - maybe my way of coming to grips with GADTs can help someone else.
1 Type functions
A "data" declaration is a way to declare both type constructor and data constructors. For example,
data Either a b = Left a | Right b
declares type constructor "Either" and two data constructors "Left" and "Right". Ordinary Haskell functions work with data constructors:
isLeft (Left a) = True
isLeft (Right b) = False
but there is also an analogous way to work with type constructors!
A declaration such as

type X a = [a]

declares a TYPE FUNCTION named "X". Its parameter "a" must be some type, and it returns some type as its result. We can't use "X" on data values, but we can use it on type values. Type constructors declared with "data" statements and type functions declared with "type" statements are used together to build arbitrarily complex types. In such "computations", type constructors serve as basic "values" and type functions as a way to process them.
Indeed, type functions in Haskell are very limited compared to ordinary functions - they don't support pattern matching, nor multiple statements, nor recursion.
2 Hypothetical Haskell extension - Full-featured type functions
Let's build a hypothetical Haskell extension that mimics, for type functions, the well-known ways of defining ordinary functions, including pattern matching:

type F [a] = a

multiple statements (this is meaningful only in the presence of pattern matching):
type F Bool = Char
F String = Int
and recursion (which again needs pattern matching and multiple statements):
type F [a] = F a
F (Map a b) = F b
F (Set a) = F a
F a = a
As you may already have guessed, this last definition calculates the simple base type of arbitrarily nested collections, e.g.:
F [[[Set Int]]] =
F [[Set Int]] =
F [Set Int] =
F (Set Int) =
F Int =
Int
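Incidentally, this extension is no longer entirely hypothetical: later versions of GHC implement essentially the same idea as closed type families, whose equations are matched top to bottom. A sketch (the Map and Set types from the containers library are used purely for illustration):

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.Map (Map)
import Data.Set (Set)

-- A closed type family: equations are tried in order, giving exactly
-- the pattern matching, multiple statements and recursion described above.
type family F t where
  F [a]       = F a
  F (Map a b) = F b
  F (Set a)   = F a
  F a         = a

-- F [[[Set Int]]] reduces to Int, so this typechecks:
base :: F [[[Set Int]]]
base = 42 :: Int
```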
Let's not forget about statement guards:
type F a | IsSimple a == TrueType = a
Here we define the type function F only for simple datatypes, by using the type function "IsSimple" in the guard:
type IsSimple Bool = TrueType
IsSimple Int = TrueType
IsSimple Double = TrueType
IsSimple a = FalseType
data TrueType = T
data FalseType = F
These definitions seem a bit odd, and while we are in imaginary land, let's consider a shorter way to write them:
type F a | IsSimple a = a
type IsSimple Bool
IsSimple Int
IsSimple Double
Here, we just defined a list of simple types; the implied result of each statement written for "IsSimple" is the True value, with the False value for anything else. Essentially, "IsSimple" is no less than a TYPE PREDICATE!
I really love it! :) How about constructing a predicate that traverses a complex type trying to decide whether it contains "Int" anywhere?
type HasInt Int
HasInt [a] = HasInt a
HasInt (Set a) = HasInt a
HasInt (Map a b) | HasInt a
HasInt (Map a b) | HasInt b
or a type function that substitutes one type for another inside arbitrarily deep types:
type Replace t a b | t==a = b
Replace [t] a b = [Replace t a b]
Replace (Set t) a b = Set (Replace t a b)
Replace (Map t1 t2) a b = Map (Replace t1 a b) (Replace t2 a b)
Replace t a b = t
3 One more hypothetical extension - multi-value type functions
Let's add more fun! We will introduce one more hypothetical Haskell extension - type functions that may have MULTIPLE VALUES. Say,
type Collection a = [a]
Collection a = Set a
Collection a = Map b a
So, "Collection Int" has "[Int]", "Set Int" and "Map String Int" as its values, i.e. different collection types with elements of type "Int".
Pay attention to the last statement of the "Collection" definition, where we've used a type variable "b" that was not mentioned on the left side, nor defined in any other way. This is perfectly possible - the "Collection" function has multiple values anyway, so using on the right side a free variable that can be replaced with any type is not a problem at all - "Map Bool Int", "Map [Int] Int" and "Map Int Int" are all possible values of "Collection Int", along with "[Int]" and "Set Int".
At first glance, it seems that multiple-value functions are meaningless - they can't be used to define datatypes, because we need concrete types there. But at second glance :) we can find them useful for defining type constraints and type families.

We can also represent a multiple-value function as a predicate:
type Collection a [a]
Collection a (Set a)
Collection a (Map b a)
If you remember Prolog, you should guess that a predicate, in contrast to a function, is a multi-purpose thing - it can be used to deduce any parameter from the other ones. For example, in this hypothetical definition:

head | Collection Int a :: a -> Int

we define a 'head' function for any Collection containing Ints.
And in this, again, hypothetical definition:
data Safe c | Collection c a = Safe c a
we deduce the element type 'a' from the collection type 'c' passed as a parameter to the type constructor.
4 Back to real Haskell - type classes
Reading all those glorious examples, you may be wondering - why doesn't Haskell yet support full-featured type functions? Hold your breath... Haskell already contains them, and at least GHC has implemented all the mentioned abilities for more than 10 years! They were just named... TYPE CLASSES! Let's translate all our examples into their language:
class IsSimple a
instance IsSimple Bool
instance IsSimple Int
instance IsSimple Double
The Haskell'98 standard supports type classes with only one parameter, which limits us to defining only type predicates like this one. But GHC and Hugs support multi-parameter type classes that allow us to define arbitrarily complex type functions:
class Collection a c
instance Collection a [a]
instance Collection a (Set a)
instance Collection a (Map b a)
All the "hypothetical" Haskell extensions we investigated earlier - actually implemented at the type class level!
Pattern matching:
instance Collection a [a]
Multiple statements:
instance Collection a [a]
instance Collection a (Set a)
Recursion:

instance (Collection a c) => Collection a [c]
Pattern guards:
instance (IsSimple a) => Collection a (UArray a)
Let's define a type class which contains any collection that uses Int as its elements or indexes:
class HasInt a
instance HasInt Int
instance (HasInt a) => HasInt [a]
instance (HasInt a) => HasInt (Map a b)
instance (HasInt b) => HasInt (Map a b)
Another example is a class that replaces all occurrences of 'a' with 'b' in type 't' and returns the result as 'res':
class Replace t a b res
instance Replace t a a t
instance Replace [t] a b [Replace t a b]
instance (Replace t a b res)
=> Replace (Set t) a b (Set res)
instance (Replace t1 a b res1, Replace t2 a b res2)
=> Replace (Map t1 t2) a b (Map res1 res2)
instance Replace t a b t
You can compare it to the hypothetical definition we gave earlier. It's important to note that type class instances, as opposed to function statements, are not checked in order. Instead, the most _specific_ instance is automatically selected. So, in the Replace case, the last and most general instance will be selected only if all the others fail to match - which is exactly what we want.

In many other cases this automatic selection is not powerful enough, and we are forced to use artificial tricks or complain to the language developers. The two most well-known language extensions proposed to solve such problems are instance priorities, which allow one to explicitly specify the instance selection order, and '/=' constraints, which can be used to explicitly prohibit unwanted matches:
instance Replace t a a t
instance (a/=b) => Replace [t] a b [Replace t a b]
instance (a/=b, t/=[_]) => Replace t a b t
You can check that these instances no longer overlap.
In practice, type-level arithmetic by itself is not very useful. It becomes a really strong weapon when combined with another feature that type classes provide - member functions. For example:
class Collection a c where
foldr1 :: (a -> a -> a) -> c -> a
class Num a where
(+) :: a -> a -> a
sum :: (Num a, Collection a c) => c -> a
sum = foldr1 (+)
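That sketch can be turned into compilable Haskell with only cosmetic changes (the name cfoldr1 and the Prelude hiding are ours, chosen to avoid name clashes; the instances are the ones from the Collection examples above):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Prelude hiding (sum)
import qualified Data.Set as Set

class Collection a c where
  cfoldr1 :: (a -> a -> a) -> c -> a

instance Collection a [a] where
  cfoldr1 = foldr1

instance Collection a (Set.Set a) where
  cfoldr1 f = foldr1 f . Set.toList

-- One generic sum for every collection in the class:
sum :: (Num a, Collection a c) => c -> a
sum = cfoldr1 (+)
```

With this, `sum [3,4,5]` and `sum (Set.fromList [3,4,5])` both give 12, each dispatching through its own instance.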
I would also be glad to see the possibility of using type classes in data declarations, like this:

data Safe c = (Collection c a) => Safe c a

but AFAIK this is not yet implemented.
5 Back to GADTs
If you are wondering how all these interesting type manipulations relate to GADTs, now is the time to give you the answer. As you know, Haskell contains highly developed ways to express data-to-data functions. Now we also know that Haskell contains rich facilities for writing type-to-type functions, in the form of "type" statements and type classes. But how do "data" statements fit into this infrastructure? My answer: they just define a translation from types to data constructors. Moreover, this translation may give multiple results. Say, the following definition:
data Maybe a = Just a | Nothing
defines a type-to-data-constructors function "Maybe" that has a parameter "a" and, for each "a", has two possible results - "Just a" and "Nothing". We can rewrite it in the same hypothetical syntax that was used above for multi-value type functions:
data Maybe a = Just a
Maybe a = Nothing
Or how about this:
data List a = Cons a (List a)
List a = Nil
and this:
data Either a b = Left a
Either a b = Right b
But how are flexible "data" definitions? As you should remember, "type" definitions was very limited in their features, while type classes, vice versa, much more developed than ordinary Haskell
functions facilities. What about features of "data" definitions examined as sort of functions?
On the one side, they supports multiple statements and multiple results and can be recursive, like the "List" definition above. On the other side, that's all - no pattern matching or even type
constants on the left side and no guards.
The lack of pattern matching means that the left side can contain only free type variables. That in turn means that the left sides of all "data" statements for one type will be essentially the same. Therefore, the repeated left sides in multi-statement "data" definitions are omitted, and instead of
data Either a b = Left a
Either a b = Right b
we write just
data Either a b = Left a
| Right b
And here, finally, come the GADTs! They are just a way to define data types using pattern matching and constants on the left side of "data" statements! How about this:
data T String = D1 Int
T Bool = D2
T [a] = D3 (a,a)
Amazed? After all, GADTs seem a really simple and obvious extension to the data type definition facilities.
The idea is to allow a data constructor's return type to be specified directly:
data Term a where
Lit :: Int -> Term Int
Pair :: Term a -> Term b -> Term (a,b)
In a function that performs pattern matching on Term, the pattern match gives type as well as value information. For example, consider this function:
eval :: Term a -> a
eval (Lit i) = i
eval (Pair a b) = (eval a, eval b)
If the argument matches Lit, it must have been built with a Lit constructor, so type 'a' must be Int, and hence we can return 'i' (an Int) on the right-hand side. The same reasoning applies to the Pair constructor.
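Putting the two fragments together gives a small self-contained program (the `example` value at the end is our addition, to show the checker at work):

```haskell
{-# LANGUAGE GADTs #-}

data Term a where
  Lit  :: Int -> Term Int
  Pair :: Term a -> Term b -> Term (a, b)

eval :: Term a -> a
eval (Lit i)    = i
eval (Pair a b) = (eval a, eval b)

-- GHC tracks the type of each expression statically:
example :: (Int, (Int, Int))
example = eval (Pair (Lit 3) (Pair (Lit 1) (Lit 2)))
```

Evaluating `example` yields `(3, (1, 2))`, and an ill-typed term such as `Pair (Lit 1) True` is rejected at compile time.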
6 Further reading
The best paper on type-level arithmetic using type classes I've seen is "Faking it: simulating dependent types in Haskell" ( http://www.cs.nott.ac.uk/~ctm/faking.ps.gz ). Most of my article just duplicates his work.
A great demonstration of type-level arithmetic is the TypeNats package, which "defines type-level natural numbers and arithmetic operations on them including addition, subtraction, multiplication, division and GCD" ( darcs get --partial --tag '0.1' http://www.eecs.tufts.edu/~rdocki01/typenats/ )
I should also mention here Oleg Kiselyov page on type-level programming in Haskell: http://okmij.org/ftp/Haskell/types.html
There are plenty of GADT-related papers, but the best one for beginners remains "Fun with phantom types" (http://www.informatik.uni-bonn.de/~ralf/publications/With.pdf). Phantom types are another name for GADTs. You should also know that this paper uses the old GADT syntax. It is a must-read because it contains numerous examples of practical GADT usage - a theme completely omitted from my article.
Other GADT-related papers I know:
"Dynamic Optimization for Functional Reactive Programming using Generalized Algebraic Data Types" http://www.cs.nott.ac.uk/~nhn/Publications/icfp2005.pdf
"Phantom types" (actually more scientific version of "Fun with phantom types") http://citeseer.ist.psu.edu/rd/0,606209,1,0.25,Download/
"Phantom types and subtyping" http://arxiv.org/ps/cs.PL/0403034
"Existentially quantified type classes" by Stuckey, Sulzmann and Wazny (URL?)
7 Random rubbish from previous versions of article
data family Map k :: * -> *
data instance Map () v = MapUnit (Maybe v)
data instance Map (a, b) v = MapPair (Map a (Map b v))
let's consider a well-known 'data' declaration:

data T a = D a a Int

it can be seen as a function 'T' from the type 'a' to some data constructor.
'T Bool', for example, gives result 'D Bool Bool Int', while
'T [Int]' gives result 'D [Int] [Int] Int'.
A 'data' declaration can also have several "results", say

data Either a b = Left a | Right b

and the "result" of 'Either Int String' can be either "Left Int" or "Right String"
Well, to give the compiler confidence that 'a' can be deduced in just one way from 'c', we can add some form of hint:
type Collection :: a c | c->a
Collection a [a]
Collection a (Set a)
Collection a (Map b a)
The first line I added tells the compiler that the Collection predicate has two parameters, and that the second parameter determines the first. Based on this restriction, the compiler can detect and prohibit attempts to define different element types for the same collection:
type Collection :: a c | c->a
Collection a (Map b a)
Collection b (Map b a) -- error! prohibited by the functional dependency
Of course, Collection is just a function from 'c' to 'a', but if we define it directly as a function:
type Collection [a] = a
Collection (Set a) = a
Collection (Map b a) = a
- it can't be used in the 'head' definition above. Moreover, using functional dependencies we can define bi-directional functions:
type TwoTimesBigger :: a b | a->b, b->a
TwoTimesBigger Int8 Int16
TwoTimesBigger Int16 Int32
TwoTimesBigger Int32 Int64
or predicates with 3, 4 or more parameters with any relations between them. It's a great power! | {"url":"http://www.haskell.org/haskellwiki/index.php?title=GADTs_for_dummies&oldid=27468","timestamp":"2014-04-16T22:39:55Z","content_type":null,"content_length":"51647","record_id":"<urn:uuid:544f6a74-c296-4b79-b39e-7d5c01736ee0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sierra Madre
Los Angeles, CA 90027
Ivy League grad specializing in Math, Writing, & Test Prep
...ming the student). I have tutored for various private companies for several years (SAT Subject Tests, PSAT, SAT, ACT, AP English/Social Studies courses). I have also tutored independently in various subjects (essays, test prep, physics, Spanish, all
Offering 10+ subjects including algebra 1, algebra 2 and calculus | {"url":"http://www.wyzant.com/geo_Sierra_Madre_Math_tutors.aspx?d=20&pagesize=5&pagenum=4","timestamp":"2014-04-21T00:01:31Z","content_type":null,"content_length":"58649","record_id":"<urn:uuid:24b29ce1-f7e2-4eb7-bd7b-04fc9599271f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics and mathematics: boundaries and interactions
In the fast comments under the previous posting, we have had some discussions with Q2 about the boundaries between physics and mathematics. Does Yau's theorem belong to theoretical physics? Are the
differential equations governing the vector field flows tools of physics? My answers were essentially Yes, while the answers of Q2 were No.
The separation of wisdom and research into physics and mathematics is largely a social phenomenon - one that is affected by some objective features of reality (including the Universe around us as well
as the Platonic Universe of mathematical ideas) but one that can also be influenced by personal and political decisions, by social conventions, and by fashionable trends.
Commercial: See also Feynman's lecture about the relation of physics and mathematics
In ancient Greece, people did not distinguish physics and mathematics. In fact, all of us were philosophers, the lovers of wisdom. The crowning achievement of that era, the Euclidean geometry, later
became a part of pure mathematics. As Einstein emphasized, it can also be interpreted as the oldest branch of physics: statics of perfectly solid bodies.
Let's jump to the era between Galileo and Gauss. We find many true heroes of thinking squeezed into several centuries. The separation of the quantitative thinkers into mathematicians and physicists
is slightly arbitrary, and in fact it is mostly a result of the historical perspective that only appeared in the new era. Those thinkers who liked to do (or look at) some experiments or observations
are counted as physicists - Newton, Maxwell, and others - while others are usually classified as mathematicians - Leibniz, Euler, Gauss. The actual theoretical work of these two groups did not differ
much. If your physics department could hire Euler in the Fall, I think you would hire him. Even if we call them mathematicians, they could still count as extraordinary theoretical physicists.
One century ago or so, people decided to separate mathematics and physics. The main substantial, non-sociological reason behind these developments was the discovery that our intuition can often fail.
Newton used to be convinced that he was directly "reading" the laws of physics from the real world: this is the only way how we can interpret his quote "hypotheses non fingo" (I am not inventing any
hypotheses). In the 19th century, people realized that this can't be possible.
Non-Euclidean geometries were discovered. Suddenly, the opinion that the Euclidean geometry is the only mathematically possible geometry, which would also imply that it must be true in the real
world, collapsed. The mathematicians had to build on firmer fundaments instead of the fundaments that collapse whenever the physicists find something surprising about this dirty world - and whenever
the mathematicians find that some insights about the real world aren't as logically inevitable as they seemed to be previously.
So the mathematicians realized that they could (and should) build their structures without any links to the observable physics and the intuition from everyday life whatsoever. One starts with a set
of axioms, and using well-defined and "obviously meaningful" rules of logic, he (and today also she) can prove the validity of some other statements. Modern mathematics was born.
The birth of pure mathematics was an important moment in the history of thinking. Nevertheless, it did not change the fact that a majority of the most interesting questions and results was directly
or indirectly linked to the real world as understood at the given moment of the past, or at least to the real world as understood from a more complete future perspective. Also, the intuition from the
everyday life was no longer necessary for mathematics. Again, it still helped many mathematicians even though many of them decided (and are still deciding) to obscure this fact. ;-)
The key reason for the separation of mathematics and physics was described by Einstein as follows: whatever is rigorous cannot be directly applied to the real world, and whatever can be directly and
accurately applied to the real world is not rigorous. Of course, such a rule could break down as soon as we find the complete theory of everything that could be formulated rigorously and that would
be physically accurate at the same moment. But we're not there yet, which is why we can still separate the fields.
Mathematicians themselves had to discover some purely mathematical and surprising facts, especially about the solution of paradoxes in set theory and Gödel's theorems
1. about the incompleteness - the existence of an unprovable and undisprovable assertion - and
2. about the unprovability of the internal consistency of a system of axioms
which are valid for all consistent systems of axioms that can mimic the set of integers and its usual properties.
But these interesting insights occured inside the world of mathematics after its velvet divorce with physics - and most physicists are more or less certain that these logical games have no physical
(or even measurable) consequences whatsoever. For example, we can't design an experiment that would decide whether the axiom of choice is true, false, or undecidable, or whether the Zermelo-Fraenkel
set theory is better than the Gödel-Bernays framework.
The velvet divorce - inspired by the split of Czechoslovakia - was not good enough for a certain extremist group of mathematicians who preferred a divorce according to the Yugoslav example. The group
called Bourbaki started to publish boring, mechanical books that were based on the ideology that physical intuition must always be assassinated whenever it appears near the iron curtain separating
mathematics from the rest of the world of ideas. Because I think that the mathematicians should mostly be ashamed of this chapter of their history ;-), let me say nothing else about that movement.
During the last decades, the iron curtain started to disappear again, especially in the context of geometry and related disciplines where the gap or wall between the cultures of mathematicians and
the culture of physicists is finite, shrinking, and penetrable.
I am convinced that there exists some general organization of deep mathematical ideas - something that God or Nature had to know when He or She was designing the world(s). In this organization, the
main ideas have certain mutual relationships and a hierarchy. Even if you think about deep questions in mathematics only, I am convinced that the identity of the most interesting generalization(s) of
a mathematical structure has an objectively well-defined answer that can in principle be found, plus or minus an error margin proportional to the social conventions.
Moreover, all these very deep ideas eventually turn out to be important for theoretical physics. I can't prove this assumption but I believe that it is consistent with the whole history of
mathematics and physics, as I understand it, combined with my personal appraisal of the values of different ideas in mathematics. This appraisal, of course, values general insights about robust and
continuous structures (those that are useful for predictive natural science) much more than special insights about particular discrete structures (that are useful for creating many new games in
recreational mathematics).
When Newton was solving the differential equations relevant for the Kepler system, he was solving not only an abstract mathematical problem but also an extremely important physical system. Some of
the modifications, deformations, and generalizations of these equations and other equations turned out to be more important for physics, some of them were less important for physics, but I think that
no one would question that the insights about the solutions to differential equations are important for natural sciences, and in this sense they belong to the natural sciences.
They can only be isolated as "mathematics" if someone decides that some subproblems should only be solved by the people from one group, and other problems should be solved by the people from another
group, and that these groups should not be encouraged to look behind the boundaries of their fields of expertise. This arrangement is nothing more than a social policy that does not say much about
the true internal relationships between different ideas and insights. You may decide that you're not interested in anything outside your narrow field; but such a decision can't change what is
actually there behind these walls.
When Jacobi studied the theta functions, he did not know much about string theory. But it was his fault, so to say - and the fault of other scholars before him who were not able to do what Lenny
Susskind et al. could do in the late 1960s. ;-) Today, we know that when Jacobi proved his obscure identity, he also proved a necessary condition for spacetime supersymmetry in superstring theory
formulated in the RNS variables.
In fact, we know much more. The theta functions and similar functions are related to the partition sums and correlators of a physical system called the worldsheet. This insight provides us with
natural generalizations and new important unanswered problems along the same lines. I am convinced that the answer of string theory to the question
• "What should we do with the theta functions in the following century?"
is more or less unique. It is not a coincidence that most of the 21st century papers that talk about theta functions use them as the partition sums of a string or a dual, equivalent physical system
unified with the strings in string theory. I would bet that the extraterrestrial aliens would find the same application of the theta functions.
The same comment applies to many other insights that have become important parts of physics in general and string theory in particular. Yau's proof of Calabi's conjecture was presented as pure
mathematics. In the decades that followed, it was realized that it is primarily an extremely important result in theoretical physics. That does not mean that Yau is suddenly a pure physicist; he is
still a mathematician although he is now co-authoring a large number of physics papers. But it does mean that the natural and interesting generalization of his insights has a very powerful physical interpretation.
Similar observations hold in the case of mirror symmetry. If you only define mirror symmetry as the fact that for every Calabi-Yau manifold "M", you can find a Calabi-Yau manifold "W" whose Hodge
diamond is rotated by 45 degrees, it can look like an abstract mathematical problem. Or perhaps even a sophisticated exercise from recreational mathematics.
However, if you actually try to solve some more general problems of this kind and to extend the result into a stronger statement, you will inevitably be led to string theory. Also, string theory will
allow you to solve some problems from "recreational mathematics" much more efficiently than what the mathematicians who are ignorant about string theory can do.
In string theory, the "elementary" description of mirror symmetry involving the Hodge diamonds of manifolds "M" and "W" above is just a very tiny portion of a much more general conclusion that
reveals the equivalence between "two" physical systems that look very different a priori but that can be shown to be isomorphic, including the infinite number of new observables that both of them
admit and that were ignored in the paragraph about the Hodge diamond.
When people study important mathematical results carefully enough, they will inevitably be led to their natural generalizations. In other words, they will be forced to discover the role that these
mathematical insights play within physics or within string theory. I could continue with many examples such as those in knot theory, Chern-Simons theory, and their extension via topological string
theory and perhaps the full string theory, but the main point of this essay is of philosophical nature, so let me avoid too many examples.
Many proofs in mathematics use various ad hoc inequalities or they assign mathematical structures different roles than those that would be viewed as natural ones from a physicist's viewpoint, but I
believe that none of these physically unexpected features can be quite unique, fundamental, or universally important. All of them are technicalities that could be replaced by different technicalities
and the true important result is the proof modulo the choices of these technicalities. Only the properties of the mathematical objects that are natural from the physics viewpoint can be truly
important and deep.
Of course, you might think that this statement simply means that the physicists should be defined as those who are thinking in a deep way, but I still feel that what I want to say is more than just a
definition of "physics".
The paragraphs above make it clear that I believe that the distance between string theory as theoretical physics on one side and mathematics of string theory studied using the tools of pure
mathematics on the other side will be diminishing throughout the remainder of the 21st century. The term "mathematics of string theory" mostly describes properties of various "continuous"
mathematical structures. But it is not hard to imagine that very discrete subfields of mathematics such as number theory will be incorporated into this powerful system of ideas, too.
Of course, there will always be differences between people who study pure sciences and applied sciences, and between people who use their hands vs. heads, but these differences will be viewed as
sociological barriers while the actual, intellectual barriers between the ideas of different fields - and between physics and mathematics in particular - will continue to melt down.
| {"url":"http://www.motls.blogspot.com/2006/06/physics-and-mathematics-boundaries-and.html","timestamp":"2014-04-16T13:29:20Z","content_type":null,"content_length":"203567","record_id":"<urn:uuid:85a44880-f718-48b7-a2c2-1f7144795536>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Sorry, God. I am an old guy and have a tendency to want to do things by hand. Whenever I use my TI-89, I almost feel like I'm cheating in some way. But if you really need an answer there is nothing
wrong with using technology to find the answer. Actually, I use my calculator all of the time, it's not like I sat down and memorized the trigonometric tables....hmm, I have to put that on my to-do | {"url":"http://www.mathisfunforum.com/post.php?tid=2685&qid=26353","timestamp":"2014-04-19T14:53:35Z","content_type":null,"content_length":"19378","record_id":"<urn:uuid:ca33fc0f-630d-4a7b-800b-3579ec26f2fd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Acoustics/Sealed Box Subwoofer Design
From Wikibooks, open books for an open world
A sealed or closed box baffle is the most basic but often the cleanest-sounding sub-woofer box design. The sub-woofer box, in its most simple form, serves to isolate the back of the speaker from the front, much like the theoretical infinite baffle. The sealed box provides simple construction and controlled response for most sub-woofer applications. The slow low-end roll-off provides a clean transition into the extreme frequency range. Unlike ported boxes, the cone excursion is reduced below the resonant frequency of the box and driver, due to the added stiffness provided by the sealed box baffle.
Closed baffle boxes are typically constructed of a very rigid material such as MDF (medium-density fiberboard) or plywood, 0.75 to 1 inch thick. Depending on the size of the box and the material used, internal bracing may be necessary to maintain a rigid box. Designing a rigid box is important in order to prevent unwanted box resonance.
As with any acoustics application, the box must be matched to the loudspeaker driver for maximum performance. The following will outline the procedure to tune the box or maximize the output of the
sub-woofer box and driver combination.
Closed baffle circuit[edit]
The sealed box enclosure for sub-woofers can be modeled as a lumped element system if the dimensions of the box are significantly shorter than the shortest wavelength reproduced by the sub-woofer.
Most sub-woofer applications are crossed over around 80 to 100 Hz. A 100 Hz wave in air has a wavelength of about 11 feet. Sub-woofers typically have all dimensions much shorter than this wavelength,
thus the lumped element system analysis is accurate. Using this analysis, the following circuit represents a sub-woofer enclosure system.
where all of the following parameters are in the mechanical mobility analog
V[e] - voltage supply
R[e] - electrical resistance
M[m] - driver mass
C[m] - driver compliance
R[m] - mechanical resistance of the driver suspension
R[Af] - front cone radiation resistance into the air
X[Af] - front cone radiation reactance into the air
R[Br] - rear cone radiation resistance into the box
X[Br] - rear cone radiation reactance into the box
Driver parameters[edit]
In order to tune a sealed box to a driver, the driver parameters must be known. Some of the parameters are provided by the manufacturer, some are found experimentally, and some are found from general
tables. For ease of calculations, all parameters will be represented in the SI units meter/kilogram/second. The parameters that must be known to determine the size of the box are as follows:
f[0] - driver free-air resonance
C[MS] - mechanical compliance of the driver
S[D] - effective area of the driver
Resonance of the driver[edit]
The resonance of the driver is usually either provided by the manufacturer or must be found experimentally. It is a good idea to measure the resonance frequency even if it is provided by the
manufacturer to account for inconsistent manufacturing processes.
The following diagram shows the setup for finding resonance:
Where voltage V1 is held constant and the variable frequency source is varied until V2 is a maximum. The frequency where V2 is a maximum is the resonance frequency for the driver.
Mechanical compliance[edit]
By definition compliance is the inverse of stiffness or what is commonly referred to as the spring constant. The compliance of a driver can be found by measuring the displacement of the cone when
known masses are placed on the cone when the driver is facing up. The compliance would then be the displacement of the cone in meters divided by the added weight in newtons.
Effective area of the driver[edit]
The physical diameter of the driver does not lead to the effective area of the driver. The effective diameter can be found using the following diagram:
From this diameter, the area is found from the basic area of a circle equation.
Acoustic compliance[edit]
From the known mechanical compliance of the cone, the acoustic compliance can be found from the following equation:
C[AS] = C[MS]S[D]^2
From the driver acoustic compliance, the box acoustic compliance is found. This is where the final application of the sub-woofer is considered. The acoustic compliance of the box will determine the
percent shift upwards of the resonant frequency. If a large shift is desired for high SPL applications, then a large ratio of driver to box acoustic compliance would be required. If a more flattened
response is desired for high fidelity applications, then a lower ratio of driver to box acoustic compliance would be required. Specifically, the ratios can be found in the following figure using line
(b) as reference.
C[AS] = C[AB]*r
r - driver to box acoustic compliance ratio
Sealed box design[edit]
Volume of box[edit]
The volume of the sealed box can now be found from the box acoustic compliance. The following equation is used to calculate the box volume
V[B] = C[AB] γ P[0]

where γ is the ratio of specific heats for air (approximately 1.4) and P[0] is the atmospheric pressure (approximately 101.3 kPa).
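As a rough numeric sketch of this calculation chain (all driver values below are hypothetical, and the script uses the standard closed-box relation V[B] = C[AB]·γ·P[0] with γ ≈ 1.4 and P[0] ≈ 101.3 kPa):

```python
# Sealed-box volume from driver parameters.
# All driver numbers below are hypothetical example values.
GAMMA = 1.4      # ratio of specific heats for air
P0 = 101325.0    # atmospheric pressure in Pa

def sealed_box_volume(c_ms, s_d, ratio):
    """Box volume (m^3) from mechanical compliance c_ms (m/N),
    effective cone area s_d (m^2), and the driver-to-box
    acoustic compliance ratio r = C_AS / C_AB."""
    c_as = c_ms * s_d ** 2    # acoustic compliance of the driver
    c_ab = c_as / ratio       # required acoustic compliance of the box
    return c_ab * GAMMA * P0  # V_B = C_AB * gamma * P0

# e.g. a driver with C_MS = 2e-4 m/N and S_D = 0.05 m^2, ratio 3
vb = sealed_box_volume(2e-4, 0.05, 3.0)
print(round(vb * 1000, 1), "liters")
```

With these made-up numbers the box comes out to roughly 24 liters, which is in the right ballpark for a sealed 12-inch sub-woofer.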
Box dimensions[edit]
From the calculated box volume, the dimensions of the box can then be designed. There is no set formula for finding the dimensions of the box, but there are general guidelines to be followed. If the
driver was mounted in the center of a square face, the waves generated by the cone would reach the edges of the box at the same time, thus when combined would create a strong diffracted wave in the
listening space. In order to best prevent this, the driver should be either be mounted offset of a square face, or the face should be rectangular.
The face of the box which the driver is set in should not be a square. | {"url":"http://en.wikibooks.org/wiki/Acoustics/Sealed_Box_Subwoofer_Design","timestamp":"2014-04-20T11:25:16Z","content_type":null,"content_length":"35932","record_id":"<urn:uuid:acbd6fe9-81a6-44a9-b08d-3a2dc3d0925e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
QC is a failed pathological program
Scott Aaronson criticizes a Dyakonov paper
for its conclusion that quantum computing is a failed, pathological research program, which will soon die out and be of interest only to sociologists:
This is a brief review of the experimental and theoretical quantum computing. The hopes for eventually building a useful quantum computer rely entirely on the so-called "threshold theorem". In
turn, this theorem is based on a number of assumptions, treated as axioms, i.e. as being satisfied exactly. Since in reality this is not possible, the prospects of scalable quantum computing will
remain uncertain until the required precision, with which these assumptions should be approached, is established. Some related sociological aspects are also discussed.
I do think that QC is a failed research program.
QC starts with the hypothesis that it is impossible to efficiently simulate a quantum system with a classical (Turing) computer. I suspect that is correct. But the leap to scalable QC seems extremely
doubtful to me.
1 comment:
1. Not a rhetoric but a simple question: What minimum power must a quantum computer have, before it could be said to be a well-scaled one? How many qubits? Where do the quantum computing researchers
draw the line, if they do?
At least as of today it cannot be a very precise line, I suppose, but it doesn't have to be so anyway. A rough indication would do. But unless one has some indication of such a datum, one
couldn't begin to think very meaningfully about this issue. | {"url":"http://blog.darkbuzz.com/2013/01/qc-is-failed-pathological-program.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+darkbuzz%2FtTKq+%28Dark+Buzz%29","timestamp":"2014-04-24T07:24:57Z","content_type":null,"content_length":"91768","record_id":"<urn:uuid:6237cab7-cad5-487f-a0b4-f154d021fc8b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vann McGee: List of Publications
13 March 2007
1. "Finite Matrices and the Logic of Conditionals," Journal of Philosophical Logic 10 (1982), pp. 349-51.
2. "How Truthlike Can a Predicate Be? A Negative Result," Journal of Philosophical Logic 14 (1985), pp. 399-410.
3. "A Counterexample to Modus Ponens," Journal of Philosophy 82 (1985), pp. 462-71.
4. "The Degree of the Set of Sentences of Predicate Provability Logic That Are True Under Every Interpretation" (with George Boolos), Journal of Symbolic Logic 54 (1987), pp. 165-71.
5. "Conditional Probabilities and Compounds of Conditionals," The Philosophical Review 98 (1989), pp. 485-541.
6. "Applying Kripke's Theory of Truth," Journal of Philosophy 86 (1989), pp. 530-39.
7. "We Turing Machines Aren't Expected-Utility Maximizers (Even Ideally)," Philosophical Studies 64 (1991), pp. 115-23.
8. "An Epistemic Principle Which Solves Newcomb's Paradox" (with Keith Lehrer), Grazer philosophische Studien 40 (1991), pp. 197-217.
9. "Reply to Christian Piller," Grazer philosophische Studien 40 (1991), pp. 229-32.
10. "Particulars, Universals, and Individual Qualities" (with Keith Lehrer) in Kevin Mulligan, ed., Language, Truth, and Ontology (Dordrecht, Holland: Kluwer Academic Publishers, 1992), pp. 37-47.
11. "Maximal Consistent Sets of Instances of Tarski's Schema (T)," Journal of Philosophical Logic 21 (1992), pp. 235-41.
12. "Two Problems with Tarski's Theory of Consequence," Proceedings of the Aristotelian Society 92 (1992), pp. 273-92.
13. "Learning the Impossible," in Brian Skyrms and Ellery Eells, eds., Probability and Conditionals (New York and Cambridge: Cambridge University Press, 1994), pp. 177-99.
14. "Afterword: Truth and Paradox" in Robert M. Harnish, ed., Basic Topics in the Philosophy of Language (London: Harvester Wheatsheaf, 1994), pp. 615-33.
15. "A Semantic Conception of Truth?" Philosophical Topics 21 (1993), pp. 83-111. Reprinted in Brad Armour-Garb and J. C. Beall, eds., Deflationary Truth (New York: Open Court Press), pp. 111-142.
16. "On the Degrees of Unsolvability of Modal Predicate Logics of Provability," Journal of Symbolic Logic 59 (1994), pp. 253-61.
17. "Distinctions Without a Difference" (with Brian McLaughlin), Southern Journal of Philosophy 33 supplement (1995) (Spindel Conference volume for 1994), pp. 203-52.
18. "Philosophical Logic" in Donald Borchert, ed., Encyclopedia of Philosophy Supplement (New York: Macmillan, 1996), pp. 402-06.
19. "Logical Operations," Journal of Philosophical Logic 25 (1996), pp. 567-80.
20. "The Complexity of the Modal Predicate Logic of 'True in Every Transitive Model of ZFC,'" Journal of Symbolic Logic 62 (1997), pp. 1371-78.
21. "How We Learn Mathematical Language," The Philosophical Review 106 (1997), pp. 35-68.
22. "Inductive Definitions and Proofs," in Edward Craig, ed., Routledge Encyclopedia of Philosophy (London and New York: Routledge, 1998), vol. 4, pp. 752-55.
23. "Semantic Paradoxes and Theories of Truth," in Edward Craig, ed., Routledge Encyclopedia of Philosophy (London and New York: Routledge, 1998), vol. 8, pp. 642-48.
24. "Everything," in Gila Sher and Richard Tieszen, eds., Between Logic and Intuition (New York and Cambridge: Cambridge University Press, 2000), pp. 54-78.
25. "Revision," Philosophical Issues 8 (1997), pp. 387-406.
26. "Kilimanjaro," in Ali Kazmi, ed., Meaning and Reference. Canadian Journal of Philosophy supp. vol. 23 (1997), pp.141-98.
27. "An Airtight Dutch Book," Analysis 59 (1999), pp. 257-65. Reprinted in Patrick Grim, Kenneth Baynes, and Gary Mar, eds., The Philosopher's Annual, vol 22 (Stanford, California: CSLI Publications,
2000), pp. 155-64.
28. "A Puzzle about De Rebus Beliefs" (with Agustín Rayo), Analysis 60 (2000): 297-99.
29. "The Analysis of 'x is True' as 'For Every p, if x = "p," then p,'" in André Chapuis and Anil Gupta, eds., Circularity, Definition, and Truth (New Delhi: Indian Council for Philosophical Research, 2000), pp. 255-72.
30. "To Tell the Truth about Conditionals," Analysis 60 (2000), pp. 107-11.
31. "The Lessons of the Many" (with Brian McLauglin), Philosophical Topics 28 (2000): 128-51.
32. "Truth by Default," Philosophia Mathematica 9 (2001): 5-20.
33. "Ramsey and the Correspondence Theory," in Volker Halbach and Leon Horstein, eds., Principles of Truth (Frankfurt: Hänsel-Hohenhausen, 2002), pp. 153-67.
34. "Ramsey's Dialethism," in Graham Priest, J. C. Beall, and Brad Armour-Garb, eds., The Law of Non-Contradiction (Oxford: Oxford University Press, 2004), pp. 276-291.
35. "The Many Lives of Ebenezer Wilkes Smith," in Godehard Link, ed., One Hundred Years of Russell's Paradox (Berlin: de Gruyter, 2004), pp. 611-24.
36. "Universal Universal Quantification," in Michael Glanzberg and J. C. Beall, eds., Liars and Heaps (Oxford: Oxford University Press, 2003), pp. 357-64.
37. "Tarski's Staggering Existential Assumptions," Synthese 142 (2004): 371-387.
38. "In Praise of the Free Lunch," in Vincent F. Hendricks, Stig Andur Pedersen, and Thomas Bollander, eds., Self-Reference (Stanford, California: CSLI, 2006), pp. 95-120.
39. "Truth," in Michael Devitt and Richard Hanley, eds., Blackwell Guide to Philosophy of Language (Oxford: Blackwell, 2006), pp. 392-410.
40. "Afterword: Trying (with Limited Success) to Demarcate the Disquotational/Correspondence Distinction," in Brad Armour-Garb and J. C. Beall, eds., Deflationary Truth (New York: Open Court Press,
2005), pp. 143-52.
41. "Two Conceptions of Truth?" Philosophical Studies 124 (2005): 71-104.
42. "Gödel's Theorem" in William Craig, ed., Encyclopedia of Philosophy, 2nd ed. (New York: Macmillan, 2006).
43. "Logical Paradoxes," in William Craig, ed., Encyclopedia of Philosophy, 2nd ed. (New York: Macmillan, 2006).
44. "Inscrutability and Its Discontents," Noûs 39 (2005): 397-425.
45. "There are Many Things," in Judith Jarvis Thomson and Alex Byrne, eds., Content and Modality (Oxford: Oxford University Press, 2006), pp. 93-122.
46. "There's a Rule for Everything" in Agustín Rayo and Gabriel Uzquiano, eds., Absolute Generality (Oxford: Oxford University Press, 2006), pp. 179-202.
1. Review of The Liar by Jon Barwise and John Etchemendy, The Philosophical Review 100 (1991), pp. 472-74.
2. Review of various articles by Artemov and Vardanyan on the modal logic of provability, Journal of Symbolic Logic, 56 (1991), pp. 329-32.
3. Review of The Concept of Logical Consequence by John Etchemendy, Journal of Symbolic Logic 57 (1992), pp. 254-55.
4. Review of If P, then Q by David Sanford, Philosophy and Phenomenological Research 53 (1993), pp. 239-42.
5. Review of A Theory of Counterfactuals by Igal Kvart, Philosophy of Science 60 (1993), pp. 518-19.
6. Review of Paradoxes of Belief and Strategic Rationality by Robert Koons, Mind 102 (1993), pp. 407-10.
7. Review of Sets by Michael Potter and of articles by van Aken and Pollard on the axiomatization of set theory, Journal of Symbolic Logic 58 (1993), pp. 1077-78.
8. Review of A Structuralist Theory of Logic by Arnold Koslow, Journal of Philosophy 90 (1993), pp. 271-74.
9. Review of Logic, Logic, and Logic by George Boolos, Bulletin of Symbolic Logic 6 (2001): 58-62.
10. Review of The Revision Theory of Truth by Anil Gupta and Nuel Belnap, Philosophy and Phenomenological Research 56 (1996): 727-30.
11. Review of the second edition of The Concept of Logical Consequence by John Etchemendy, Bulletin of Symbolic Logic 6 (2001): 379-80.
12. Review of Vagueness by Timothy Williamson (with Brian P. McLaughlin), Linguistics and Philosophy 21 (1998), pp. 221-35.
13. "Logical Commitment: A Reply to Williamson" (with Brian P. McLaughlin), Linguistics and Philosophy 27 (2004): 123-36.
14. Review of The Limits of Abstraction by Kit Fine, Philosophia Mathematica 12 (2004): 278-84. | {"url":"http://web.mit.edu/philos/www/facultybibs/mcgee_bib.html","timestamp":"2014-04-21T09:45:35Z","content_type":null,"content_length":"9731","record_id":"<urn:uuid:62279bbe-ffcc-406a-8a33-8c8159ebc1d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00221-ip-10-147-4-33.ec2.internal.warc.gz"} |
Castle Point, NJ Precalculus Tutor
Find a Castle Point, NJ Precalculus Tutor
...Over the past year, I have worked with students in various areas, ranging from algebra, calculus, chemistry, and physics to English and US History. I also specialize in standardized test prep
(SAT/ACT/ISEE/GRE). On the side, I continue to extend my own education by taking classes of interest t...
38 Subjects: including precalculus, Spanish, chemistry, GRE
Hello! My name is Emily. I am currently a student at NYU studying Math Education with an American Sign Language minor.
11 Subjects: including precalculus, calculus, geometry, algebra 1
...But more than that, I have been tutoring for more than 25 years, and I know how to get the most out of a student. I have had great success tutoring GMAT both independently and for GMAT prep
companies. I've found that for me, it takes about 6-9 weeks on average of working with a student to get to an 80-100 point improvement, and I can work with Quant, Verbal, or both.
11 Subjects: including precalculus, calculus, geometry, algebra 1
...I have been teaching/tutoring the math section of SSAT for many years. During the first session, I evaluate the student(s) to see which areas of the test the student needs to improve on.
Afterwards, I use an individualized approach focusing on strengthening student's weakness as well as mastering other parts.
23 Subjects: including precalculus, reading, English, ASVAB
...I've been tutoring for 12 years and have helped hundreds of students master difficult subjects. I have tutored tens of students from New York private and public schools in Social Studies. I
worked in the Department of Justice early in my career and am quite familiar with the U.S. government and U.S. demographics.
47 Subjects: including precalculus, English, reading, chemistry
Related Castle Point, NJ Tutors
Castle Point, NJ Accounting Tutors
Castle Point, NJ ACT Tutors
Castle Point, NJ Algebra Tutors
Castle Point, NJ Algebra 2 Tutors
Castle Point, NJ Calculus Tutors
Castle Point, NJ Geometry Tutors
Castle Point, NJ Math Tutors
Castle Point, NJ Prealgebra Tutors
Castle Point, NJ Precalculus Tutors
Castle Point, NJ SAT Tutors
Castle Point, NJ SAT Math Tutors
Castle Point, NJ Science Tutors
Castle Point, NJ Statistics Tutors
Castle Point, NJ Trigonometry Tutors
Nearby Cities With precalculus Tutor
Allwood, NJ precalculus Tutors
Ampere, NJ precalculus Tutors
Bayway, NJ precalculus Tutors
Beechhurst, NY precalculus Tutors
Bellerose Manor, NY precalculus Tutors
Doddtown, NJ precalculus Tutors
Dundee, NJ precalculus Tutors
Five Corners, NJ precalculus Tutors
Fort George, NY precalculus Tutors
Greenville, NJ precalculus Tutors
Highbridge, NY precalculus Tutors
Hoboken, NJ precalculus Tutors
Linden Hill, NY precalculus Tutors
Manhattanville, NY precalculus Tutors
Pamrapo, NJ precalculus Tutors | {"url":"http://www.purplemath.com/Castle_Point_NJ_Precalculus_tutors.php","timestamp":"2014-04-17T22:08:12Z","content_type":null,"content_length":"24420","record_id":"<urn:uuid:42cc216a-5d0d-4386-a1fb-2f30b7b38d6b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
The half variance approximation for mean returns
What’s that thing about arithmetic and geometric returns and the variance?
An introduction to the difference between simple and log returns:
Suppose you are predicting the mean annual return of an asset for some number of years. To simplify the discussion, let’s buy into the fantasy that the observed returns are a good (unbiased)
estimate of future returns. If you take the mean of the historical simple returns, you will be over-estimating the mean return — call this “Amean” (as in arithmetic mean). Better is to take the
mean of the log returns and then transform that mean into a simple return — call this “Gmean”.
The approximation of Gmean using only simple returns is Amean minus half the variance of the historical simple returns.
Perhaps others will disagree but I don’t think the issue is computational — if someone can compute a variance, they should just about be able to take a logarithm. I think the issue is of how we
think rather than how we compute. It is easy to get optimistic.
“On the relationship between Arithmetic and Geometric Returns” explains where the approximation comes from, and discusses three more as well.
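The same comparison is easy to sketch outside R as well. Here is a quick Python check (with made-up parameters matching the simulated world used below — 5% mean log return and 20% volatility) that Amean − Var/2 lands much closer to Gmean than Amean itself does:

```python
import numpy as np

rng = np.random.default_rng(0)

# annual log returns: mean 5%, volatility 20%
logret = rng.normal(0.05, 0.20, size=1_000_000)
simpret = np.exp(logret) - 1           # simple annual returns

amean = simpret.mean()                 # arithmetic mean of simple returns
gmean = np.exp(logret.mean()) - 1      # mean log return, as a simple return
approx = amean - simpret.var() / 2     # the half-variance approximation

print(f"Amean  {amean:.4f}")           # ~0.072: clearly too optimistic
print(f"Gmean  {gmean:.4f}")           # ~0.051
print(f"approx {approx:.4f}")          # ~0.049: close to Gmean
```

Amean overstates Gmean by about two percentage points here, while the approximation is off by only a few tenths of a point.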
We’ll investigate a world in which the true annual return each year is 5% and the volatility is 20%.
What varies is the distribution of returns and the length of the history available.
The other constant is that we always look at 1000 realizations of a simulation.
normal decade
Figures 1 through 3 show the simulations where the log returns have a normal distribution and we have a decade of data (that is, 10 annual returns).
Figure 1: Amean versus Gmean for a decade with the normal distribution.
Figure 2: Amean minus Gmean versus Gmean for a decade with the normal distribution. The bias in Amean relative to Gmean is always non-trivial in this case and often quite significant.
Figure 3: The approximation minus Gmean versus Gmean for a decade with the normal distribution. The approximation is pretty much unbiased, but it can be substantially far from Gmean.
Remember that the true answer in all cases is 5 — it is just that Gmean is pretty much our best guide if we don’t get to know the secrets of the universe.
t6 decade
Figures 4 through 6 show the simulations from a decade of data where the distribution of daily returns is the t with 6 degrees of freedom.
Figure 4: Amean versus Gmean for a decade with the t6 distribution.
Figure 5: Amean minus Gmean versus Gmean for a decade with the t6 distribution.
Figure 6: The approximation minus Gmean versus Gmean for a decade with the t6 distribution. There are some differences between the normal and t6 cases, but they are fairly subtle. A more realistic
change in return distribution would be to put in volatility clustering. That probably would give significantly different results from the normal case.
normal century
Figures 7 through 9 show the simulations assuming a century of data and normally distributed returns.
Figure 7: Amean versus Gmean for a century with the normal distribution.
Figure 8: Amean minus Gmean versus Gmean for a century with the normal distribution.
Figure 9: The approximation minus Gmean versus Gmean for a century with the normal distribution. Even though the true mean return is 5% there are a few centuries out of 1000 that experienced a
negative return. Awesome.
normal millennium
Figures 10 through 12 show simulations assuming a thousand years of data and normally distributed returns.
Figure 10: Amean versus Gmean for a millennium with the normal distribution.
Figure 11: Amean minus Gmean versus Gmean for a millennium with the normal distribution.
Figure 12: The approximation minus Gmean versus Gmean for a millennium with the normal distribution.
Be careful when averaging returns.
We have no claim to know what’s right. That is, we don’t have a chance in hell of knowing the true expected return of equities.
Oh my fair North Star
I have held to you dearly
I have asked you to steer me
from “Mercy of the Fallen” by Dar Williams
Appendix R
The simulations were performed in R.
simulation function
The function that produced the simulations was:
pp.simulret <- function(years, meanann, vol,
    distribution="normal", trials=1000, ...)
{
    # simulate years of returns based on daily returns
    # placed in the public domain 2013 by Burns Statistics
    # testing status: untested
    dots <- list(...)
    if(length(dots)) {
        df <- dots$df
    }
    ans <- array(NA, c(trials, 4), list(NULL,
        c("Amean", "Gmean", "Var", "approx")))
    for(i in 1:trials) {
        if(distribution == "normal") {
            logret <- rnorm(years * 252, meanann/25200,
                vol/100/sqrt(252))
        } else {
            logret <- rt(years * 252, df=df) *
                sqrt((df-2)/df) * vol/100/sqrt(252) +
                meanann/25200
        }
        annlret <- colSums(matrix(logret, nrow=252))
        annsret <- (exp(annlret) - 1)
        ans[i,-4] <- c(mean(annsret), mean(annlret),
            var(annsret))
    }
    ans[,2] <- exp(ans[,2]) - 1  # express Gmean as a simple return
    ans[,4] <- ans[,1] - ans[,3]/2
    ans[, -3] <- ans[,-3] * 100
    ans
}
It is used like:
sim.norm.decade20 <- pp.simulret(10, 5, 20)
plot function
The function to do the plots was:
pp.simulretplot <- function(x, type, ...)
{
    # plots for simulated returns
    # placed in the public domain 2013 by Burns Statistics
    # testing status: untested
    # type codes inferred from usage; "gap" (plain approximation
    # plot) is assumed and is not called in this post
    if(type == "gam") {
        plot(x[, "Gmean"], x[, "Amean"],
            col="steelblue", xlab="Gmean",
            ylab="Amean", ...)
        abline(0, 1, col="gold")
    } else if(type == "ram") {
        plot(x[, "Gmean"], x[, "Amean"]-x[, "Gmean"],
            col="steelblue", xlab="Gmean",
            ylab="Amean - Gmean", ...)
        abline(h=0, col="gold")
    } else if(type == "gap") {
        plot(x[, "Gmean"], x[, "approx"],
            col="steelblue", xlab="Gmean",
            ylab="Approximation", ...)
        abline(0, 1, col="gold")
    } else if(type == "rap") {
        plot(x[, "Gmean"], x[, "approx"]-x[, "Gmean"],
            col="steelblue", xlab="Gmean",
            ylab="Approximation - Gmean", ...)
        abline(h=0, col="gold")
    }
}
Figures 1, 2 and 3 were produced with:
pp.simulretplot(sim.norm.decade20, 'gam')
pp.simulretplot(sim.norm.decade20, 'ram')
pp.simulretplot(sim.norm.decade20, 'rap')
3 Responses to The half variance approximation for mean returns
1. Another fabulously intriguing post. Why does the plot of 1000 years worth of approximate Gmean vs Gmean lose the structure that 10 and 100 years had? Presumably something to do with
the accuracy of the approximation to Gmean, but I don't understand the shape. Is it to do with the different rates of convergence of estimates of mean and variance of simple returns??
□ Keiran,
Yes, that is my interpretation: that we are basically seeing the same pattern, but we only see a tiny slice of it. In the link I gave, he talks about one of the other approximations being
exact for the normal case. Looking at that would probably give a clue of what goes wrong with this approximation.
This entry was posted in Quant finance, R language and tagged arithmetic return, arithmetic return vs geometric return, geometric return vs arithmetic return. Bookmark the permalink. | {"url":"http://www.portfolioprobe.com/2013/05/06/the-half-variance-approximation-for-mean-returns/","timestamp":"2014-04-23T06:39:31Z","content_type":null,"content_length":"72983","record_id":"<urn:uuid:c0a136f9-dcd0-480b-9e13-13ef574cfb63>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - How do I numerically find eigenvectors for given eigenvalues?
There are libs for major programming languages and math scripts which provide all these methods.
In the QR method - the eigenvectors are the product of the orthogonal transformation in each iteration. Which is what the Olver (example code post #3) paper does. | {"url":"http://www.physicsforums.com/showpost.php?p=3679269&postcount=7","timestamp":"2014-04-20T21:33:10Z","content_type":null,"content_length":"7835","record_id":"<urn:uuid:edf687c6-e879-454b-9384-d25ea776216b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
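A minimal sketch of that accumulation in Python/NumPy (unshifted QR iteration on a small symmetric matrix; real library routines add shifts and a Hessenberg reduction first): the running product of the Q factors converges to the eigenvector matrix while the iterates converge to a diagonal matrix of eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric test matrix

Ak = A.copy()
V = np.eye(3)                     # accumulated product of the Q factors
for _ in range(500):              # unshifted iteration: slow but simple
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                    # similarity transform Q.T @ Ak @ Q
    V = V @ Q                     # running eigenvector estimate

eigvals = np.diag(Ak)             # Ak is now (numerically) diagonal
residual = np.linalg.norm(A @ V - V * eigvals)
print(np.round(eigvals, 4), residual)
```

The columns of V end up satisfying A·v ≈ λ·v to machine precision, and the diagonal of the final iterate matches `numpy.linalg.eigvalsh(A)`.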
[Numpy-discussion] Doctests for extensions/cython code
Fernando Perez fperez.net@gmail....
Sun Jul 13 01:32:35 CDT 2008
Hi all (esp. Alan McIntyre),
I'm attaching two little tarballs with a set of tools that may come in
handy for testing numpy:
- plugin.tgz contains a Nose plugin that works around various issues
in the python stdlib, in nose and in cython, to enable the testing of
doctests embedded in extension modules. I must note that this code
also provides a second plugin that is ipython-aware, and the code as
shipped is NOT yet acceptable for public use in numpy, because it
instantiates a full ipython on import. But I wrote this primarily for
ipython, so for us that's OK.
For numpy, we obviously must first remove all the ipython-specific
code from there. The two plugins are separated, so it's perfectly
doable, I just ran out of time. I'm putting it here in the hopes that
it will be useful to Alan, who can strip it of the ipython
dependencies and start using it in the numpy tests.
The one thing I didn't figure out yet was how to load the plugin from
within a python script (instead of doing it at the command line via
'nosetests --extdoctests'). But this should be trivial, it's just a
matter of finding the right call in nose, and you may already know
- primes.tgz is the cython 'primes' example, souped up with trivial
code that contains some extra doctests both in python and in extension
code. It's just meant to serve as a test for the above plugin (I also
used it to provide a self-contained cython example complete with a
setup.py file in a seminar, so it can be useful as a starter example
for some).
I don't know if today's numpy.test() picks up doctests in extension
modules (if it does, I'd like to know how). I suspect the answer is
not, but if we are to encourage better examples that serve as
doctests, then actually testing them would be good.
I hope this helps in that regard.
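For anyone who wants to see what "picking up" a doctest looks like mechanically, here's a tiny pure-Python sketch using the stdlib machinery (the whole point of the plugin above is that this machinery does not, on its own, reach into compiled extension modules):

```python
import doctest

def square(x):
    """Return x squared.

    >>> square(3)
    9
    """
    return x * x

# collect the doctests attached to an object, then run them
tests = doctest.DocTestFinder().find(square)
runner = doctest.DocTestRunner()
for t in tests:
    runner.run(t)
print(runner.tries, runner.failures)   # 1 0
```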
-------------- next part --------------
A non-text attachment was scrubbed...
Name: plugin.tgz
Type: application/x-gzip
Size: 10149 bytes
Desc: not available
Url : http://projects.scipy.org/pipermail/numpy-discussion/attachments/20080712/edf9e774/attachment.tgz
-------------- next part --------------
A non-text attachment was scrubbed...
Name: primes.tgz
Type: application/x-gzip
Size: 2609 bytes
Desc: not available
Url : http://projects.scipy.org/pipermail/numpy-discussion/attachments/20080712/edf9e774/attachment-0001.tgz
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035629.html","timestamp":"2014-04-19T09:40:50Z","content_type":null,"content_length":"5179","record_id":"<urn:uuid:d620f20b-5689-406c-b0c6-c98d979958d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Somerville, MA ACT Tutor
Find an East Somerville, MA ACT Tutor
...Points, lines, and functions, 3. Systems of equations, 4. Polynomials, 5.
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...I have a strong background in Math, Science, and Computer Science. I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or
review sheets that they have been assigned.
17 Subjects: including ACT Math, statistics, geometry, algebra 1
...I have tutored the SAT many times, and the ACT is very similar. I have tutored the SAT many times, and the ACT is very similar. I have 4 years of full-time class room experience as a math
29 Subjects: including ACT Math, reading, calculus, geometry
...I am familiar with the wide wide range of fields they test on the ACT and can help teach scientific thinking skills that will help with this portion of the exam. I have played tennis for
approximately 10 years. I was the captain of my high school tennis team and played the second singles position my senior year.
20 Subjects: including ACT Math, reading, biology, algebra 1
...I would love to help you succeed!I have tutored elementary school students in Reading, Writing and Math for 15 years, both privately and for Commonwealth Learning Centers. Privately, I have
worked with students who needed basic support, as well as those doing very well looking for enrichment. F...
34 Subjects: including ACT Math, reading, calculus, English
Related East Somerville, MA Tutors
East Somerville, MA Accounting Tutors
East Somerville, MA ACT Tutors
East Somerville, MA Algebra Tutors
East Somerville, MA Algebra 2 Tutors
East Somerville, MA Calculus Tutors
East Somerville, MA Geometry Tutors
East Somerville, MA Math Tutors
East Somerville, MA Prealgebra Tutors
East Somerville, MA Precalculus Tutors
East Somerville, MA SAT Tutors
East Somerville, MA SAT Math Tutors
East Somerville, MA Science Tutors
East Somerville, MA Statistics Tutors
East Somerville, MA Trigonometry Tutors
Nearby Cities With ACT Tutor
Beachmont, MA ACT Tutors
Cambridgeport, MA ACT Tutors
Charlestown, MA ACT Tutors
East Milton, MA ACT Tutors
East Watertown, MA ACT Tutors
Grove Hall, MA ACT Tutors
Kendall Square, MA ACT Tutors
Kenmore, MA ACT Tutors
Reservoir, MS ACT Tutors
Somerville, MA ACT Tutors
South Waltham, MA ACT Tutors
Squantum, MA ACT Tutors
West Lynn, MA ACT Tutors
West Somerville, MA ACT Tutors
Winter Hill, MA ACT Tutors | {"url":"http://www.purplemath.com/east_somerville_ma_act_tutors.php","timestamp":"2014-04-17T15:50:47Z","content_type":null,"content_length":"23890","record_id":"<urn:uuid:7caac8cb-0d38-479a-8a81-9552935b2daf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discrete Mathematics Tutors
Los Angeles, CA 90049
Patient, experienced UCLA Mathematics graduate. Math is my passion!
...Rather, I will ask you questions that build your understanding, and I will show you the way I think about problems as I do them. On the other hand, if all you need is someone to check your answers
on a review test or follow along with you as you do your homework...
Offering 10+ subjects including discrete math | {"url":"http://www.wyzant.com/West_Hollywood_discrete_mathematics_tutors.aspx","timestamp":"2014-04-19T01:53:49Z","content_type":null,"content_length":"59948","record_id":"<urn:uuid:ab95ee61-68fd-43f0-99aa-0400b47450cb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Surveys and Monographs
2004; 318 pp; hardcover
Volume: 100
ISBN-10: 0-8218-0810-9
ISBN-13: 978-0-8218-0810-8
List Price: US$87
Member Price: US$69.60
Order Code: SURV/100
Over the last ten years, the theory of Bergman spaces has undergone a remarkable metamorphosis. In a series of major advances, central problems once considered intractable were solved, and a rich
theory emerged. Although progress continues, the time seems ripe for a full and unified account of the subject, weaving the old and new results together. This thorough exposition provides just that.
The subject of Bergman spaces is a masterful blend of complex function theory with functional analysis and operator theory. It has much in common with Hardy spaces, but involves new elements such as
hyperbolic geometry, reproducing kernels, and biharmonic Green functions.
In this book, the authors develop background material and provide a self-contained introduction to a broad range of topics, including recent advances on interpolation and sampling, contractive
zero-divisors, and invariant subspaces. The book is accessible to researchers and advanced graduate students who have studied basic complex function theory, measure theory, and functional analysis.
Advanced graduate students and research mathematicians interested in complex function theory and operator theory.
• Overview
• The Bergman kernel function
• Linear space properties
• Analytic properties
• Zero-sets
• Contractive zero-divisors
• Sampling and interpolation
• Proofs of sampling and interpolation theorems
• Invariant subspaces
• Structure of invariant subspaces
• References
• Index | {"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-100","timestamp":"2014-04-18T14:17:10Z","content_type":null,"content_length":"15964","record_id":"<urn:uuid:cf8a4055-4995-4b0f-b80f-59b606d43173>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burlington, MA Algebra 1 Tutor
Find a Burlington, MA Algebra 1 Tutor
...Cheers, Susie. I have played violin since I was 5 years old. I was trained with the Suzuki Method and completed all levels of Suzuki by age 10. When I was an elementary school student, I played
with the middle school orchestra.
11 Subjects: including algebra 1, Spanish, accounting, ESL/ESOL
I am a licensed middle school math teacher and currently teach 8th grade. I am also the math facilitator for my building and have been teaching for 8 years. I teach students at a variety of levels, from advanced to those with special needs.
5 Subjects: including algebra 1, algebra 2, precalculus, study skills
...I've used tables to show the relation between the number of edges, vertices and surfaces. I've used Geometer Sketchpad to explore geometric shapes including similarity and congruency. I've
used Geometer Sketchpad to explore central angles, inscribed angles and arcs.
13 Subjects: including algebra 1, physics, geometry, statistics
...I have taken courses in ethnomusicology specializing in the Persian Radif and Hindustani Raga traditions. I studied early music with Bill Summers and intend to pursue a PhD in historical
musicology specializing in the late medieval and early Renaissance periods. I have experience and training in various periods and cultures, so I am more than qualified to tutor in this area.
60 Subjects: including algebra 1, reading, writing, English
...As everyone who is considering joining the military knows, the ASVAB is what will determine your job (MOS) during your time in the military. Take the time to study as best you can and secure a
job that will allow you to serve your country as well as give you viable options for the civilian world. On the day I took my test, a college hockey player tested along with me.
6 Subjects: including algebra 1, Spanish, prealgebra, GED
Math Forum Discussions
Topic: Your article and Standards
Replies: 1 Last Post: Sep 12, 1995 8:15 AM
Re: Your article and Standards
Posted: Sep 12, 1995 8:15 AM
On Tue, 12 Sep 1995, Tad Watanabe wrote:
> This may be the last response since it is really unlikely that we will
> convince each other except that we have fundamental differences that
> cannot be reconciled. I enjoy discussing with you different issues on
> which we have very different perspectives, but I'd rather discussion be
> done publicly. Although some people on nctm-l have criticized us (and
> others) for dominating the list, I would really like to get multiple
> perspectives - which the list was designed to do. Doing this discussion
> in private just is not too satisfactory for me since I'm pretty sure I
> will never convince you to change your perspective and you will not
> change mine. We may be engaged in a very futile activity in that case
> (with all due respect to you and your perspectives).
OK, you get what you want.
> > Math problems claim to assess themselves.
I do not understand this phrase. I never wrote it.
> > If a student cannot solve quadratic equations,
> > he cannot solve quadratic equations. Period.
> I accept this point if the students are simply solving problem like
> 2x-5=7, which is context-free. However, as a mathematician, I'm sure you
> value students' ability to solve "problems" that exist within contexts,
> don't you? A test with 50 equations measures something but not too many
> facets of mathematical understanding.
Oh, yes. I attach enormous importance to word problems.
But in a sense which is different from the common naive one.
Many people think that a problem, say, about cars moving
from two cities agaist each other is useful because it
prepares students to manage cars. I think it is ridiculous.
> > Please, give an example.
> > Only not with a student who does not know English.
> A very simple minded example: Suppose you have a word problem involving
> dimensions of a garage. Many inner-city kids do not have garages at
> their homes. Now, we can always argue that what type of building
> involved makes no difference mathematically and that's true. And, that's
> the exactly the reason we want to make sure, if a student get this
> problem wrong, it was not because he was puzzled with this structure
> called "garage" but because he did not understand mathematics involved.
> You will probably think this is non-sense, but younger children (and many
> adults, too) often get side-tracked with these items that are irrelevant
> (from mathematical perspectives). Since we are assessing students'
> mathematical understanding, what the (assessment) standards is saying is
> that we need to make sure that we are indeed assessing students' mathematics.
From my perspective it means only that authors of problems should
choose the most well-known and unambiguous real-life objects.
Coins are among the best in this respect. And Standards are stupid
enough to put coins first on their black list!
It also is related to why I attach so much importance to word
problems. Because they teach something more important than
mathematics. They teach an ability to formalize real objects,
that is to operate with their abstract counterparts.
For example, abstract cars (at the middle school level)
always travel with a constant speed and never break down.
This prepares students to manage MODELS, which is essential
for any application of mathematics.
> Also, I'm not sure why you have to read "students' background" as
> "students' skin colors".
Just to visualize the problem. We can speak about length
of nose instead. The Standard explicitly mention `ethnic,
cultural and social backgrounds' (p.15) among those which
must be considered when assessing a student's work.
Thus, to assess a student's solution of a problem I
cannot simply say: `your solution is wrong because
you think that volume of a garage equals the sum of its
dimensions'. I must take the student's genealogic tree
into consideration. And education of his parents.
And nobody knows what else.
Andrei Toom | {"url":"http://mathforum.org/kb/message.jspa?messageID=1476132","timestamp":"2014-04-18T05:47:09Z","content_type":null,"content_length":"18429","record_id":"<urn:uuid:b25296ed-d433-45f3-8543-fbc2057535d2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
GENERAL PHYSICS II EXAM III NAME______________________________
SAMPLE EXAM FOR CH's 22, 23, 24
1] For each part below, an object 25.0 cm tall is located 40.0 cm in front of the mirror.
A) For an image formed at q = +120 cm, calculate the height of the image and state
whether it is real or virtual and whether it is erect or inverted.
Answer: -75 cm, real, inverted
B) For an image formed at q = -5.0 cm, calculate the radius of curvature of the mirror and state whether it is concave or convex.
Answer: -11.4 cm, convex
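A quick numeric check of both answers (an added sketch, not part of the original exam), using the mirror equation 1/f = 1/p + 1/q and the magnification m = -q/p with the usual sign convention:

```python
def image_height(p, q, h_obj):
    # Magnification m = -q/p; image height is m * (object height)
    return -q / p * h_obj

def radius_of_curvature(p, q):
    # 1/f = 1/p + 1/q, and R = 2f for a spherical mirror
    f = 1.0 / (1.0 / p + 1.0 / q)
    return 2.0 * f

h = image_height(40.0, 120.0, 25.0)   # -75 cm: real (q > 0), inverted (h < 0)
R = radius_of_curvature(40.0, -5.0)   # about -11.4 cm: negative R, so convex
```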
2] Two thin lenses, each with a focal length, f = +20 cm, are placed 50 cm apart. An object
15 cm tall is placed 60 cm in front of the first lens. Where is the final image formed for the system?
Answer: No image is formed, q would be at infinity.
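Tracing the image through both lenses shows why (an added sketch, not part of the original exam):

```python
import math

def thin_lens_image(p, f, tol=1e-9):
    # 1/f = 1/p + 1/q; an object at (or numerically at) the focal
    # point sends its image to infinity.
    if abs(p - f) < tol:
        return math.inf
    return 1.0 / (1.0 / f - 1.0 / p)

q1 = thin_lens_image(60.0, 20.0)   # +30 cm behind the first lens
p2 = 50.0 - q1                     # first image sits 20 cm in front of lens 2
q2 = thin_lens_image(p2, 20.0)     # p2 equals f, so q2 is infinite: no image
```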
5] A screen is separated from a double-slit source by a distance of 1.20 m. The distance between
the slits is 0.030 mm. If light of wavelength 633 nm is incident on the slits, at what angle would you
find the 30th order bright fringe?
Answer: 39.3 degrees
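The arithmetic behind the answer (an added check, not part of the original exam): bright fringes satisfy d sin(theta) = m * lambda.

```python
import math

d = 0.030e-3     # slit separation in metres
lam = 633e-9     # wavelength in metres
order = 30       # fringe order m

theta = math.degrees(math.asin(order * lam / d))   # about 39.3 degrees
```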
6] A submarine is 300 m horizontally out from the shore. It is 100 m below
the surface. A laser beam sent out from the sub strikes the surface of the water
at a point 210 m from the shore. The beam exiting the water strikes the top of a
tower standing right at the water's edge on the shore. Determine the height of
the tower.
Answer: 107 m
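One way to reproduce the answer (an added sketch; the refractive index of water, n ≈ 1.333, is assumed since the problem does not state it):

```python
import math

depth = 100.0                  # sub's depth below the surface, m
horiz_under = 300.0 - 210.0    # 90 m horizontal run under water
n_water = 1.333                # assumed index of refraction of water

# Angle of incidence measured from the vertical (the surface normal)
theta1 = math.atan2(horiz_under, depth)

# Snell's law at the water-air interface: n_water sin(theta1) = sin(theta2)
theta2 = math.asin(n_water * math.sin(theta1))

# Above the surface the beam covers the remaining 210 m to the tower top
height = 210.0 / math.tan(theta2)   # roughly 107 m
```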
1) When total internal reflection happens at a glass to air interface
a) half the light is refracted
b) half the light is reflected
c) light hits the interface from the air side
d) light hits the interface from the glass side
e) the law of reflection ceases to hold
2) When you see your image in a plane mirror, your image appears to be
a) in front of the mirror
b) at the surface of the mirror
c) behind the surface of the mirror at a distance equal to your distance from the mirror
d) behind the surface of the mirror at a distance equal to half your distance from the mirror
e) in front of the surface of the mirror at a distance equal to half your distance from the mirror
3) In a camera using a simple lens, the image focused on the film would be
a) real and upright
b) real and inverted
c) virtual and upright
d) virtual and inverted
e) formed at infinity
4) A general condition such that two waves will undergo destructive interference is
a) their phase difference is zero
b) their phase difference is +90 deg
c) their phase difference is + or - 90 deg
d) their phase difference is an even integral multiple of lambda/2 (wavelengths)
e) their phase difference is an odd integral multiple of lambda/2 (wavelengths)
5) Increasing the wavelength in a double slit experiment has what effect on the position of maxima on a screen
at a fixed distance?
a) maxima get closer together
b) maxima get farther apart
c) maxima get cancelled by minima
d) it has no effect
1. d
2. c
3. b
4. e
5. b | {"url":"http://faculty.etsu.edu/hensong/GenPhys2/2020_Sample_Exam_5MWF.htm","timestamp":"2014-04-20T08:36:13Z","content_type":null,"content_length":"22901","record_id":"<urn:uuid:f4108cba-d8d3-4308-93cb-c44cdf931d2c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum tunneling and radioactive decay.
1. The problem statement, all variables and given/known data
The edge of a nucleus can be roughly modeled as a square potential barrier. An alpha particle in an unstable nucleus can be modeled as a particle with a specific energy, bouncing back and forth
between these square potential barriers.
Consider a nucleus of radius r and an alpha particle with kinetic energy E (i.e., let the potential energy within the nucleus be zero) and mass m.
Assuming that the alpha particle moves along a diameter of the nucleus and that it moves at low enough speed that relativistic effects are negligible, what is the time tau between successive
encounters between each edge of the nucleus and the alpha particle?
Express your answer in terms of [itex]K_{e}[/itex], [itex]r[/itex], and [itex]m[/itex].
2. Relevant equations
3. The attempt at a solution
Have I used the right approach to this problem, and have I got the correct answer?
Thanks in advance | {"url":"http://www.physicsforums.com/showthread.php?t=608803","timestamp":"2014-04-20T03:17:04Z","content_type":null,"content_length":"29416","record_id":"<urn:uuid:51d7184c-880d-49ad-a378-22091f16c33d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
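A minimal non-relativistic sketch of the kinematics (an addition to the post; the sample values are illustrative, not from the problem statement): with K_e = (1/2)mv^2, the particle crosses the diameter 2r between successive wall encounters, so tau = 2r/v = r*sqrt(2m/K_e).

```python
import math

def crossing_time(r, m, K_e):
    # Speed from K_e = (1/2) m v^2; tau is the time to cross the diameter 2r
    v = math.sqrt(2.0 * K_e / m)
    return 2.0 * r / v            # equals r * sqrt(2 m / K_e)

# Illustrative (assumed) values: r ~ 8e-15 m, alpha mass ~ 6.64e-27 kg,
# K_e ~ 5 MeV ~ 8.0e-13 J
tau = crossing_time(8e-15, 6.64e-27, 8.0e-13)   # on the order of 1e-21 s
```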
Patent application title: SECURITY COUNTERMEASURES FOR POWER ANALYSIS ATTACKS
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
A countermeasure for differential power analysis attacks on computing devices. The countermeasure includes the definition of a set of split mask values. The split mask values are applied to a key
value used in conjunction with a masked table defined with reference to a table mask value. The set of n split mask values are defined by randomly generating n-1 split mask values and defining an nth
split mask value by exclusive or'ing the table mask value with the n-1 randomly generated split mask values.
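The abstract's construction can be sketched numerically (an illustrative Python sketch, not the patented implementation; the mask width and values are arbitrary):

```python
import secrets

def make_split_masks(table_mask, n, nbits=8):
    # n-1 random split masks, plus an n-th defined so that the XOR of
    # all n split masks equals the table mask value.
    masks = [secrets.randbits(nbits) for _ in range(n - 1)]
    m_n = table_mask
    for m in masks:
        m_n ^= m
    return masks + [m_n]

table_mask = 0xA5
splits = make_split_masks(table_mask, 4)

combined = 0
for m in splits:
    combined ^= m
# combined == table_mask: the n split masks together reproduce the table mask
```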
A computing device-implemented method for carrying out a cryptographic process for processing plaintext to generate cipher text using a key, the process including a lookup operation on a table, the
method comprising: a processor executing the steps of: a) masking the key with a random key mask; b) randomly generating n-1 split masks m1, . . . , mn-1; c) defining an nth split mask mn by masking the random key mask with the plurality of n parts min1, . . . , minn and the n-1 split masks m1, . . . , mn-1; d) masking each of the n split masks m1, . . . , mn with a random value r1; e) masking the masked key with each of the n masked split masks to generate a split mask masked key; f) masking the plaintext with the split mask masked key to produce a table input; and, g) performing the table lookup operation on a masked table using the table input, wherein the masked table comprises the table masked with a mask comprised of a plurality of n parts min1, . . . , minn.
The method of claim 1 further comprising generating a new random value r1 for each plaintext value.
The method of claim 1 further comprising generating the masked table by masking each entry of the table with each of the n mask parts min1, . . . , minn.
The method of claim 1 wherein upon re-definition of the key, the method further comprising re-generating the n split masks.
The method of claim 1 wherein the masking comprises a bitwise exclusive or operation carried out on binary values.
The method of claim 1 wherein after masking the key the method further comprises storing the masked key, and wherein after the generating and defining the n split masks m1, . . . , mn the method further comprises storing the n split masks m1, . . . , mn.
The method of claim 1 wherein after the generating and defining the n split masks m1, . . . , mn the method further comprises destroying the plurality of n parts min1, . . . , minn.
A computing device operative to execute the method of claim 1.
A computer program product comprising a non-transitory storage medium containing instructions to render a computing device operative to perform the method of claim 1.
A computing device operative to execute a method for a cryptographic process for processing plaintext to generate cipher text using a key, the process including a lookup operation on a table, the
device comprising: a processor operative to: a) mask the key with a random key mask; b) randomly generate n-1 split masks m1, . . . , mn-1; c) define an nth split mask mn by masking the random key mask with the plurality of n parts min1, . . . , minn and the n-1 split masks m1, . . . , mn-1; d) mask each of the n split masks m1, . . . , mn with a random value r1; e) mask the masked key with each of the n masked split masks to generate a split mask masked key; f) mask the plaintext with the split mask masked key to produce a table input; and, g) perform the table lookup operation on a masked table using the table input, wherein the masked table comprises the table masked with a mask comprised of a plurality of n parts min1, . . . , minn.
The device of claim 10 wherein the processor is further operative to generate a new random value r1 for each plaintext value.
The device of claim 10 wherein the processor is further operative to generate the masked table by masking each entry of the table with each of the n mask parts min1, . . . , minn.
The device of claim 10 wherein the processor is further operative to, upon re-definition of the key, re-generate the n split masks.
The device of claim 10 wherein the processor is further operative to mask by executing a bitwise exclusive or operation carried out on binary values.
The device of claim 10 wherein the processor is further operative after masking the key to store the masked key, and wherein after the generating and defining the n split masks m1, . . . , mn the device is further operative to store the n split masks m1, . . . , mn.
The device of claim 10 wherein the processor is further operative after the generating and defining the n split masks m1, . . . , mn to destroy the plurality of n parts min1, . . . , minn.
This application is a continuation of Ser. No. 12/948,915, filed Apr. 16, 2004, which is a divisional of application Ser. No. 10/825,291, filed April 16, 2004.
FIELD OF THE INVENTION [0002]
This invention relates generally to computing systems and, more particularly, to computing systems implementing security countermeasures for power analysis attacks.
BACKGROUND OF THE INVENTION [0003]
Computing systems often require operations to be carried out in a secure manner. For embedded computing devices and for pervasive systems, security of operation is often crucial. To ensure operations
and communications are secure, such systems employ cryptographic methods.
The implementation of such a cryptographic method must itself be secure. However, cryptographic methods are subject to attacks. One type of non-invasive attack on computing devices implementing
cryptographic methods is known as a power analysis attack. A power analysis attack involves the monitoring of the power consumption of one or more components of a device while the device executes a
cryptographic method.
The data derived from monitoring power consumption of the device, combined with knowledge of the operations being carried out by the device, are used to derive the secret information that is part of
the cryptographic method.
One type of power analysis attack is known as a Differential Power Analysis ("DPA") (see, for example, "Differential Power Analysis" P. Kocher, CRYPTO'99, Lecture Notes in Computer Science, 1666, pp. 388-397, 1999, Springer-Verlag). This approach involves generating a large number of inputs by varying different bits in values to be encoded using the cryptographic method implemented in a device. The DPA attack monitors power consumption at different points in the computing device for each of these varying values and, by statistical analysis of the differential data, is able to determine a likely key value for the cryptographic method (the secret information).
It is known to use hardware techniques to implement countermeasures for such power analysis attacks. Such an approach may use smoothing or modification of the power consumption of the device to
resist a power analysis attack. For example, see U.S. Pat. No. 6,419,159 to Odinak.
Similarly, countermeasures implemented in software have been developed. U.S. Pat. No. 6,295,606 to Messerges and "Towards Sound Approaches To Counteract Power-Analysis Attacks" (S. Chari, C. S. Jutla,
J. R. Rao, P. Rohatgi, CRYPTO'99, Lecture Notes in Computer Science, 1666, pp. 398-412, 1999, Springer-Verlag), describe approaches that implement countermeasures to resist power analysis attacks.
However, such software approaches involve overhead costs in performance.
U.S. Pat. No. 6,295,606 (Messerges et al., Sep. 25, 2001) discloses a method for resisting a power analysis attack for a cryptographic method. The cryptographic method includes a key value that is combined with a plaintext value by a bitwise Boolean exclusive or operation. The result is used as input for a function that provides a cipher text output. The cryptographic function is usually implemented as one or more table look-ups. The Messerges method involves a masking step carried out by applying a bitwise Boolean exclusive or operation to the key using a random value (the mask).
In the Messerges method the masked key is then exclusive or'd with a plaintext and the result is used as input for a function that has, itself, been modified to provide a masked output that can be
unmasked to provide the correct result data. To apply a DPA attack against a device that is using the Messerges method requires a second order DPA: power samples for the random value (mask) and the
output of the bitwise Boolean XOR of the masked key and the plaintext are required. Complex mathematical analysis is then required to enable the key value to be determined.
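A toy illustration of this style of first-order table masking (an added sketch, not the method of the cited patent; the 4-bit stand-in table S is hypothetical, and the output-masking step described above is simplified away by re-indexing the table):

```python
import secrets

# Small stand-in for the cipher's lookup table
S = [(7 * x + 3) % 16 for x in range(16)]

key, plaintext = 0x9, 0x6
mask = secrets.randbits(4)                    # the random mask value

masked_key = key ^ mask                       # only the masked key is handled
S_masked = [S[i ^ mask] for i in range(16)]   # table adjusted for the mask

# (masked_key ^ plaintext) == (key ^ plaintext) ^ mask, so the adjusted
# table yields the same value as the unmasked computation:
out = S_masked[masked_key ^ plaintext]
expected = S[key ^ plaintext]
```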
In the approach of Messerges, by masking each key value with a different random mask, the cryptographic function is also required to be modified. This typically results in the regeneration of a large
table for each application of the cryptographic function. A large overhead price is borne by the system implementing this approach to avoid or limit DPA attacks.
Another known approach is set out in Chari (see above) and involves splitting the key value. In this approach the key value is to be divided into a number (k) of fragments and the fragments are
combined with random bits. The approach requires a k-th order DPA to attempt to determine the original key value used. However, the Chari approach requires the plaintext to be exclusive or'd with each of the split key values. The end result is that the
processor executing the Chari method will require more power as the repeated running of the cryptographic function will necessitate the dissipation of more energy. In devices such as personal digital
assistants, energy consumption is a crucial factor and therefore there are limitations to applying this approach for many types of products.
It is therefore desirable to be able implement a countermeasure that will resist a DPA attack and will not require repeated potentially power-consuming operations.
SUMMARY OF THE INVENTION [0013]
According to an aspect of the invention there is provided a method and system for improved countermeasures for power analysis security attacks.
According to another aspect of the invention there is provided a computing device-implemented method for carrying out encryption using a key value for encrypting a plaintext value to define a cipher
text, the encryption being defined using an encryption function, the method including the steps of: defining a masked encryption function by masking the encryption function using an encryption function mask value; defining a set of more than one split mask values, at least one of the set of split mask values being defined with reference to the encryption function mask value; generating a
final mask value by masking the key value using masking steps that comprise masking by applying the set of split mask values; determining an input value by masking the plaintext value using masking
steps that comprise masking by applying the fixed final mask value, and applying the input value to the encryption function to provide a cipher text output.
According to another aspect of the invention there is provided the above method in which the step of generating the final mask value further includes the step of masking the key value using a key
mask value prior to masking with the set of split mask values, and which further includes the step of using the key mask value as a mask, as part of the step of defining one of the values in the set
of split mask values with reference to the encryption function mask value.
According to another aspect of the invention there is provided the above method in which the step of defining one of the set of split mask values with reference to the encryption function mask value
further includes the steps of masking the split mask value with the other values in the set of split mask values.
According to another aspect of the invention there is provided the above method in which the step of defining a set of split mask values m1 . . . mn includes the steps of: defining the encryption
function mask value to comprise a set of random values min1 to minn; defining the set of split mask values to be the random values m1 to mn-1; and defining a masking value mn in the set of split mask
values to be (key mask value) ⊕ min1 ⊕ . . . ⊕ minn ⊕ m1 ⊕ . . . ⊕ mn-1.
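The cancellation behind this definition of mn can be checked numerically (an added sketch with byte-wide masks assumed):

```python
import secrets
from functools import reduce

def xor_all(vals):
    return reduce(lambda a, b: a ^ b, vals, 0)

n, nbits = 4, 8
key_mask = secrets.randbits(nbits)
min_parts = [secrets.randbits(nbits) for _ in range(n)]   # min1 .. minn
m = [secrets.randbits(nbits) for _ in range(n - 1)]       # m1 .. mn-1

# mn = (key mask value) XOR min1 ... minn XOR m1 ... mn-1
m_n = key_mask ^ xor_all(min_parts) ^ xor_all(m)
m.append(m_n)

# XORing the key mask with all n split masks strips the key mask and
# leaves exactly the combined encryption-function mask:
residual = key_mask ^ xor_all(m)   # equals xor_all(min_parts)
```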
According to another aspect of the invention there is provided the above method further including the steps of applying a random mask to an even number of the set of split mask values prior to the
step of masking the key value with the set of split mask values.
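Applying the random mask to an even number of split masks works because r XOR r = 0, so the combined mask is unchanged (an added illustration with arbitrary byte values):

```python
splits = [0x1B, 0x4E, 0x90, 0xC7]   # arbitrary split mask values
r = 0x5A                            # random mask applied to an even subset

randomized = [splits[0] ^ r, splits[1] ^ r, splits[2], splits[3]]

before = splits[0] ^ splits[1] ^ splits[2] ^ splits[3]
after = randomized[0] ^ randomized[1] ^ randomized[2] ^ randomized[3]
# after == before, since the two copies of r cancel
```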
According to another aspect of the invention there is provided a computing device-implemented method for use in a cryptographic process, the cryptographic process using a key value to define input to
a cryptographic function, the method including the steps of: masking the cryptographic function using a function mask value; defining a set of more than one split mask values, at least one of the set
of split mask values being defined with reference to the function mask value; masking the key value using steps that comprise masking by applying the set of split mask values to obtain a masked input
key value; and using the masked input key value to define the input to the masked cryptographic function.
According to another aspect of the invention there is provided the above method, further including the step of randomizing the split mask values.
According to another aspect of the invention there is provided a computing device-implemented method for use with an AES key generation process for defining masked round keys for use in AES
encryption, the method including the steps of: defining a masked table for use in the AES key generation process using table mask M; defining a set of four split mask values, one of the set of split
mask values being defined with relation to table mask M; masking a set of four key values using the set of four split mask values and applying the resulting values to the AES key generation process
using the masked table and a set of intermediate mask values whereby the set of AES round keys defined using table look-up are defined by applying an appropriate intermediate mask value to the input
value for the masked table; and masking the round keys produced by the AES key generation process by applying an appropriate intermediate mask value to the round keys that are not directly defined
using table look-up.
According to another aspect of the invention there is provided the above method in which the four key values are each masked with one of a set of four key mask values and in which the split mask
value in the set of split key mask values that is defined with relation to table mask M is further masked with each of the four key mask values.
According to another aspect of the invention there is provided a computing device-implemented method for carrying out AES encryption using the round keys as defined above, the output of the AES
encryption being unmasked using the key mask values and the split mask values.
According to another aspect of the invention there is provided the above method in which the unmasking is carried out in more than one step such that the key mask values and the split mask values are
not combined so as to produce a single unmasking value.
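The point of unmasking in separate steps is that the combination of the key mask and the split masks never exists as one intermediate value (an added sketch with arbitrary byte values):

```python
key_mask = 0x3C
split_masks = [0x11, 0x22, 0x4F]
masked_output = 0xB7               # stand-in for one masked output byte

# Strip each mask on its own; no intermediate ever equals the XOR of all
# the masks combined into a single unmasking value.
value = masked_output
for m in split_masks:
    value ^= m
value ^= key_mask                  # fully unmasked result
```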
According to another aspect of the invention there is provided a computing device program product for carrying out encryption using a key value for encrypting a plaintext value to define a cipher
text, the encryption being defined using an encryption function, the computing device program product including a computer usable medium having computer readable program code means embodied in the
medium, and including program code means for defining a masked encryption function by masking the encryption function using an encryption function mask value; program code means for defining a set of
more than one split mask values, at least one of the set of split mask values being defined with reference to the encryption function mask value; program code means for generating a final mask value
by masking the key value using masking steps that comprise masking by applying the set of split mask values; program code means for determining an input value by masking the plaintext value using
masking steps that comprise masking by applying the fixed final mask value; and program code means for applying the input value to the encryption function to provide a cipher text output.
According to another aspect of the invention there is provided the above computing device program product in which
the program code means for generating the final mask value further includes program code means for masking the key value using a key mask value prior to masking with the set of split mask values, and
further includes program code means for using the key mask value as a mask, as part of defining one of the values in the set of split mask values with reference to the encryption function mask value.
According to another aspect of the invention there is provided a system for carrying out encryption using a key value for encrypting a plaintext value to define a cipher text, the encryption being
defined using an encryption function, the system including: means for defining a masked encryption function by masking the encryption function using an encryption function mask value; means for
defining a set of more than one split mask values, at least one of the set of split mask values being defined with reference to the encryption function mask value; means for generating a final mask
value by masking the key value using masking steps that comprise masking by applying the set of split mask values; means for determining an input value by masking the plaintext value using masking
steps that comprise masking by applying the fixed final mask value, and means for applying the input value to the encryption function to provide a cipher text output.
According to another aspect of the invention there is provided the above system in which the means for generating the final mask value further includes means for masking the key value using a key
mask value prior to masking with the set of split mask values, and which system further includes means for using the key mask value as a mask, as part of defining one of the values in the set of
split mask values with reference to the encryption function mask value.
According to another aspect of the invention there is provided the above system in which the means for defining one of the set of split mask values with reference to the encryption function mask
value further includes means for masking the split mask value with the other values in the set of split mask values.
According to another aspect of the invention there is provided the above system in which the means for defining a set of split mask values m1 . . . mn includes means for: defining the encryption
function mask value to comprise a set of random values min1 to minn; defining the set of split mask values to be the random values m1 to mn-1; and defining a masking value mn in the set of split mask
values to be (key mask value) ⊕ min1 ⊕ . . . ⊕ minn ⊕ m1 ⊕ . . . ⊕ mn-1.
According to another aspect of the invention there is provided a system for use in a cryptographic process, the cryptographic process using a key value to define input to a cryptographic function,
the system including: means for masking the cryptographic function using a function mask value; means for defining a set of more than one split mask values, at least one of the set of split mask
values being defined with reference to the function mask value; means for masking the key value using steps that comprise masking by applying the set of split mask values to obtain a masked input key
value; and means for using the masked input key value to define the input to the masked cryptographic function.
According to another aspect of the invention there is provided the above system, further including means for randomizing the split mask values.
Advantages of the invention include software-based countermeasures for power analysis security attacks requiring limited overhead costs in energy, performance and code size. Such limited overhead
permits the use of this countermeasure approach with devices such as wireless hand-held communication devices where security is required for the operations carried out by the devices. An aspect of
the invention supports high performance cryptographic implementation by supporting large table look-ups as part of the cryptographic process.
BRIEF DESCRIPTION OF THE DRAWINGS
In drawings which illustrate by way of example only a preferred embodiment of the invention,
FIG. 1 is a block diagram showing prior art generation of a cipher text;
FIG. 2 is a block diagram showing a two-part split mask and its use in generating cipher text according to the preferred embodiment.
FIG. 3 is a block diagram showing an n-part split mask and its use in generating cipher text according to the preferred embodiment.
FIG. 4 is a block diagram showing the application of the approach of the preferred embodiment to an Advanced Encryption Standard ("AES") key generation.
FIG. 5 is a block diagram showing the application of the approach of the preferred embodiment as applied to the process of AES encryption.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a block diagram that shows prior art generation of cipher text 10 from plaintext 12, using key 14. Table 16, used for look-up, is a typical implementation of a cryptographic function.
Plaintext 12 is input, along with key 14, for a bitwise exclusive or (represented in the figure as an oval). The output of the exclusive or is used for a table look-up that gives cipher text 10. As
is known to those skilled in the art, this encryption of plaintext 12 is subject to power analysis attacks, such as Differential Power Analysis ("DPA") attacks, to determine the value of the secret
key and so compromise the security of the encryption carried out by the process described.
In the preferred embodiment, multiple masks (two or more) are used in the execution of the cryptographic process. In general, where the cryptographic process includes a table lookup, the multiple
masks are exclusive or'd together to form a fixed final mask for each table input. In the preferred embodiment, the multiple masks may be randomized at each invocation of the cryptographic process.
In the preferred embodiment, however, the final fixed mask for the table input is not changed. The table itself may therefore remain unchanged.
As will be appreciated by those skilled in the art, the preferred embodiment is described with reference to an encryption function that includes a table look-up. The preferred embodiment may also be
implemented, however, with respect to other cryptographic processes in which encryption or decryption functions are implemented in a manner that does not involve a table look-up. The masking steps
defined with respect to the table in the preferred embodiment may similarly be carried out on encryption or decryption functions that are implemented in ways other than by a table look-up. The
preferred embodiment is described with reference to encryption steps. However, it will be appreciated that the preferred embodiment may be implemented with respect to decryption processes, also.
Similarly, the masking referred to in the description of the preferred embodiment is carried out by the use of a bit-wise exclusive or operation (XOR) with respect to different values expressed in a
binary format. However, other masking operations may be used. For example arithmetic masking (involving the use of addition and/or subtraction in place of the exclusive or operation) may also be used.
Further, the preferred embodiment may be implemented as a computer program product that includes code to carry out the steps in the process described. The preferred embodiment may be implemented as a
computer system (which includes a subsystem or system defined to work in conjunction with other systems) for encryption that includes elements that execute the functions as described. The computer
system of the preferred embodiment may be defined by, and the computer program product may be embodied in, signals carried by networks, including the Internet or may be embodied in media such as
magnetic, electronic or optical storage media.
FIG. 2 is a block diagram that illustrates an example of encryption using the approach of the preferred embodiment. FIG. 2 shows plaintext 20, key 22, and masked table 24. As is understood by those
skilled in the art, where there is a masking process carried out to alter a key value, there is a corresponding alteration in the table values that define the cryptographic function. In the example
of the preferred embodiment shown in FIG. 2, masked table 24 is generated from an original, unmasked table using a two-part mask comprising min1 and min2. The values in masked table 24 ("mtable") are defined by:

table(i) = mtable(i ⊕ min1 ⊕ min2).

The two-part mask in the preferred embodiment is randomly generated. Alternatively, this table mask value (like other table mask values useable in the preferred embodiment) may be pseudo-random or otherwise
selected in a manner that is not readily ascertainable using DPA attacks.
The secret or master key 22 is immediately masked after it is received or derived. In the example of FIG. 2, key 22 is masked (exclusive or'd) with key mask 25 (designated value "r") and is stored.
Key mask 25 is randomly generated and is a fixed value in the example of the preferred embodiment in that it is unchanged for different plaintext values.
As is referred to above, masked table 24 is defined using two randomly generated constants min1 and min2. In the example of FIG. 2, min1 and min2 are used to generate split masks that are applied to key 22 (as initially masked by key mask 25). The process of the preferred embodiment involves a further randomly generated value, m1. This value is used as part of the process to define a second value, m2, as is described below.
As can be seen by exclusive ors 26, 28, 30, shown in FIG. 2, key mask 25 is exclusive or'd with min1 and min2, and the result is exclusive or'd with m1. The result is defined to be the value m2, one of the split mask values to be applied to the key value 22 (as masked). In mathematical notation:

m2 = r ⊕ min1 ⊕ min2 ⊕ m1.

As may be seen, the initial pair of masks, m1, m2 are generated such that the exclusive or of those values with r (key mask 25) is equal to the fixed mask (min1 ⊕ min2), to be used at the input of masked table 24 in the encryption process. Thus for each new key 22, the key masking and generating of the initial pair of masks (m1 and m2) need be performed only once.
In the preferred embodiment example of FIG. 2, plaintext 20 is combined with the masked value for key 22, using a random value r1 and the split masks m1, m2 in the following way. Random value r1 is
generated for each new plaintext value. The value r1 is exclusive or'd with both m1 and m2, as shown in exclusive ors 32, 34 in FIG. 2. The resultant values are then exclusive or'd with the masked
value of key 22 ("mkey"). In FIG. 2, this is shown in exclusive ors 36, 38. Finally, the masked key resulting from these operations is exclusive or'd with plaintext 20 to form the input for masked
table 24, at exclusive or 40.
The result of the steps described above is that key 22 is exclusive or'd with r, (r1 ⊕ m2) and (r1 ⊕ m1). Because m2 is, itself, defined to be r ⊕ min1 ⊕ min2 ⊕ m1, the result of the different
exclusive or operations is that key 22 is exclusive or'd with (min1 ⊕ min2). Masked table 24 is defined by applying (min1 ⊕ min2) to the original cryptographic table, and therefore the result is
that plaintext 20 is combined with a masked key 22 that will provide the appropriate input for masked table 24. However, the value min1 ⊕ min2 is not directly stored, as split masks m1 and m2, as
well as mkey, are the stored values that are used for different plaintext values.
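The algebra above can be checked with a toy, byte-wide sketch. This is an illustrative assumption, not the patented implementation: the class name, the 256-entry lookup table and the use of java.util.Random are all stand-ins. It verifies that applying the split masks one at a time (each randomized with a fresh r1) leaves the table input masked by exactly min1 ⊕ min2, so the masked-table lookup equals an unmasked lookup of plaintext ⊕ key:

```java
import java.util.Random;

class SplitMaskDemo {
    // One masked encryption of a single byte, following the FIG. 2 scheme.
    static int encrypt(int[] table, int plaintext, int key, long seed) {
        Random rnd = new Random(seed);
        int min1 = rnd.nextInt(256), min2 = rnd.nextInt(256); // table input masks
        int r  = rnd.nextInt(256);                            // key mask
        int m1 = rnd.nextInt(256);                            // random split mask
        int m2 = r ^ min1 ^ min2 ^ m1;                        // derived split mask

        // Masked table: mtable[i] = table[i ^ min1 ^ min2].
        int[] mtable = new int[256];
        for (int i = 0; i < 256; i++) mtable[i] = table[i ^ min1 ^ min2];

        int mkey = key ^ r;           // the stored, masked key
        int r1 = rnd.nextInt(256);    // fresh value for this plaintext
        // Split masks applied one at a time; their combination never appears.
        int t = mkey ^ (r1 ^ m1);
        t = t ^ (r1 ^ m2);
        return mtable[plaintext ^ t];
    }

    public static void main(String[] args) {
        int[] table = new int[256];
        for (int i = 0; i < 256; i++) table[i] = (i * 7 + 3) & 0xFF; // toy table
        for (long seed = 0; seed < 20; seed++)
            if (encrypt(table, 0xC3, 0x5A, seed) != table[0xC3 ^ 0x5A])
                throw new AssertionError("split-mask lookup mismatch");
        System.out.println("split-mask lookup matches table[plaintext ^ key]");
    }
}
```

Whatever the random values are, the lookup result equals table[plaintext ⊕ key], while the unmasked key value never appears as an intermediate.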
Thus, for each encryption using the same key 22, only the steps involving the defined m1, m2 and mkey values are executed. Hence the encryption process using these values may be executed many times
and DPA attacks on these encryption steps alone cannot directly determine key 22. Attacks by power measurement of r, min1 or min2 are not possible. Therefore the encryption process is secure. The
fact that there is no requirement to recalculate the masked table nor to recalculate values used to arrive at the m1, m2 and mkey values,
means that the countermeasure is suitable for use in devices that are constrained in the power available for cryptographic processing. For example, the method of the preferred embodiment is useful in
cryptographic functions carried out in wireless handheld devices. In this sense, the method of the preferred embodiment may be considered a low power countermeasure for differential power analysis attacks.
The preferred embodiment as described in FIG. 2 requires the definition of m1 and m2, once per key. For this reason, an attacker knowing the details of the algorithm and when it is executed may be
able to launch a 3rd order DPA attack by measuring the power of min1, min2 and the input to masked table 24. (Alternatively a 4th order DPA attack could be launched by measuring the power of m1, m2, r and the input to masked table 24.)
As will be appreciated by those skilled in the art, in implementing the process the exclusive or (m1 ⊕ m2) and the exclusive or (min1 ⊕ min2) are not computed. If, despite what is described, these values
are computed, a 2nd order DPA may be used to attack the cryptographic steps.
As may be seen from the above description, the masked master key (key 22 as masked with key mask 25) is stored and not unmasked. The further masking of the masked master key with additional masks
forms the fixed final mask (used at the input of the tables). This fixed final mask is not directly loaded or stored or computed on its own. In the preferred embodiment, after masked tables and split
masks are generated, min1 and min2 are destroyed (not stored or loaded again). The countermeasures described above are resistant to lower order DPA attacks and higher order DPA attacks are therefore required to enable an attacker to
uncover the key values used.
FIG. 3 is a block diagram showing a generalized example of the preferred embodiment. In FIG. 3, the example shows n split masks. Plaintext 50 is shown, to be combined with masked key 52. Masked table
54 is defined by input table masks min1, . . . , minn, in a manner analogous to that described for the two-part mask illustrated in FIG. 2. In the generalized case,

table(i) = mtable(i ⊕ min1 ⊕ . . . ⊕ minn).

To obtain the set of split masks m1, . . . , mn, the random value for key mask 56, and random values m1, . . . , mn-1 are randomly generated. The set of split masks m1, . . . , mn is generated as shown
in FIG. 3. This step of generating the set of n split masks is analogous to the step of generating m1, m2 in the example of FIG. 2. The result of combining masking key 52 and key mask 56 using a
bitwise exclusive or is the stored mkey value. Also stored are the split masks m1, . . . mn.
To generate input for masked table 54 for a given plaintext 50, a random value r1 is obtained. The value r1 is exclusive or'd with all stored m1, . . . , mn values, if n is even, or r1 is exclusive
or'd with any (n-1) of m1, . . . mn if n is odd. The results are then successively exclusive or'd with the mkey value. Plaintext 50 is exclusive or'd with the final result to give the input for
masked table 54.
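The even/odd rule for applying r1 exists because r1 must be folded in an even number of times so that it cancels from the final table input. A hypothetical byte-wide sketch of the n-mask case (the class name, sizes and random-value source are illustrative assumptions, not the patented implementation):

```java
import java.util.Random;

class NSplitMaskDemo {
    // Builds the key mask, n split masks and table masks as described for FIG. 3,
    // then returns the masked-table input with the table mask removed, so the
    // caller can check that it equals plaintext ^ key.
    static int tableInput(int plaintext, int key, int n, long seed) {
        Random rnd = new Random(seed);
        int[] min = new int[n];                  // table input masks min1..minn
        int tableMask = 0;
        for (int i = 0; i < n; i++) { min[i] = rnd.nextInt(256); tableMask ^= min[i]; }

        int r = rnd.nextInt(256);                // key mask
        int[] m = new int[n];                    // split masks m1..mn
        int mn = r ^ tableMask;                  // mn = r ^ min1 ^...^ minn ^ m1 ^...^ mn-1
        for (int i = 0; i < n - 1; i++) { m[i] = rnd.nextInt(256); mn ^= m[i]; }
        m[n - 1] = mn;

        int mkey = key ^ r;                      // stored masked key
        int r1 = rnd.nextInt(256);               // fresh per plaintext
        int withR1 = (n % 2 == 0) ? n : n - 1;   // all masks if n even, n-1 if odd
        int t = mkey;
        for (int i = 0; i < n; i++)
            t ^= (i < withR1) ? (r1 ^ m[i]) : m[i];
        // t is now key ^ min1 ^ ... ^ minn: r1 occurred an even number of times.
        return (plaintext ^ t) ^ tableMask;
    }

    public static void main(String[] args) {
        for (int n = 2; n <= 6; n++)
            for (long seed = 0; seed < 10; seed++)
                if (tableInput(0x3C, 0xA5, n, seed) != (0x3C ^ 0xA5))
                    throw new AssertionError("r1 did not cancel for n=" + n);
        System.out.println("r1 cancels for even and odd n");
    }
}
```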
The above approach permits a split mask to be used for a given key and to be stored and reused for different plaintexts encrypted with the same key. As will be appreciated by those skilled in the
art, it is also possible to redefine the mask input values even while the same key is being used. The result is a higher-overhead process as the masked table (and the split mask values m1, . . . ,
mn) will be redefined for each new set of mask input values. Alternatively, the masked tables can be pre-computed and stored for each stored set of split masks.
The above approach permits a key mask, a split mask and masked tables to also be used for a new key. As will be appreciated by those skilled in the art, it is possible to permit the stored split mask
and masked tables to be used for a new key by additionally storing the key mask r (key mask 56). A new key 52 is then immediately exclusive or'd with r (key mask 56). In the preferred embodiment, the
input table masks min1, . . . , minn are used only once to generate the masked tables and split masks, and then are destroyed (not stored). The stored split mask and masked tables are able to be used with the new key.
The preferred embodiment as described in FIG. 3 requires the definition of m1, . . . mn, once per key. For this reason, an attacker knowing the details of the algorithm and when it is executed may be
able to launch a higher order DPA attack by measuring the power of min1, . . . , minn and the input to masked table 54. Alternatively, a (n+2)th
order DPA attack could be launched by measuring the power of each split mask, r and the input to masked table 54.
The split mask approach of the preferred embodiment is applicable to many key scheduling and (de)encryption algorithms, such as DES and AES. An example of the use of split masks as defined in the
preferred embodiment being implemented in respect to key scheduling and encryption using AES (Advanced Encryption Standard) is described with reference to the block diagrams of FIGS. 4 and 5. In AES,
in general round keys are generated from the exclusive or of other round keys. For this reason the preferred embodiment as applied to AES permits new split masks to be created during the generation
of round keys, as well as to be used during the encryption process. In this way, split masks may be used to make key generation and the ensuing encryption, using the resultant set of masked keys,
more secure.
In AES encryption there is a key generation process in which a set of 44 round keys is generated from an initial secret key value. The preferred embodiment provides for split masks to be used in the
generation of this set of 44 masked round keys. This set (rk0, . . . , rk43) is shown as masked round keys 100 in the block diagram of FIG. 4. Masked round keys 100 are generated from a 128-bit key,
shown as key 102 in FIG. 4, which is represented by four 32-bit quantities, key0, key1, key2, key3. FIG. 4 shows a split mask process for key 102 to be masked to become a set of four 32-bit inputs to
AES key generation 104. As specified by the AES approach, AES key generation makes use of a function that may be implemented as a table look-up, in the generation of the round keys. In AES key
generation utilizing the preferred embodiment, the function is masked and is implemented using masked table 106. Masked table 106 (denoted mtable0) is defined with reference to a pre-defined table0
that implements the table look-up for AES key generation, as well as to a randomly generated input mask M. The definition is as follows:
mtable0(i) = table0(i ⊕ M).
With reference to FIG. 4, to generate masked round keys for AES encryption using the approach of the preferred embodiment, key 102 is obtained (either generated or received). As indicated above, key
102 comprises key0, key1, key2, key3, each of which is a 32-bit value. Once obtained, key 102 is masked using a key mask comprising four random 32-bit values n0, n1, n2, n3. In the preferred
embodiment as applied to the AES key generation, key 102 as masked is stored (shown as mkey0, mkey1, mkey2, mkey3).
As is the case with the description of the generalized version of the preferred embodiment, the preferred embodiment as applied to AES includes the creation and storage of an initial mask set (split
masks). In the example of FIG. 4, this initial mask set is made up of values m0, m1, m2, m3. Values m0, m1, m2 are randomly generated. Value m3 is defined starting with the key mask and exclusive
or'ing that value with the input mask M, and m0, m1, m2:

m3 = M ⊕ m0 ⊕ m1 ⊕ m2 ⊕ n0 ⊕ n1 ⊕ n2 ⊕ n3.
In the preferred embodiment, after the split masks and masked tables are generated, M is destroyed (not loaded nor stored).
In the FIG. 4 example of the preferred embodiment as applied to the AES key generation, mask values m0, m1, m2, m3 are stored after they are obtained and generated. AES key generation 104 takes four
inputs, shown as rk0, rk1, rk2, rk3 in FIG. 4. According to the preferred embodiment, these values are arrived at by randomizing the stored values m0, . . . , m3 (m0, m1 using value r1, and m2, m3
using value r2) and then exclusive or'ing the randomized values with mkey0, . . . , mkey3 respectively to give rk0, . . . , rk3. The set rk0, . . . , rk3 are then used as inputs for AES key
generation 104.
As described in general above, the generation of AES keys includes a table look-up. This is shown as a separate step in FIG. 4, with masked table 106 (mtable0) being shown apart from AES key
generation 104. This representation allows for the illustration of the use of intermediate masks as contemplated in the preferred embodiment as applied to AES. The intermediate mask for each round
key, for the example of FIG. 4, is given below in Table 1.
TABLE 1 (entries not reproduced; garbled in extraction): for each round key rk# (0 to 43), the table lists the generated mask of the round key and the corresponding intermediate mask mi( ), each an exclusive-or combination of n0 . . . n3 and m0 . . . m3; entries marked *1 or *2 are additionally updated with ⊕ r1 or ⊕ r2, respectively.
The second and third columns for Table 1 are calculated to provide correct generated masks of round keys and intermediate masks. In general, the mask of the round key is generated from the exclusive
or of other masked round keys within AES algorithm. The intermediate mask is generated to be the equivalent of the additional masks that, when combined by an exclusive or with the generated mask for
the round key, produces a resultant mask that is equal to the table input mask. In the example of FIG. 4, the table input mask is M and therefore the intermediate mask for each rk value is defined
such that the combination of the mask applied to generate the rk value, and the intermediate mask will be the exclusive or product n0 ⊕ n1 ⊕ n2 ⊕ n3 ⊕ m0 ⊕ m1 ⊕ m2 ⊕ m3. By applying an intermediate mask defined in
this way, the input to masked table 106 will be effectively masked by M, only. However, it should be noted that the table input mask is not limited to this value (M). It is also possible to define
the table input mask to be a combination of other values used in the process (such as m0, . . . , m3). The input table mask must be known (to allow it to be used in the encryption process) and be defined such that it is not readily ascertainable using low order DPA attacks.
In the preferred embodiment as illustrated in FIG. 4, the intermediate mask values are calculated and stored prior to the calculation of masked round keys 100. In the example of FIG. 4, there are
only 17 intermediate masks to be stored. Table 1 also shows the additional masking of round keys required to obtain a fixed final mask for input table lookups within the key generation or scheduling
algorithm (masked table 106) as well as for the tables in the encryption algorithm, as described in more detail below.
The key scheduling as described with reference to FIG. 4 may be carried out only once for each new key 102 or it may be executed immediately before each AES encryption. As is described above, for
added security the initial set of masks (m0, m1, m2, m3) are randomized using r1, r2, that are generated for each encryption. In the example of the preferred embodiment shown in FIG. 4, some
intermediate masks are subject to a further mask to remove the effect of this randomization before applying the value to masked table 106. This is shown by the intermediate masks denoted with either
*1 or *2 in Table 1, for which a further mask equivalent to an exclusive or of r1 or r2 is to be carried out, respectively.
In the process shown by the block diagram of FIG. 4, key generation 104 takes the masked key to generate round keys as described in AES. As the round keys are generated, intermediate masks are loaded
and used for any round keys that are defined by a look-up access of mtable0 (in the example of FIG. 4, round keys 3, 7, 11, 15, 19, 23, 27, 31, 35, 39).
The definition of round keys is carried out as specified in AES, but round keys are masked with different values to provide a countermeasure for power analysis security attacks. The definitions of
rk0, . . . , rk3 are set out above. As part of AES key generation, rk3 is exclusive or'd with intermediate mask mi(3), mi(3) = n0 ⊕ n1 ⊕ n2 ⊕ m0 ⊕ m1 ⊕ m2 (see Table 1). The round key rk4 is then defined as

rk4 = rk0 ⊕ mtable0(rk3) ⊕ c(0)
In this definition of rk4, the value for mtable0(rk3) is the masked table 106 value, calculated by masking the AES function table used in key generation and c(0) is a constant defined for AES.
Further round keys are then defined in accordance with AES:

rk5 = rk1 ⊕ rk4,
rk6 = rk2 ⊕ rk5,
rk7 = rk3 ⊕ rk6,
rk8 = rk4 ⊕ mtable0(rk7) ⊕ c(1),
rk9 = rk5 ⊕ rk8,
rk10 = rk6 ⊕ rk9,
rk11 = rk7 ⊕ rk10,
rk12 = rk8 ⊕ mtable0(rk11) ⊕ c(2), . . .

and so forth, as specified for AES key generation.
Finally, as is shown in FIG. 4, all round keys except those which were input to mtable0, are exclusive or'd with their intermediate masks according to Table 1. These masked round keys are then stored
and available to be used in the encryption algorithm. Because of the manner of generating the masked round keys 100, as described above, they are each masked by the input table mask 108 (value M).
The use of split (or multiple) masks in the masking that was carried out makes the AES key generation in accordance with the preferred embodiment more secure from DPA attacks.
The preferred embodiment is applied also to the process of AES encryption after the masked round keys 100 are defined, as is shown in the block diagram of FIG. 5. As shown in FIG. 5, the AES
encryption steps make use of the masked round keys 100 in conjunction with a set of defined masked tables 120, to encrypt plaintext data 122.
As part of the AES encryption, the set of masked tables 120 that are used with masked round keys 100 are defined to have an input mask with a value of m0 ⊕ m1 ⊕ m2 ⊕ m3 ⊕ n0 ⊕ n1 ⊕ n2 ⊕ n3. However, as can be seen
from this description, the input mask is never directly computed, stored or loaded.
As is shown in FIG. 5, plaintext 122 is exclusive or'd with masked round keys 100, in accordance with the AES process. The result is used as input for the appropriate table look-up in masked tables
120. The result of the AES encryption process carried out using masked round keys 100, plaintext 122 and masked tables 120 is a set of four values that are shown as S0, S1, S2, S3 in FIG. 5.
In the example of FIG. 5, the output values (S0, S1, S2, S3) are unmasked. To increase the security of the AES encryption, the output values are unmasked in a two-step process. Initially, the output
values are each exclusive or'd with the value n0 ⊕ n1 ⊕ m0 ⊕ m1. A second exclusive or is then carried out on the result, using the value n2 ⊕ n3 ⊕ m2 ⊕ m3. As will be apparent to those skilled in the art, the
combination of values for the multi-step unmasking of the result may be varied. The multi-step unmasking is carried out to avoid directly calculating the value n0 ⊕ n1 ⊕ n2 ⊕ n3 ⊕ m0 ⊕ m1 ⊕ m2 ⊕ m3. Different ways
to combine the values may be used in a multi-step unmasking process.
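The two-step unmasking depends only on the associativity of the exclusive or: the two partial values are computed, but the single combined mask never appears as an intermediate. A minimal illustrative sketch (the class and method names are assumptions):

```java
class UnmaskDemo {
    // s carries the full output mask n0^n1^n2^n3^m0^m1^m2^m3; remove it in two
    // partial steps so the combined mask value is never formed on its own.
    static int unmaskTwoStep(int s, int n0, int n1, int n2, int n3,
                             int m0, int m1, int m2, int m3) {
        s ^= n0 ^ n1 ^ m0 ^ m1;   // first partial unmask
        s ^= n2 ^ n3 ^ m2 ^ m3;   // second partial unmask
        return s;
    }

    public static void main(String[] args) {
        int v = 0x12345678;
        int n0 = 0x0F0F0F0F, n1 = 0x00FF00FF, n2 = 0x33CC33CC, n3 = 0x55555555;
        int m0 = 0x0000FFFF, m1 = 0x0F00F0F0, m2 = 0x77777777, m3 = 0x01020304;
        int masked = v ^ n0 ^ n1 ^ n2 ^ n3 ^ m0 ^ m1 ^ m2 ^ m3;
        if (unmaskTwoStep(masked, n0, n1, n2, n3, m0, m1, m2, m3) != v)
            throw new AssertionError("two-step unmask failed");
        System.out.println("two-step unmask recovers the output");
    }
}
```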
As may be seen from the above description, the approach of the preferred embodiment is able to be utilized in AES key generation and encryption. The split mask approach provides for increased
security for key generation in the AES process and the encryption step, using the masked round keys, is itself made more secure. The unmasking step, carried out after the masked encryption tables
have been accessed, is done using what is effectively a split mask, adding to the security of the encryption of the plaintext.
Various embodiments of the present invention having been thus described in detail by way of example, it will be apparent to those skilled in the art that variations and modifications may be made
without departing from the invention. The invention includes all such variations and modifications as fall within the scope of the appended claims.
Patent applications by RESEARCH IN MOTION LIMITED
How to enclose certain digits of a value double in parentheses
Join Date
Nov 2012
Rep Power
I need to make a program that converts fractions to decimal form, but cuts it off after a certain number of digits, and then repeating decimals after that need to be enclosed in parentheses. I
have most of it done, but the part I'm stumped with is how to enclose the repeating digits in parentheses (Strings are not my strong point).
here's my code so far:
Java Code:
import java.text.DecimalFormat;
import java.util.Scanner;

public class fractions {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.println("Enter a fraction, each side no more than 2 digits: ");
        double num = input.nextDouble();    // read the numerator
        double denom = input.nextDouble();  // read the denominator
        double dec = num / denom;
        DecimalFormat df = new DecimalFormat("0.##");
        String outputString = df.format(dec);
        System.out.println("The decimal form is: " + outputString);
    }
}
Thanks in advance to anyone who can help me with this.
Join Date
Feb 2012
Rep Power
Just to clarify, you want the output to look something like this?
Join Date
Mar 2012
Rep Power
How about this:
- Declare a helper string
- Look for the dot in the formatted number string
- Copy all until that digit (the one you want to start with) in the helper
- Append an opening bracket to helper
- Copy the rest of the string to the end of the helper
- Append a closing bracket
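A minimal Java sketch of the steps above. The method name and the startOffset parameter (the index, within the fractional part, of the first repeating digit) are illustrative assumptions; deciding where the repetition actually starts is a separate question:

```java
class BracketInsert {
    // Sierra's steps: find the dot, copy everything up to the first repeating
    // digit into a helper string, append "(", copy the rest, append ")".
    static String enclose(String formatted, int startOffset) {
        int dot = formatted.indexOf('.');
        int start = dot + 1 + startOffset;                   // first repeating digit
        StringBuilder helper = new StringBuilder();          // the helper string
        helper.append(formatted, 0, start);                  // copy up to that digit
        helper.append('(');                                  // opening bracket
        helper.append(formatted, start, formatted.length()); // copy the rest
        helper.append(')');                                  // closing bracket
        return helper.toString();
    }

    public static void main(String[] args) {
        System.out.println(enclose("0.16666", 1));  // prints 0.1(6666)
        System.out.println(enclose("6.8888", 0));   // prints 6.(8888)
    }
}
```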
Last edited by Sierra; 01-11-2013 at 02:21 PM.
I like likes!
Join Date
Nov 2012
Rep Power
Join Date
Mar 2012
Rep Power
How do you determine at which point they repeat? What is your criterion? E.g. for 6.88888889?
I like likes!
Join Date
Nov 2012
Rep Power
Here's a link to the actual problem, it can probably explain better than me:
Join Date
Mar 2012
Rep Power
Well I am not asking for the assignment but for your approach to the solution... ;)
I like likes!
Join Date
Nov 2012
Rep Power
Maybe the following (very old) algorithm can be of help: it finds the repeating group of digits for a number 1/p where p is prime (not equal to 2 or 5). It has its origins in Vedic math (a
very interesting topic).
Without any proof:
Java Code:
public class RepFracs {

    public static void main(String[] args) {
        int prime= 7;
        int multiplier, factor;
        for (multiplier= 1; ; multiplier++)
            if ((prime*multiplier)%10 == 9) {
                factor= (prime*multiplier)/10;
                if (++factor == 10) factor= 1;
                int length= 0, x= multiplier, carry= 0;
                StringBuilder sb= new StringBuilder();
                do {
                    sb.append(x); length++;   // collect the digits right to left
                    x*= factor; x+= carry;
                    carry= x/10; x= x%10;
                } while (x != multiplier || carry != 0);
                System.out.println(length+" "+sb.reverse());
                break;
            }
    }
}
kind regards,
Last edited by JosAH; 01-11-2013 at 05:07 PM.
cenosillicaphobia: the fear for an empty beer glass
Join Date
Nov 2012
Rep Power
thanks! I can't test it out right now, I don't have my work with me, but I have a good feeling about this, and if it doesn't work, thanks anyways.
Well, it works for numbers 1/p (p is prime); it is one of the best tested algorithms because it's thousands of years old ;-) Do with it what you want; I think it can be extended to 1/(p1*p2*p3 ... pn) ...
kind regards,
cenosillicaphobia: the fear for an empty beer glass
Join Date
Nov 2012
Rep Power
thanks, but for some reason when I try to apply it, I'm getting an error with those If statements all the way to the end of the For loop, saying "unreachable code" | {"url":"http://www.java-forums.org/new-java/67549-how-enclose-certain-digits-value-double-parentheses.html","timestamp":"2014-04-17T02:16:28Z","content_type":null,"content_length":"112617","record_id":"<urn:uuid:a249498a-973a-4ec0-bb15-2122443310ed>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seminar/Tutorial activities for Quantitative Economics
Contact: Rebecca Taylor
Director of Undergraduate Studies, Department of Economics, University of Portsmouth
Published March 2002
This case study is included in the guide Seminars prepared by Rebecca Taylor for our Handbook for Economics Lecturers.
As part of my drive to ensure that students are given a stimulating learning environment I have focussed much attention on developing a more interactive participatory model for level one mathematics
and statistics units that will serve to capture the interest of students who would not traditionally be able to cope with the rigorous mathematical content in the undergraduate suite of economics
Student participation in seminars is encouraged through group focussed activities. Students are organised into small sub-groups and set tasks to complete that will enhance their understanding of the
material being covered. In order to maintain student interest the tasks vary on a weekly basis and include activities such as
• Each group receives a set of 4 questions - each question relating to a topic covered in the previous lecture. In the first half of the seminar students work through the questions in groups of
four. I provide support/advice/further explanation where necessary. The groups then write the workings to one specified question on an overhead. The second half of the seminar involves one
nominated student from each group teaching their allocated question to the rest of the students in the class. All students are free to ask questions following each presentation. Answers to all
questions are provided at the end of the seminar. Ultimately this exercise encourages students to become involved, ensures that all students have worked through every type of question relevant
from the lecture material, and supports the idea that greater understanding is gained from teaching rather than always being on the learning end of the equation. This exercise is repeated in four
seminars throughout the semester (spaced out between other activities) in order that all students have the opportunity to participate in the teaching process.
• Each student receives a cue card with an equation of a line written on one side and a set of co-ordinates written on the other side. Students spend the first 15 minutes of the seminar plotting
their line on a graph. Using the graph and the equation students are then asked to find a student who has a line that intersects with their line. This person then becomes their partner for the
rest of the seminar. The pairs of students must then find the intersection between their two lines using three different methods. At this point the students are working together and can gain from
each others understanding of the subject. This is also an effective activity in encouraging students to work with different people in the class creating greater interaction and a more integrated
and cohesive learning environment. Once the students have found their intersection point by the three methods they are asked to turn the cue cards over and find the equation of the line that uses
the two specified co-ordinates. Students are then required to find the point at which their lines intersect.
• Students are asked to form groups of 4-5 and each group member is assigned a number from 1-5. Each group is then given a question to solve that relates to a technique/topic in Quantitative
Economics (with each group focussing on a different technique/topic). Students are given 15 minutes to work through their problem and ensure that each member of the group understands all aspects
of the assigned problem. Students are then asked to get into groups whereby all students in one group were previously assigned as #1, #2, #3, #4 or #5; thus there are now 4-5 groups all of whom
have a member who had spent the first 15 minutes working through a different question. Students are then required to spend the remaining seminar time teaching the other students in the group how
to successfully solve their assigned problem. At the end of the seminar students have worked with two different groups of students, have worked through a new mathematical technique, have taught
this technique to other students, and have learned from other students how to do different types of questions in quantitative economics.
The feedback from these and other activities is excellent: students feel engaged in the process of learning (and of teaching, where they have taught concepts to others) and consistently state that the activities help them to understand the topics covered. It should be noted that these techniques need not be confined to the field of quantitative economics; I have also used some of these activities in optional units at levels 2 and 3 (e.g. International Trade Theory) and have found that, with minor adjustments, they can be very successful.
Related pages | {"url":"http://www.economicsnetwork.ac.uk/showcase/taylor_seminars","timestamp":"2014-04-21T12:35:30Z","content_type":null,"content_length":"19628","record_id":"<urn:uuid:4dbfc0bf-b419-47a9-b0e4-cba2fd49fd41>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
Finite Field
If m is the number of elements in a finite field, then a^(m-1) = 1 for any element a. Is there a way to prove this without using Fermat's Little Theorem?
How about Lagrange's theorem? The multiplicative group of the field has m-1 elements. Hence the multiplicative order of a divides m-1, so that k*ord(a) = m-1 for some k. Then, as a^ord(a) = 1, we have
a^(m-1) = a^(k*ord(a)) = (a^ord(a))^k = 1^k = 1.
What you said is not strictly true:
if a = 0, then 0^(m-1) = 0. It is only true for non-zero a in F.
Fermat's Little Theorem, is actually a special case of a more general theorem, Lagrange's theorem for finite groups, which states:
the order of an element of a group, divides the order (cardinality) of the group.
here, since we have a field F, the non-zero elements F* form a group w.r.t. multiplication. this group has m-1 elements, so the order
of any a is a divisor of m-1, we can call it d, for example, so m-1 = kd.
but then a^(m-1) = a^(kd) = (a^d)^k = 1^k = 1.
the reason this happens, is that any subset S of F* closed under multiplication induces the following equivalence on F*:
a ~ b iff ab^-1 is in S. the equivalence classes are all of the form Sa = {sa : s in S, a fixed in F*}. furthermore, since
fields are cancellative, the correspondence s-->sa is a bijection, meaning all such sets Sa have the same size.
this means in particular, that |S| divides |F*| = m-1. clearly, the set of all non-negative powers of a is such a set S.
Okay, thanks. And if I was going to use Fermat's little theorem, we know b^n ≡ b (mod n) for n prime. So is it also true that b^(n^t) ≡ b for n prime? Because I was thinking b^(n^t) = (b^n)^(n^(t-1)) ≡ b^(n^(t-1)) by F.L.T. ≡ b^(n^(t-2)) ≡ ... ≡ b. Is that right, or did I mess up somewhere?
A finite field is not necessarily Zp. Fermat's little theorem is actually a theorem about integers, which may be phrased as a statement about the integers modulo p (Zp).
I can make little sense of what you wrote. In the field GF(9), which has the elements {0,1,2,x,2x,x+1,x+2,2x+1,2x+2} (and x^2 = 2), it is not true that b^(n^t) = b for n prime.
For example, consider n = 2, t = 1, and b = x. Then b^(n^t) = x^2 = 2, which is certainly NOT x.
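Both claims in the thread are easy to check by brute force. Below is a minimal sketch modelling GF(9) as Z_3[x]/(x^2 - 2); the pair representation and helper names are my own:

```python
# GF(9) modelled as Z_3[x]/(x^2 - 2): a pair (a, b) stands for a + b*x,
# with a, b in {0, 1, 2} and the reduction x^2 = 2.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = (ac + 2bd) + (ad + bc) x,  since x^2 = 2
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)

def power(p, n):
    r = (1, 0)  # the multiplicative identity 1
    for _ in range(n):
        r = mul(r, p)
    return r

nonzero = [(a, b) for a in range(3) for b in range(3) if (a, b) != (0, 0)]
# Lagrange: every nonzero element satisfies e^(m-1) = e^8 = 1 ...
assert all(power(e, 8) == (1, 0) for e in nonzero)
# ... but e^(n^t) = e can fail: x^2 = 2, not x (the n = 2, t = 1, b = x example)
assert power((0, 1), 2) == (2, 0)
print("all", len(nonzero), "nonzero elements of GF(9) satisfy e^8 = 1")
```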
GEOL 250: Mechanical Properties of Rocks
Lecture 7 Mechanical Properties of Rocks
Rock properties: mass density, porosity, and permeability
Mohr's circle
Elasticity of rocks
Rock properties: strength
Engineering classification of intact rocks
Rock mass properties
Mechanics refers to the response of materials to applied loads. For engineering interest, Earth materials can be divided as rocks, soils, and fluid. Rocks are important building materials and they
provide foundations to many engineering structure. This lecture deals with mechanical properties of rocks. Derivations and examples will be given during the lecture.
Rock properties: density, porosity, and permeability
specific gravity: the ratio between the mass and that of equal volume of water (i.e. the ratio of mass density and water density).
unit weight: gamma = (specific gravity) x (unit weight of water)
unit weight of water = 62.4 pcf (lb/ft^3)
for most rocks, gamma = 120 to 200 pcf.
porosity n: measurement of the relative amount of void space (containing liquids and or gases).
porosity=(void space)/(total volume)
permeability: measurement of the rate at which fluids will flow through a saturated materials. We will discuss the measurements of permeability later in the lecture of Groundwater.
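As a quick numeric illustration of the unit-weight and porosity definitions above (the sample values are made up):

```python
WATER_UNIT_WEIGHT_PCF = 62.4  # unit weight of water, lb/ft^3

def unit_weight(specific_gravity):
    """gamma = (specific gravity) x (unit weight of water)."""
    return specific_gravity * WATER_UNIT_WEIGHT_PCF

def porosity(void_volume, total_volume):
    """n = (void space) / (total volume)."""
    return void_volume / total_volume

gamma = unit_weight(2.7)     # e.g., a typical granite specific gravity
n = porosity(0.05, 1.0)      # 0.05 ft^3 of voids in 1 ft^3 of rock
print(f"unit weight = {gamma:.1f} pcf, porosity = {n:.0%}")
```

The result (about 168.5 pcf) falls inside the 120-200 pcf range quoted for most rocks.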
Stress is force per unit area acting on a plane at any point within a material. There are three types of stresses:
compressive stress: equal forces that act towards a point from opposite directions
tensile stress: equal forces that pull away from each other.
shear stress: equal forces that act in opposite directions but
offset from each other to act as a couple.
Principal stresses (chap 8, p.133)
On any plane within a solid, there are stresses acting normal to the plane (either compressional or tensional, called normal stresses) and shear stresses acting parallel to the plane. At any point
within a solid, it is possible to find three mutually perpendicular principal stresses which are maximum, intermediate, and minimum. On the planes perpendicular to the principal stresses (called
principal planes), there are no shear stresses.
Mohr's circle (chap 8, p.134)
Suppose we wish to measure stresses (both normal and shear) acting on any given plane besides the principal stresses. In general, this is a three dimensional problem and can be done using
mathematical tensors and vectors.
In a special case where we can assume that the intermediate and minimum stresses are equal (for example below the ground surface), we can work in two dimensions. Mohr's circle provides a simple,
graphical method to find the normal and shear stresses on inclined planes from principal planes using the maximum and minimum principal stresses.
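The notes describe Mohr's circle only graphically. The equivalent closed-form relations (standard textbook formulas, not given in the notes) for the normal and shear stress on a plane whose normal makes an angle theta with the major principal stress direction are sigma_n = (sigma1 + sigma3)/2 + (sigma1 - sigma3)/2 * cos(2*theta) and tau = (sigma1 - sigma3)/2 * sin(2*theta):

```python
import math

def stresses_on_plane(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal is at theta_deg
    to the sigma1 direction (Mohr's-circle relations)."""
    theta = math.radians(theta_deg)
    center = (sigma1 + sigma3) / 2.0   # center of Mohr's circle
    radius = (sigma1 - sigma3) / 2.0   # radius of Mohr's circle
    sigma_n = center + radius * math.cos(2 * theta)
    tau = radius * math.sin(2 * theta)
    return sigma_n, tau

# e.g., sigma1 = 100 psi, sigma3 = 40 psi, plane at 30 degrees
sn, t = stresses_on_plane(100.0, 40.0, 30.0)
print(f"normal = {sn:.1f} psi, shear = {t:.1f} psi")   # -> 85.0 psi, 26.0 psi
```

The center and radius of the circle are exactly the `center` and `radius` terms in the code, which is why the graphical and algebraic methods agree.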
The application of stress to a material causes it to deform. The amount of deformation is called strain.
axial strain: deformation along the direction of loading dL/L.
lateral strain: the lateral extension perpendicular to the direction of loading, dB/B.
Poisson's ratio = (lateral strain)/(axial strain).
Elasticity of rocks
Some of the deformation of a rock under stress will be recovered when the load is removed. The recoverable deformation is called elastic and the nonrecoverable part is called plastic deformation.
Plastic behavior involves continuous deformation after some critical value of stress has been reached.
Commonly, the elastic deformation of rock is directly proportional to the applied load. The ratio of the stress and the strain is called modulus of elasticity.
Rock properties: strength
Rock strength indicates the level of stress needed to cause failure.
compressive strength is the compressive stress required to break a rock sample. The unit is pounds per square inch (psi) or newtons per square meter (pascals).
unconfined (uniaxial) compression test:
the rock sample is unconfined at its side while the load is applied vertically until failure occurs. In this case, the compressive strength is called unconfined compressive strength (uniaxial
compressive strength).
confined compress test:
For design of underground structure (such as tunnels, mining, waste repository), we need to take into account of the confining pressure at depth. This is done at laboratory by so-called triaxial
compression test. The failure curve constructed using Mohr's circle after a series of tests gives the shear strength (cohesion) and internal friction (angle of shearing resistance) of the rock (or
soil) sample. This will be further discussed on Mohr-Coulomb failure criterion in the next lecture on Soil Mechanics.
Engineering classification of intact rocks
The engineering classification of intact rocks is based on the uniaxial compressive strength and the modulus of elasticity, developed by Deere and Miller. Intact rock is internally continuous,
intact, and free from weakness planes such as jointing, bedding, and shearing.
Rocks are subdivided into five strength categories: A through E, for very high to very low levels of strength.
Rock classification also involves the modulus of elasticity. More specifically, the modulus ratio is used, which is the ratio of the modulus of elasticity to the unconfined compressive strength.
Three modulus ratio categories are H (high) for >500, M (medium) for 200-500, and L (low) for <200.
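The modulus-ratio thresholds above translate directly into a small helper. The input numbers are illustrative; the numeric A-E strength thresholds are not given in the notes, so only the H/M/L ratio classification is coded:

```python
def modulus_ratio_category(elastic_modulus, ucs):
    """Modulus ratio = E / (unconfined compressive strength), same units;
    thresholds H > 500, M 200-500, L < 200 as in the Deere-Miller scheme."""
    ratio = elastic_modulus / ucs
    if ratio > 500:
        return ratio, "H"   # high modulus ratio
    if ratio >= 200:
        return ratio, "M"   # medium
    return ratio, "L"       # low

ratio, cat = modulus_ratio_category(5.6e6, 16000.0)   # illustrative values, psi
print(f"modulus ratio = {ratio:.0f} -> category {cat}")   # -> 350 -> M
```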
Rock mass properties
The strength and deformation properties of intact rocks cannot be directly applied to the overall rock mass in the field situation. The strength and behavior of a rock mass are largely controlled by
the nature of its discontinuities or weakness planes. Discontinuities act to lower the strength of the rock mass. The rock mass tends to fail along existing weakness planes rather than develop new
fracture within intact solid rocks.
Examples of rock mass discontinuities include:
sedimentary: bedding planes, sedimentary structure (mud cracks, ripple marks, cross beds, etc.)
structural: faults, joints, fissures
metamorphic: foliation
igneous: cooling joints, flow contacts, intrusive contacts, dikes, sills | {"url":"http://ijolite.geology.uiuc.edu/04FallClass/geo250/lectures/lect7_rockmech.html","timestamp":"2014-04-19T14:28:35Z","content_type":null,"content_length":"7062","record_id":"<urn:uuid:d0ffe17a-bf96-4822-84ec-083da1bdab03>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Miami, FL Prealgebra Tutor
Find a West Miami, FL Prealgebra Tutor
...He is an active substitute teacher and has a great ability to help kids understand math. You will see an immediate improvement once he begins with your student. He has 4 kids of his own (ages
24, 21, 17 and 15) and has worked with students as a tutor, youth worker, coach and substitute teacher for over 20 years.
3 Subjects: including prealgebra, geometry, algebra 1
...I know about directors and directors' style, actors, classics from the 1930s and beyond. Having taught Precalculus for many years, I have had great results with my students, one on one. Even
the ones having difficulties in the toughest schools and at the highest levels did eventually very well.
48 Subjects: including prealgebra, reading, statistics, chemistry
I am a senior in college majoring in Biology with minors in Mathematics and Exercise Physiology. In the past I have tutored students ranging from elementary school to college in a variety of
topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others ...
30 Subjects: including prealgebra, reading, biology, algebra 1
...Over the summer, I will be teaching a graduate level statistics course for online MPA students before I start as a professor at Miami in the fall. I am happy you took the time to check out my
profile. Just to tell you a bit more about my approach - I specialize in helping students of all ages understand difficult concepts using a friendly and personally tailored tutoring approach.
16 Subjects: including prealgebra, writing, statistics, geometry
Hello there! With more than 15 years acting as a tutor, I tend to gravitate more to Science-based classes, specially Math (Algebra, Geometry, Trigonometry, Calculus I, II and III, Differential
Equations, Statistic, SAT Math, etc..) and Physics. I enjoy tutoring as it allows me to help students to ...
13 Subjects: including prealgebra, chemistry, physics, calculus | {"url":"http://www.purplemath.com/West_Miami_FL_prealgebra_tutors.php","timestamp":"2014-04-18T11:16:16Z","content_type":null,"content_length":"24377","record_id":"<urn:uuid:5a926701-9dcb-4f52-bd8f-7f98e9d09711>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
if it is 10:30am mst, what time is it in Kabul
You asked:
if it is 10:30am mst, what time is it in Kabul
Mountain Standard Time
10:00:00pm Afghanistan Time, the time zone used in Afghanistan (UTC+4:30)
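The conversion behind the answer is plain offset arithmetic: MST is UTC-7 and Afghanistan Time is UTC+4:30, so 10:30 am MST is 17:30 UTC, which is 22:00 (10:00 pm) in Kabul. A minimal Python sketch:

```python
from datetime import datetime, timedelta, timezone

MST = timezone(timedelta(hours=-7))             # Mountain Standard Time, UTC-7
AFT = timezone(timedelta(hours=4, minutes=30))  # Afghanistan Time, UTC+4:30

t = datetime(2014, 4, 17, 10, 30, tzinfo=MST)   # 10:30 am MST, arbitrary date
kabul = t.astimezone(AFT)
print(kabul.strftime("%H:%M"))                  # -> 22:00, i.e. 10:00 pm
```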
Here's the question you clicked on:
Difference of Perfect Squares example please
Merrillville Prealgebra Tutor
Find a Merrillville Prealgebra Tutor
I have experience with kindergarten through high-school, in a variety of subjects. I have been substituting for over 7 years and am on preferred lists at several schools. My favorite subjects are
language arts and math. I am a very patient person and really enjoy teaching.
16 Subjects: including prealgebra, reading, English, algebra 1
...My teaching style for tutoring is, 1) establish the student's current level of expertise in the subject, 2) bring the student to a level of proficiency in the basics of the subject if
necessary, 3) teach the student how to apply what he or she has learned, 4) provide the student with guidance in ...
11 Subjects: including prealgebra, physics, French, photography
...I started tutoring back in my early years of high school in areas such as math, science, and in study skills of other areas such as history, English, and other miscellaneous courses. There have
been many tutoring opportunities handed to me because they know I am fit for the job. I have tutored ages K-12 as well as college students.
25 Subjects: including prealgebra, chemistry, physics, calculus
...I have extensive experience working with students who are struggling to meet grade-level expectations as well as working with gifted students who require additional challenges outside of their
classroom curriculum. I truly enjoy helping my younger students master the fundamentals of math and lit...
38 Subjects: including prealgebra, Spanish, reading, statistics
...I feel that I am a very patient tutor, and can greatly help children learn material that they are struggling in. I do not give them the answers, but rather push them in the right direction, and
make sure that it sticks with follow-up questions. I am also a fun person that can make learning enjoyable for anyone, usually through metaphors that will resonate with children.
28 Subjects: including prealgebra, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/Merrillville_Prealgebra_tutors.php","timestamp":"2014-04-20T01:55:44Z","content_type":null,"content_length":"24258","record_id":"<urn:uuid:e71459b2-112f-4802-b92d-6d9e985ca845>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
A space-time coding approach for RFID MIMO systems
This paper discusses the space-time coding (STC) problem for RFID MIMO systems. First, a mathematical model for this kind of system is developed from the viewpoint of signal processing, which makes
it easy to design the STC schemes. Then two STC schemes, namely Scheme I and Scheme II, are proposed. Simulation results illustrate that the proposed approaches can greatly improve the symbol-error
rate (SER) or bit-error rate (BER) performance of RFID systems, compared to the non space-time encoded RFID system. The SER/BER performance for Scheme I and Scheme II is thoroughly compared. It is
found that Scheme II with the innate real-symbol constellation yields better SER/BER performance than Scheme I. Some design guidelines for RFID-MIMO systems are pointed out.
Radio frequency identification (RFID) is a contactless, usually short-distance, wireless technique for transmitting and receiving data to identify objects. It is believed that RFID can, in the near future, substitute for the widely used optical barcode technology due to the limitations of the latter: i) a barcode reader cannot read a non-line-of-sight (NLOS) tag; ii) each barcode must be handled individually to be read; and iii) a barcode has limited information-carrying ability. Currently, a single antenna is usually used at the reader and tag of the RFID products in the market. However, the RFID research community has recently started to pay attention to using multiple antennas at either the reader side or the tag side [1,2]. The reason is that using multiple antennas is an efficient approach to increasing the coverage of RFID, solving the NLOS problem, improving the reliability of data communications between the reader and tag, and thus further extending the information-carrying ability of RFID. Besides, advanced multiple-input multiple-output (MIMO) antenna techniques can be used to solve the problem of detecting multiple objects simultaneously; see, e.g., [3].
There have been several studies on RFID-MIMO. In general, these studies are scattered across different topics, and it is difficult to find a logical thread among them; therefore, the state of the art will be reviewed largely in chronological order. The work [4] first showed the idea of using multiple antennas at the reader for both transmission and reception. In [1], the authors first proposed to use multiple antennas at the tag and showed the performance gain obtained by equipping multiple antennas at the reader (for both transmission and reception) and the tag. In [5], the multipath fading for both the single-antenna RFID channel and the RFID-MIMO channel was measured and compared. The improvement in fading depth obtained by using MIMO can be clearly seen from the measured power distribution (see, e.g., Figure 10 therein). In [6], the authors first proposed to apply the Alamouti space-time coding (STC) technique, now popularly used in wireless communication systems, to RFID systems. The reference [6] presented a closed-form expression for the bit-error rate (BER) of an RFID system with non-coherent frequency-shift-keying modulation, multiple transmit antennas at the tag, and a single transmit/receive antenna at the reader, where double Rayleigh fading is assumed on the forward and backward links. In [7], the interrogation range of ultrahigh-frequency-band (UHF-band) RFID with multiple transmit/receive antennas at the reader and a single antenna at the tag was analyzed, where the forward and backward channels are assumed to follow the Nakagami-m distribution. In [3], the blind source separation technique in antenna arrays was used to solve the multiple-tag identification problem, where the reader is equipped with multiple antennas. The work [8] applied the maximal ratio combining technique to the RFID receiver, where the channel of the whole chain, including the forward link, the backscattering coefficient, and the backward link, was estimated and used as the weighting coefficient for the combining branches. In [9], a prototype for RFID-MIMO in the UHF band was reported. In [10], both MIMO-based zero-forcing and minimum-mean-square-error receivers were used to deal with the multiple-tag identification problem, where the channel of the whole chain was estimated, similarly to the approach in [8]. It is reported in [11] that four antennas were fabricated on a given fixed surface at the reader. The measurement results showed that an increase of 83% in area gave a 300% increase in the power available to turn on a given tag load, and that the operational distance of the powered device increased to 100 cm with the four-antenna setup from roughly 40 cm with the single-antenna setup. The result in [11] suggests that the MIMO technique can be very promising for RFID technology.
In the aforementioned reports, the Alamouti STC technique has been shown to extend to RFID-MIMO systems. However, it applies only to the case where the tag has two antennas. Since implementing four antennas at the tag has been shown to be feasible in experiments, it is necessary to investigate the applicability of other STC techniques to RFID-MIMO systems. In this paper, we study how to apply the real orthogonal design (ROD) technique, proposed by Tarokh et al. in [12], to RFID-MIMO systems. This technique is suitable for tags equipped with up to eight antennas, which should be sufficient for RFID technology in the near future.
The paper is organized as follows. A modified MIMO-RFID channel model will be developed in Section “Channel Modeling of RFID MIMO Wireless Systems”. The ROD in [12] and the companion of the ROD
(CROD) proposed in [13] are briefly introduced in Section “A Space-Time Coding Scheme for RFID MIMO Systems”. Two space-time decoding approaches for RFID MIMO systems will be discussed in Section
“Two Space-Time Decoding Approaches for RFID MIMO Systems”. Section “Simulation Results” presents the simulation results and discussions, and Section “Conclusions” concludes the paper.
Channel Modeling of RFID MIMO Wireless Systems
In this paper our discussion is confined to narrowband RFID systems. The block diagram of the RFID MIMO system is illustrated in Figure 1, where both the reader and tag are equipped with multiple antennas.
Figure 1. A block diagram of the RFID MIMO system.
In terms of equation (1) of [1], the narrowband RFID MIMO wireless channel can be expressed as
$$\mathbf{y}(t)=\mathbf{H}^b\,\mathbf{S}(t)\,\mathbf{H}^f\,\mathbf{x}(t)+\mathbf{n}(t),\qquad (1)$$
where the reader and tag are equipped with $N_{\rm rd}$ and $N_{\rm tag}$ antennas, respectively, $\mathbf{x}$ (an $N_{\rm rd}\times 1$ vector) is the transmitted signal at the reader, $\mathbf{y}$ (an $N_{\rm rd}\times 1$ vector) is the received signal at the reader, $\mathbf{n}$ is the receiver noise, $\mathbf{H}^f$ (an $N_{\rm tag}\times N_{\rm rd}$ matrix) is the channel matrix from the reader to the tag, $\mathbf{H}^b$ (an $N_{\rm rd}\times N_{\rm tag}$ matrix) is the channel matrix from the tag to the reader, and $\mathbf{S}$ is the backscattering matrix, which is also called the signaling matrix. It is assumed that the $N_{\rm rd}$ antennas at the reader are used for both reception and transmission. This assumption is just for brevity of notation; it is straightforward to extend the approach presented in this paper to the case where the reader has different numbers of antennas for reception and transmission. The channels $\mathbf{H}^f$ and $\mathbf{H}^b$ are assumed to be complex Gaussian distributed, $\mathbf{H}^f$ and $\mathbf{H}^b$ are mutually independent, and all the entries of either $\mathbf{H}^f$ or $\mathbf{H}^b$ are independent of each other. It is also assumed that $\mathrm{Re}(\mathbf{H}^f)$, $\mathrm{Im}(\mathbf{H}^f)$, $\mathrm{Re}(\mathbf{H}^b)$, $\mathrm{Im}(\mathbf{H}^b)$ are mutually independent and identically distributed.
In the most general case, where the modulated backscatter signals at the tag are transferred between the antennas, the signaling matrix $\mathbf{S}$ is a full matrix [1]. However, no application of the full signaling matrix has been identified so far [1]. Therefore, we consider the situation where the RF tag antennas modulate backscatter with different signals and no signals are transferred between the antennas. In this case, the signaling matrix is a diagonal matrix [1]:
$$\mathbf{S}(t)=\operatorname{diag}\{\Gamma_1(t),\Gamma_2(t),\ldots,\Gamma_{N_{\rm tag}}(t)\},\qquad (2)$$
where $\Gamma_i(t)$ is the backscattering coefficient of the ith antenna at the tag. The ith tag identity (ID) is contained in the coefficient $\Gamma_i(t)$.
Note that in the RFID system, the transmitted signal $\mathbf{x}$ is mainly used to carry the transmit power, while the information data (i.e., the tag ID) is carried by $\mathbf{S}$. Therefore, the central issue for the RFID receiver is to decode $\Gamma_1,\ldots,\Gamma_{N_{\rm tag}}$ from the received signal. Next we transform equation (1) into the conventional form used in signal processing. Let us define
$$\mathbf{s}=[\Gamma_1,\Gamma_2,\ldots,\Gamma_{N_{\rm tag}}]^T,\qquad \tilde{\mathbf{H}}=\mathbf{H}^b\operatorname{diag}\{\mathbf{H}^f\mathbf{x}\}.$$
Then equation (1) can be rewritten as
$$\mathbf{y}=\tilde{\mathbf{H}}\,\mathbf{s}+\mathbf{n}.\qquad (3)$$
Equation (3) converts the original system model (1) to the conventional form in signal processing: the signal to be estimated or decoded is packed in a vector, whose entries are independent of each
A Space-Time Coding Scheme for RFID MIMO Systems
Let us first review the real orthogonal design proposed by Tarokh et al. in [12].
Definition 1
[12] A real orthogonal design of size m is an m×k matrix with entries 0, , , …, such that , where D is a diagonal matrix with diagonal entries being , i=1,2,…,m, and the coefficients l[i1],
l[i2], …, l[ik]are strictly positive integers.
In some cases, we need to explicitly specify the arguments of . In these cases, the ROD will be denoted as ( , …, , where , …, are the arguments of .
The construction of general RODs can be found in [12]. For completeness, the RODs for the cases of m=2,3,4, denoted as , , respectively, are listed as follows:
For the construction of , readers are referred to [12].
To formulate the decoding algorithm for the ROD, let us define the companion of the ROD as follows.
Definition 2
A companion of a real orthogonal design , denoted as , is a matrix satisfying the following equation
For the RODs as shown in equations (4)-(6), their CRODs are
For a given ROD, the calculation of its CROD is given in [13].
For the CRODs as defined in equations (7)-(9), it can be easily shown that the following equality
holds true, where the superscript ^T stands for the transpose (without conjugate!) of a matrix or vector. As can be seen from the discussion in Section “Simulation Results”, one can remove the
inter-symbol interference (ISI) by using the above property of CROD, but the diversity gain thus obtained from the multiple channels is limited when the channel is complex instead of real.
To find the decoding scheme, let us consider the property of , where the superscript ^H stands for the conjugate transpose of a matrix or vector. We have
where the entry marked with ★ means that its value can be inferred from the value of its corresponding symmetric entry. It can be checked that the structural property as shown in equations (11)-(13)
also holds true for higher dimensional CRODs.
Using RODs and the corresponding CRODs, a general space-time encoding scheme and two decoding approaches for RFID-MIMO systems can be developed as follows.
Consider the equivalent RFID-MIMO channel (3). Denote by T[f] a symbol period. Suppose that the channels of both forward and backward links do not change with time during a coding block period KT[f].
The transmit signal x at the reader is also fixed during one coding block period KT[f]. Therefore, the equivalent composite channel will not change with time when we only consider the signal
processing for one coding block. Let us define
Let (of dimension N[tag]×K) be a ROD in variables , , …, , where , , …, are the symbols to be transmitted at the N[tag] transmit antennas in one STC frame. Define
where w(t)is the baseband waveform of the transmit signal at the tag. The transmitted signal across the N[tag] transmit antennas at the tag can be expressed as
where E[0] is the total power used for the transmission of one symbol per time slot. The scaling coefficient is to normalize the overall energy consumption per time slot at the tag side to be E[0]
no matter how many antennas are deployed at the tag.
Two Space-Time Decoding Approaches for RFID MIMO Systems
The received signal after sampling can be expressed as
where is the receiver noise (a matrix) at the corresponding time instant. Notice that is of dimension N[rd]×K, since one frame of the transmitted signal contains the pulses of K time slots.
Denote by [M][j] the jth row of a matrix M. Let us consider the jth row of the matrix which is the received signal at the jth antenna of the reader for the time instants 1, …, K respectively. Let
Since the transmitted signal is space-time coded, the entries in [y][j] should be related with each other somehow. Right-hand multiplying both sides of equation (15) with the matrix , , we have
From equation (17) we can see that the transmitted symbols are decoupled from each other in the processed signal z[j] through the processing algorithm (16). However, it is not efficient to decode the symbols directly from (17), since the complex channel makes the phase of the equivalent channel change randomly over [0, 2π]. Define
Multiplying both sides of (17) by will remove the phase ambiguity of the equivalent channel. This gives
To collect all the diversities provided by multiple receive antennas at the reader, we sum up all ’s. This gives
The symbols can be easily decoded from equation (20).
For the convenience of exposition in the next section, we call the encoding and decoding scheme discussed above Scheme I.
Another decoding scheme (hereafter it is referred to as Scheme II) is to exploit the property of the matrix , as shown in equations (11)-(13). Right-hand multiplying both sides of equation (15) with
the matrix , we have
From equations (21) and (11)-(13) we can see that, if the symbols are real, the symbol to be decoded, say for some k, and the ISI caused by other symbols, are projected into different subspaces in
the complex plane: the desired signal is in the real subspace, while the ISI is in the imaginary subspace. Therefore, a very simple decoding method for this case works in the following way: From kth
entry of u[j] (denoted as u[j,k]), get the real part of u[j,k] [denoted as Re (u[j,k])], and then decode in terms of Re (u[j,k]).
The diversities provided by multiple receive antennas at the reader can be collected in the following way:
Simulation Results
In this section, we investigate the symbol-error rate (SER) or bit-error rate (BER) performance of both Schemes I and II. In Scheme I, the quadrature phase shift keying (QPSK) modulation is used and
the constellation of transmitted symbols is . In Scheme II, the binary phase shift keying (BPSK) modulation is used and the constellation of transmitted symbols is ±1. Therefore, the SER in Scheme
II reduces to BER. At the transmitter of the reader, the signal x takes the form of a random vector whose entry is uniformly distributed among . It is seen that x is of unity power. Each entry of
the channels H^f and H^b is of mean zero and variance unity.
In the figures to be shown, the signal-to-noise power ratio (SNR) is defined as the ratio between the symbol energy E[0] and the variance of each entry of the noise vector.
Figure 2 shows the SER of Scheme I for different cases: Figures 2(a) and (b) illustrate how the SER changes with N[tag] for fixed N[rd], i.e., when N[rd]=1 and 4 respectively; while Figures 2(c)
and (d) demonstrate how the SER changes with N[rd] for fixed N[tag], i.e., when N[tag]=1 and 4 respectively.
Figure 2. SER of RFID MIMO systems for Scheme I with QPSK modulation.(a)SER vs N[tag] for N[rd]=1(b)SER vs N[tag] for N[rd]=4(c)SER vs N[rd] for N[tag]=1(d) SER vs N[rd] for N[tag]=4.
Figure 3 shows the BER of Scheme II for different cases: Figures 3(a) and (b) illustrate how the BER changes with N[tag] for fixed N[rd], i.e., when N[rd]=1 and 4 respectively; while Figures 3(c)
and (d) demonstrate how the BER changes with N[rd] for fixed N[tag], i.e., when N[tag]=1 and 4 respectively.
Figure 3. BER of RFID MIMO systems for Scheme II with BPSK modulation.(a)BER vs N[tag] for N[rd]=1(b)BER vs N[tag] for N[rd]=4(c)BER vs N[rd] for N[tag]=1(d)BER vs N[rd] for N[tag]=4.
From Figures 2 and 3 the following phenomena can be observed:
Claim 1
Comparing the dashed curves, which corresponds to the performance of the non space-time encoded RFID system with single antenna at both reader and tag sides, and the solid curves in Figures 2(b),
(d), and Figures 3(b), (d), we see that deploying multiple antennas at both reader and tag can greatly improve the SER/BER performance of RFID systems.
Claim 2
When N[rd] is fixed to be one, increasing N[tag] considerably decreases the BER of the system in Scheme II, but only marginally decreases the SER of the system in Scheme I. For example, when SNR=18
dB and N[rd]=1, the BER of Scheme II decreases from 1.6×10^−2 at N[tag]=1 to 2.0×10^−3 at N[tag]=2 and 8.8×10^−5 at N[tag]=4, respectively. For the same SNR and N[rd], the SER of Scheme
I decreases from 4.7×10^−2 at N[tag]=1 to 2.9×10^−2 at N[tag]=2 and 3.0×10^−2 at N[tag]=4 respectively. The reason for this phenomenon is that the channel diversity provided by N[tag]
antennas at the tag side is harvested by Scheme II [as seen from equations (11)-(13)], but not harvested by Scheme I [as seen from equation (17)].
Claim 3
When N[tag] is fixed to be one, increasing N[rd] noticeably and monotonically decreases the SER or BER of the system. This phenomenon can be clearly seen from Figure 2(c) and Figure 3(c). The reason
is that only the array gain is provided by the system when N[tag]=1 and it is indeed collected by both Scheme I and Scheme II. Due to the double Rayleigh fading channel, the system performance
cannot be improved conspicuously by only exploiting this array gain.
Claim 4
When N[rd] (or N[tag]) is fixed and greater than one, increasing N[tag] (or N[rd]) greatly decreases the SER or BER of the system, especially for Scheme II. For example, when SNR=18 dB and N[tag]=
4, the SER of Scheme I decreases from 3.0×10^−2 at N[rd]=1 to 2.7×10^−3 at N[rd]=2 and 7.5×10^−5 at N[rd]=4, respectively. For the same SNR and N[tag], the BER of Scheme II decreases
from 8.8×10^−5 at N[rd]=1 to 1.2×10^−6 at N[rd]=2 and 2.4×10^−8 at N[rd]=4 respectively. To achieve the BER=8.8×10^−5 for the case of Scheme II and N[tag]=4, the SNR gain is about
7.5 dB and 10 dB, respectively, by deploying N[rd]=2 and N[rd]=4 antennas at the reader, compared to the single-antenna setup at the reader. On the other side, to achieve the BER=1.3×10^−3 for
the case of Scheme II and N[rd]=4, the SNR gain is about 9 dB and 13.5 dB, respectively, by deploying N[tag]=2 and N[tag]=4 antennas at the tag, compared to the single-antenna setup at the tag.
This is dramatic improvement for the system performance.
Claim 5
Scheme II yields much better SER performance than Scheme I. There are two reasons. The first, which is obvious, is that different symbol constellations are used in Schemes I and II: in the above simulations, one symbol in Scheme I carries two bits of information, while one symbol in Scheme II carries only one bit. The second, which is somewhat subtle, is that the diversity gain harvested by Scheme I is lower than that harvested by Scheme II, even though Scheme II throws away the signal in the other half of the signal space. This can be seen by
comparing equations (11)-(13) and (22) (for Scheme II) and equations (17), (18) and (20) (for Scheme I). For Scheme I, it is seen from (17) and (18) that the N[tag] independent channels are not
coherently summed. In (20), the N[rd] independent summed-channels are further summed. Thus Scheme I yields a diversity order of N[rd] and the system-inherited diversity order N[tag] is sacrificed.
For Scheme II, it is seen from (11)-(13) that the N[tag] independent channels are first coherently summed, yielding a diversity order of N[tag]. From (22), the N[rd] independent summed-channels are
further summed, yielding a diversity order of N[rd]. Thus a total diversity order of N[rd]×N[tag] is obtained in Scheme II.
Claim 6
Comparing Figure 2 and Figure 3, we can conclude that it is better to deploy as many antennas as possible at the reader. At the least, the number of antennas at the reader side should not be less than the number of antennas at the tag side. In this way, the full channel diversity generated by multiple antennas at the tag can be maximally exploited.
It may be argued that it is not fair to compare the SER performance of Scheme I and Scheme II, since the former uses QPSK modulation, while the latter uses BPSK modulation. To make the comparison
complete, the BER performance of Scheme I with BPSK modulation is shown in Figure 4 for the corresponding cases. Figure 2, Figure 4 and Figure 3 show that the BER performance of Scheme I is much
worse than that of Scheme II, even though the BER of Scheme I with BPSK modulation is lower than the SER of Scheme I with QPSK modulation for the same configuration of antenna numbers at the reader
and tag. By comparing Figure 4 and Figure 3 we can see that Claims 1-6, obtained from the comparison between Figure 2 and Figure 3, also hold true qualitatively.
Figure 4. BER of RFID MIMO systems for Scheme I with BPSK modulation.(a)BER vs N[tag] for N[rd]=1(b) BER vs N[tag] for N[rd]=4(c) BER vs N[rd] for N[tag]=1(d) BER vs N[rd] for N[tag]=4.
From the above phenomena, the following conclusions can be drawn: if the required data rate is not high, it is better to use real-symbol constellation for the transmitted symbols at the tag and
correspondingly to use the Scheme II decoding policy at the reader's receiver; subject to the cost constraint of the system, it is better to deploy multiple tag antennas and reader antennas, and the
number of reader antennas should be at least equal to the number of tag antennas.
It is interesting to compare the ROD-based STC with the Alamouti STC. Figure 5 shows the comparison. It can be seen that Scheme II and the Alamouti STC yield the same BER performance, and both are better than Scheme I. This is because both Scheme II and the Alamouti STC collect all the available channel diversities, while Scheme I does not.
Figure 5. A comparison among Scheme I, Scheme II and the Alamouti STC. For the curves marked with “Scheme I”, “Scheme II” and “Alamouti”, N[tag]=2 and N[rd]=1.
Finally, let us compare the complexity of Scheme I and Scheme II. Both Scheme I and Scheme II perform the same processing, as shown in equations (4)-(6), for the transmitted symbols at the tag. As
seen from (4)-(6), the symbol processing at the tag is quite simple: only the sign of the symbols to be transmitted needs to be changed at some time slots for some antennas. For the processing of a
block of space-time decoding at the reader, Scheme I needs N[rd](K^2 + K + N[tag]) complex multiplications and N[rd]K(K−1) + (N[rd]−1)K + N[rd](N[tag]−1)=N[rd](K^2 + N[tag]−1)−K complex
additions, and Scheme II needs N[rd]K^2 complex multiplications, N[rd]K(K−1) complex additions, and (N[rd]−1)K real additions. Therefore, the computational burden of Scheme II is a little less
than that of Scheme I. With regard to the hardware cost of the proposed STC technique, the main increase in the cost arises from the deployment of multiple antennas. The cost increase for the
involved signal processing unit is negligible at either tags or readers, since the space-time encoding is very simple, which can be easily dealt with by the embedded chip at tags, and the required
computational burden for the space-time decoding at readers is also negligible compared to the relatively strong computation power of readers.
In this paper, we have discussed the space-time encoding and decoding problem for RFID MIMO systems. First, a mathematical model for this kind of system is developed from the viewpoint of signal
processing, which makes it easy to design the STC schemes. Two STC schemes, namely Scheme I and Scheme II, are proposed. Simulation results illustrate that the proposed approaches can greatly improve
the SER/BER performance of RFID systems, compared to non space-time encoded RFID systems. Besides, the SER/BER performance for Scheme I and Scheme II is thoroughly compared and it is found that
Scheme II with the innate real-symbol constellation yields better SER/BER performance than Scheme I.
As is commonly assumed in the STC technique, the channel state information (CSI) is required to be available at the receiver side of the reader to adopt the technology of Scheme I and Scheme II. The
channel estimation problem for RFID systems has been discussed in [8,10], where a method for estimating the channel of the whole chain, including forward link, backscattering coefficient, and
backward link, is presented. However, to estimate the forward and backward channels H^f and H^b separately remains an open issue. On the other hand, if the CSI is also available at the transmitter
side of the reader, we can combine the design for the reader transmit signal and STC for the tag to further improve the system performance. For the first step towards the optimal transmit signal
design at the reader side, readers are referred to the reference [14].
Allen Iverson height in cm
You asked: Allen Iverson height in cm
• 183.0 centimetres
• 179.705 centimetres
Kotzig's Conjecture
Comment on Unsolved Problem Number Four
It is certainly true that the references supplied by Yair Caro claim to settle the conjecture in total, but a closer scrutiny of the Xing and Hu paper reveals an unfortunate gap in the argument. They
show in the paper, and as far as I know correctly and in my words, that if k>11 then any graph with a (2k-8)-cycle contains some pair of vertices joined by at least two paths of length (exactly) k.
In other words, no counterexample to the Kotzig conjecture has a cycle of length exactly 2k-8. They then contrast this with a lemma of Kotzig, a lemma incidentally cited without proof or reference to any printed proof, which, again as far as I can see correctly, claims that: Any P(k)-graph contains a 2n-cycle for some n with 3 ≤ n ≤ k-4.
From this they draw the incorrect conclusion that any P(k) graph has a (2k-8)-cycle, a contradiction that would prove the Kotzig conjecture for k>11. The conclusion is incorrect because Kotzig only
in essence states that there exists some 2n cycle in the interval, not that there exists such a cycle for all n. My reading of this may of course be incorrect, but if it is then it still remains to
fill in Kotzig's part of the proof. Last modified August 25, 2000, by S.C. Locke.
Quantum information can be negative
Based on work with M. Horodecki, and A. Winter
Even the most ignorant cannot know less than nothing. After all, negative information makes no sense. But, although this may be true in the everyday world we are accustomed to, negative information
does exist in the quantum world. Small objects such as atoms, molecules and electrons behave radically different than larger objects -- they obey the laws of quantum mechanics.
What could negative information possibly mean? In short, after I send you negative information, you will know less. Such strange situations can occur because what it means to know something is very
different in the quantum world. In the quantum world, we can know too much, and it is in these situations where one finds negative information. Negative information turns out to be precisely the
right amount to cancel the fact that we know too much.
While all this might appear to be very mysterious (not to mention, an abuse of the word know!), negative information can be put on a rigorous footing. I will try to explain how to do so here, in a
manner which I hope is accessible to all. This description is intended for those who have an interest in this subject, but may not have a background in quantum information theory. Most of this text
should be understandable by anyone willing to put in a bit of effort, and the rest should be understandable by anyone with some knowledge of quantum mechanics (or by anyone willing to put in a lot of
effort). So if there are parts which continue to be unclear after some time, please let me know [email J.Oppenheim (at) damtp.cam.ac.uk], and I can modify this text to make things clearer. An
executive summary of the result can be found at the end of this text here if you get impatient.
In order to concentrate on the main points, I have sacrificed some precision, and even some accuracy, so those with a modest background in physics should first try our recent article on negative
information, which is available in Nature here. It was written with Michal Horodecki, and Andreas Winter. Patrick Hayden has written a commentary on it here, and a short description of the contents
can be found in this piece by Andreas Trabesinger. The original version of the paper can be obtained at the pre-print arxiv here. It has a cartoon and George Orwell quotes, which were deemed
inappropriate for Nature. We will shortly finish a more technical account which has the full proofs, calculations and details.
Read the rest of this article | {"url":"http://www.ucl.ac.uk/oppenheim/negative-information.shtml","timestamp":"2014-04-16T16:24:15Z","content_type":null,"content_length":"8210","record_id":"<urn:uuid:f66a503a-2a8b-4df0-85db-b80fbc3ac0d2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lua programming/Expressions
As explained before, expressions are pieces of code that have a value and that can be evaluated. They cannot be executed directly (with the exception of function calls), and thus, a script that would
contain only the following code, which consists of an expression, would be erroneous:
3 + 5
-- The code above is erroneous because all it contains is an expression.
-- The computer cannot execute '3 + 5', since that does not make sense.
Code must be comprised of a sequence of statements. These statements can contain expressions which will be values the statement has to manipulate or use to execute the instruction.
Some code examples in this chapter do not constitute valid code, because they consist of only expressions. In the next chapter, statements will be covered and it will be possible to start writing
valid code.
To evaluate an expression is to compute it to find its value. This is true both in computer science and in mathematics, which also has expressions. In fact, one of the main differences between mathematics and computer science is that programming languages have statements and expressions, while mathematics only has expressions. When expressions are evaluated, they must be evaluated to a
value. Values will sometimes be numbers, sometimes be text and sometimes be any of many other data types, which is why they are said to have a type.
In Lua, and in programming in general, expressions will usually consist of one or more values with zero or more operators. Some operators can only be used with some types (it would be illogical, for
example, to try to divide text, while it makes sense to divide numbers). There are two kinds of operators: unary operators and binary operators. Unary operators are operators that only take one
value. For example, the unary - operator only takes one number as a parameter: -5, -3, -6, etc. It takes one number as a parameter and negates that number. The binary - operator, however, which is
not the same operator, takes two values and subtracts the second from the first: 5 - 3, 8 - 6, 4 - 9, etc.
It is possible to obtain a number's type as a string with the type function:
print(type(32425)) --> number
Numbers generally represent quantities, but they can be used for many other things. The number type in Lua works mostly in the same way as real numbers. Numbers can be constructed as integers,
decimal numbers, decimal exponents or even in hexadecimal. Here are some valid numbers:
• 3
• 3.0
• 3.1416
• 314.16e-2
• 0.31416E1
• 0xff
• 0x56
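As a quick sanity check (a sketch that works in any Lua 5.x interpreter), all of the notations above denote values of the same number type, and the alternative notations are just different spellings of the same values:

```lua
-- Every numeric literal below has type "number", whatever notation is used.
print(type(3), type(3.1416), type(314.16e-2), type(0xff))

-- Hexadecimal and exponent notations are alternative spellings:
print(0xff)      --> 255
print(0x56)      --> 86
print(0.31416E1) --> 3.1416
```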
Arithmetic operations
The operators for numbers in Lua are the following:
Operation Syntax Description Example
Arithmetic negation -a Changes the sign of a and returns the value -3.14159
Addition a + b Returns the sum of a and b 5.2 + 3.6
Subtraction a - b Subtracts b from a and returns the result 6.7 - 1.2
Multiplication a * b Returns the product of a and b 3.2 * 1.5
Exponentiation a ^ b Returns a to the power b, or the exponentiation of a by b 5 ^ 2
Division a / b Divides a by b and returns the result 6.4 / 2
Modulo operation a % b Returns the remainder of the division of a by b 5 % 3
You probably already know all of these operators (they are the same as basic mathematical operators) except the last. The last is called the modulo operator, and simply calculates the remainder of
the division of one number by another. 5 % 3, for example, would give 2 as a result because 2 is the remainder of the division of 5 by 3. The modulo operator isn't as common as the other operators,
but it can be very useful in certain cases.
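A few worked examples, shown here as a sketch you can paste into a Lua interpreter, illustrate the operators from the table, including the less familiar modulo:

```lua
print(-3.14159)  --> -3.14159  -- arithmetic negation
print(5.2 + 3.6) --> 8.8       -- addition
print(6.7 - 1.2) --> 5.5       -- subtraction
print(3.2 * 1.5) --> 4.8       -- multiplication
print(5 ^ 2)     --> 25        -- exponentiation (Lua 5.3+ prints 25.0)
print(6.4 / 2)   --> 3.2       -- division
print(5 % 3)     --> 2         -- modulo: remainder of 5 / 3
print(-5 % 3)    --> 1         -- the result takes the sign of the divisor
```

Note the last line: Lua defines a % b as a - math.floor(a/b)*b, so the result of the modulo operation always has the same sign as the divisor.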
Nil is the type of the value nil, whose main property is to be different from any other value; it usually represents the absence of a useful value. A function that would return nil, for example, is a
function that has nothing useful to return (we'll talk later about functions).
A boolean value can be either true or false, but nothing else. This is literally written in Lua as true or false, which are reserved keywords. The following operators are often used with boolean
values, but can also be used with values of any data type:
Operation Syntax Description
Boolean negation not a If a is false or nil, returns true. Otherwise, returns false.
Logical conjunction a and b Returns the first argument if it is false or nil. Otherwise, returns the second argument.
Logical disjunction a or b Returns the first argument if it is neither false nor nil. Otherwise, returns the second argument.
Essentially, the not operator just negates the boolean value (it makes it false if it is true, and true if it is false); the and operator returns true if both arguments are true and false otherwise; and the or operator returns true if either argument is true and false otherwise. This is, however, not exactly how they work: the exact behaviour is described in the table above. In Lua, the values false and nil are both considered false, while everything else is considered true. If you work through the logic, you'll see that the definitions presented in this paragraph correspond with those in the table, although the expressions in the table will not always return a boolean value.
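The table's definitions are easiest to see in action. A short sketch (note that and and or return one of their operands, which is not necessarily a boolean):

```lua
print(not true)         --> false
print(not nil)          --> true      -- nil counts as false
print(true and "yes")   --> yes       -- first operand is true, so the second is returned
print(false and "yes")  --> false     -- first operand is false, so it is returned
print(nil or "default") --> default   -- a common idiom for supplying default values
print(0 and "zero")     --> zero      -- 0 is NOT false in Lua, unlike in C
```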
Strings are sequences of characters that can be used to represent text. They can be written in Lua by being contained in double quotes, single quotes or long brackets, which were covered before in
the section about comments (it should be noted that comments and strings have nothing in common other than the fact they can both be delimited by long brackets, preceded by two hyphens in the case of
comments). Strings that aren't contained in long brackets will only continue for one line. Because of this, the only way to make a string that contains many lines without using long brackets is to
use escape sequences. This is also the only way to insert single or double quotes in certain cases. Escape sequences consist of two things: an escape character, which will always be a backslash ('\')
in Lua, and an identifier that identifies the character to be escaped.
Escape sequences in Lua
Escape sequence Description
\n A new line
\" A double quote
\' A single quote (or apostrophe)
\\ A backslash
\t A horizontal tab
\### ### must be a number from 0 to 255. The result will be the corresponding ASCII character.
Escape sequences are used when putting the character directly in the string would cause a problem. For example, if you have a string of text that is enclosed in double quotes and must contain double
quotes, then you need to enclose the string in different characters or to escape the double quotes. Escaping characters in strings delimited by long brackets is not necessary, and this is true for
all characters. All characters in a string delimited with long brackets will be taken as-is. The % character is used in string patterns to escape magic characters, but the term escaping is then used
in another context.
"This is a valid string."
'This is also a valid string.'
"This is a valid \" string 'that contains unescaped single quotes and escaped double quotes."
This is a line that can continue
on more than one line.
It can contain single quotes, double quotes and everything else (-- including comments). It ignores everything (including escape characters) except closing long brackets of the same level as the opening long bracket.
"This is a valid string that contains tabs \t, double quotes \" and backlashes \\"
"This is " not a valid string because there is an unescaped double quote in the middle of it."
For convenience, if an opening long string bracket is immediately followed by a new line, that new line will be ignored. Therefore, the two following strings are equivalent:
[[This is a string
that can continue on many lines.]]
This is a string
that can continue on many lines.]]
-- Since the opening long bracket of the second string is immediately followed by a new line, that new line is ignored.
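The "level" of a long bracket mentioned above refers to the number of equals signs between the two square brackets. A string opened with [==[ only closes at a matching ]==], which makes it possible to embed a plain ]] inside the string, as this sketch shows:

```lua
-- Level-2 long brackets: the string only ends at a matching ]==].
local s = [==[
This line contains unescaped double brackets: ]] and even ]=].
]==]
print(s)
```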
It is possible to get the length of a string, as a number, by using the unary length operator ('#'):
print(#("This is a string")) --> 16
The string concatenation operator in Lua is denoted by two dots ('..'). Here is an example of concatenation that concatenates "snow" and "ball" and prints the result:
print("snow" .. "ball") --> snowball
This code will concatenate "snow" and "ball" and will print the result.
Other types
The four basic types in Lua (numbers, booleans, nil and strings) have been described in the previous sections, but four types are missing: functions, tables, userdata and threads. Functions are
pieces of code that can be called, receive values and return values back. Tables are data structures that can be used for data manipulation. Userdata are used internally by applications Lua is embedded in, to allow Lua to communicate with those applications through objects controlled by them. Finally, threads are used by coroutines, which allow many functions to run at the same time.
These will all be described later, so you only need to keep in mind that there are other data types.
Literals are notations for representing fixed values in source code. All values can be represented as literals in Lua except threads and userdata. String literals (literals that evaluate to strings),
for example, consist of the text that the string must represent enclosed in single quotes, double quotes or long brackets. Number literals, on the other hand, consist of the number they represent expressed using decimal notation (ex: 12.43), scientific notation (ex: 3.1416e-2 and 0.31416E1) or hexadecimal notation (ex: 0xff).
Coercion is the conversion of a value of one data type to a value of another data type. Lua provides automatic coercion between string and number values. Any arithmetic operation applied to a string
will attempt to convert this string to a number. Conversely, whenever a string is expected and a number is used instead, the number will be converted to a string. This applies both to Lua operators
and to default functions (functions that are provided with the language).
print("122" + 1) --> 123
print("The number is " .. 5 .. ".") --> The number is 5.
Coercion of numbers to strings and strings to numbers can also be done manually with the tostring and tonumber functions. The former accepts a number as an argument and converts it to a string, while
the second accepts a string as an argument and converts it to a number (a different base than the default decimal one can optionally be given in the second argument).
Operator precedence
Operator precedence works the same way in Lua as it typically does in mathematics. Certain operators will be evaluated before others, and parentheses can be used to arbitrarily change the order in
which operations should be executed. The priority in which operators are evaluated is in the list below, from lower to higher priority. Some of these operators were not discussed yet, but they will
all be covered at some point in this book.
• Boolean or: or
• Boolean and: and
• Relational operators: <, >, <=, >=, ~=, ==
• Concatenation: ..
• Level 1 mathematical operators: +, -
• Level 2 mathematical operators: *, /, %
• Unary operators: not, #, - (the - here is the unary - as in -5, not as in 5 - 3)
• Exponentiation: ^
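A few sketch examples show the list in practice; note also that the ^ and .. operators associate to the right in Lua:

```lua
print(1 + 2 * 3)   --> 7    -- '*' binds tighter than '+'
print((1 + 2) * 3) --> 9    -- parentheses override the default order
print(2 ^ 3 ^ 2)   --> 512  -- right-associative: 2 ^ (3 ^ 2), not (2 ^ 3) ^ 2
print(-2 ^ 2)      --> -4   -- '^' binds tighter than unary '-': -(2 ^ 2)
print(1 + 2 .. "") --> 3    -- '+' is evaluated before '..'
```

(In Lua 5.3 and later, ^ always produces a float, so the third and fourth lines print 512.0 and -4.0 respectively.)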
Last modified on 5 February 2014, at 03:42 | {"url":"http://en.m.wikibooks.org/wiki/Lua_programming/Expressions","timestamp":"2014-04-20T21:04:24Z","content_type":null,"content_length":"33845","record_id":"<urn:uuid:7025dc8c-1a07-4763-b9c8-31b144a560a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
New ERCIM Working Group
New ERCIM Working Group on Applications of Numerical Mathematics in Science
by Mario Arioli
At the last meeting of the ERCIM Executive Committee, the members approved the creation of a new Working Group on the Applications of Numerical Mathematics in Science.
Such a Working Group was conceived when a number of researchers, working for institutions participating directly in ERCIM, expressed their interest in building up stronger links between
mathematicians within ERCIM. In particular, it was felt to be vital that a forum be created within the ERCIM institutional organisations, in which a cross-fertilisation between numerical techniques
used in different fields of scientific computing might take place. The Working Group therefore intends to focus on this underpinning theme of computational and numerical mathematics. The intention is
that any resulting numerical algorithm will achieve wider applicability, greater robustness, and better accuracy.
Structure of the Working Group
A preliminary survey of active researchers within ERCIM laboratories indicates that the following four major fields have strategic interest:
• Numerical Linear Algebra. Topics range from sparse matrix theory, direct and iterative solvers for large and sparse linear systems of equations, to the computation of eigenvalues and eigenvectors
for large-scale problems, including the use of symbolic manipulation techniques for the solution of polynomial systems of equations.
• Numerical Solution of Differential Equations. The topics of major interest are finite-element methods, mesh generation, multigrid methods, wavelets, spectral methods and time-stepping methods.
• Continuous Optimisation and Optimal Control. Of interest here are interior point methods for large-scale linear, quadratic and nonlinear programming, SQP methods for nonlinear programming and
numerical methods for optimal control.
• Large Scale Scientific Computing. In this interdisciplinary field, topics of interest include many of those cited in the previous sections, but also include parallel computing and the production
of mathematical software.
There is a strong interaction between the fields; each of them frequently uses techniques developed in at least one of the others.
A number of application areas are likely to benefit from the results and activities of the Working Group, including the simulation of electromagnetic phenomena, electrical circuit theory,
errors-in-variables modelling and mathematical statistics, computational chemistry, computational biology, computational materials, CFD and structural engineering, mathematics for financial
derivatives, finite-element modelling for medical simulation, and environmental modelling and image processing.
Fields of interest of each working group member organisation.
The Working Group will be organised by a steering committee involving one expert from each field of interest, which will have the target of stimulating initiatives that cross the various fields. One
of these representatives will be able to participate in the ERCIM organisational meeting in order to promote the initiatives of the Working Group and to discuss the budget and the resources needed to
accomplish them. The table summarises the interest of each organisation, as far as we can ascertain, in each of the specific topics.
The Working Group looks forward to broadening the scope of its main research topics into additional numerical areas. The Group strongly believes that the best way to build stronger links between the
ERCIM laboratories is to encourage young scientists to act as intermediaries. The recruitment of young scientists justifies the involvement of several universities in our initiative.
Finally, the Working Group will, through its members, promote all possible initiatives within the European Programmes for Research. We will encourage grant applications and involvement in the
research, technological development and demonstration (RTD) framework programmes of the European Union.
Please contact:
Mario Arioli, CLRC
Tel: +44 1235 445332
E-mail: m.arioli@rl.ac.uk | {"url":"http://www.ercim.eu/publication/Ercim_News/enw48/arioli.html","timestamp":"2014-04-19T22:34:33Z","content_type":null,"content_length":"5847","record_id":"<urn:uuid:61d0033b-df72-4f0a-8838-a695f543137d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Black Holes
From Scholarpedia
Black holes are regions of space in which gravitational fields are so strong that no particle or signal can escape the pull of gravity. The boundary of this no-escape region is called the event
horizon, since distant observers outside the black hole cannot see (cannot get light from) events inside.
Although the fundamental possibility of such an object exists within Newton's classical theory of gravitation, Einstein's theory of gravity makes black holes inevitable under some circumstances.
Prior to the early 1960s, black holes seemed to be only an interesting theoretical concept with no astrophysical plausibility, but with the discovery of quasars in 1963 it became clear that very
exotic astrophysical objects could exist. Nowadays it is taken for granted that black holes do exist in at least two different forms. Stellar mass black holes are the endpoint of the death of some
stars, and supermassive black holes are the result of coalescences in the centers of most galaxies, including our own.
No signal can propagate from inside a black hole, but the gravitational influence of a black hole is always present. (This influence does not propagate out of the hole; it is permanently present
outside, and depends only on the total amount of mass, angular momentum, and electric charge that have gone into forming the hole.) Black holes can be detected through the influence of this strong
gravity on the surroundings just outside the hole. In this way, stellar mass holes produce detectable X-rays, supermassive black holes produce a wide spectrum of electromagnetic signals, and both
types can be inferred from the orbital motion of luminous stars and matter around them. Phenomena involving black holes of any mass can produce strong gravitational waves, and are of interest as
sources for present and future gravitational wave detectors.
Classical vs. relativistic black holes
Something like a black hole exists within Newton's classical theory of gravity. In that theory, an energy argument tells us that there is an escape velocity \(v_{\rm esc} =\sqrt{2GM/R}\) from the
surface of any spherical object of mass \(M\) and radius \(R\ .\) If this velocity is greater than the speed of light \(c\) then light from this object cannot escape to infinity. Thus the condition
for such an "unseeable" object is
\[R<2GM/c^2\ .\]
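This critical radius is easy to evaluate for familiar masses. The sketch below uses standard approximate constants (not values taken from this article):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def critical_radius(M):
    """Radius below which v_esc = sqrt(2*G*M/R) exceeds the speed of light."""
    return 2 * G * M / c**2

M_sun = 1.989e30    # kg
M_earth = 5.972e24  # kg
print(critical_radius(M_sun))    # ~2.95e3 m: the Sun squeezed below ~3 km
print(critical_radius(M_earth))  # ~8.9e-3 m: the Earth below ~9 mm
```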
In the classical theory, a particle could overcome this gravity with strong enough engines to provide the energy needed for escape. This is not so in general relativity, Einstein's theory of
gravitation. In that theory, escaping the black hole is equivalent to moving faster than light, an impossibility in relativity.
To understand the relativistic black hole it is useful to think of space being dragged inward towards a gravitational center, at a faster rate near the center than far from it. The distance at which
space is moving inward at the speed of light represents the location of the event horizon, since no signal can progress outward through space faster than \(c\ .\) This comparison is more than a
metaphor; black hole analog experiments with accelerating gas flows and other phenomena are being designed.
An important difference from Newton's theory is that Einstein's, and other relativistic theories of gravitation are nonlinear in the sense that gravitation (as well as mass) can be a source of
gravity. Thus when a massive object collapses small enough, the tendency to continue the collapse and form a black hole can become unstoppable.
Stationary black holes
In the Newtonian theory, gravity is described by the potential \(\Phi\ .\) Inside a spherical object the form of \(\Phi(r)\) depends on the interior structure, but in the vacuum outside matter the
potential \(-GM/r\) depends only on the interior mass. Similarly, in Einstein's theory the stationary (time-independent) spherically symmetric exterior solution, called the Schwarzschild spacetime,
depends only on the mass of the interior object. If the interior object is small enough, then the Schwarzschild exterior extends to small enough radius that there is a horizon, a surface across which
light cannot move outward. This horizon radius \(R_H =2GM/c^2\) is, coincidentally, the same as the critical radius for "unseeable" objects in Newton's theory. (The meaning of "radius" as distance to
the center is not straightforward for the Schwarzschild solution. Radius \(R_H \)here actually means that the area of the event horizon is \(4\pi R_H ^2\ .\))
In Einstein's theory, the "exterior" solution can be taken to apply with no interior solution. In this case it is gravity itself, rather than matter, that acts as the source of gravity. The
inward-extended exterior solution does not reach a center, but rather is connected via a spacetime bridge to another universe, or another section of our own. For an astrophysical black hole, formed
from the collapse of matter, a physical solution for the matter distribution replaces the pure vacuum Schwarzschild solution in the interior of the black hole. This physical solution lacks the
spacetime bridge of ideal mathematical black holes, but contains a central "singularity" where matter is compressed to infinite density. Very close to this singularity it is expected that the laws of
general relativity will no longer apply, and as-yet unknown laws of quantum gravity are needed.
A more general stationary black hole solution of Einstein's theory is the Kerr solution, a vacuum spacetime with both mass and angular momentum, and taken to represent a rotating black hole. In its
pure mathematical form the Kerr hole contains a spacetime bridge, but as in the case of the Schwarzschild black hole this bridge is absent in realistic black holes that form by the collapse of matter.
Unlike the Schwarzschild spacetime, the Kerr solution is not the exterior spacetime of a material object with angular momentum. (In fact no realistic solution has been found to join a Kerr exterior
to a material interior.) The Kerr solution only becomes the exterior spacetime asymptotically at very late times after the collapse of an object.
Two other exact mathematical black hole solutions are the Reissner-Nordström spacetime, representing a hole with mass and electrical charge, and the Kerr-Newman spacetime, representing a hole with
mass, electrical charge, and angular momentum. These spacetimes are not astrophysically relevant, since astrophysical bodies have negligible net electrical charge. [For a detailed description of
these spacetimes see, e.g., Misner et al. (1973), Part VII; or Wald (1984), Chap. 12.]
All of these spacetimes, including the ones with angular momentum, are stationary: that is, they are independent of time. But in relativity there is no unique meaning to time, so an important
question is: "Just what 'time' is it of which the stationary black holes are independent?" The answer lies in the fact that one can assign every spacetime point four coordinates, four labels that
uniquely identify the location of each point. One of these coordinates is called the "coordinate time." Spacetimes that are said to be stationary, like the spacetime of a Kerr hole, have a special
property: the time coordinate may be chosen so that the spacetime geometry is the same at any moment of this time coordinate.
At large distances from a stationary black hole, where spacetime curvatures are weak, this stationary time coordinate can be chosen also to have another important property: to agree with the "proper
time," or ordinary clock time, of an observer at rest with respect to the hole. Since we ourselves are more-or-less at rest (or are at nonrelativistic velocities) very far from black holes, this kind
of stationary coordinate time is the time used in astronomical observations. For observers near the hole, however, proper time and stationary coordinate time will not agree. An interval of proper
time between two events is shorter (near the horizon, much shorter) than the interval of stationary coordinate time between those two events. The relationship between stationary coordinate and proper
time is further complicated by special relativistic time dilation for observers who are rapidly moving. Figure 2 illustrates this by showing the coordinate time vs. radius for a particle falling into
a black hole, and comparing it with the proper time measured by an observer riding along with the infalling particle. The progress as measured with proper time is in no way special at the horizon:
falling observers will not notice anything strange as they pass the point of no return. But as described in coordinate time, the observer (and likewise, the surface of a collapsing star) takes an
infinite amount of time to reach the horizon. Since coordinate time is the proper time of distant observers, astronomers will see the particle reach the horizon only in the infinite future.
The two (or more) types of time are sometimes a source of confusion in discussions of black hole phenomena, since they often give totally different answers to the question "how long does it take?"
Black hole parameters
Astrophysical black holes are characterized by two parameters: their mass and their angular momentum (or spin). The mass parameter \(M\) is equivalent to a characteristic length \(GM/c^2=1.48\,{\rm
km}(M/M_{\rm o})\ ,\) or a characteristic timescale \(GM/c^3=4.93\times10^{-6} \,{\rm sec}(M/M_{\rm o})\ ,\) where \(M_{\rm o}\) denotes the mass of the Sun. These scales, for example, give the order
of magnitude of the radii and periods of near-hole orbits. The timescale also applies to the process in which a developing horizon settles into its asymptotically stationary form. For a stellar mass
hole this is of order \(10^{-5} \,{\rm sec} \ ,\) while for a supermassive hole of \(10^8M_{\rm o}\ ,\) it is thousands of seconds.
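These two characteristic scales can be reproduced directly (standard constants assumed; the settling time quoted above is a modest multiple of the basic timescale computed here):

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_sun = 1.989e30  # kg

def length_scale(M):
    return G * M / c**2   # metres

def time_scale(M):
    return G * M / c**3   # seconds

print(length_scale(M_sun))      # ~1.48e3 m: 1.48 km per solar mass
print(time_scale(M_sun))        # ~4.93e-6 s per solar mass
print(time_scale(1e8 * M_sun))  # ~4.9e2 s for a 1e8 solar-mass hole
```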
For Schwarzschild holes, and approximately for Kerr holes, the horizon is at radius \(R_H= 2GM/c^2 \ .\) At the horizon, the "acceleration of gravity" has no meaning, since a falling observer cannot
stop at the horizon to be weighed. What is relevant at the horizon is the tidal stresses that stretch and distort the falling observer. This tidal stretching is given by the same expression, the
gradient of the gravitational acceleration, as in Newtonian theory: \[2GM/R_H^3=c^6/(4G^2M^2)\,.\] In the case of a solar mass black hole the tidal stress (acceleration per unit length) is enormous at
the horizon, on the order of \(3\times10^9(M_{\rm o}/M)^2\mathrm{s}^{-2}\ :\) that is, a person would experience a differential gravitational field of about \(10^9\) Earth gravities, enough to rip
apart ordinary materials. For a supermassive hole, by contrast, the tidal force at the horizon is smaller by a typical factor \(10^{10\mbox{--}16}\) and would be easily survivable. However, at the
central singularity, deep inside the event horizon, the tidal stress is infinite.
In addition to its mass \(M\ ,\) the Kerr spacetime is described with a spin parameter \(a\) defined by the dimensionless expression
\[a/M = cJ/(GM^2)\,,\]
where \(J\) is the angular momentum of the hole. For the Sun (based on surface rotation) this number is about 0.2, and is much larger for many stars. Since angular momentum is ubiquitous in
astrophysics, and since it is expected to be approximately conserved during collapse and black hole formation, astrophysical holes are expected to have significant values of \(a/M\ ,\) from several
tenths up to and approaching unity.
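The quoted figure of about 0.2 for the Sun can be recovered from the standard dimensionless spin \(cJ/(GM^2)\ .\) The solar angular momentum used below is an assumed textbook value, not taken from this article:

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_sun = 1.989e30  # kg
J_sun = 1.9e41    # kg m^2 / s, approximate solar angular momentum (assumed)

a_over_M = c * J_sun / (G * M_sun**2)
print(a_over_M)   # ~0.2, as stated in the text
```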
The value of \(a/M\) can be unity (an "extreme" Kerr hole), but it cannot be greater than unity. In the mathematics of general relativity, exceeding this limit replaces the event horizon with an
inner boundary on the spacetime where tidal forces become infinite. Because this singularity is "visible" to observers, rather than hidden behind a horizon, as in a black hole, it is called a naked
singularity. Toy models and heuristic arguments suggest that as \(a/M\) approaches unity it becomes more and more difficult to add angular momentum. The conjecture that such mechanisms will always
keep \(a/M\) below unity is called cosmic censorship.
The inclusion of angular momentum changes details of the description of the horizon, so that, for example, the horizon area becomes
\[\mbox{Horizon area}=(4\pi G^2/c^4)\left[ \left(M+\sqrt{M^2-a^2 \;}\ \right)^2+a^2\right]\,.\]
This modification of the Schwarzschild (\(a=0 \)) result is not significant until \(a/M\) becomes very close to unity. For this reason, good estimates can be made in many astrophysical scenarios with
\(a \) ignored.
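The claim that spin barely changes the horizon area until \(a/M\) nears unity can be checked numerically with the dimensionless bracket from the area formula, normalized to its \(a=0\) value of \(4M^2\ :\)

```python
from math import sqrt

def area_factor(chi):
    """Horizon-area bracket for spin chi = a/M, divided by its a = 0 value."""
    return ((1 + sqrt(1 - chi**2))**2 + chi**2) / 4.0

for chi in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(chi, area_factor(chi))
# a/M = 0.5 shrinks the area by only ~7%; an extreme hole (a/M = 1) halves it.
```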
Dynamical black holes
The event horizon is defined as the outer boundary of the region from which there is no escape. For stationary black holes this surface is at a fixed location in space, but more generally the horizon
is dynamical; it can grow, change shape, oscillate. In particular, a horizon can be born.
The birth of a horizon is a change first studied by Oppenheimer and Snyder for the collapse of spherical pressureless fluid. The general scenario is for a horizon to be born deep inside the
collapsing matter and to spread outward.
Small changes in a horizon can be treated as perturbations, greatly simplifying the mathematics of Einstein's equations. In this way, it has been found that black holes have characteristic patterns
of oscillations, called quasinormal modes. These modes are like mechanical resonances, but are highly damped by the emission of gravitational waves, and have both periods and damping times on the
order of the characteristic time \(GM/c^3\ .\)
For large nonspherical changes in a dynamical horizon, only supercomputer simulations can give quantitative answers. The focus of such work has been the merger of two black holes in binary orbit
around each other, a scenario of special interest as a source of strong gravitational wave emission. Only in 2005 were technical problems first overcome [Pretorius (2005) (See Fig. 3.), Campanelli et
al. (2006), Baker et al. (2006)] so that an accurate picture could emerge of how two horizons join to become a single final horizon.
There is one change that a horizon cannot make according to classical (nonquantum) general relativity: it cannot decrease its area. But considerations by Stephen Hawking and others of quantum effects
in black hole spacetimes suggest that radiation arising in the close exterior of the black hole can carry off energy, decreasing the mass (and hence horizon area) of the hole. Although no quantum
theory of relativistic gravitation currently exists, it is generally accepted that this Hawking radiation will be a feature of any such theory. The radiation behaves as if the horizon were a
blackbody (perfect thermal emitter) at a temperature \(6\times10^{-8}(M_{\rm o}/M)\)K. Thus for astrophysical black holes, mass loss by Hawking radiation is much less important than the mass increase
due to absorption of the 3K cosmic microwave background, and that mass increase is itself negligible.
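The blackbody temperature quoted above follows from Hawking's formula \(T=\hbar c^3/(8\pi GMk_B)\ ,\) evaluated here with standard constants:

```python
from math import pi

hbar = 1.0546e-34  # J s
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
k_B = 1.381e-23    # J/K
M_sun = 1.989e30   # kg

def hawking_T(M):
    """Hawking temperature of a black hole of mass M, in kelvin."""
    return hbar * c**3 / (8 * pi * G * M * k_B)

print(hawking_T(M_sun))        # ~6e-8 K, far below the 3 K microwave background
print(hawking_T(1e8 * M_sun))  # ~6e-16 K for a supermassive hole
```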
Astrophysical black holes
Black holes in our Universe can be grouped as: primordial black holes, stellar-mass black holes, and supermassive black holes. The first is highly speculative; the second and third are broadly accepted.
Primordial black holes of all masses are postulated to have formed from quantum fluctuations in the early Universe, but those with mass less than around \(10^{13}\)kg would have already evaporated
due to Hawking radiation. No significant observational evidence has yet been found of the existence of these objects [MacGibbon and Carr (1991)].
Stellar-mass black holes, ranging from a few to a few tens of solar masses, are normal but rare endpoints in the evolution of massive stars. When a star exhausts its nuclear fuel and cools, it must
collapse unless it is supported by nonthermal forces. It is known that such nonthermal forces cannot resist gravitational compression for masses greater than around 1.5 to 3\(M_{\rm o}\ .\) (The
uncertainty is due to uncertainty in our knowledge of nuclear physics at high densities.) There are many stars far more massive than this, but in their death throes massive stars expel a great deal
of their mass in supernova explosions. Even stars initially as massive as 20-30\(M_{\rm o}\) may blow off enough mass to leave a remnant neutron star smaller than 1.5\(M_{\rm o}\ .\) Stars more
massive than 20-30\(M_{\rm o}\) may form a neutron star core, only to have it collapse due to fallback of material from the stellar mantle. Still more massive stars (above \(\sim40M_{\rm o}\)) may
form a black hole directly in collapse, with or without a supernova explosion. These masses are rather uncertain at present, as our understanding of supernova explosion mechanisms is still evolving
[Woosley and Janka (2005), Muno (2006)]. Collapsing gas clouds larger than \(\sim100M_{\rm o}\) are expected to dissipate from radiation pressure, which would prevent more massive stars from forming,
and consequently sets an approximate upper limit for stellar mass black holes.
Observational evidence for stellar-mass black holes comes primarily from X-ray astronomy. A black hole in a close binary orbit can form an accretion disk (see Figure 1) of matter pulled off a normal
stellar companion. In its inspiral, disk material is heated by shearing, and becomes a strong X-ray emitter. If observations reveal a point-like X-ray emitter with a mass inferred from orbital
dynamics to be above that possible for a neutron star, it becomes a black hole candidate. At present there are over a dozen such black hole candidates, including the first object to be identified as
a black hole, the X-ray source Cygnus X-1 (estimated mass \(10\pm3M_{\rm o}\)) [Casares (2006)].
The other class of observationally-supported black holes is that of supermassive black holes, ranging from hundreds to billions of solar masses. Evidence for the existence of black holes at the upper
end of this range is overwhelming. By contrast, the existence of black holes with mass roughly of order \(10^2\) to \(10^4M_{\rm o}\ ,\) referred to as "intermediate-mass black holes," is
speculative. The supermassive holes may have begun as primordial or stellar-mass black holes, but have grown through absorption of stars or gas, or through mergers with other holes [Ferrarese and
Ford (2005)].
Evidence for supermassive black holes originally came from quasars, small but intense radio sources seen at cosmological distances. Their huge luminosities implied high mass, while their rapid
variability implied extremely small size. More recently, measurements of Doppler shifts of stars and gas in the centers of galaxies have shown that compact objects of mass greater than \(10^{6}M_{\rm
o}\) reside in the cores of most galaxies; the very small size of these central objects rules out any plausible alternative to the black hole explanation. The black hole at the center of our own
Milky Way galaxy has a measured mass of \(\sim3\times10^6M_{\rm o}\ ,\) based on the orbits of stars near the radio source Sagittarius A* associated with the Galactic core [Ghez et al. (2005); see
Fig. 4].
Gravitational waves from black holes
Black holes are of particular interest for gravitational waves, and vice versa. For gravitationally-bound systems, the typical maximum gravitational wave amplitude (dimensionless strain) is \(h\sim
(2GM/c^2)^2/(RD)\sim R_H^2/(RD)\ ,\) where \(D\) is the distance to the system, \(M\) and \(R\) are the mass and size of the system, and \(R_H\) is the horizon radius of a black hole of the same
mass. From this it is clear that strongest gravitational waves will involve systems at or near black hole compactness: in particular, systems containing black holes in close proximity to one another
[Flanagan and Hughes (2005)].
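As an order-of-magnitude illustration of this strain estimate, consider two comparable stellar-mass black holes about to merge. The binary parameters below are illustrative assumptions, not figures from this article:

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_sun = 1.989e30  # kg
Mpc = 3.086e22    # metres per megaparsec

# Two ~10 solar-mass black holes near merger, 100 Mpc away (assumed).
M = 20 * M_sun          # total mass of the system
R_H = 2 * G * M / c**2  # horizon radius of a hole of the same mass
R = 2 * R_H             # system size at late inspiral (assumed)
D = 100 * Mpc           # distance to the source

h = R_H**2 / (R * D)
print(h)  # ~1e-20: a tiny dimensionless strain, hence the need for exquisitely sensitive detectors
```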
Conversely, a system containing only black holes would not be expected to radiate any sort of radiation other than gravitational waves, and in any system containing black holes, gravitational waves
provide the most accurate probes of conditions deep in their gravitational wells. In particular, gravitational wave observations could definitively test whether a compact mass is truly a black hole
[Cutler and Thorne (2002)].
The strongest intrinsic source of gravitational waves in the present Universe is the collision of two comparable-massed black holes. A sufficiently tight black hole binary system will lose energy
through emission of gravitational waves, causing the orbit to circularize and then slowly shrink over time.
When the orbital radius approaches the Schwarzschild radius of the system, complex nonlinear dynamics come into play and the final stages of the inspiral and merger can only be modeled through
numerical simulations. These simulations are quite challenging, but recent breakthroughs [Pretorius(2005)(See Fig. 5), Campanelli et al.(2006), Baker et al.(2006)] have led to the first complete
inspiral-merger gravitational waveforms. After the black holes have coalesced, they form a single highly-distorted black hole which quickly settles down to a quiescent Kerr state through emission of
quasinormal "ringdown" gravitational waves [Pretorius (2005), Campanelli et al. (2006), Baker et al. (2006)].
Another gravitational emission mechanism for supermassive black holes would come from their capture of compact stellar objects, particularly stellar-mass black holes. Unlike comparable-mass mergers,
these captures would be on highly eccentric orbits that would not circularize before merger, resulting in intricate rapidly precessing orbits in the final months or years of inspiral. The consequent
gravitational-wave signal would allow for exquisitely precise measurements of the parameters of the supermassive black hole, as well as tests of any deviation from the predictions of general
relativity [Cutler and Thorne (2002)].
For stellar-mass black holes (tens of \(M_{\rm o}\)), the final stages of inspiral and merger occur at frequencies of hundreds to thousands of Hz, a regime targeted by ground-based gravitational-wave
observatories. Systems of supermassive black holes that are millions of \(M_{\rm o}\) emit at frequencies of tens of mHz, which will be detectable by proposed space-based gravitational-wave
detectors. Black holes in the billions of \(M_{\rm o}\ ,\) or a stochastic background of signals from more moderate supermassive black holes at earlier stages of their inspiral, might emit waves at
nHz frequencies that would be measurable through years-long correlated timing measurements of radio pulsars [Jenet et al. (2005)].
J. G. Baker et al., Physical Review Letters 96, 111102 (2006).
M. Campanelli et al., Physical Review Letters 96, 111101 (2006).
J. Casares, "Observational evidence for stellar-mass black holes", astro-ph/0612312 (2006).
C. Cutler and K. S. Thorne, "An Overview of Gravitational-Wave Sources", gr-qc/0204090 (2002).
L. Ferrarese and H. Ford, Space Science Reviews 116, 523 (2005) astro-ph/0411247.
E. Flanagan and S. A. Hughes, New Journal of Physics 7, 204 (2005).
A. M. Ghez et al., The Astrophysical Journal 620, 744 (2005) astro-ph/0306130.
F. A. Jenet et al., The Astrophysical Journal 625, L123 (2005) astro-ph/0504458.
J. H. MacGibbon and B. J. Carr, The Astrophysical Journal 371, 447 (1991).
C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation (Freeman, San Francisco, 1973).
P. Muno, "Which Stars Form Black Holes and Neutron Stars?", astro-ph/0611589 (2006).
F. Pretorius, Physical Review Letters 95, 121101 (2005).
R. M. Wald, General Relativity (University of Chicago Press, Chicago, 1984).
S. Woosley and T. Janka, Nature Physics 1, 147, astro-ph/0601261 (2005).
Suggested reading
K. S. Thorne, Black Holes and Time Warps: Einstein's Outrageous Legacy (W. W. Norton, New York, 1994).
E. F. Taylor and J. A. Wheeler, Exploring Black Holes: Introduction to General Relativity (Benjamin Cummings, 2000).
S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs and Neutron Stars: The Physics of Compact Objects (Wiley-Interscience, 1983).
R. H. Price, The Physical Basis of Black Hole Astrophysics, Chap. 5 in 100 Years of Relativity, Spacetime Structure: Einstein and Beyond (World Scientific, Singapore, 2005).
Feltonville, PA Math Tutor
Find a Feltonville, PA Math Tutor
...I am able to help students develop test taking and study strategies specific to the NCELX exam in addition to targeted content review based on the student's NCLEX strengths and weaknesses. The
research that I focused on for my PhD utilized a number of anthropologic theories and research methods. I have taken 4 graduate level classes in anthropology.
39 Subjects: including SAT math, ACT Math, SPSS, English
...Middle school and early High School are the ages when most children develop crazy ideas about their abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!' Comments like that typically mean that a math teacher along the way wasn't able to present the materi...
9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2
...Their success becomes my success. Every student brings a unique perspective and a unique set of expectations to his or her lesson, causing me to adapt my teaching style and approach to forge a
connection that works for both of us. I have learned a great deal from my students in this process!
21 Subjects: including algebra 1, algebra 2, calculus, SAT math
...Throughout my teaching, I was able to constantly monitor my students using the AIMSweb Mathematics Concepts and Applications assessment (M-CAP), and pin-point which areas my students struggle
in, be it number sense, fractions, algebra, etc. If selected to be your tutor, my goal will be to not on...
12 Subjects: including algebra 1, Microsoft Excel, geometry, Microsoft PowerPoint
...I work as an Autistic Support teacher at Child Guidance Resource Centers. I also work at the Vanguard school in the summer with Autistic students. I teach phonics on a daily basis and
implement interventions for those who struggle with phonics.
10 Subjects: including algebra 1, reading, grammar, special needs
Annales Academiæ Scientiarum Fennicæ
Volumen 31, 2006
The articles are stored in PDF and DVI formats and also in compressed (with gzip) PostScript format, so you can view them on your screen if your WWW browser supports it. You can also download the
articles in compressed PostScript format for printing.
Beardon, A.F., and T.W. Ng: Parametrizations of algebraic curves, pp. 541-554.
Bergweiler, Walter: Fixed points of composite entire and quasiregular maps, pp. 523-540.
Bieske, Thomas: Equivalence of weak and viscosity solutions to the p-Laplace equation in the Heisenberg group, pp. 363-379.
Bildhauer, Michael, and Martin Fuchs: Higher order variational problems on two-dimensional domains, pp. 349-362.
Björn, Anders: Removable singularities for bounded p-harmonic functions and quasi(super)harmonic functions, pp. 71-95.
Broch, Ole Jacob: Extension of internally bilipschitz maps in John disks, pp. 13-30.
Carrión, Humberto, Pablo Galindo, and Mary Lilian Lourenço: Banach spaces whose bounded sets are bounding in the bidual, pp. 61-70.
Chang, Jianming, and Mingliang Fang: On entire functions that share a value with their derivatives, pp. 265-286.
Cruz-Uribe, D., SFO, A. Fiorenza, J.M. Martell and C. Pérez: The boundedness of classical operators on variable L^p spaces, pp. 239-264.
Di Biase, Fausto, Alexander Stokolos, Olof Svensson and Tomasz Weiss: On the sharpness of the Stolz approach, pp. 47-59.
Fuglede, Bent: A sharpening of a theorem of Bouligand. With an application to harmonic maps, pp. 173-190.
Futamura, Toshihide, Yoshihiro Mizuta, and Tetsu Shimomura: Sobolev embeddings for variable exponent Riesz potentials on metric spaces, pp. 495-522.
Goblet, Jordan: A selection theory for multiple-valued functions in the sense of Almgren, pp. 297-314.
Halburd, R.G., and R.J. Korhonen: Nevanlinna theory for the difference operator, pp. 463-478.
Kanas, Stanislawa, and Toshiyuki Sugawa: On conformal representations of the interior of an ellipse, pp. 329-348.
Kraus, Daniela, and Oliver Roth: Weighted distortion in conformal mapping in euclidean, hyperbolic and elliptic geometry, pp. 111-130.
Li, Xiaonan, Fernando Pérez-González, and Jouni Rättyä: Composition operators in hyperbolic Q-classes, pp. 391-404.
Liu Ming-Sheng and Zhang Xiao-Mei: Fixed points of meromorphic solutions of higher order linear differential equations, pp. 191-211.
Martín, Joaquim, and Javier Soria: Characterization of rearrangement invariant spaces with fixed points for the Hardy-Littlewood maximal operator, pp. 39-46.
McShane, Greg: Simple geodesics on surfaces of genus 2, pp. 31-38.
Navas, Andrés: On uniformly quasisymmetric groups of circle diffeomorphisms, pp. 437-462.
Nieminen, Tomi: Generalized mean porosity and dimension, pp. 143-172.
Nyström, Kaj: Caloric measure and Reifenberg flatness, pp. 405-436.
Petracovici, Lia: Non-accessible critical points of certain rational functions with Cremer points, pp. 3-11.
Pérez-García, David: The trace class is a Q-algebra, pp. 287-295.
Rickman, Seppo: Simply connected quasiregularly elliptic 4-manifolds, pp. 97-110.
Sankaranarayanan, A.: On Hecke L-functions associated with cusp forms II: On the sign changes of S[f](T), pp. 213-238.
Saucan, Emil: The existence of quasimeromorphic mappings, pp. 131-142.
Short, Ian: The hyperbolic geometry of continued fractions K(1 | b[n]), pp. 315-327.
Storm, Peter A.: Rigidity of minimal volume Alexandrov spaces, pp. 381-389.
Tolsa, Xavier, and Joan Verdera: May the Cauchy transform of a non-trivial finite measure vanish on the support of the measure?, pp. 479-494. | {"url":"http://www.emis.de/journals/AASF/Vol31/vol31.html","timestamp":"2014-04-17T13:12:49Z","content_type":null,"content_length":"5205","record_id":"<urn:uuid:2ceac0af-ecd6-4791-a5a4-e7f483526c8e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric Representation Theory (Lecture 1)
Posted by John Baez
This fall, the so-called Quantum Gravity Seminar at U. C. Riverside will actually tackle geometric representation theory — the marvelous borderland where geometry, groupoid theory and logic merge
into a single subject. And there are two other new things about this seminar.
First, it will be jointly run by John Baez and James Dolan. In addition to explaining well-known stuff, we’ll report on research we’ve done with Todd Trimble over the last few years. Second, we plan
to offer videos as well as written notes of the seminar. We’re still working the bugs out of the technology, so please bear with us.
As usual, the seminar will meet on Tuesdays and Thursdays, and you can ask questions and discuss things here at the $n$-Category Café.
This week, I kicked off the proceedings with a gentle introduction to a few of the main themes.
• Lecture 1 (Sept. 27) - John Baez on some of the basic ideas of geometric representation theory. Classical versus quantum; the category of sets and functions versus the category of vector spaces
and linear operators. Group representations from group actions. Representations of the symmetric group $n!$ from types of structure on $n$-element sets. Representations of the general linear
group $GL(n,F_q)$ from types of structure on the $n$-dimensional vector spaces over the field with $q$ elements, $F_q$. Uncombed Young diagrams $D$, and ‘$D$-flags’ as structures either on $n$
-element sets or $n$-dimensional vector spaces. Irreducible representations of $n!$ versus representations coming from the actions of $n!$ on sets of $D$-flags. Counting $D$-flags: $q$-factorials
and their limit as $q \to 1$. The ‘field with one element’. Projective geometry.
We’re offering the videos in streaming and/or downloadable form, both as .mov files. Downloading them takes a long time, but you may need to do this, since the streaming videos seem to work well only
if you have a good internet connection.
.mov files can best be played using a free program called QuickTime. If you have QuickTime and your web browser has .mov files associated to this program, you should be able to click on the first
“streaming video” link above and watch the video. An alternate method is to launch the QuickTime player on your computer, click on “File” and then “Open URL”, and type in the URL provided. This has
the advantage that you can easily make the picture bigger.
If you can handle URL’s that begin with rtsp, you can instead go the corresponding URL of that form, e.g. rtsp://mainstream.ucr.edu/baez_9_27_stream.mov. This may also have advantages, but at present
my computer gags on such URL’s.
If you encounter problems or — even better — know cool tricks to solve such problems, please let us know about them here!
If you catch mistakes, let me know and I’ll add them to the list of errata.
Posted at October 7, 2007 12:43 AM UTC
Re: Geometric Representation Theory (Lecture 1)
This looks great, but I wonder if there’s any chance of making the video files available for download rather than just streaming? My effective bandwidth seems to be a factor of 4 too small to receive
the stream – and trying to watch a lecture that plays for 5 seconds then halts for 15 is pretty painful – but if I could download the files to play smoothly I wouldn’t care how long that took.
Posted by: Greg Egan on October 7, 2007 2:34 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The videos are being recorded by the multimedia technologies group at UCR, and being stored on their server. I’ll ask them if there’s a way to let people download them. There should be some way.
Alas, right now the second video seems really bad: the sound keeps dropping out completely. It’s a video of Jim Dolan introducing his way of thinking about this stuff. If I can’t solve the problem
any other way, I may even ask him to give this class again! It won’t be the same, though — in part because there were lots of interesting questions.
We have a lot to learn about making and distributing videos. From my home, the streaming video works flawlessly. I don’t know if the folks at UCR ever tried watching these videos from farther away,
e.g. Australia or Europe.
Posted by: John Baez on October 7, 2007 4:10 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Just to say that I could watch the whole video without problems from Europe, but then it’s Sunday morning…
I agree with Greg that it would be nice to be able to download the videos, perhaps to use some of them as material for a course.
Nice lecture btw. (apart from the final confusion about PGL(n+1,F))
Posted by: lieven on October 7, 2007 9:56 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The video streams fine to Canada. Very nice!
On a related topic, I’ve started looking into options for giving lectures live over the net with video. Does anyone have suggestions? One requirement is that audio be bidirectional, so the audience
can ask questions.
By the way, is there a place here for people to discuss meta topics, such as getting mathml to work (I’ve spent hours on this and still can’t get it to work correctly), choosing an RSS reader that
handles the n-category cafe well, etc?
Posted by: Dan Christensen on October 7, 2007 2:04 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Hi, Dan. I’m glad our streaming video streams nicely right up to Canada.
“Meta topics” covers a lot of ground, even $n$-category theory itself, which is about as “meta” as you can get. But, the particular meta topics you list are perfectly suited to our perennial thread
on TeXnical Issues. Despite the title, this thread is not just about TeX. Go for it!
Posted by: John Baez on October 8, 2007 1:23 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I can stream this fine (from Nebraska, not very far), but I’d still like to download if possible! My Internet connections go in and out, and I have plenty of disk space, so I like to download
anything that I’m liable to want to watch again (in case there’s no Internet when I decide to review it).
On a related note, does anybody know how to get Firefox (or Shockwave Flash) to tell me where it’s storing the temporary file behind any given display? I could download all of the Catsters’ videos from YouTube, since I found them in my operating system’s temp folder; other times, I find things in the browser’s cache folder. But this video I can’t find anywhere!
Posted by: Toby Bartels on October 7, 2007 8:06 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
If you want to grab videos from YouTube quickly, you should try a plugin for Firefox called DownloadHelper. It doesn’t work for this, but it should help you with the rest of the flash video websites.
Posted by: Anonymous Coward on October 7, 2007 9:54 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Hey, that works great! Now I don’t even have to turn on Javascript to surf YouTube (unlike some websites …). Thanks, Anonymous Coward!
And since John’s videos are now also available for download, I’m all set!
Posted by: Toby Bartels on November 13, 2007 3:44 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Congratulations on the video! However I too have been unable to watch very much because of the streaming problem.
Posted by: Eugenia Cheng on October 8, 2007 12:30 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I’ll tell the multimedia folks at UCR to make the video available in other forms. At the very least, they can give it to me as a file which I can put on my website, YouTube, and so on.
As tdstephens points out, we’re at the bleeding edge here.
Posted by: John Baez on October 8, 2007 3:25 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Thank you for your efforts on this project. These videos, the catsters, and several others are at the leading edge of these exciting times for undergrad through post-grad level communication.
(“several others” is vague, and I can’t actually think of any others that are this good…)
Posted by: tdstephens3 on October 8, 2007 3:07 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I have a remark/question/observation on the groupoidification program.
One of the big messages of this program is, I gather, that in order to understand representations well we ought to be looking at the corresponding action groupoids.
So, if a group $G$ acts on a set or space $V$
$\rho(g) : V \to V$
we’d form the groupoid
$V // G$
whose objects are the elements of $V$ and which has all morphisms of the form
$v \stackrel{\rho(g)}{\to}\rho(g)(v)$
for all $v \in V$ and $g \in G$.
Now, what is an action groupoid, abstractly speaking? One striking property of $V// G$ is that it is still equipped with an action of $G$:
$\tilde \rho(g) : V//G \to V//G$
These $\tilde \rho(g)$ now are functors. This means they can have natural transformations between them.
One could say that the action groupoid $V//G$ has precisely the right morphisms in order to make all group element actions homotopic.
Namely, the action $\tilde \rho$ on $V // G$ has the property that
$\array{ & \nearrow \searrow^{\tilde \rho(g)} \\ V//G &\Downarrow^{\simeq}& V//G \\ & \searrow \nearrow_{\tilde \rho(g')} }$
any two group element actions $\tilde \rho(g)$ and $\tilde \rho(g')$ are related by a unique natural transformation.
This is of course just another aspect of the statement that the weak quotient $V // G$ is equivalent to the strict quotient $V / G$ (regarded as a discrete category).
But how can we describe the existence of these unique 2-morphisms abstractly?
I believe that one way to do it is this:
let me write
$\Sigma \mathrm{Aut}(V)$
for the category which contains the single object $V$ and all its automorphisms. This way our representation is a morphism
$\rho : \Sigma G \to \Sigma \mathrm{Aut}(V)$
I am thinking that the action groupoid $V//G$ is the strict pushout (in $2\mathrm{Cat}$) of
$\array{ \Sigma G &\hookrightarrow& \Sigma(G // G) \\ \downarrow^\rho \\ \Sigma \mathrm{Aut}(V) }$
in that
$\array{ \Sigma G &\hookrightarrow& \Sigma ( G // G ) \\ \downarrow^\rho && \downarrow \\ \Sigma \mathrm{Aut}(V) &\stackrel{\tilde {(\cdot)}}{\to}& \Sigma \mathrm{Aut}(V // G) }$
is the universal strict completion of this cone.
Posted by: Urs Schreiber on October 8, 2007 10:07 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Completely irrelevant, but this just drives me crazy. $\begin{matrix} &\overset{\tilde{\rho}(g)}{↷}&\\ V//G&\Downarrow\simeq&V//G\\ &\underset{\tilde{\rho}(g')}{\curvearrowbotright}& \end{matrix}$
Well, OK, that still looks a little goofy (the clockwise top arrow is rather ugly, at least on my system — when, oh when will the Stix fonts arrive?). But it’s better than what you had.
Carry on …
Posted by: Jacques Distler on October 8, 2007 3:45 PM | Permalink | PGP Sig | Reply to this
TeXnical Issues
I have moved the ensuing discussion of mathematical graphics and fonts to the thread on TeXnical Issues. I think it’s best if we discuss such matters there, so people who come along later can easily
find what was said.
Posted by: John Baez on October 8, 2007 8:45 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
There is now a discussion of this point (the action groupoid as the $\mathrm{INN}(G)$-rep induced from a $G$-rep) on slides 161 and following here.
Posted by: Urs Schreiber on October 9, 2007 2:05 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Urs, at least on my computer your slides do not appear to be numbered.
Just wondering out loud about action groupoids and 2-morphisms: What would it mean (if anything) if we were to assume that all 2-morphisms are isomorphisms?
Posted by: Charlie Stromeyer Jr on October 9, 2007 9:10 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Urs, at least on my computer your slides do not appear to be numbered.
The slides don’t carry explicit numbers themselves, but the pdf reader with which you view them should be able to tell you which page of the pdf file you are viewing. The pdf readers that I use do
that automatically.
And, by the way, the idea is that you use the hypertext navigation tools provided by your pdf reader to navigate these slides. I have provided various hyperlinks. You should follow them as desired
and then use the pdf reader’s BACK button to jump back.
Just wondering out loud about action groupoids and 2-morphisms: What would it mean (if anything) if we were to assume that all 2-morphisms are isomorphisms?
Which 2-morphisms do you have in mind here?
Since we are talking about groupoids, most everything in sight tends to be invertible. All natural transformations between functors between groupoids are, for instance. Those were the only
2-morphisms that appeared in my above comment.
Or maybe are you wondering how we generalize action groupoids to action 2-groupoids, when we have a 2-group acting on something?
That’s an interesting question, I think. This is in part what I tried to address in my comment: how do we define the action groupoid abstractly (instead of in components as often done), such that we
would know, for instance, how it categorifies.
In as far as the characterization in terms of that pushout along
$\array{ \Sigma G &\to& \Sigma \mathrm{INN}(G) \\ \downarrow^\rho \\ \Sigma \mathrm{Aut}(V) }$
makes sense, this would indicate the right categorification.
This is at least apparently what appears when we consider non-fake-flat associated $n$-transport.
(If you cannot see any slide numbers, open the document, go to the table of contents on the second slide, follow the link “Parallel $n$-transport” and then the link “Associated $n$-vector
Posted by: Urs Schreiber on October 10, 2007 9:50 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Thank you, Urs, for answering my two inquiries. Even though I was using the Safari web browser, I could read all of your Math ML perfectly (weird, huh?), but then I had to switch to Windows to get
the PDF reader to show numbers at the bottom with your slides.
I had meant my question in a general categorified sense, i.e., what happens if a 2-group is acting on something. Although I know something, e.g., about groupoids and n-gerbes, the concept of
n-groupoids is new to me so I will read your slides and your previous posts on this subject.
Posted by: Charlie Stromeyer Jr. on October 10, 2007 4:49 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
what happens if a 2-group is acting on something
Okay, good, that’s a very good question. I tried to reply to it above, but maybe it’s worth amplifying this a little more:
Given a 2-group $G_{(2)}$ and a representation of it on some object $V$ in some 2-category $C$, i.e. a 2-functor
$\rho : \Sigma G_{(2)} \to \Sigma \mathrm{Aut}_C(V)$
from the one-object 2-groupoid given by $G_{(2)}$ to the 2-groupoid whose single object is $V$ and whose groupoid of morphisms is that of automorphisms of $V$ in $C$.
Then: what is the corresponding action 2-groupoid?
The answer I proposed was based on the following observation:
if $\rho$ is used to form the 2-bundle associated to a principal $G_{(2)}$ 2-bundle, then we are led to find that the $3$-curvature of any 2-connection on that 2-bundle takes values in an
induced 3-representation of $\mathrm{INN}_0(G_{(2)})$ – and that this 3-representation lives on the action 2-groupoid of $\rho$.
I am not entirely sure yet how much weakening to allow here (for $n=1$ it seemed we wanted everything to be strict), but it seems that we want to define the action 2-groupoid $V // G_{(2)}$ by the pushout
$\array{ \Sigma G_{(2)} &\hookrightarrow& \Sigma \mathrm{INN}_0(G_{(2)}) \\ \downarrow^\rho && \downarrow \\ \Sigma \mathrm{Aut}(V) &\to& \Sigma \mathrm{Aut}(V // G_{(2)}) } \,.$
If this holds water, I might at some point be so obnoxious as to start referring to it as groupoidification from $n$-transport.
To me it seems that this should clarify a couple of important issues, like the relation between $n$-curvature and quantization.
In any case, it makes me await all further details on the groupoidification program with great suspense.
Posted by: Urs Schreiber on October 10, 2007 6:11 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
«This is of course just another aspect of the statement that the weak quotient $V//G$ is equivalent to the strict quotient $V/G$ (regarded as a discrete category).»
Why are they equivalent? In general, there can be more than one morphism between two objects of $V//G$ (for example, if $G$ acts in the evident way on the one-element set $1$, then $1//G$ will have one object, and the group of endomorphisms of this object will be $G$). So $1//G$ cannot be equivalent to a discrete category. Do you assume that the action is free?
Posted by: Mathieu Dupont on October 10, 2007 11:26 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Why are they equivalent?
Right, they are not in general. My mistake. Rather, the isomorphism classes of $V//G$ yield $V / G$, so $\pi_0(V//G) = V/G \,.$
Thanks for catching that.
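For finite examples, both statements — that $\pi_0(V//G)$ is the orbit set $V/G$, and Mathieu’s point that $V//G$ need not be equivalent to a discrete category when the action is not free — can be checked by brute force. A minimal Python sketch (the particular group, set and action are illustrative choices of mine):

```python
# Sketch of the action groupoid V//G for a finite group G acting on a
# finite set V: objects are elements of V, and there is one morphism
# (g, v) : v -> g.v for each pair. Its set of connected components
# pi_0(V//G) is the orbit set V/G.

from itertools import product

def morphisms(V, G, act):
    """All morphisms of V//G as (source, target) pairs indexed by (g, v)."""
    return {(g, v): (v, act(g, v)) for g, v in product(G, V)}

def pi_0(V, G, act):
    """Connected components of V//G: the set of orbits."""
    return {frozenset(act(g, v) for g in G) for v in V}

# Z/3 = {0, 1, 2} acting on {0, ..., 5}: rotate {0, 1, 2}, fix 3, 4, 5.
G = [0, 1, 2]
V = list(range(6))
act = lambda g, v: (v + g) % 3 if v < 3 else v

print(len(morphisms(V, G, act)))              # 18 morphisms: |G| * |V|
print(sorted(map(sorted, pi_0(V, G, act))))   # [[0, 1, 2], [3], [4], [5]]

# At a fixed point the groupoid has nontrivial endomorphisms, so V//G is
# not equivalent to the discrete category on V/G unless the action is free:
print(len([g for g in G if act(g, 3) == 3]))  # 3: the whole stabilizer of 3
```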
Posted by: Urs Schreiber on October 10, 2007 1:07 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Here’s a comment by Apoorva Khare on the homework exercise in Lecture 1:
Dear Prof. Baez,
Hi, while doing the homework, in order to get the ($q$ or usual) multinomial coefficient — as the answer for the number of $D$-flags on $F_q^n$ (for $q$ a prime power or $q= 1$) — I realised:
When you write a $D$-flag on a set — in the form of an UNCOMBED Young diagram, do you want to specify if the integers/subsets in each row are written in INCREASING (or DECREASING) order? Because
the subsets
$X_0 \subseteq X_1 \subseteq \dots$
are the only data given in a $D$-flag, i.e. $X_{i+1} - X_i$ is NOT given in a specific order, but just as a set.
The reason this was left out was because you only drew $D$-flags on sets with $n = 1+1+ \cdots +1$, so this case never arose.
Here’s my reply:
You just answered your question, but I’ll do it more slowly.
A D-flag on an $n$-element set $X$ is a bunch of nested subsets
$\emptyset = X_0 \subseteq X_1 \subseteq \dots \subseteq X_k = X$
where the cardinality of $X_{i+1} - X_i$ is the number of boxes in the $(i+1)$st row of the uncombed Young diagram $D$.
So, for example, if $D$ looks like this:
there are 3 $D$-flags on the set $\{1,2,3\}$, namely:
$X_1 = \{1,2\} \qquad X_2 = \{1,2,3\}$
$X_1 = \{2,3\} \qquad X_2 = \{1,2,3\}$
$X_1 = \{1,3\} \qquad X_2 = \{1,2,3\}$
You’re suggesting that we cleverly keep track of these by putting numbers in the boxes of our Young diagram. We can do that:
Now to the point: the order of the numbers within each row doesn’t matter, since it’s just a notation for a set $X_{i+1} - X_i$. So, we can without loss of generality write them in increasing
order, as I’ve done.
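This count can also be checked by brute force. A short Python sketch (my own illustration: the rows of $D$, read top to bottom, give the sizes of the successive differences $X_{i+1} - X_i$, and the count is the multinomial coefficient $n! / (r_1! \cdots r_k!)$):

```python
# Brute-force count of D-flags on an n-element set: chains
# {} = X_0 <= X_1 <= ... <= X_k = X whose successive differences have the
# sizes given by the row lengths of the uncombed Young diagram D.

from itertools import combinations
from math import factorial

def count_D_flags(n, rows):
    """Count chains of subsets of {0,...,n-1} whose successive differences
    have the sizes in `rows` (read top to bottom)."""
    assert sum(rows) == n

    def extend(remaining, rows):
        if not rows:
            return 1
        return sum(extend(remaining - set(c), rows[1:])
                   for c in combinations(sorted(remaining), rows[0]))

    return extend(set(range(n)), list(rows))

# The diagram with rows of lengths (2, 1) on a 3-element set: 3 flags,
# matching 3!/(2! 1!) = 3.
print(count_D_flags(3, (2, 1)))                        # 3
print(count_D_flags(4, (2, 1, 1)))                     # 4!/(2! 1! 1!) = 12
print(count_D_flags(4, (1, 1, 1, 1)) == factorial(4))  # True: complete flags
```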
Posted by: John Baez on October 8, 2007 7:01 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Here’s a comment by Jagannatha Prasad Senesi on Lecture 1:
One thing you mentioned on Thursday when you wrote down the vector space $\mathbb{C}^X$ is that we should not confuse this with the set of functions from $X$ to $\mathbb{C}$. But that’s exactly
what it is, isn’t it?
In fact if we think of the freely generated vector space $\mathbb{C}^X$ as ‘Vector space valued functions on $X$’, where the vector space is just $\mathbb{C}$, then this begins to sound very
similar to the construction of an induced representation (from a subgroup H to a group G), one description of which goes something like…
‘Vector valued functions on $G$ which are $H$-equivariant’.
These induced representations are also freely generated.
Here’s my reply:
There are two different vector spaces, which one should not mix up:
1. The complex vector space of all complex functions on the set $X$.
2. The complex vector space having the set $X$ as basis.
These are canonically isomorphic when $X$ is finite — that’s why it’s tempting to mix them up. But, they’re NOT ISOMORPHIC AT ALL when $X$ is infinite. And, even when $X$ is finite, they’re not
naturally isomorphic.
(You can ask Jim about the difference between ‘naturally’ and ‘canonically’.)
The notation $A^B$ usually means ‘all functions from $B$ to $A$’. I warned the class that I’m using $\mathbb{C}^X$ to mean ‘the complex vector space with $X$ as basis’, not ‘the set of all
functions from $X$ to $\mathbb{C}$’. But I added that since $X$ for us will often be finite, we can often ignore the difference between these, as long as we keep our wits about us.
Similarly, your description of induced representations is fine when $H$ and $G$ are finite, but potentially problematic otherwise.
I’ll cc this to some other people in the class, since I bet you’re not the only one who was puzzled by my remark.
Posted by: John Baez on October 8, 2007 7:14 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
If $A$ is an abelian group, there is the notation $A^{(B)}$ for the set of functions $f:B\to A$ such that $f$ is nonzero at only finitely many points of $B$. In other words, it’s the direct sum of $B$ copies of $A$, rather than the direct product. It’s somewhat common in number theory. I don’t use it too often myself, but sometimes it is really convenient. Another option would just be $\oplus_B A$.
Of course, there’s a functorial map $A^{(B)}\to A^B$, which is an isomorphism if $B$ is finite. So in what sense is this not a ‘natural’ isomorphism?
Posted by: James on October 8, 2007 11:36 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The map you describe, $A^{(B)} \to A^B$, is not functorial. The functor $B \to A^B$ is contravariant in $B$ since it works by composition, whereas the functor $B \to A^{(B)}$ is covariant since it is given by
(1)$\sum_{i=1}^n a_i b_i \to \sum_{i=1}^n a_i f(b_i)$
with $a_i \in A$ and $b_i \in B$.
The relationship between $A^B$ and $A^{(B)}$ is that one is the dual of the other. This holds for $\mathbb{C}^X$ and $\mathbb{C}[X]$ as well with their standard linear topologies. The duality pairing
is the obvious one:
(2)$f(\sum_{b \in B} a_b b) = \sum_{b \in B} a_b f(b)$
where all but finitely many of the $a_b$s are zero.
For finite sets then there is an isomorphism $A^{(B)} \to A^B$ since we can make $B \to A^{(B)}$ into a contravariant functor via
(3)$\sum_{b \in B} a_b b \to \sum_{c \in C} (\sum_{b \in f^{-1}(c)} a_b) c$
with this construction, the isomorphism $A^{(B)} \to A^B$ given by sending $b$ to the delta function at $b$ is natural in the categorical sense.
So in fact there is a difference between $\mathbb{C}[n]$ and $\mathbb{C}^n$ but it’s not usually anything worth worrying about.
By the way, the notations $k^{(X)}$ and $k^X$ seem fairly standard usage for the coproduct and product in the category of topological vector spaces.
Posted by: Andrew Stacey on October 9, 2007 2:44 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Andrew explained it all very nicely, but let me repeat what he said in more lowbrow terms.
There’s a covariant functor
$\mathbb{C}[-] : Set \to Vect$
sending each set $X$ to $\mathbb{C}[X]$, which is the complex vector space with $X$ as basis.
There’s a contravariant functor
$\mathbb{C}^{-} : Set \to Vect$
sending each set $X$ to $\mathbb{C}^X$, which is the complex vector space of functions from $X$ to $\mathbb{C}$.
Since one of these functors is covariant while the other is contravariant, it doesn’t make sense to ask if there’s a natural transformation from one to the other.
Moreover, if $X$ is infinite, $\mathbb{C}[X]$ and $\mathbb{C}^X$ are not isomorphic.
We can cure the latter problem by restricting attention to finite sets. We get a covariant functor
$\mathbb{C}[-] : FinSet \to Vect$
and a contravariant functor
$\mathbb{C}^{-} : FinSet \to Vect$
These assign isomorphic vector spaces to any finite set $X$. But alas, it still doesn’t make sense to ask if they’re naturally isomorphic.
To cure this problem, we can restrict to the groupoid of finite sets, with bijections as morphisms. Any contravariant functor from a groupoid to a category can be turned into a covariant one, by
cleverly sticking an ‘inverse’ in the right place.
Using this trick, we get two covariant functors from $FinSet$ to $Vect$. The first assigns to each finite set the vector space having that set as basis, and does the obvious thing on morphisms. The
second assigns to each finite set the vector space of functions on that set, and does the obvious thing on morphisms… but with an inverse cleverly stuck into the formula!
These functors are then naturally isomorphic.
This is the sense in which we don’t need to worry about the difference between $\mathbb{C}[X]$ and $\mathbb{C}^X$ when $X$ is a finite set.
Of course, the fact that it took me this many paragraphs to explain how we don’t need to worry about the difference, means that in fact we really do need to worry about the difference — until all
this stuff becomes second nature.
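In the finite case the “inverse cleverly stuck in” can be made completely concrete. In the following toy Python encoding (my own, not from the lecture), a vector in $\mathbb{C}[X]$ and a function in $\mathbb{C}^X$ are both stored as dicts keyed by $X$; the natural isomorphism identifies the basis vector $x$ with the delta function at $x$, and under this identification the two covariant functors act identically on a bijection:

```python
# Toy check: on the groupoid of finite sets (bijections only), the functor
# X |-> C[X] acts by relabeling basis vectors, while X |-> C^X acts by
# precomposition with the *inverse* bijection. Identifying the basis vector
# x with the delta function at x (both encoded as dict keys), the two
# agree -- this is the naturality of the isomorphism.

def free_map(sigma, v):
    """C[sigma] : C[X] -> C[Y], sending the basis vector x to sigma(x)."""
    return {sigma[x]: c for x, c in v.items()}

def fun_map(sigma, h):
    """C^X -> C^Y by (y |-> h(sigma^{-1}(y))) -- note the inverse
    cleverly stuck in to make the functor covariant."""
    sigma_inv = {y: x for x, y in sigma.items()}
    return {y: h[x] for y, x in sigma_inv.items()}

sigma = {'a': 1, 'b': 2, 'c': 0}   # a bijection {a,b,c} -> {0,1,2}
v = {'a': 2, 'b': -1, 'c': 4}      # same dict, read either way

print(free_map(sigma, v) == fun_map(sigma, v))   # True
```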
Posted by: John Baez on October 9, 2007 7:33 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
But Andrew wasn’t restricting to the underlying groupoid of finite sets. He was instead defining a covariant functor on finite sets, $X \mapsto \mathbb{C}^X$, where the effect on a function $g: X \to
Y$ is to send it to the linear map
$(f: X \to \mathbb{C}) \mapsto (g_{*}(f): y \mapsto \sum_{x: g(x) = y} f(x)),$
which is an idea reminiscent of Kan extension. In fact, it really is the taking of an adjoint (in the usual linear algebra sense): if we denote the pulling back or restriction along $g$ by $g^{*}: \mathbb{C}^Y \to \mathbb{C}^X$, then we have
$\langle g_{*}(f), h \rangle = \langle f, g^{*}(h) \rangle$
where the inner product is defined by taking the basis $X$ to be self-dual (which was also implicit in Andrew’s comment, when he referred to the delta function).
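This adjointness is easy to verify numerically. A minimal Python sketch (the particular map $g$ and the dict encoding are my own illustrative choices):

```python
# Sketch of the two maps on finite sets: the pullback g* (precompose with
# g) and the pushforward g_* (sum over fibers), with the adjointness
# <g_*(f), h> = <f, g*(h)> for the pairing that makes each basis self-dual.
# Functions X -> C are represented as dicts.

def pullback(g, h):
    """g* : C^Y -> C^X, (g* h)(x) = h(g(x))."""
    return {x: h[y] for x, y in g.items()}

def pushforward(g, f, Y):
    """g_* : C^X -> C^Y, (g_* f)(y) = sum of f over the fiber g^{-1}(y)."""
    out = {y: 0 for y in Y}
    for x, v in f.items():
        out[g[x]] += v
    return out

def pair(f1, f2):
    """The pairing making the basis self-dual (delta functions)."""
    return sum(f1[x] * f2[x] for x in f1)

# g : {a, b, c} -> {0, 1} with g(a) = g(b) = 0, g(c) = 1.
g = {'a': 0, 'b': 0, 'c': 1}
f = {'a': 2, 'b': 3, 'c': 5}   # an element of C^X
h = {0: 7, 1: 11}              # an element of C^Y

lhs = pair(pushforward(g, f, [0, 1]), h)   # <g_* f, h>
rhs = pair(f, pullback(g, h))              # <f, g* h>
print(lhs, rhs)   # 90 90
```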
Posted by: Todd Trimble on October 9, 2007 8:45 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
“Since one of these functors is covariant while the other is contravariant, it doesn’t make sense to ask if there’s a natural transformation from one to the other.”
Uh, good point. I guess my low-level thinking is not as categorically enlightened as I had hoped. You learn something new every day, and this one just started!
Posted by: James on October 9, 2007 11:00 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
“Since one of these functors is covariant while the other is contravariant, it doesn’t make sense to ask if there’s a natural transformation from one to the other.”
Actually, it does make sense if you twist your head round a bit.
Suppose we have functors $F, G: \mathbf{A}^{op} \times \mathbf{A} \to \mathbf{B}.$ One can talk about dinatural transformations from $F$ to $G$. Such a thing is a family $(\alpha_A: F(A, A) \to G(A, A))_{A \in \mathbf{A}}$ of maps in $\mathbf{B}$, such that for each map in $\mathbf{A}$, a certain hexagon commutes. See e.g. Categories for the Working Mathematician.
In particular, suppose we have functors $P: \mathbf{A} \to \mathbf{B}, \quad Q: \mathbf{A}^{op} \to \mathbf{B}.$ By composing $P$ and $Q$ with the two product-projections of $\mathbf{A}^{op} \times \mathbf{A}$, we obtain functors $F$ and $G$ of the form above. The phrase ‘natural transformation from $P$ to $Q$’ can then be interpreted as ‘dinatural transformation from $F$ to $G$’. Such a thing
is a family $(\alpha_A: P(A) \to Q(A))_{A \in \mathbf{A}}$ of maps, such that for each map $f: A \to A'$ in $\mathbf{A}$, a certain square commutes… Hmm, not ready to draw commutative diagrams yet,
but it’s a square in which one side is equal to the composite of the other three. In one-dimensional notation, it says $\alpha_A = Q(f) \circ \alpha_{A'} \circ P(f).$
Now that the question makes sense, what’s the answer? It’s still no! Though if my back-of-the-envelope calculations are correct (and this particular envelope was already heavily scribbled on), it’s
‘yes’ if you restrict to the category of sets and injections.
Posted by: Tom Leinster on October 10, 2007 12:34 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I’ve seen $\mathbb{C}\langle X \rangle$ used to denote the vector space with basis $X$, although similar pointy-bracket notation is used also for the field of quotients of a polynomial ring.
I personally like either $\mathbb{C} \cdot X$ or $X \cdot \mathbb{C}$ to denote the $X$-fold sum (coproduct) of copies of $\mathbb{C}$. This notation is current in other contexts, like enriched
category theory, as in the tensor $v \cdot a$ of an object of a $V$-enriched category by an object of $V$, where $(-) \cdot a$ is left adjoint to the representable $hom(a, -): A \to V$ (in the
enriched sense).
Posted by: Todd Trimble on October 9, 2007 1:17 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
How silly of me. $\mathbb{C}(X)$ is notation used for the field of quotients. But I think $\mathbb{C}\langle X \rangle$ is sometimes used for the algebra of non-commuting polynomials.
Posted by: Todd Trimble on October 9, 2007 1:23 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I often use $\mathbb{C}[X]$ for the complex vector space with $X$ as basis. Why? Because when $X = G$ is a group, this is called the ‘group algebra’ of $G$, and denoted $\mathbb{C}[G]$.
However, in this particular lecture, I wanted to use the notation $n$ for the $n$-element set. Nobody uses $\mathbb{C}[n]$ for the complex vector space with this set as basis — everyone uses $\mathbb{C}^n$. So, since we’re mainly talking about finite sets anyway, I decided to use $\mathbb{C}^X$ as my notation for the vector space with $X$ as basis.
Posted by: John Baez on October 9, 2007 3:01 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
“You can ask Jim about the difference between ‘naturally’ and ‘canonically’.)”
This is an interesting subject. Could you elaborate on this? Thanks
Posted by: Goncalo Marques on October 9, 2007 7:02 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Could you elaborate on this?
Briefly: two functors $F,G : C \to D$ are ‘naturally isomorphic’ if they do isomorphic things to any object, and this isomorphism can be chosen in a way that’s compatible with all morphisms in $C$. They’re ‘canonically isomorphic’ if they do isomorphic things to any object, and this isomorphism can be chosen in a way that’s compatible with all isomorphisms in $C$. The second condition is weaker.
But let me be a bit more precise.
I’ll assume you know the usual concept of naturally isomorphic functors.
Jim says two functors
$F, G : C \to D$
are canonically isomorphic if they become naturally isomorphic when restricted to the groupoid whose objects are those of $C$, but whose morphisms are just the isomorphisms of $C$.
There are lots of situations where this comes up. In particular, even if $F$ is covariant and $G$ is contravariant, we can talk about $F$ and $G$ being canonically isomorphic, since we can always
turn a contravariant functor from a groupoid into a covariant one.
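In formulas (my gloss, nothing deep): given a contravariant $Q$ and only isomorphisms to worry about, set

```latex
\tilde{Q}(f) = Q(f^{-1}).
```

Then $\tilde{Q}(g \circ f) = Q(f^{-1} \circ g^{-1}) = Q(g^{-1}) \circ Q(f^{-1}) = \tilde{Q}(g) \circ \tilde{Q}(f)$, using the contravariance of $Q$, so $\tilde{Q}$ is a covariant functor on the groupoid.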
I used this trick in a previous comment to note that $\mathbb{C}^X$ and $\mathbb{C}[X]$ are canonically isomorphic when $X$ is a finite set — even though they’re not naturally isomorphic.
(But, I didn’t come out and use the phrase ‘canonically isomorphic’.)
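To make this concrete, here’s a toy check of mine (not from the lecture): model a vector over a finite set $X$ as a dictionary of coefficients. The ‘identity’ isomorphism between the free vector space on $X$ and the functions on $X$ then commutes with transport along bijections, but the two constructions transport differently along non-injective maps.

```python
# A sketch (mine, not John's) of why the free vector space C[X] (covariant)
# and the function space C^X (contravariant) are canonically but not
# naturally isomorphic. Vectors are dicts mapping elements to coefficients.

def free_map(f, v):
    """C[f]: C[X] -> C[Y], push basis vectors forward along f: X -> Y."""
    out = {}
    for x, c in v.items():
        out[f[x]] = out.get(f[x], 0) + c
    return out

def fun_map(f, w, domain):
    """C^f: C^Y -> C^X, pull functions back along f: X -> Y."""
    return {x: w[f[x]] for x in domain}

# The 'canonical' isomorphism alpha_X: C[X] -> C^X is the identity on dicts.
# Compatibility with a bijection f means: pulling back along f's inverse
# agrees with pushing forward along f.
X = ['a', 'b']
Y = ['p', 'q']
f = {'a': 'q', 'b': 'p'}                 # a bijection X -> Y
f_inv = {y: x for x, y in f.items()}

v = {'a': 2, 'b': 5}                     # a vector in C[X]
lhs = fun_map(f_inv, v, Y)               # transport v as a function
rhs = free_map(f, v)                     # transport v as a formal sum
assert lhs == rhs                        # the square commutes for bijections

# For a non-injective map g the two transports don't even go the same way:
# free_map(g, -) merges coefficients, while pullback would duplicate values.
g = {'a': 'p', 'b': 'p'}
assert free_map(g, v) == {'p': 7}
```

The function names (`free_map`, `fun_map`) are mine; the point is just that the compatibility square closes for every bijection and cannot even be formed for a general map.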
Posted by: John Baez on October 9, 2007 7:43 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Jim says two functors
$F,G : C \to D$
are canonically isomorphic if they become naturally isomorphic when restricted to the [core of $C$]
Cool. Didn’t know that. The claim is that we can give “canonical” a technically precise sense?
Surely you have a list of (further) examples with which to convince oneself that this is indeed the right way to formalize “canonical isomorphism”?
I mean, suppose I doubted it: convince me!
Posted by: Urs Schreiber on October 9, 2007 7:53 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Sorry, I don’t feel in the mood to go through the various senses in which people use the word ‘canonically’ and see how many fit this definition. A bunch do; a bunch don’t. The main thing is to have
some term for the concept of ‘naturally — with respect to isomorphisms’.
Posted by: John Baez on October 9, 2007 8:09 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The main thing is to have some term for the concept of ‘naturally — with respect to isomorphisms’.
Okay. But I must say I am lacking a feeling for why we would want to call that particular notion “canonical isomorphism”.
What bothers me a bit is that this would imply that two functors can be canonically isomorphic without being isomorphic.
That seems to go against the grain of the usual use of the word “canonical”. No? Usually we’d want to see two things that are isomorphic, and then say: “Ah, but they are not just isomorphic, but even canonically isomorphic: there is a god-given choice of isomorphism.”
On the other hand, for the definition you mentioned, any two functors which are isomorphic are also canonically isomorphic.
Maybe the notion “isomorphic after being pulled back to the core of the domain” could be called
essentially isomorphic
instead, or something like that?
Posted by: Urs Schreiber on October 10, 2007 9:57 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Here’s a comment by Chris Rogers on Lecture 1:
Hi Dr. Baez,
I have a quick question: In the quantum case we are talking about Vect, which intuitively seems to have a lot of structure built into it that we care about i.e. structure that is important to
quantum mechanics. Aren’t we “cheating” classical mechanics by just treating it as Set, when classical mechanics really lives in the category of symplectic manifolds, which has a lot more
relevant structure going on than Set? And if we are talking about group theory in this context, aren’t we eventually going to want to say something about the relationship between canonical
transformations (not just functions between sets) and representations of unitary operators? I guess I’m a little confused.
Here’s my reply:
Aren’t we “cheating” classical mechanics by just treating it as Set, when classical mechanics really lives in the category of symplectic manifolds, which has a lot more relevant structure
going on than Set?
Right, definitely. When I wrote “classical” on the board, what I really meant is not so much “classical mechanics” as “classical logic” — i.e., the way we treat Set as the foundations of
mathematics. I actually hinted at this, but I didn’t want to make a big deal about it. There’s really too much to say about this…
The category of symplectic manifolds, or even better (maybe) Poisson manifolds, is actually much more like the category of vector spaces than people tend to realize. They’re both non-cartesian, while the category Set is cartesian. For details on what ‘cartesian’ means, and for more on cartesian versus noncartesian categories, try my paper:
Quantum quandaries: a category-theoretic perspective.
In any event, what matters most in this seminar is how group actions on sets are related to group representations on vector spaces, and the extent to which we can find a ‘purely combinatorial’
description of portions of quantum mechanics. We’re not really going to talk much about classical mechanics.
Posted by: John Baez on October 8, 2007 8:09 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
chris rogers asked: Aren’t we cheating classical mechanics by just treating it as Set, when classical mechanics really lives in the category of symplectic manifolds, which has a lot more relevant
structure going on than Set?
john baez mostly agreed. i’ll give a contrary viewpoint here even if the situation isn’t completely clear. we’re _not_ really just treating classical mechanics as living in the category of sets; it’s
more like we’re treating it as living in the category of free modules over the rig r of truth-values, or perhaps over some extension rig of r such as the rig of “costs”. this is very parallel to the
way we’re treating quantum mechanics as living in the category of free modules over the rig r’ of rational numbers, or perhaps over some extension rig of r’ such as the rig of complex numbers.
a homomorphism between free modules in this context is a _relation_ rather than a
function. an equivariant such relation is a union of “double cosets”, or rather of the orbits-in-cartesian-products to which they correspond. this is how double cosets got to be so important as to
deserve a much more suggestive name than “double cosets”.
the category of sets and relations (or its close cousin the category of sets and cost-matrixes, arguably a better formalization of classical mechanics than the original dehydrated elephant symplectic
geometry) is very similar to the category of vector spaces and linear operators in many ways but is the archetype of “classical” in the same way that the category of vector spaces and linear
operators over the rig of rational (or real or complex) numbers is the archetype of “quantum”. there’s more parallelism between “classical” and “quantum” than a lot of people realize.
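A toy computation (mine, not part of the comment) of what “matrices over the rig of truth-values” means in practice: composing relations is matrix multiplication with (or, and) replacing (+, ×).

```python
# Relations between finite sets as boolean matrices over the rig of
# truth-values: composition is matrix multiplication with "or" as addition
# and "and" as multiplication.

def rel_compose(R, S):
    """R: m x n, S: n x p boolean matrices; returns the m x p composite."""
    m, n, p = len(R), len(S), len(S[0])
    return [[any(R[i][k] and S[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# Example (mine): "is parent of" composed with itself is "is grandparent of".
P = [[False, True,  False],
     [False, False, True ],
     [False, False, False]]
G = rel_compose(P, P)
assert G == [[False, False, True ],
             [False, False, False],
             [False, False, False]]
```

Swapping in the rig of “costs” (min as addition, + as multiplication) gives shortest-path composition instead, which is one way to see the parallelism being described.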
it may come up in the seminar though how the meaning of “quantum” is somewhat up for grabs, as is the meaning of “geometric” (in “geometric representation theory”). john seemed to suggest in the
first seminar lecture that “geometric” here means something like “classical” or at least “classically inspired even if still quantum”, which i thought was an interesting idea.
Posted by: james dolan on October 9, 2007 10:22 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
the category of sets and relations…is the archetype of “classical”
I’ll take that as support for my position.
Posted by: David Corfield on October 9, 2007 10:47 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Here’s an article on Vermeer’s camera.
Posted by: Mike Stay on October 8, 2007 9:23 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
For those poor folks who can’t watch the video of my lecture yet and are wondering what the heck Mike is talking about:
I began my discussion of projective geometry with a little spiel about its roots in Renaissance painting… and the class got to talking about Vermeer’s use of the camera obscura to help with
perspective. Thinking about these things is a good way to get an intuition for the projective plane.
Projective geometry will be very important in this seminar! For more, try my page on octonionic projective geometry (just ignore the stuff about octonions), and also week106 and week145.
Posted by: John Baez on October 8, 2007 9:39 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
duu ~ nope ~ sorry ~ can’t get lectures here ~ vid or otherwise :( just the pdf’s would be nice ~ for starters at least ~ vids for download would also be best ~ in the interim ~ just drop them on
utube ~ that works for some :)
Posted by: k on October 9, 2007 1:36 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
You mentioned Renaissance projective technique. Are you aware of the Hockney dispute that many of the masters used cameras? That camp claims that cameras were long a secret within the guild and
patchwork perspective errors show the use of narrow view lenses.
Posted by: RodMcGuire on October 9, 2007 7:17 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Yes, I’m aware of the Hockney debate — thanks for the link!
What most people don’t know is that the masters used, not just projective geometry, but also groupoidification. Cheaters!
Posted by: John Baez on October 9, 2007 8:04 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The lecture streams beautifully from Sheffield! And that’s just on bog-standard home broadband. I like the software - click and play, simple and effective. Also, it seems one can instantaneously jump
to various times in the video.
Whose satchel was that obscuring the right hand board?
I think we are all sorely missing Derek Wise’s notes!
Posted by: Bruce Bartlett on October 11, 2007 10:34 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Bruce wrote:
The lecture streams beautifully from Sheffield! And that’s just on bog-standard home broadband.
Great! It’s nice to hear someone outside the US can watch this thing. Please try to find out why Eugenia had trouble with the streaming video, also presumably from Sheffield.
Maybe she has ‘sub-bog-standard’ broadband? Also known as ‘skinnyband’?
I like the software - click and play, simple and effective.
Yeah, it works fine for me. Unfortunately, in response to some of the complaints on this blog entry, the UCR multimedia folks have switched to a different streaming format that doesn’t work at all
for me, together with a downloadable format that’s taking forever for me to download. We’ll see how it goes…
Also, it seems one can instantaneously jump to various times in the video.
Yeah, that’s a feature I really like.
I think we are all sorely missing Derek Wise’s notes!
Yeah, me too! I keep trying to get U.C. Davis to fire him, so far to no avail. Luckily, you can now download Alex Hoffnung’s notes of Lecture 1 — and pretty soon, some other people’s lecture notes.
We’ll see what you like best.
More stuff coming soon.
Posted by: John Baez on October 12, 2007 5:14 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
John wrote:
Unfortunately, in response to some of the complaints on this blog entry, the UCR multimedia folks have switched to a different streaming format that doesn’t work at all for me, together with a
downloadable format that’s taking forever for me to download. We’ll see how it goes…
Ouch, that’s no good that it’s messed things up for you! For what it’s worth, I’m getting the streaming about twice as fast now than before (it’s gone from a factor of 4 to a factor of 2 slower than
proper time), but I’d still prefer to download if possible. Is there a public link for downloads, or is that still experimental?
Posted by: Greg Egan on October 12, 2007 10:03 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
In fact I was in London when I was having the streaming problems, using bog-standard broadband but wireless. It works fine from my office in Sheffield down a wire.
Posted by: Eugenia Cheng on October 12, 2007 12:10 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
A video of the first lecture is now available for download here. It’s big — 556 megabytes. If you click on this link you should see a blue letter Q on your screen (assuming you have QuickTime
loaded). Your webbrowser should say it’s transferring data from mediaserve.ucr.edu… and then you can expect to wait a long time for the file to download.
Go on a hike, have dinner, maybe dessert too, and when you’re done you’ll have your very own movie of me lecturing about geometric representation theory.
In general, the downloadable videos should appear here. My second lecture is already available — more about that soon.
I’m glad Eugenia can now watch the video in streaming form. I’ll be interested to hear Greg’s report if he tries the downloadable version. I hope his computer has enough disk space.
We’re just beginning to work the bugs out of this business. It might have been easier to learn how to take a video, chop it into 10-minute fragments, and upload it to YouTube. But ultimately, my
university shouldn’t need to rely on YouTube to run online classes.
Posted by: John Baez on October 12, 2007 8:12 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
The download of the first lecture took 6 hours (Australian bog-standard broadband is 256 kilobaud, i.e. about 25 kilobytes/sec) but it went without a hitch, and the result plays flawlessly.
Thanks very much for adding this option! For those of us with slow connections, we can always download the movie overnight, and it’s much more watchable than a stream when you can’t keep up with the
data rate. And while the files are big they’re not unmanageable; you can even fit a whole lecture on a CD, which might be handy.
Posted by: Greg Egan on October 13, 2007 7:53 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Thanks for the report!
I’m glad it works. I’m sorry it takes so long to download the darn thing. The picture resolution is higher than the typical video you’d find on YouTube — but you’ll be grateful for that if you watch
lecture 4, where I accidentally started writing too small.
Posted by: John Baez on October 13, 2007 8:18 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
I’m sure you have done it with malice aforethought but why don’t you mention ‘partitions’ of n instead of uncombed…
Posted by: jim stasheff on October 13, 2007 2:58 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
‘Partitions’ means two things already, neither of which is an uncombed Young diagram:
1. A partition of a finite set is a way of writing it as a disjoint union of nonempty subsets.
2. A partition of a natural number is a way of writing it as a sum of nonzero natural numbers, where we don’t care about the order.
An ‘uncombed Young diagram’ is a way of writing a natural number as a sum of nonzero natural numbers where we do care about the order. So, the uncombed Young diagram for 2+1+1 is different from the one for 1+2+1. If we decide we don’t care about the order, we can ‘comb’ our uncombed Young diagram, making sure the rows get shorter as we march down. This gives an ordinary Young diagram — which is just a way of representing a partition in sense 2 above.
So, there are three confusingly similar, deeply related but crucially distinct concepts floating around, two of which are already called ‘partitions’. We decided to call the third one something else.
Posted by: John Baez on October 13, 2007 7:59 AM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
To make it clear I’m not being pedantic just for the fun of it, let me illustrate how these concepts are really different.
There are 5 partitions of a 3-element set:
{ {1}, {2}, {3} }
{ {1, 2}, {3} }
{ {1, 3}, {2} }
{ {1}, {2, 3} }
{ {1, 2, 3} }
There are 3 partitions of the number 3: namely 3, 2+1, and 1+1+1.
And, there are 4 uncombed 3-box Young diagrams: namely 3, 2+1, 1+2, and 1+1+1.
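A quick brute-force sanity check of the three counts above (my code, just enumerating everything for $n = 3$):

```python
# Verify: 5 partitions of a 3-element set, 3 partitions of the number 3,
# and 4 uncombed 3-box Young diagrams (i.e. compositions of 3).
from itertools import product

n = 3

# Compositions ("uncombed Young diagrams"): ordered sums of positive integers.
def compositions(n):
    if n == 0:
        return [[]]
    return [[k] + rest for k in range(1, n + 1) for rest in compositions(n - k)]

comps = compositions(n)
assert len(comps) == 4          # 2^(n-1) in general

# Integer partitions: compositions whose parts weakly decrease.
parts = [c for c in comps if all(a >= b for a, b in zip(c, c[1:]))]
assert len(parts) == 3

# Set partitions of an n-element set: assign each element a block label,
# then normalize the labels so renaming blocks doesn't change the count.
def set_partition_count(n):
    seen = set()
    for labels in product(range(n), repeat=n):
        canon, table = [], {}
        for l in labels:
            table.setdefault(l, len(table))
            canon.append(table[l])
        seen.add(tuple(canon))
    return len(seen)

assert set_partition_count(3) == 5   # the Bell number B_3
```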
Posted by: John Baez on October 13, 2007 8:07 AM | Permalink | Reply to this
verma modules and d-modules
i’ve been struggling recently to try to understand some of the ideas in kirwan’s “an introduction to intersection homology theory” (and/or the second edition co-authored by woolf). i’ll try to state
here some of the questions that are bugging me although of course the ones that are really bugging me i’m having trouble articulating even to myself.
i’m starting with the last chapter of the book (chapter 8 in the first edition, chapter 12 in the second edition), on “the kazhdan-lusztig conjecture”, because that’s one of the main things that i’m
interested in learning about, and because that’s the chapter that i’ve come closest to being able to understand in the past.
(i have the vague impression that the second edition was to some extent designed to try to convert the book into one that can be read in the forwards direction instead of backwards. since i’m still
trying to read it backwards myself, i’m not finding the second edition changes very relevant so far.)
the last chapter seems pretty decipherable up to and including the part (page 151 in the first edition, about page 204 in the second edition) where they describe the functor f that extracts a
representation of the semi-simple lie algebra g from a “d-module” over the corresponding flag manifold. then (page 152) they start appealing to heavy machinery from earlier in the book
(“riemann-hilbert correspondence” and “intersection sheaf complex”) to obtain particular d-modules whose images under f will be verma modules and/or irreducible modules with highest weights in the
affine weyl orbit of 0.
since i haven’t yet worked my way backwards to understanding that heavy machinery, though, i’m hoping that the particular d-modules in question can be described in a more direct and explicit lowbrow
way, without the heavy machinery. so that’s my question (for now): can you give such an explicit lowbrow (but pleasant) description of these d-modules??
(“lowbrow” and “pleasant” are defined here according to my personal taste. for example “parabolic induction” and “schubert calculus on flag manifolds” are in the lowbrow and pleasant direction while
“intersection sheaf complex” and “perverse sheaf” are in the opposite direction, for now.)
i’m starting here with the attitude that a d-module is (insofar as i understand it yet) essentially a sheaf of systems of differential equations, with the sections over a given open set standing in
for the “unknown functions” which are to be solved for. so, i’d like to understand what is this alleged system of differential equations that corresponds to (for example) the verma module with
highest weight 0.
since the verma module “v_w” with highest weight w is the module possessing a generic maximal weight vector of weight w, i can see how for example a module homomorphism from v_w to the module given
by smooth complex-valued functions on some manifold m on which the lie algebra acts as vector fields is essentially a function f satisfying the differential equation stating that “f is a maximal
weight vector of weight w”; is that all that’s going on here?? with the flag manifold playing the role of m, or something like that?? unfortunately i don’t seem to be able to get this interpretation
to mesh yet with other things that kirwan says.
i could try to say a lot more about my vague intuitions about what’s going on here but i might get bogged down if i did so for now i’ll just post this as is.
Posted by: james dolan on October 15, 2007 10:17 AM | Permalink | Reply to this
Re: verma modules and d-modules
James, just to avoid any potential confusion, you are talking about generalized Verma modules as opposed to ordinary Verma modules, right?
Hmm, I have never seen the book to which you refer but perhaps this paper or this paper might have some clues for what you are looking for.
Posted by: Charlie Stromeyer Jr on October 15, 2007 2:09 PM | Permalink | Reply to this
Re: verma modules and d-modules
James, just to avoid any potential confusion, you are talking about generalized Verma modules as opposed to ordinary Verma modules, right?
i’m pretty sure that i’m talking about just plain ordinary verma modules here.
Hmm, I have never seen the book to which you refer but perhaps this paper or this paper might have some clues for what you are looking for.
those might be a bit more sophisticated than what i’m looking for at the moment, but there might be something useful in the references.
Posted by: james dolan on October 15, 2007 2:48 PM | Permalink | Reply to this
Re: verma modules and d-modules
First, there are several good references for this story (aka Beilinson-Bernstein localization): my favorite is Gaitsgory’s recent notes http://www.math.harvard.edu/~gaitsgde/267y/catO.pdf - I have several other references listed on my web page at http://www.math.utexas.edu/users/benzvi/Langlands.html including a nice survey by Milicic. Of course there’s the original Beilinson-Bernstein papers and Bernstein’s D-modules notes as well, and the chapter on D-modules in Gelfand-Manin’s Encyclopedia book has a nice overview.
The relation between Lie algebra reps and D-modules is precisely analogous to the relation between modules over a commutative algebra and [quasicoherent] sheaves on varieties. If a Lie algebra
g acts on a space X then Ug (the enveloping algebra) maps to global differential operators on X, and thus there are natural adjoint functors back and forth from g-reps to D-modules on X: namely
global sections and tensor.
(Global sections of a D-module are a module over Ug, while for a Ug module we can tensor it with diffops D_X over Ug, ie induce). These functors are pretty easy to write down in concrete situations.
B-B theorem then says that for X=flag variety for g this is an equivalence of categories (if we restrict to Ug-modules with a fixed dominant regular infinitesimal character, eg the [finite!] Weyl
group orbit of 0 you’re considering).
On the flag variety in the Kazhdan-Lusztig context we are considering D-modules — aka [quasicoherent] sheaves with a flat connection — which on each Schubert variety are just flat vector bundles (hence trivial of some rank). e.g. for sl_2 we’re looking at P^1 and we consider D-modules which are constant on A^1 and at the point infinity.
The main (only?) thing one needs right now from the theory of D-modules is the functor j_* of pushforward (extension).
This is easiest perhaps to see for the inclusion of a closed subvariety: D-module is a sheaf with an action of D, differential operators. If you’re a D-module on a closed subvariety Z of X you
already know how to act by all functions on X, by restriction, but not by all vector fields on X — so you induce, you let the normal vector fields to Z act freely. So in the transverse direction to Z
you look like the symmetric algebra on the normal bundle – or more poetically like delta functions. Thus the extension can be thought of as distributions on X supported on Z and valued in your
original module.
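In the simplest case (a standard example, my rendering): for the inclusion $i: \{0\} \hookrightarrow \mathbb{A}^1$ and the trivial module $\mathbb{C}$ on the point,

```latex
i_{*}\,\mathbb{C} \;=\; D_{\mathbb{A}^1}\big/ D_{\mathbb{A}^1}\,z \;=\; \bigoplus_{k \ge 0} \mathbb{C}\,\partial_z^{k}\,\delta_0 ,
```

where $z$ acts via the relation $[z, \partial_z] = -1$: exactly “delta functions at $0$ and their derivatives”.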
One can show that the category of D-modules we’re considering has a single simple object for each stratum, and that’s called the IC
sheaf. for P^1 that would be the constant vector bundle on P^1 for the open cell and the delta-functions at the point infinity for the closed one. These correspond to two simple representations of
sl_2: the trivial one-dim rep, and the Verma module with highest weight -2 – note 0,-2 is the Weyl group orbit of 0, shifted to be centered at -1 (=-rho).
In general as you notice the pushforward of a D-module from a closed subvariety is an induced module, like the Verma — in fact the Verma module with highest weight lambda (antidominant) is just the delta-functions on the one-point orbit on the flag variety (considered as a module over appropriate twisted differential operators).
On the other hand we have another natural D-module which is all functions on the big cell (ie functions on A^1 for sl_2) — this is a contragredient Verma module (coinduced representation). Note that
constant functions sit inside of this – this is one of the characterizing features of the IC sheaf, it’s a sub of the *-extension for the corresponding orbit (and a quotient of the !-extension, which
in this case would be the Verma.)
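For concreteness, the sl_2 case can be made completely explicit (standard formulas, my addition): on the big cell $\mathbb{A}^1 \subset \mathbb{P}^1$ with coordinate $z$, the Lie algebra acts by the vector fields

```latex
e = \partial_z, \qquad h = -2z\,\partial_z, \qquad f = -z^2\,\partial_z ,
```

so on functions $\mathbb{C}[z]$ each monomial $z^k$ is an $h$-eigenvector of weight $-2k$. The weights $0, -2, -4, \dots$ are those of the contragredient Verma module of highest weight $0$, and the constants (weight $0$, killed by $e$) are the copy of the trivial representation sitting inside it, as described above.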
BTW as for Riemann-Hilbert correspondence it’s just the functor that assigns to a flat connection its deRham complex, as a complex of sheaves (ie without taking global sections) — which is a complex of sheaves whose cohomology is locally constant. Similarly a (left) D-module is a (more complicated) sheaf with a flat connection and we may assign to it a deRham complex defined the exact same way. If the module is “small” (holonomic) – like all the ones we’re considering on the flag variety – then the cohomologies of the deRham complex are locally constant along the strata of some stratification – aka a constructible complex.
So then one can try to describe the category of D-modules purely topologically – leading to the proof of the Kazhdan-Lusztig conjectures.
oops. got carried away there. oh well.
Posted by: David Ben-Zvi on October 15, 2007 5:21 PM | Permalink | Reply to this
Re: verma modules and d-modules
ok, thanks; that was extremely helpful. i still have to mull it over a bit more (and probably talk it over some more with john or todd) before i can ask any more sensible questions, though.
Posted by: james dolan on October 16, 2007 10:18 AM | Permalink | Reply to this
Re: verma modules and d-modules
Yes, thank you David for teaching us something new. In case anyone is curious, both David and James ARE talking about ordinary Verma modules as in this wikipedia entry. There are also generalized Verma modules as in this wikipedia entry. To figure out the case with D-modules and generalized Verma modules one would probably have to look at the first paper I mentioned, but I won’t be able to see this paper until Saturday.
Posted by: Charlie Stromeyer Jr on October 16, 2007 1:49 PM | Permalink | Reply to this
Re: verma modules and d-modules
i’m still struggling with this, but i’d like to try to describe some of my thoughts about it so far.
first, it sounds like you’re confirming my guess that the functor from the representation category to the d-module category amounts to interpreting a presentation of a representation as a system of
differential equations by means of the realization of the lie algebra as first-order differential operators on the flag manifold.
so i think then that a “germ of a solution” of this system of differential equations at a point f of the flag manifold should be essentially a morphism from the original representation to the
representation given by germs of functions at f. and you (and kirwan and woolf) seem to be saying that when the original representation is a verma module induced from the stabilizer borel subalgebra
of a flag f1, then the nature of the stalk of germs of solutions of the system at a flag f2 depends (as f1 and f2 vary) only on the double coset characterizing the geometric relationship between f1
and f2. (moreover you seem to be giving specific information about _how_ the nature of the stalk depends on the double coset, but i’ll worry about that later.)
and it’s tempting to think that i can understand the role played by the double coset here as being very similar to the role played by double cosets in an apparently analogous situation in the
category of representations of a discrete group g. namely, if h and j are subgroups of g, and r1 and r2 are representations of h and j respectively, then hom_g(ind(r1),co-ind(r2)) is a cartesian
product of separate sectors corresponding to the double cosets between h and j in g (where “ind” here indicates “induced representation”), with the sector corresponding to the double coset x being
isomorphic to hom_k(r1,r2) where k is “the intersection of h and j when in position corresponding to x”.
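The discrete-group statement in the trivial-representation case can be checked by brute force (a toy computation of mine, not part of the comment): with $r1$ and $r2$ trivial, each double-coset sector is one-dimensional, so dim hom_g(ind(r1), co-ind(r2)) is just the number of double cosets, which is also the number of g-orbits on (g/h) × (g/j).

```python
# Check for G = S_3, with H and J generated by two transpositions:
# the number of H\G/J double cosets equals the number of G-orbits
# on (G/H) x (G/J). Permutations are tuples; compose(p, q) = p after q.
from itertools import permutations

def compose(p, q):
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]    # generated by the transposition (0 1)
J = [(0, 1, 2), (2, 1, 0)]    # generated by the transposition (0 2)

# Double cosets H g J, each encoded as a frozenset of group elements.
double_cosets = {frozenset(compose(compose(h, g), j) for h in H for j in J)
                 for g in G}

# G-orbits on G/H x G/J, with left cosets encoded as frozensets.
GmodH = {frozenset(compose(g, h) for h in H) for g in G}
GmodJ = {frozenset(compose(g, j) for j in J) for g in G}
orbits = set()
for a in GmodH:
    for b in GmodJ:
        orbit = frozenset(
            (frozenset(compose(g, x) for x in a),
             frozenset(compose(g, y) for y in b))
            for g in G)
        orbits.add(orbit)

assert len(double_cosets) == len(orbits)   # both equal 2 here
```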
optimistically speaking, then, perhaps the only significant extra wrinkle in the case at hand is that since a verma module is an induced representation of an “infinitesimal group” (aka lie algebra)
rather than of an ordinary group, its isomorphism class depends not just on the conjugacy class of subalgebra that it was induced from, but on the specific choice of subalgebra within that conjugacy
class. that is, perhaps a morphism from an induced representation to a co-induced representation is still associated with a double coset, but the double coset in question is now built into the
relationship between the inducing subalgebra h and the co-inducing subalgebra j.
(pessimistically speaking, though, there are probably all sorts of other complications to worry about as well. for example i seem to be suggesting that the representation of a lie algebra given by
the germs of functions (or more generally of sections of an equivariant vector bundle) at a point of a homogeneous space is analogous to a co-induced representation, whereas there’s something fishy
about that.)
anyway, i would like a theorem something like this: given a complex lie group g, and given lie subgroups h and j, and representations r1 and r2 of the lie algebras h# and j# respectively (where “x#”
here indicates “lie algebra of the lie group x”), the hom-space hom_g#(ind(r1),co-ind(r2)) is isomorphic to hom_k#(r1,r2) where k# is the intersection of h# and j#; and/or perhaps some trickier
variant on this idea, with the role of “co-ind(r2)” played by something closer to the stalk of germs at the point with stabilizer subalgebra j# of sections of an appropriate equivariant vector bundle.
then on top of that i’d like to understand the details of how to apply such a theorem in the kazhdan-lusztig context, which i really don’t understand yet.
(one of my goals here is to understand in what sense or senses “geometric representation theory” really is “geometric”. i think that i have a good sense of how induced representations of groups (lie
or otherwise) are “geometric”, but it seems trickier to understand the “geometric” interpretation of induced representations of lie _algebras_.)
Posted by: james dolan on October 25, 2007 6:36 AM | Permalink | Reply to this
Re: verma modules and d-modules
David Ben-Zvi should be better able to address the points you make than I can, but I can say something about the very last issue you raise:
You will want to look at Cartan geometries. More specifically, this paper and this paper examine a type of Cartan geometry called parabolic geometry.
I know only a little about parabolic geometry because it happens to include projective geometry and von Neumann generalized projective geometry via his continuous geometries, which is what my interest is in.
Posted by: Charlie Stromeyer Jr on October 25, 2007 2:26 PM | Permalink | Reply to this
Re: verma modules and d-modules
Charlie wrote:
You will want to look at Cartan geometries.
Heh. Jim knows a lot about Cartan geometry. My student Derek Wise did a Ph.D. thesis on Cartan geometry, part of which appears here. I gave a little intro to Cartan geometry and Derek’s work in
I hadn’t heard the term “parabolic geometry” for a Cartan geometry modeled on $G/P$ with $G$ semisimple and $P \subseteq G$ parabolic, but that certainly does cover a bunch of famous examples.
Posted by: John Baez on October 26, 2007 11:25 AM | Permalink | Reply to this
Re: verma modules and d-modules
It’s good that Jim knows a lot about Cartan geometry. I forgot to tell Jim that he won’t understand the intros of the two papers I referred to, because the intros are not detailed enough. However, once you start reading the papers right after the intros, they become understandable, because they explicitly define Cartan geometries via algebras and groups.
Derek Wise’s work looks quite interesting because he applies his model to the correct de Sitter spacetime. I say “correct” only because the last time I checked (which I admit was over 4 years ago)
the latest astronomical data showed that we live in de Sitter spacetime.
Posted by: Charlie Stromeyer Jr on October 26, 2007 1:13 PM | Permalink | Reply to this
Re: verma modules and d-modules
Brief comment: (need to run to airport to your part of the world, speaking at Caltech in a couple of days in case anyone in the area is interested) Induced representations of Lie algebras can be described just as those of Lie groups by passing to formal groups - Lie algebras (in char zero!) are the same as formal groups, and the same is true for their representations. And if you’d like, a formal group is a group object in an appropriate category, so reps of Lie algebras are really encompassed by reps of groups.
Anyway given Lie algebras g,h the induced representation of the trivial rep of h to g is given by distributions on exp g/ exp h. A similar picture will hold for inducing any other rep, looking at
another D-module of distributions valued in the corresponding vector bundle.
The picture for Verma modules is the same, except we were identifying exp g/exp b with the formal neighborhood of the basepoint in G/B, so thinking of delta functions on G/B supported (set
theoretically) at [B]. Of course replacing B with P you can get generalized Vermas… But it’s interesting that ALL reps of g can be written as a “direct integral” of Verma modules in an appropriate
sense – this is one way to look at Beilinson-Bernstein (which I think is the way it’s explained in Gaitsgory’s notes I referred to).
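For concreteness, the standard algebraic definition behind this picture (notation mine) is:

```latex
% The Verma module of highest weight \lambda: the g-representation
% induced from the 1-dimensional representation C_\lambda of a Borel
% subalgebra b. Replacing the Borel b by a parabolic p (and C_\lambda
% by a finite-dimensional p-representation) gives the generalized
% Vermas mentioned above.
\[
  M(\lambda) \;=\; U(\mathfrak{g}) \otimes_{U(\mathfrak{b})} \mathbb{C}_\lambda
\]
```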
I’ll try to look at your other questions from Pasadena (certainly what you say in the first couple of paragraphs is right, haven’t thought about the third yet).
Posted by: David Ben-Zvi on October 25, 2007 6:54 PM | Permalink | Reply to this
Re: verma modules and d-modules
Brief comment: (need to run to airport to your part of the world, speaking at Caltech in a couple of days in case anyone in the area is interested)
i think that i’ll probably be able to attend, but i have to check my schedule.
Induced representations of Lie algebras can be described just as those of Lie groups by passing to formal groups - Lie algebras (in char zero!) are the same as formal groups, and the same is true
for their representations.
yes, this is very much the attitude that i’ve been taking towards the subject, thinking of the enveloping algebra as a co-algebraic group whose “co-spectrum” has just a single global point (or
dualizing the picture to get the formal algebraic group). to me this is indeed legitimately geometric but i could also try to appreciate a more classical viewpoint according to which a genuine
“space” should have enough global points to support a description of the geometry allegedly taking place on it. when i was wondering out loud just how “geometric” geometric representation theory is, i
was imagining the situation of such a classically-minded person trying to picture what’s going on.
(i’m not exactly an adept of contemporary algebraic geometry but i have a background in for example lawvere’s school of “synthetic differential geometry” which has a similar or perhaps even more
extreme commitment towards viewing infinitesimal spaces as legitimate spaces.)
And if you’d like, a formal group is a group object in an appropriate category, so reps of Lie algebras are really encompassed by reps of groups.
Anyway given Lie algebras g,h the induced representation of the trivial rep of h to g is given by distributions on exp g/ exp h. A similar picture will hold for inducing any other rep, looking at
another D-module of distributions valued in the corresponding vector bundle.
yes, in principle this is the way that i’ve been thinking of these induced representations of lie algebras, except that i’ve probably been forgetting to take full advantage of the language of
distributions as a means of describing them.
The picture for Verma modules is the same, except we were identifying exp g/exp b with the formal neighborhood of the basepoint in G/B, so thinking of delta functions on G/B supported (set
theoretically) at [B].
ok, i think that this might be helping to explain some of what you said about “delta functions” in your original reply, which i didn’t really follow even after john tried to explain it to me. i still
have to mull it over a bit more to see whether i really understand it now though.
on the other hand, the picture that you’re describing here of the infinitesimal homogeneous space exp(g#)/exp(b#) as the “formal neighborhood” of one single basepoint in the macroscopic homogeneous
space g/b is exactly the picture that i was alluding to in trying to explain how the role played by double cosets in connection with homomorphisms from induced representations to co-induced
representations (or something morally similar to co-induced representations) is morally the same though technically different in the case of lie algebra representations as compared to the case of
group representations. in the lie algebra case you really have to pick out a special basepoint, and the induced or co-induced representation remembers where that basepoint is, and the double coset
corresponding to the geometric relationship between the pair of such basepoints comes into play even before you decide to focus on a particular sector of the hom-space between the representations.
however, i’m still at the stage of just trying to develop a morally correct picture of what’s going on; i’m not claiming that i’ve got the technical details correct yet. some of what seems to be
going on in the kazhdan-lusztig context isn’t smoothly meshing yet with the picture that i’m trying to create in my mind.
Of course replacing B with P you can get generalized Vermas… But it’s interesting that ALL reps of g can be written as a “direct integral” of Verma modules in an appropriate sense - this is one
way to look at Beilinson-Bernstein (which I think is the way it’s explained in Gaitsgory’s notes I referred to).
(i’ve taken only a very brief look at gaitsgory’s notes so far, not enough to tell whether i’ll be able to get anything useful out of them.)
I’ll try to look at your other questions from Pasadena (certainly what you say in the first couple of paragraphs is right, haven’t thought about the third yet).
ok, thanks.
Posted by: james dolan on October 25, 2007 9:09 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
Am I right that if one takes D to be the n-box Young diagram consisting of a single column of n boxes (a vertical strip),
then the resulting representation of n! is a regular representation? If I am, then the statement of the theorem about irreps of n! is a bit confusing - after all, all irreps are already included in
the regular representation.
So, is it that the resulting rep is not a regular rep, or rather that the important part of the theorem is that there is some canonical bijection between irreps and Young diagrams?
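As to the first question: for the single-column diagram, a D-flag on an n-element set is a complete flag, i.e. a linear ordering of the set, and a quick computational check (illustrative code, not part of the thread) confirms that the action of n! on such flags is free and transitive — which is exactly what characterizes the regular representation:

```python
from itertools import permutations

n = 4
# For the single-column Young diagram with n boxes, a "D-flag" on an
# n-element set is a complete flag, i.e. a linear ordering of the set;
# we model orderings as tuples.
flags = list(permutations(range(n)))

# S_n acts on flags by relabeling the underlying set.
def act(g, flag):
    return tuple(g[x] for x in flag)

# The action is transitive (a single orbit of size n!) and free
# (trivial stabilizer), which is what makes the permutation
# representation on flags the regular representation of n! (= S_n).
base = tuple(range(n))
orbit = {act(g, base) for g in permutations(range(n))}
stabilizer = [g for g in permutations(range(n)) if act(g, base) == base]

print(len(flags), len(orbit), len(stabilizer))
```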
Posted by: sirix on February 2, 2008 8:09 PM | Permalink | Reply to this
Re: Geometric Representation Theory (Lecture 1)
sirix wrote:
Am I right that if one takes D to be the [vertical strip] Young diagram, then the resulting representation of n! is a regular representation?
If I am, then the statement of the theorem about irreps of n! is a bit confusing - after all, all irreps are already included in the regular representation.
That’s understandable, but if memory serves, I think what John said was that inside each $D$-flag representation there is a particular irrep “screaming to get out”. You have to go a little further
into the lecture series to find out what is meant, but there is a beautiful “categorified Gram-Schmidt procedure” which gives a recipe for extracting that screaming irrep out of the $D$-flag representation.
Posted by: Todd Trimble on February 3, 2008 3:27 PM | Permalink | Reply to this
Sir Isaac Newton has been described by some as "one of the greatest names in human thought" (Cohen, 1985). Newton was responsible for discovering many outstanding scientific and mathematical concepts. Among those discoveries were his theories of motion and gravitation, the components of light and color, and his development of the foundations of calculus. There were many interesting aspects of Newton's life which seemed at times to contradict each other.
Newton was born on Christmas Day in 1642 to a family of farmers in Lincolnshire, in the east central portion of England. Surprisingly, young Isaac was not an exceptional student. He enjoyed spending much of his time making contraptions such as a windmill used to grind grain, a clock powered by water, and other inventions. Unfortunately, because of the time he spent on his projects, he did very poorly in school. His teachers described him as "idle" and "inattentive". His father had died before he was born, and his mother remarried, leaving him in the care of his grandmother. At the age of fourteen Newton was forced to leave school to help his mother with farming.
Isaac spent much of his time on the farm reading and ended up returning to school. At the bidding of an uncle, Newton began furthering his education in June of 1661, when he entered Trinity College, Cambridge. He set out to get a degree in law, which greatly limited his field of study during his first few years of college. However, by the third year he was allowed more freedom to pursue other interests. During this time he was able to study new mathematical and scientific methods from such scientists and mathematicians as Galileo and Wallis. Newton graduated from Cambridge in 1665, without any particular honors.
In the summer of 1665, Newton, who had not been an exceptional student and had at times appeared very average, seemed to undergo a change. During an eighteen-month period in which his school was shut down because of the plague, Newton came up with his theories of motion and gravitation, the components of white light, and calculus. The often-told story of how Newton discovered gravity goes as follows: Newton was drinking tea, as the British often do, when he observed an apple falling from a tree. He deduced that the same force which caused the apple to fall to the ground causes the moon to orbit the earth (Cohen, 1985). As stated earlier, Newton helped develop what he called fluxions, which is now called calculus (Burton, 1997). This branch of mathematics could be used to find the answers to such problems as finding the speed of a ball that has been thrown in the air at any moment in the ball's flight. During the same time period a German mathematician named Gottfried Leibniz also discovered calculus. With Newton's and Leibniz's new discoveries, mathematicians and scientists were able to enter into new regions of discovery.
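As a modern illustration of this kind of problem (the numbers and the Python code below are illustrative, not Newton's own), the height of a thrown ball is a function of time, and its speed at any instant is the derivative of that function — exactly the quantity that fluxions compute:

```python
# Height (in meters) of a ball thrown straight up with initial speed
# v0 = 20 m/s, under gravity g = 9.8 m/s^2 (illustrative values).
def height(t, v0=20.0, g=9.8):
    return v0 * t - 0.5 * g * t ** 2

# The instantaneous speed is the limit of average speeds over ever
# smaller time intervals -- approximated here by a central difference.
def velocity(t, v0=20.0, g=9.8, dt=1e-6):
    return (height(t + dt, v0, g) - height(t - dt, v0, g)) / (2 * dt)

# Calculus gives the exact answer v0 - g*t; at t = 1 s that is 10.2 m/s.
print(round(velocity(1.0), 3))
```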
As if that wasn't enough, Newton made a third important discovery. He used a prism to show that white light is made up of many different colors. Before this, scientists had thought that white light was a single entity. While looking through a telescope one day, Isaac noted how the light separated into many different colors, and this observation led him to the discovery.
Newton was very sensitive to negative comments and had to be convinced by another scientist, Edmond Halley, to publish his findings. After his book Principia Mathematica, in which his various discoveries and ideas were presented, Newton enjoyed success in other realms. He became a member of the British Parliament and was a member of various scientific organizations such as the Royal Society, of which he was later elected president. He died on March 31, 1727 in London.
Newton had many interesting characteristics, such as his study of alchemy, a blend of chemistry, magic, and religion. Alchemists' goal was to find a way to produce gold from other metals and also to find a magic potion which could cure ills and lengthen one's life. Isaac was modest and generous to his family and those who helped him along the way. Some of Newton's conclusions about gravitation were later revised by Albert Einstein. However, Einstein and others still contend that Newton was indeed a very important force in man's quest for knowledge, and he is highly regarded for his contributions in many different areas of science.