content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Science and Hypothesis/PART I
From Wikisource
←Introduction Science and Hypothesis by PART II →
Part I: Number and Magnitude
SCIENCE AND HYPOTHESIS.
PART I.
NUMBER AND MAGNITUDE.
CHAPTER I.
ON THE NATURE OF MATHEMATICAL REASONING.
THE very possibility of mathematical science seems an insoluble contradiction. If this science is only deductive in appearance, from whence is derived that perfect rigour which is challenged by none?
If, on the contrary, all the propositions which it enunciates may be derived in order by the rules of formal logic, how is it that mathematics is not reduced to a gigantic tautology? The syllogism
can teach us nothing essentially new, and if everything must spring from the principle of identity, then everything should be capable of being reduced to that principle. Are we then to admit that the
enunciations of all the theorems with which so many volumes are filled, are only indirect ways of saying that A is A?
No doubt we may refer back to axioms which are at the source of all these reasonings. If it is felt that they cannot be reduced to the principle of contradiction, if we decline to see in them any
more than experimental facts which have no part or lot in mathematical necessity, there is still one resource left to us: we may class them among à priori synthetic views. But this is no solution of
the difficulty it is merely giving it a name; and even if the nature of the synthetic views had no longer for us any mystery, the contradiction would not have disappeared; it would have only been
shirked. Syllogistic reasoning remains incapable of adding anything to the data that are given it; the data are reduced to axioms, and that is all we should find in the conclusions.
No theorem can be new unless a new axiom intervenes in its demonstration; reasoning can only give us immediately evident truths borrowed from direct intuition; it would only be an intermediary
parasite. Should we not therefore have reason for asking if the syllogistic apparatus serves only to disguise what we have borrowed?
The contradiction will strike us the more if we open any book on mathematics; on every page the author announces his intention of generalising some proposition already known. Does the mathematical
method proceed from the particular to the general, and, if so, how can it be called deductive?
Finally, if the science of number were merely analytical, or could be analytically derived from a few synthetic intuitions, it seems that a sufficiently powerful mind could with a single glance
perceive all its truths; nay, one might even hope that some day a language would be invented simple enough for these truths to be made evident to any person of ordinary intelligence.
Even if these consequences are challenged, it must be granted that mathematical reasoning has of itself a kind of creative virtue, and is therefore to be distinguished from the syllogism. The
difference must be profound. We shall not, for instance, find the key to the mystery in the frequent use of the rule by which the same uniform operation applied to two equal numbers will give
identical results. All these modes of reasoning, whether or not reducible to the syllogism, properly so called, retain the analytical character, and ipso facto, lose their power.
The argument is an old one. Let us see how Leibnitz tried to show that two and two make four. I assume the number one to be defined, and also the operation x+1 — i.e., the adding of unity to a given
number x. These definitions, whatever they may be, do not enter into the subsequent reasoning. I next define the numbers 2, 3, 4 by the equalities: —
(1) 1 + 1 = 2; (2) 2 + 1 = 3; (3) 3 + 1 = 4 and in the same way I define the operation x + 2 by the relation; (4) x + 2 = (x + 1) + 1. Given this, we have:
2 + 2 =(2 + 1) + 1; (def. 4).
(2 + 1) + 1 = 3 + 1 (def. 2).
3 + 1 =4 (def. 3).
whence 2 + 2 = 4 Q.E.D.
It cannot be denied that this reasoning is purely analytical. But if we ask a mathematician, he will reply: "This is not a demonstration properly so called; it is a verification." We have confined
ourselves to bringing together one or other of two purely conventional definitions, and we have verified their identity; nothing new has been learned. Verification differs from proof precisely
because it is analytical, and because it leads to nothing. It leads to nothing because the conclusion is nothing but the premisses translated into another language. A real proof, on the other hand,
is fruitful, because the conclusion is in a sense more general than the premisses. The equality 2 + 2 = 4 can be verified because it is particular. Each individual enunciation in mathematics may be
always verified in the same way. But if mathematics could be reduced to a series of such verifications it would not be a science. A chess-player, for instance, does not create a science by winning a
piece. There is no science but the science of the general. It may even be said that the object of the exact sciences is to dispense with these direct verifications.
Let us now see the geometer at work, and try to surprise some of his methods. The task is not without difficulty; it is not enough to open a book at random and to analyse any proof we may come
across. First of all, geometry must be ex eluded, or the question becomes complicated by difficult problems relating to the role of the postulates, the nature and the origin of the idea of space. For
analogous reasons we cannot avail ourselves of the infinitesimal calculus. We must seek mathematical thought where it has remained pure — i.e., in Arithmetic. But we still have to choose; in the
higher parts of the theory of numbers the primitive mathematical ideas have already undergone so profound an elaboration that it becomes difficult to analyse them.
It is therefore at the beginning of Arithmetic that we must expect to find the explanation we seek; but it happens that it is precisely in the proofs of the most elementary theorems that the authors
of classic treatises have displayed the least precision and rigour. We may not impute this to them as a crime; they have obeyed a necessity. Beginners are not prepared for real mathematical rigour;
they would see in it nothing but empty, tedious subtleties. It would be waste of time to try to make them more exacting; they have to pass rapidly and without stopping over the road which was trodden
slowly by the founders of the science.
Why is so long a preparation necessary to habituate oneself to this perfect rigour, which it would seem should naturally be imposed on all minds? This is a logical and psychological problem which is
well worthy of study. But we shall not dwell on it; it is foreign to our subject. All I wish to insist on is, that we shall fail in our purpose unless we reconstruct the proofs of the elementary
theorems, and give them, not the rough form in which they are left so as not to weary the beginner, but the form which will satisfy the skilled geometer.
DEFINITION OF ADDITION.
I assume that the operation x + 1 has been defined; it consists in adding the number 1 to a given number x. Whatever may be said of this definition, it does not enter into the subsequent reasoning.
We now have to define the operation x + a, which consists in adding the number a to any given number x. Suppose that we have defined the operation x+(a-1); the operation x+a will be defined by the
equality: (1) x + a = [x + (a — 1)] + 1. We shall know what x + a is when we know what x + (a — 1) is, and as I have assumed that to start with we know what x+i is, we can define successively and "by
recurrence" the operations x + 2, x + 3, etc. This definition deserves a moment's it is of a particular nature which distinguishes it even at this stage from the purely logical definition; the
equality (1), in fact, contains an infinite number of distinct definitions, each having only one meaning when we know the meaning of its predecessor.
PROPERTIES OF ADDITION.
Associative. — I say that a + (b + c) = (a + b) + c; in fact, the theorem is true for c = 1. It may then be written a + (b + 1) = (a + b) + 1; which, remembering the difference of notation, is
nothing but the equality (1) by which I have just defined addition. Assume the theorem true for c=γ, I say that it will be true for c = γ + 1. Let (a+b)+γ=a+(b+γ), it follows that [(a+b)+γ]+1=[a+
(b+γ)]+1; or by def. (1) — (a+b)+(γ+1)=a+(b+γ+1)=a+[b+(γ+1)], which shows by a series of purely analytical deductions that the theorem is true for γ + 1. Being true for c = 1, we see that it is
successively true for c = 2, c = 3, etc.
Commutative. (1) I say that a + 1 = 1 + a. The theorem is evidently true for a = 1; we can verify by purely analytical reasoning that if it is true for a = γ it will be true for a = γ + 1.^[1] Now,
it is true for a=1, and therefore is true for a=2, a=3, and so on. This is what is meant by saying that the proof is demonstrated "by recurrence."
(2) I say that a+b=b+a. The theorem has just been shown to hold good for b=1, and it may be verified analytically that if it is true for b=β it will be true for b=β+1. The proposition is thus
established by recurrence.
DEFINITION OF MULTIPLICATION.
We shall define multiplication by the equalities: (1) $a\times=a$. (2) $a\times b=[a\times (b-1)]+a$. Both of these include an infinite number of definitions; having defined $a\times 1$, it enables
us to define in succession $a \times 2$, $a \times 3$, and so on.
PROPERTIES OF MULTIPLICATION.
Distributive. — I say that $(a+b)\times c=(a\times c)+(b\times c)$. We can verify analytically that the theorem is true for c = 1; then if it is true for c = γ, it will be true for c = γ +1. The
proposition is then proved by recurrence.
Commutative. — (1) I say that $a\times 1=1\times a$. The theorem is obvious for a = 1. We can verify analytically that if it is true for a=α, it will be true for a=α+1.
(2) I say that $a\times b=b\times a$. The theorem has just been proved for b=1. We can verify analytically that if it be true for b=β it will be true for b=β+1.
This monotonous series of reasonings may now be laid aside; but their very monotony brings vividly to light the process, which is uniform, and is met again at every step. The process is proof by
recurrence. We first show that a theorem is true for n=1; we then show that if it is true for n-1 it is true for n, and we conclude that it is true for all integers. We have now seen how it may be
used for the proof of the rules of addition and multiplication — that is to say, for the rules of the algebraical calculus. This calculus is an instrument of transformation which lends itself to many
more different combinations than the simple syllogism; but it is still a purely analytical instrument, and is incapable of teaching us anything new. If mathematics had no other instrument, it would
immediately be arrested in its development; but it has recourse anew to the same process — i.e., to reasoning by recurrence, and it can continue its forward march. Then if we look carefully, \ve find
this mode of reasoning at every step, either under the simple form which we have just given to it, or under a more or less modified form. It is therefore mathematical reasoning par excellence, and we
must examine it closer.
The essential characteristic of reasoning by recurrence is that it contains, condensed, so to speak, in a single formula, an infinite number of syllogisms. We shall see this more clearly if we
enunciate the syllogisms one after another. They follow one another, if one may use the expression, in a cascade. The following are the hypothetical syllogisms: — The theorem is true of the number 1.
Now, if it is true of 1, it is true of 2; therefore it is true of 2. Now, if it is true of 2, it is true of 3; hence it is true of 3, and so on. We see that the conclusion of each syllogism serves as
the minor of its successor. Further, the majors of all our syllogisms may be reduced to a single form. If the theorem is true of n-1, it is true of n.
We see, then, that in reasoning by recurrence we confine ourselves to the enunciation of the minor of the first syllogism, and the general formula which contains as particular cases all the majors.
This unending series of syllogisms is thus reduced to a phrase of a few lines.
It is now easy to understand why every particular consequence of a theorem may, as I have above explained, be verified by purely analytical processes. If, instead of proving that our theorem is true
for all numbers, we only wish to show that it is true for the number 6 for instance, it will be enough to establish the first five syllogisms in our cascade. We shall require 9 if we wish to prove it
for the number 10; for a greater number we shall require more still; but however great the number may be we shall always reach it, and the analytical verification will always be possible. But however
far we went we should never reach the general theorem applicable to all numbers, which alone is the object of science. To reach it we should require an infinite number of syllogisms, and we should
have to cross an abyss which the patience of the analyst, restricted to the resources of formal logic, will never succeed in crossing.
I asked at the outset why we cannot conceive of a mind powerful enough to see at a glance the whole body of mathematical truth. The answer is now easy. A chess-player can combine for four or five
moves ahead; but, however extraordinary a player he may be, he cannot prepare for more than a finite number of moves. If he applies his faculties to Arithmetic, he cannot conceive its general truths
by direct intuition alone; to prove even the smallest theorem he must use reasoning by recurrence, for that is the only instrument which enables us to pass from the finite to the infinite. This
instrument is always useful, for it enables us to leap over as many stages as we wish; it frees us from the necessity of long, tedious, and monotonous verifications which would rapidly become
impracticable. Then when we take in hand the general theorem it becomes indispensable, for otherwise we should ever be approaching the analytical verification without ever actually reaching it. In
this domain of Arithmetic we may think ourselves very far from the infinitesimal analysis, but the idea of mathematical infinity is already playing a preponderating part, and without it there would
be no science at all, because there would be nothing general.
The views upon which reasoning by recurrence is based may be exhibited in other forms; we may say, for instance, that in any finite collection of different integers there is always one which is
smaller than any other. We may readily pass from one enunciation to another, and thus give our selves the illusion of having proved that reasoning by recurrence is legitimate. But we shall always be
brought to a full stop — we shall always come to an indemonstrable axiom, which will at bottom be but the proposition we had to prove translated into another language. We cannot therefore escape the
conclusion that the rule of reasoning by recurrence is irreducible to the principle of contradiction. Nor can the rule come to us from experiment. Experiment may teach us that the rule is true for
the first ten or the first hundred numbers, for instance; it will not bring us to the indefinite series of numbers, but only to a more or less long, but always limited, portion of the series.
Now, if that were all that is in question, the principle of contradiction would be sufficient, it would always enable us to develop as many syllogisms as we wished. It is only when it is a question
of a single formula to embrace an infinite number of syllogisms that this principle breaks down, and there, too, experiment is powerless to aid. This rule, inaccessible to analytical proof and to
experiment, is the exact type of the à priori synthetic intuition. On the other hand, we cannot see in it a convention as in the case of the postulates of geometry.
Why then is this view imposed upon us with such an irresistible weight of evidence? It is because it is only the affirmation of the power of the mind which knows it can conceive of the indefinite
repetition of the same act, when the act is once possible. The mind has a direct intuition of this power, and experiment can only be for it an opportunity of using it, and thereby of becoming
conscious of it.
But it will be said, if the legitimacy of reasoning by recurrence cannot be established by experiment alone, is it so with experiment aided by induction? We see successively that a theorem is true of
the number I, of the number 2, of the number 3, and so on — the law is manifest, we say, and it is so on the same ground that every physical law is true which is based on a very large but limited
number of observations.
It cannot escape our notice that here is a striking analogy with the usual processes of induction. But an essential difference exists. Induction applied to the physical sciences is always uncertain,
because it is based on the belief in a general order of the universe, an order which is external to us. Mathematical induction — i.e., proof by recurrence — is, on the contrary, necessarily imposed
on us, because it is only the affirmation of a property of the mind itself.
Mathematicians, as I have said before, always endeavour to generalise the propositions they have obtained. To seek no further example, we have just shown the equality, a+1=1+a, and we then used it to
establish the equality, a+b=b+a, which is obviously more general. Mathematics may, therefore, like the other sciences, proceed from the particular to the general. This is a fact which might otherwise
have appeared incomprehensible to us at the beginning of this study, but which has no longer anything mysterious about it, since we have ascertained the analogies between proof by recurrence and
ordinary induction.
No doubt mathematical recurrent reasoning and physical inductive reasoning are based on different foundations, but they move in parallel lines and in the same direction — namely, from the particular
to the general.
Let us examine the case a little more closely. To prove the equality a+2=2+a (I), we need only apply the rule a+1=1+a, twice, and write a+2=a+1+1=1+a+1=1+1+a=2+a......(2).
The equality thus deduced by purely analytical means is not, however, a simple particular case. It is something quite different. We may not therefore even say in the really analytical and deductive
part of mathematical reasoning that we proceed from the general to the particular in the ordinary sense of the words. The two sides of the equality (2) are merely more complicated combinations than
the two sides of the equality (1), and analysis only serves to separate the elements which enter into these combinations and to study their relations.
Mathematicians therefore proceed "by construction," they "construct" more complicated combinations. When they analyse these combinations, these aggregates, so to speak, into their primitive elements,
they see the relations of the elements and deduce the relations of the aggregates themselves. The process is purely analytical, but it is not a passing from the general to the particular, for the
aggregates obviously cannot be regarded as more particular than their elements.
Great importance has been rightly attached to this process of "construction," and some claim to see in it the necessary and sufficient condition of the progress of the exact sciences. Necessary, no
doubt, but not sufficient! For a construction to be useful and not mere waste of mental effort, for it to serve as a stepping-stone to higher things, it must first of all possess a kind of unity
enabling us to see something more than the juxtaposition of its elements. Or more accurately, there must be some advantage in considering the construction rather than the elements themselves. What
can this advantage be? Why reason on a polygon, for instance, which is always decomposable into triangles, and not on elementary triangles? It is because there are properties of polygons of any
number of sides, and they can be immediately applied to any particular kind of polygon. In most cases it is only after long efforts that those properties can be discovered, by directly studying the
relations of elementary triangles. If the quadrilateral is anything more than the juxtaposition of two triangles, it is because it is of the polygon type.
A construction only becomes interesting when it can be placed side by side with other analogous constructions for forming species of the same genus. To do this we must necessarily go back from the
particular to the general, ascending one or more steps. The analytical process "by construction" does not compel us to descend, but it leaves us at the same level. We can only ascend by mathematical
induction, for from it alone can we learn something new. Without the aid of this induction, which in certain respects differs from, but is as fruitful as, physical induction, construction would be
powerless to create science.
Let me observe, in conclusion, that this induction is only possible if the same operation can be repeated indefinitely. That is why the theory of chess can never become a science, for the different
moves of the same piece are limited and do not resemble each other.
CHAPTER II.
MATHEMATICAL MAGNITUDE AND EXPERIMENT.
IF we want to know what the mathematicians mean by a continuum, it is useless to appeal to geometry. The geometer is always seeking, more or less, to represent to himself the figures he is studying,
but his representations are only instruments to him; he uses space in his geometry just as he uses chalk; and further, too much importance must not be attached to accidents which are often nothing
more than the whiteness of the chalk.
The pure analyst has not to dread this pitfall. He has disengaged mathematics from all extraneous elements, and he is in a position to answer our question: — "Tell me exactly what this continuum is,
about which mathematicians reason." Many analysts who reflect on their art have already done so — M. Tannery, for instance, in his Introduction à la théorie des Fonctions d'une variable.
Let us start with the integers. Between any two consecutive sets, intercalate one or more intermediary sets, and then between these sets others again, and so on indefinitely. We thus get an unlimited
number of terms, and these will be the numbers which we call fractional, rational, or commensurable. But this is not yet all; between these terms, which, be it marked, are already infinite in number,
other terms are intercalated, and these are called irrational or incommensurable.
Before going any further, let me make a preliminary remark. The continuum thus conceived is no longer a collection of individuals arranged in a certain order, infinite in number, it is true, but
external the one to the other. This is not the ordinary conception in which it is supposed that between the elements of the continuum exists an intimate connection making of it one whole, in which
the point has no existence previous to the line, but the line does exist previous to the point. Multiplicity alone subsists, unity has disappeared — "the continuum is unity in multiplicity,“
according to the celebrated formula. The analysts have even less reason to define their continuum as they do, since it is always on this that they reason when they are particularly proud of their
rigour. It is enough to warn the reader that the real mathematical continuum is quite different from that of the physicists and from that of the metaphysicians.
It may also be said, perhaps, that mathematicians who are contented with this definition are the dupes of words, that the nature of each of these sets should be precisely indicated, that it should be
explained how they are to be intercalated, and that it should be shown how it is possible to do it. This, however, would be wrong; the only property of the sets which comes into the reasoning is that
of preceding or succeeding these or those other sets; this alone should therefore intervene in the definition. So we need not concern ourselves with the manner in which the sets are intercalated, and
no one will doubt the possibility of the operation if he only remembers that "possible" in the language of geometers simply means exempt from contradiction. But our definition is not yet complete,
and we come back to it after this rather long digression.
Definition of Incommensurable. — The mathematicians of the Berlin school, and Kronecker in particular, have devoted themselves to constructing this continuous scale of irrational and fractional
numbers without using any other materials than the integer. The mathematical continuum from this point of view would be a pure creation of the mind in which experiment would have no part.
The idea of rational number not seeming to present to them any difficulty, they have confined their attention mainly to defining incommensurable numbers. But before reproducing their definition here,
I must make an observation that will allay the astonishment which this will not fail to provoke in readers who are but little familiar with the habits of geometers.
Mathematicians do not study objects, but the relations between objects; to them it is a matter of indifference if these objects are replaced by others, provided that the relations do not change.
Matter does not engage their attention, they are interested by form alone.
If we did not remember it, we could hardly understand that Kronecker gives the name of incommensurable number to a simple symbol — that is to say, something very different from the idea we think we
ought to have of a quantity which should be measurable and almost tangible.
Let us see now what is Kronecker's definition. Commensurable numbers may be divided into classes in an infinite number of ways, subject to the condition that any number whatever of the first class is
greater than any number of the second. It may happen that among the numbers of the first class there is one which is smaller than all the rest; if, for instance, we arrange in the first class all the
numbers greater than 2, and 2 itself, and in the second class all the numbers smaller than 2, it is clear that 2 will be the smallest of all the numbers of the first class. The number 2 may therefore
be chosen as the symbol of this division.
It may happen, on the contrary, that in the second class there is one which is greater than all the rest. This is what takes place, for example, if the first class comprises all the numbers greater
than 2, and if, in the second, are all the numbers less than 2, and 2 itself. Here again the number 2 might be chosen as the symbol of this division.
But it may equally well happen that we can find neither in the first class a number smaller than all the rest, nor in the second class a number greater than all the rest. Suppose, for instance, we
place in the first class all the numbers whose squares are greater than 2, and in the second all the numbers whose squares are smaller than 2. We know that in neither of them is a number whose square
is equal to 2. Evidently there will be in the first class no number which is smaller than all the rest, for however near the square of a number may be to 2, we can always find a commensurable whose
square is still nearer to 2. From Kronecker s point of view, the incommensurable number $\sqrt{2}$ is nothing but the symbol of this particular method of division of commensurable numbers; and to
each mode of repartition corresponds in this way a number, commensurable or not, which serves as a symbol. But to be satisfied with this would be to forget the origin of these symbols; it remains to
explain how we have been led to attribute to them a kind of concrete existence, and on the other hand, does not the difficulty begin with fractions? Should we have the notion of these numbers if we
did not previously know a matter which we conceive as infinitely divisible — i.e., as a continuum?
The Physical Continuum. — We are next led to ask if the idea of the mathematical continuum is not simply drawn from experiment. If that be so, the rough data of experiment, which are our sensations,
could be measured. We might, indeed, be tempted to believe that this is so, for in recent times there has been an attempt to measure them, and a law has even been formulated, known as Fechner's law,
according to which sensation is proportional to the logarithm of the stimulus. But if we examine the experiments by which the endeavour has been made to establish this law, we shall be led to a
diametrically opposite conclusion. It has, for instance, been observed that a weight A of 10 grammes and a weight B of 11 grammes produced identical sensations, that the weight B could no longer be
distinguished from a weight C of 12 grammes, but that the weight A was readily distinguished from the weight C. Thus the rough results of the experiments may be expressed by the following relations:
A=B, B=C, A < C, which may be regarded as the formula of the physical continuum. But here is an intolerable disagreement with the law of contradiction, and the necessity of banishing this
disagreement has compelled us to invent the mathematical continuum. We are therefore forced to conclude that this notion has been created entirely by the mind, but it is experiment that has provided
the opportunity. We cannot believe that two quantities which are equal to a third are not equal to one another, and we are thus led to suppose that A is different from B, and B from C, and that if we
have not been aware of this, it is due to the imperfections of our senses.
The Creation of the Mathematical Continuum: First Stage. — So far it would suffice, in order to account for facts, to intercalate between A and B a small number of terms which would remain discrete.
What happens now if we have recourse to some instrument to make up for the weakness of our senses? If, for example, we use a microscope? Such terms as A and B, which before were indistinguishable
from one another, appear now to be distinct: but between A and B, which are distinct, is intercalated another new term D, which we can distinguish neither from A nor from B. Although we may use the
most delicate methods, the rough results of our experiments will always present the characters of the physical continuum with the contradiction which is inherent in it. We only escape from it by
incessantly intercalating new terms between the terms already distinguished, and this operation must be pursued indefinitely. We might conceive that it would be possible to stop if we could imagine
an instrument powerful enough to decompose the physical continuum into discrete elements, just as the telescope resolves the Milky Way into stars. But this we cannot imagine; it is always with our
senses that we use our instruments; it is with the eye that we observe the image magnified by the microscope, and this image must therefore always retain the characters of visual sensation, and
therefore those of the physical continuum.
Nothing distinguishes a length directly observed from half that length doubled by the microscope. The whole is homogeneous to the part; and there is a fresh contradiction — or rather there would be
one if the number of the terms were supposed to be finite; it is clear that the part containing less terms than the whole cannot be similar to the whole. The contradiction ceases as soon as the
number of terms is regarded as infinite. There is nothing, for example, to prevent us from regarding the aggregate of integers as similar to the aggregate of even numbers, which is however only a
part of it; in fact, to each integer corresponds another even number which is its double. But it is not only to escape this contradiction contained in the empiric data that the mind is led to create
the concept of a continuum formed of an indefinite number of terms.
Here everything takes place just as in the series of the integers. We have the faculty of conceiving that a unit may be added to a collection of units. Thanks to experiment, we have had the
opportunity of exercising this faculty and are conscious of it; but from this fact we feel that our power is unlimited, and that we can count indefinitely, although we have never had to count more
than a finite number of objects. In the same way, as soon as we have intercalated terms between two consecutive terms of a series, we feel that this operation may be continued without limit, and
that, so to speak, there is no intrinsic reason for stopping. As an abbreviation, I may give the name of a mathematical continuum of the first order to every aggregate of terms formed after the same
law as the scale of commensurable numbers. If. then, we intercalate new sets according to the laws of incommensurable numbers, we obtain what may be called a continuum of the second order.
Second Stage. — We have only taken our first step. We have explained the origin of continuums of the first order; we must now see why this is not sufficient, and why the incommensurable numbers had
to be invented.
If we try to imagine a line, it must have the characters of the physical continuum — that is to say, our representation must have a certain breadth. Two lines will therefore appear to us under the
form of two narrow bands, and if we are content with this rough image, it is clear that where two lines cross they must have some common part. But the pure geometer makes one further effort; without
entirely renouncing the aid of his senses, he tries to imagine a line without breadth and a point without size. This he can do only by imagining a line as the limit towards which tends a band that is
growing thinner and thinner, and the point as the limit towards which is tending an area that is growing smaller and smaller. Our two bands, however narrow they may be, will always have a common
area; the smaller they are the smaller it will be, and its limit is what the geometer calls a point. This is why it is said that the two lines which cross must have a common point, and this truth
seems intuitive.
But a contradiction would be implied if we conceived of lines as continuums of the first order — i.e., the lines traced by the geometer should only give us points, the co-ordinates of which are
rational numbers. The contradiction would be manifest if we were, for instance, to assert the existence of lines and circles. It is clear, in fact, that if the points whose co-ordinates are
commensurable were alone regarded as real, the in-circle of a square and the diagonal of the square would not intersect, since the co-ordinates of the point of intersection are incommensurable.
Even then we should have only certain incommensurable numbers, and not all these numbers.
But let us imagine a line divided into two half-rays (demi-droites). Each of these half-rays will appear to our minds as a band of a certain breadth; these bands will fit close together, because
there must be no interval between them. The common part will appear to us to be a point which will still remain as we imagine the bands to become thinner and thinner, so that we admit as an intuitive
truth that if a line be divided into two half-rays the common frontier of these half-rays is a point. Here we recognise the conception of Kronecker, in which an incommensurable number was regarded as
the common frontier of two classes of rational numbers. Such is the origin of the continuum of the second order, which is the mathematical continuum properly so called.
Summary. — To sum up, the mind has the faculty of creating symbols, and it is thus that it has constructed the mathematical continuum, which is only a particular system of symbols. The only limit to
its power is the necessity of avoiding all contradiction; but the mind only makes use of it when experiment gives a reason for it.
In the case with which we are concerned, the reason is given by the idea of the physical continuum, drawn from the rough data of the senses. But this idea leads to a series of contradictions from
each of which in turn we must be freed. In this way we are forced to imagine a more and more complicated system of symbols. That on which we shall dwell is not merely exempt from internal
contradiction, — it was so already at all the steps we have taken, — but it is no longer in contradiction with the various propositions which are called intuitive, and which are derived from more or
less elaborate empirical notions.
Measurable Magnitude. — So far we have not spoken of the measure of magnitudes; we can tell if any one of them is greater than any other, but we cannot say that it is two or three times as large.
So far, I have only considered the order in which the terms are arranged; but that is not sufficient for most applications. We must learn how to compare the interval which separates any two terms. On
this condition alone will the continuum become measurable, and the operations of arithmetic be applicable. This can only be done by the aid of a new and special convention; and this convention is,
that in such a case the interval between the terms A and B is equal to the interval which separates C and D. For instance, we started with the integers, and between two consecutive sets we
intercalated n intermediary sets; by convention we now assume these new sets to be equidistant. This is one of the ways of defining the addition of two magnitudes; for if the interval AB is by
definition equal to the interval CD, the interval AD will by definition be the sum of the intervals AB and AC. This definition is very largely, but not altogether, arbitrary. It must satisfy certain
conditions the commutative and associative laws of addition, for instance; but, provided the definition we choose satisfies these laws, the choice is indifferent, and we need not state it precisely.
Remarks. We are now in a position to discuss several important questions.
(1) Is the creative power of the mind exhausted by the creation of the mathematical continuum? The answer is in the negative, and this is shown in a very striking manner by the work of Du Bois
We know that mathematicians distinguish between infinitesimals of different orders, and that infinitesimals of the second order are infinitely small, not only absolutely so, but also in relation to
those of the first order. It is not difficult to imagine infinitesimals of fractional or even of irrational order, and here once more we find the mathematical continuum which has been dealt with in
the preceding pages. Further, there are infinitesimals which are infinitely small with reference to those of the first order, and infinitely large with respect to the order 1+ε, however small ε may
be. Here, then, are new terms intercalated in our series; and if I may be permitted to revert to the terminology used in the preceding pages, a terminology which is very convenient, although it has
not been consecrated by usage, I shall say that we have created a kind of continuum of the third order.
It is an easy matter to go further, but it is idle to do so, for we would only be imagining symbols without any possible application, and no one will dream of doing that. This continuum of the third
order, to which we are led by the consideration of the different orders of infinitesimals, is in itself of but little use and hardly worth quoting. Geometers look on it as a mere curiosity. The mind
only uses its creative faculty when experiment requires it.
(2) When we are once in possession of the conception of the mathematical continuum, are we protected from contradictions analogous to those which gave it birth? No, and the following is an instance:
He is a savant indeed who will not take it as evident that every curve has a tangent; and, in fact, if we think of a curve and a straight line as two narrow bands, we can always arrange them in such
a way that they have a common part without intersecting. Suppose now that the breadth of the bands diminishes indefinitely: the common part will still remain, and in the limit, so to speak, the two
lines will have a common point, although they do not intersect — i.e., they will touch. The geometer who reasons in this way is only doing what we have done when we proved that two lines which
intersect have a common point, and his intuition might also seem to be quite legitimate. But this is not the case. We can show that there are curves which have no tangent, if we define such a curve
as an analytical continuum of the second order. No doubt some artifice analogous to those we have discussed above would enable us to get rid of this contradiction, but as the latter is only met with
in very exceptional cases, we need not trouble to do so. Instead of endeavouring to reconcile intuition and analysis, we are content to sacrifice one of them, and as analysis must be flawless,
intuition must go to the wall.
The Physical Continuum of several Dimensions. — We have discussed above the physical continuum as it is derived from the immediate evidence of our senses — or, if the reader prefers, from the rough
results of Fechner's experiments; I have shown that these results are summed up in the contradictory formulae: — A=B, B=C, A < C.
Let us now see how this notion is generalised, and how from it may be derived the concept of continuums of several dimensions. Consider any two aggregates of sensations. We can either distinguish
between them, or we cannot; just as in Fechner's experiments the weight of 10 grammes could be distinguished from the weight of 12 grammes, but not from the weight of 11 grammes. This is all that is
required to construct the continuum of several dimensions.
Let us call one of these aggregates of sensations an element. It will be in a measure analogous to the point of the mathematicians, but will not be, however, the same thing. We cannot say that our
element has no size, for we cannot distinguish it from its immediate neighbours, and it is thus surrounded by a kind of fog. If the astronomical comparison may be allowed, our "elements" would be
like nebulae, whereas the mathematical points would be like stars.
If this be granted, a system of elements will form a continuum, if we can pass from any one of them to any other by a series of consecutive elements such that each cannot be distinguished from its
predecessor. This linear series is to the line of the mathematician what the isolated element was to the point.
Before going further, I must explain what is meant by a cut. Let us consider a continuum C, and remove from it certain of its elements, which for a moment we shall regard as no longer belonging to
the continuum. We shall call the aggregate of elements thus removed a cut. By means of this cut, the continuum C will be subdivided into several distinct continuums; the aggregate of elements which
remain will cease to form a single continuum. There will then be on C two elements, A and B, which we must look upon as belonging to two distinct continuums; and we see that this must be so, because
it will be impossible to find a linear series of consecutive elements of C (each of the elements indistinguishable from the preceding, the first being A and the last B), unless one of the elements of
this series is indistinguishable from one of the elements of the cut.
It may happen, on the contrary, that the cut may not be sufficient to subdivide the continuum C. To classify the physical continuums, we must first of all ascertain the nature of the cuts which must
be made in order to subdivide them. If a physical continuum, C, may be subdivided by a cut reducing to a finite number of elements, all distinguishable the one from the other (and therefore forming
neither one continuum nor several continuums), we shall call C a continuum of one dimension. If, on the contrary, C can only be subdivided by cuts which are themselves continuums, we shall say that C
is of several dimensions; if the cuts are continuums of one dimension, then we shall say that C has two dimensions; if cuts of two dimensions are sufficient, we shall say that C is of three
dimensions, and so on. Thus the notion of the physical continuum of several dimensions is defined, thanks to the very simple fact, that two aggregates of sensations may be distinguishable or
The Mathematical Continuum of Several Dimensions. — The conception of the mathematical continuum of n dimensions may be led up to quite naturally by a process similar to that which we discussed at
the beginning of this chapter. A point of such a continuum is defined by a system of n distinct magnitudes which we call its co-ordinates.
The magnitudes need not always be measurable; there is, for instance, one branch of geometry independent of the measure of magnitudes, in which we are only concerned with knowing, for example, if, on
a curve ABC, the point B is between the points A and C, and in which it is immaterial whether the arc A B is equal to or twice the arc B C. This branch is called Analysis Situs. It contains quite a
large body of doctrine which has attracted the attention of the greatest geometers, and from which are derived, one from another, a whole series of remarkable theorems. What distinguishes these
theorems from those of ordinary geometry is that they are purely qualitative. They are still true if the figures are copied by an unskilful draughtsman, with the result that the proportions are
distorted and the straight lines replaced by lines which are more or less curved.
As soon as measurement is introduced into the continuum we have just defined, the continuum becomes space, and geometry is born. But the discussion of this is reserved for Part II.
1. ↑ For (γ+1)+1=(1 + γ)+1=1+(γ+1}. — [TR.] | {"url":"https://en.wikisource.org/wiki/Science_and_Hypothesis/PART_I","timestamp":"2014-04-16T08:49:09Z","content_type":null,"content_length":"76662","record_id":"<urn:uuid:d4bbde81-3021-45e5-9bcd-909492f15840>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
So We Just Consider the Resistor's Tolerance, Right? | EE Times
So We Just Consider the Resistor's Tolerance, Right?
When designing precision electronics or performing a detailed worst-case analysis, one quickly learns to consider parameters that may not be so important in other applications. One of the more
interesting things to learn is that the tolerance of a resistor is just the starting point. It does not actually define the maximum or minimum value the resistor could be within your circuit.
The key parameters associated with a resistor are as follows.
• Tolerance: This defines how close to the nominal value is allowable for the resistor when it is manufactured. A nominal 1,000Ω resistor with a tolerance of ±5% will have a value ranging between
950 and 1,050Ω. This value will be fixed; the value of the resistor will not vary during its life due to the tolerance. However, the engineer has to consider the tolerance in design calculations
and ensure the circuit will function across the entire potential value range.
• Temperature coefficient: This describes how the value of the resistor changes as a function of temperature. It is defined as parts per million/Kelvin; common values are 5, 10, 20, and 100 PPM/K.
Actually, the best way to think of this is parts per million per ohm/Kelvin. A 1,000Ω resistor with a temperature coefficient of 100 PPM experiencing a ±60K temperature change over the operating
temperature range (240-360K, based on an ambient room temperature of 300K) will experience a resistance change of ±6Ω based on its nominal value. Obviously, the lower the temperature coefficient,
the more expensive the resistor will be. (This is the same for low-tolerance resistors.)
• Resistor self-heating: For really high-precision circuits, it is sometimes necessary to consider the power dissipation within the resistor. The resistor will have a specified thermal resistance
from the case to ambient, and this will be specified in °C/W. The engineer will know the power dissipation within the resistor; this can be used to determine the temperature rise and hence the
effect on the resistance.
To determine the maximum and minimum resistance applicable to your resistor, you must consider the tolerance, the temperature coefficient, and the self-heating effect. As you perform your analysis,
you may notice some of the parameters are negligible and can be discounted, but you have to consider them first to know whether or not you can discount them.
For some precision circuits (gain stages in amplifiers, for example) it may be necessary to match resistors to ensure their values are within a specified tolerance of each other and have the same
temperature coefficients.
In certain circuits, it is also important to make sure that critical resistors are positioned correctly to ensure both terminal ends of the resistor are subjected to the same heating or cooling
effects. Otherwise, the Seebeck effect may need to be considered. When using forced airflow, for example, it may be necessary to ensure that both resistor terminals are perpendicular to the airflow,
so the component is of uniform temperature.
To what level do you consider these effects in your own designs? Also, are there any other factors you take into consideration when selecting a resistor?
Related posts: | {"url":"http://www.eetimes.com/author.asp?section_id=36&doc_id=1320334","timestamp":"2014-04-19T04:23:12Z","content_type":null,"content_length":"163357","record_id":"<urn:uuid:fa89b791-3ef2-40fb-aa92-85a7c7f4d905>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
jet space
jet space
Differential geometry
Jet spaces
The notion of jet space or jet bundle is a generalization of the notion of tangent spaces and tangent bundles, respectively. While a tangent vector is an equivalence class of germs of curves with
order-$1$ tangency at a given point in the target, jet spaces are equivalence classes of germs of smooth maps with respect to (finite) order-$k$ tangency at some point in the target.
Related pages
Jet bundles were first introduced by Charles Ehresmann.
• wikipedia: jet, jet bundle
• Ivan Kolar, Jan Slovak, Peter W. Michor, Natural operations in differential geometry, book 1993, 1999, pdf, hyper-dvi, ps
• G. Sardanashvily, Fibre bundles, jet manifolds and Lagrangian theory, Lectures for theoreticians, arXiv:0908.1886
• Shihoko Ishii, Jet schemes, arc spaces and the Nash problem, arXiv:math.AG/0704.3327
• D. J. Saunders, The geometry of jet bundles, London Mathematical Society Lecture Note Series 142, Cambridge Univ. Press 1989.
• Arthemy Kiselev, The twelve lectures in the (non)commutative geometry of differential equations, preprint IHES M/12/13 pdf
Revised on May 2, 2013 21:43:59 by
Urs Schreiber | {"url":"http://ncatlab.org/nlab/show/jet+space","timestamp":"2014-04-18T13:16:16Z","content_type":null,"content_length":"23233","record_id":"<urn:uuid:56b9a161-c3f9-48b9-aa42-f60f7fce3f70>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Velocity, Acceleration, Displacement, Time
October 1st 2013, 09:22 PM #1
Jan 2013
Velocity, Acceleration, Displacement, Time
I have missed many of my physics classes due to a formidable cold, and thus am in need of guidance on the following questions:
1) Bill is playing in the backyard when his mother asks him to run to the letterbox and collect the mail. Figure 10.16 shows the velocity-time graph for his run to the letterbox.
a. What is Bill's displacement: i. after 8.0 s? ii. between t = 10 s and t = 12 s.
b. How far is the letterbox from where Bill was playing?
c. What is the acceleration at: i. t = 6.0 s? ii. t = 14.0 s?
2) An object starts from rest and travels in a straight line for 5.0 s with a uniform acceleration of 4.0 ms^-2. It continues to move with the uniform velocity acquired for a further 12 s and it
is finally brought to rest with a
deceleration of 2.5 ms^-1.
a. Calculate the distance travelled and the time taken.
b. Draw a velocity-time graph and show how the result for part a could also be obtained from the graph. (If you could simply tell me how this would work, that would be great.)
Thanks in advance.
Re: Velocity, Acceleration, Displacement, Time
The graph you provided appears to be an inverted parabola. It is velocity vs time. That meas the motion is not uniformly accelerated. You cannot use the standard kinematics formulae. The maximum
time is about 19 s (the image is fuzzy at the point). Now you need to find the equation of the velocity v(t).
since v(0) = 0, and v(19) = 0, you could model the velocity as v(t) = -kt(t-19) (Why negative sign in front of k?)
Use the given data and find the equation.
The displacement from point A to point B is going to be the inegral from A to B of v(t) = dx/dt.
Work out that part and let me know if you need more help.
Re: Velocity, Acceleration, Displacement, Time
We haven't covered calculus yet, so I have no idea how to find the 'inegral from A to B of v(t) = dx/dt.'
Also, the graph ends at t = 20 s and doesn't appear to be a perfect parabola, making me unsure as to whether your equation would work.
Re: Velocity, Acceleration, Displacement, Time
Then procede that way: draw the grid as shown in attached image on your graph; the time axis on your image is not horizontal. The intersection point of a vertical line at time t, t=12 s,
intersets with the graph at a point that corresponds to a velocity v(t). Knowing that the displacement fromula at t is v(t)*t, the displacement between two times on the graph is the area under
the graph v(t) from t1 to t2 with t2 > t1 becomes the area of the trapezoid. You will have to add the areas of all the trapezoids between t1 and t2 to obtain the displacement from t1 to t2.
Let's see how far you could go.
Re: Velocity, Acceleration, Displacement, Time
Okay, so I drew the grid (albeit somewhat messily):
If I understand correctly, when t = 8 s, v(t) = 7.5 ms^-1 (roughly). Therefore, displacement = v(t)*t = 7.5 x 8 = 60 m. The book reads "18 m (approx)".
As for the second displacement question, I found the area of the trapezoid between t = 10 s and t = 12 s by doing ((8+7.5)/2)*2 = 15.5 m. The book reads "14 m (approx)".
EDIT: I had another look at the second question from my original post and found the correct answer. Well, half of it -- the answer for the total time taken wasn't in my textbook.
First, I found the velocity given the acceleration, which was 20 ms^-1. Next, I used the equation v^2 = u^2 + 2ad to find d. However, in order to get the right answer, I had to use 4 as the value
for a, which was the acceleration at the start of the object's movement. Why, then, is the deceleration not included somewhere in the equation? One would assume that you would need to include it
in order to find the total distance travelled, as the distance would decrease or increase depending on how quickly the objected stopped moving.
Also, I used to the equation t = (v-u)/a to calculate t during the deceleration stage. I got an answer of 8 and added it to 17 to get a total time of 25 s for the object's movement. Is this
Last edited by Fratricide; October 3rd 2013 at 07:00 PM.
Re: Velocity, Acceleration, Displacement, Time
Okay, so I drew the grid (albeit somewhat messily):
If I understand correctly, when t = 8 s, v(t) = 7.5 ms^-1 (roughly). Therefore, displacement = v(t)*t = 7.5 x 8 = 60 m. The book reads "18 m (approx)".
As for the second displacement question, I found the area of the trapezoid between t = 10 s and t = 12 s by doing ((8+7.5)/2)*2 = 15.5 m. The book reads "14 m (approx)".
EDIT: I had another look at the second question from my original post and found the correct answer. Well, half of it -- the answer for the total time taken wasn't in my textbook.
First, I found the velocity given the acceleration, which was 20 ms^-1. Next, I used the equation v^2 = u^2 + 2ad to find d. However, in order to get the right answer, I had to use 4 as the value
for a, which was the acceleration at the start of the object's movement. Why, then, is the deceleration not included somewhere in the equation? One would assume that you would need to include it
in order to find the total distance travelled, as the distance would decrease or increase depending on how quickly the objected stopped moving.
Also, I used to the equation t = (v-u)/a to calculate t during the deceleration stage. I got an answer of 8 and added it to 17 to get a total time of 25 s for the object's movement. Is this
v(t) is the instantaneous velocity, i.e. it is a function, not just one number on the graph. If you do not know the function v(t) the answer from v(t)*t is incorrect.
Draw a triangle base = 8 and height = 7.5, those are data from your graph. The displacement from 0 to 8 would be the area of the triangle which is 30. Because the graph is above the straight line
from 0 to 7.5, the area under the graph should be a little more than 30 m. I do not know where your book got 18 m. Either you provided incorrect question, or the 18 m of the book is incorrect.
In the second problem you will need to work out each portion of the motion separately. First portion of the motion is 5 s with constant acceleration 4 m/s^2. At that point his velocity is v(5).
At the 5th second a new portion of the motion begins, a uniform rectilinear motion with constant velocity exactly v(5) for another 12 s. At the end of the 12th of rectilinear motion its velocity
is still v(5). This is the initial velocity of the third portion of the motion. There the acceleration is negative (you call it deceleration). You don’t have the time for this portion of the
motion but you have the acceleration and the initial velocity v(5); the final velocity 0 (at rest). You find the time, then calculate the distance traveled in the third portion of the motion. Add
the distance from the three portions of the motion, and the times from the three portions of the motions to complete the answer to your problem
Re: Velocity, Acceleration, Displacement, Time
Owner description Casino Royale is the largest floating Casino in Goa
October 1st 2013, 10:02 PM #2
Senior Member
Sep 2013
October 2nd 2013, 12:00 AM #3
Jan 2013
October 2nd 2013, 06:28 AM #4
Senior Member
Sep 2013
October 3rd 2013, 06:34 PM #5
Jan 2013
October 3rd 2013, 10:06 PM #6
Senior Member
Sep 2013
October 3rd 2013, 10:58 PM #7
Oct 2013
united states | {"url":"http://mathhelpforum.com/math-topics/222496-velocity-acceleration-displacement-time.html","timestamp":"2014-04-20T12:02:37Z","content_type":null,"content_length":"56508","record_id":"<urn:uuid:6067114b-e205-4b32-b032-f10232790bc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Debye Potential
Debye Potential (BPM)
For finding electromagnetic fields in a cylindrical geometry it is convenient to use cylindrical coordinates r, θ and z, and Debye potentials parallel to the axis of rotation,
It is supposed that the permittivity is constant in the region in which the above equations apply. The fibers are defined as a series of concentric layers of constant dielectric, so for the fiber
mode solver, the region is the annulus between the layer boundaries. Equation 6 and Equation 7 are applied in a piecemeal way. Strictly speaking, the ψ , ϕ and ε should have subscripts to indicate to
which layer the solution applies, but they are dropped here to simplify the representation. The complete solution for the multilayer fiber will be constructed by using a separate pair of functions
for each layer, and by matching the tangential field components at the layer boundaries.
The divergence of the curl of any vector field is identically zero. Therefore the choice of (6) and (7) to represent the electric and magnetic fields means that the divergence equations (2) will
automatically be satisfied. In the remainder of this section, we show that if the two potentials ψ and ϕ are solutions to the scalar Helmholtz equation, then (6) will be a solution to the Maxwell
wave equation (5), at least within any given layer. In subsequent sections, the particular solution for the mode will be found by observing the boundary conditions imposed by physical considerations
on E and h.
Suppose ψ is a continuous function of position that satisfies the Helmholtz equation in 3 dimensions
Consider the following equation, true for any ψ satisfying (9) in a region of constant
Associate the first two terms together and apply the vector identity
to get
The right hand side of (11) is a gradient of a scalar function, so the curl of the left hand side must be zero. take the curl of (11)
and define E by the first term of (6). With E defined this way, the Maxwell electric wave equation (5) follows, and therefore this E is a possible solution for electric field.
Solutions defined from the ψ function solely (that is to say, solutions with ϕ = 0), will have
electric fields of the form
from which we can see there is no longitudinal (z) component to the electric field, i.e.
these are transverse electric fields. The magnetic field associated with these
transverse electric fields is constructed from the first Maxwell curl equation in (4):
which is the second term of (7). The same sequence (7) – (11) can be applied to the ϕ function to give transverse magnetic fields, and the electric components of these are given as the second term of
(6). A linear superposition of transverse magnetic and transverse electric fields will give solutions that are neither transverse magnetic nor transverse electric. In fact, most modes of the fiber
are of this hybrid kind. In the hybrid case, equations (6) and (7) are used as written, and the relative value of the ψ and ϕ is
now important, since it is a specific linear combination that matches the boundary conditions at the layer boundaries. | {"url":"http://optiwave.com/optibpm-manuals/bpm-debye-potential/","timestamp":"2014-04-18T13:06:48Z","content_type":null,"content_length":"92933","record_id":"<urn:uuid:cd299d25-258f-43d9-a4ba-e6409cb5788b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: rec.puzzles Hall of Fame
Usenet Newsgroups : rec.puzzles Hall of Fame
The rec.puzzles Hall of Fame is a compilation of over 500 of the most popular puzzles that have been posted and discussed in the rec.puzzles newsgroup. In most cases a detailed solution has been
Many of these puzzles also appear in Braingle's own collection.
Categories : logic : number.p
Mr. S. and Mr. P. are both perfect logicians, being able to correctly
deduce any truth from any set of axioms. Two integers (not necessarily
unique) are somehow chosen such that each is within some specified
range. Mr. S. is given the sum of these two integers; Mr. P. is given
the product of these two integers. After receiving these numbers, the
two logicians do not have any communication at all except the following
<<1>> Mr. P.: I do not know the two numbers.
<<2>> Mr. S.: I knew that you didn't know the two numbers.
<<3>> Mr. P.: Now I know the two numbers.
<<4>> Mr. S.: Now I know the two numbers.
Given that the above statements are true, what are the two numbers? | {"url":"http://www.braingle.com/news/hallfame.php?path=logic/number.p","timestamp":"2014-04-17T21:23:08Z","content_type":null,"content_length":"15573","record_id":"<urn:uuid:0a6938ec-b467-4932-af41-447cbf0c66a6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial-time reduction (Probabilistic Automata)
December 6th 2011, 04:17 PM
Polynomial-time reduction (Probabilistic Automata)
The problem that I'm having trouble with is the following (please see the attached image before proceeding).
Give a polynomial-time reduction of the equivalence problem for closed arithmetic sequential program to the equivalence problem for probabilistic automaton.
I have thought that maybe $\Sigma_L = \{ 0,1 \}$ and $\Sigma_I = \{ +, -, * \}$. But now I'm not sure how to proceed. Please help!!
December 6th 2011, 11:35 PM
Re: Polynomial-time reduction (Probabilistic Automata)
To solve this you need to assume you have a solution to the equivalence problem for sequential arithmetic. You need to describe an effective procedure for using this solution to answer "are these
two probabilistic trees equivalent?".
You'll basically need to find a way to change any probabilistic tree into a unique (by equivalence) sequential arithmetic program, in polynomial time. That should be sufficient for the proof.
December 7th 2011, 02:41 AM
Re: Polynomial-time reduction (Probabilistic Automata)
First note that we are talking about the equivalence of "closed" programs. This is much easier as the output to the closed program is an integer (always, no matter how many times you run it,
since there is no input). The part I'm having trouble with is how one would encode a closed ASP (arithmetic sequential program) as a Probabilistic tree automaton.
Please help!
December 7th 2011, 10:33 AM
Re: Polynomial-time reduction (Probabilistic Automata)
I'm not certain from what you're saying, but you may have things backwards (which is quite a common mistake with reductions). If you're reducing from ASPE to PTAE, you have to write a program
that determines whether any two p-tree automata are equivalent by using a program that tells you if two arith-sequence programs are equivalent. The one you're reducing to is the one that you have
to write; the one you reduce from is the one you're given.
So you need to tell if two PA trees are equivalent, by using a machine that tells you if two AS programs are equivalent. You need to describe a function that takes a PA tree and turns it into an
arithmetic sequence program, and this function must preserve the equivalence of one to the other.
Look at how PA trees are evaluated. You want to describe how to turn each leaf into a sequence of one or more assignments (which should refer only to Dist(Q) and constants), and how to turn an
internal node into a sequence of one or more assignments (which will refer only to the variables that come from each branch, whether leaf or internal). Since I'm not 100% certain what Dist()
signifies I can't take this all the way, but it should be straightforward...
December 7th 2011, 10:35 AM
Re: Polynomial-time reduction (Probabilistic Automata)
Oh sorry about that, Dist(Q) is the set of probability distributions on the set of states Q, that is, the set of functions f:Q->[0,1] s.t. $\sum_{q\in Q} f(q) = 1$.
December 7th 2011, 11:26 AM
Re: Polynomial-time reduction (Probabilistic Automata)
I'm so sorry. I got this backwards, just like I warned you about. :P You are building the thing you reduce from, not the other way around. I was making an error when I said "reduce to the halting
problem" in another post.
The other way is impossible, which is how I figured this out. This way is doable. You have a solver for "do these trees give the same result", and you want to build a solver for "do these
sequences of addition, subtraction, and multiplication give the same result". To do this, you need to turn each sequence into a tree.
I'll respond more later, can't finish this now.
December 9th 2011, 02:22 AM
Re: Polynomial-time reduction (Probabilistic Automata)
I'm so sorry. I got this backwards, just like I warned you about. :P You are building the thing you reduce from, not the other way around. I was making an error when I said "reduce to the halting
problem" in another post.
The other way is impossible, which is how I figured this out. This way is doable. You have a solver for "do these trees give the same result", and you want to build a solver for "do these
sequences of addition, subtraction, and multiplication give the same result". To do this, you need to turn each sequence into a tree.
I'll respond more later, can't finish this now.
Hi Annatala, would it be possible if you try to help me do this question now? I've been trying to find this polynomial time reduction, but I find it quite hard and unable to do it.
Thanks alot! I reallly appreciate it.. | {"url":"http://mathhelpforum.com/discrete-math/193617-polynomial-time-reduction-probabilistic-automata-print.html","timestamp":"2014-04-21T11:06:30Z","content_type":null,"content_length":"10360","record_id":"<urn:uuid:49d1476f-190e-40c9-b153-bcf508a3d8c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Does a significant interest in math change who you are?
January 25th 2013, 03:40 PM
Does a significant interest in math change who you are?
Firstly, I'm an engineer student by discipline. I had such an interest in math it has now taken over my life.
I'm now at a stage where, I find it hard to be involved with social communication?
I often find myself just staring at 'something' so as to keep my eyes occupied, when through mind-set, I'm like;
Are there any members out there like this? Is this natural?
January 26th 2013, 12:03 PM
Re: Does a significant interest in math change who you are?
I am an average mathematician who spends a lot of time thinking about math, I even write about math on my website dead end math. That being said, I don't have a problem talking to people about
anything. I think about what they say in a very analytical way, sometimes to the point it seems comical. But overall, I leave my compulsions on my whiteboards.
I think you should write about what you like to talk about, write up your thoughts in Latex. Then you can post it online for people to see, like on Scribd. See how many other people take interest
and make comments, heck you should also post your thoughts on here. It is a math forum after all. Then maybe you will not think about all these mathematical conjectures during your regular social
Thats my two cents anyway.
February 3rd 2013, 10:01 AM
Re: Does a significant interest in math change who you are?
Definitely changed me and I haven't even gone far in math. I'm not sure how it changed me. I don't have a problem talking to people but I do feel that I understand some things better than I did | {"url":"http://mathhelpforum.com/math/212038-does-significant-interest-math-change-who-you-print.html","timestamp":"2014-04-17T02:27:24Z","content_type":null,"content_length":"5386","record_id":"<urn:uuid:7aae95a9-3e1e-4d6a-8147-e97c79854690>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trend in Square roots question [Curious Math Newb question]
Notice also the same linearity in the simpler progression 9 - 4 = 5, 16 - 9 = 7, 25 - 16 = 9 ...
If you grasp visual concepts more easily, look at this:
The multicoloured square on the left represents your original series, that on the right represents the simpler one mentioned above. In each case the two red squares are the difference between
successive terms (in this case 6^2 and 7^2) - notice that in each large square the green and blue L-shapes are congruent. | {"url":"http://www.physicsforums.com/showthread.php?t=569096","timestamp":"2014-04-16T10:34:04Z","content_type":null,"content_length":"30429","record_id":"<urn:uuid:4c86a376-6ff4-4a56-af66-940399e60a17>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - QED and the Landau Pole
According to Wiki:
"If a beta function is positive, the corresponding coupling increases with increasing energy. An example is quantum electrodynamics (QED), where one finds by using perturbation theory that the beta
function is positive. In particular, at low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127.
Moreover, the perturbative beta function tells us that the coupling continues to increase, and QED becomes strongly coupled at high energy. In fact the coupling apparently becomes infinite at some
finite energy. This phenomenon was first noted by Lev Landau, and is called the Landau pole."
I understood the above since years ago that probing with high energy causes the coupling constant to be larger. But BHobba was claiming that "As the cutoff is made larger and larger the coupling
constant gets larger and larger until in the limit it is infinite". He was saying that even at low energy, α ≈ 1/137 would become 1/127 if you increase the terms of the perturbation series. To put in
mathematical form.
\sum_{n=0}^\infty c_n g^n
[/tex] (where g is the coupling constant).
with one term
\sum_{n=0}^\infty c_n (1/137)^n .
with two terms or three terms
\sum_{n=0}^\infty c_n (1/15)^n .
with 1000 terms
\sum_{n=0}^\infty c_n (1/0)^n .
The above is true even at low energy (before I thought it's only when the probing is high energy). Can anyone science advisor please confirm if this is true and the context of this? Note I'm talking
about normal perturbation and let's not complicate things by including the Renormalization Group and the trick of regulator and stuff. Thanks. | {"url":"http://www.physicsforums.com/showpost.php?p=3776610&postcount=1","timestamp":"2014-04-17T03:53:20Z","content_type":null,"content_length":"10258","record_id":"<urn:uuid:34620a04-853f-4f8c-89a9-36ed6a38481a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sections 2, 3 and 4 are
extracted with permission
of the publishers from
op 319-323 and 325-341 of
"Muttiple Cropping Systems"
C.A. Francis (ed). Macmiltan.
New York, 1986.
['7 (f7
625- ~3(>
Training Working Document No. 6
Prepared by
Roger Mead
in collaboration
with CIMMYT staff
Lisboa 27
Apdo. Postal 6-641,
06600 M6xico, D.F., Mexico
This is one of a new series of publications from CIMMYT entitled Training Working
Documents. The purpose of these publications is to distribute, in a timely fashion,
training-related materials developed by CIMMYT staff and colleagues. Some Training
Working Documents will present new ideas that have not yet had the benefit of extensive
testing in the field while others will present information in a form that the authors hav
tested and found useful for teaching. Training Working Documents are intended for
distribution to participants in courses sponsored by CIMMYT and to other interested
scientists, trainers, and students. Users of these documents are encourage to provide
feedback as to their usefulness and suggestions on how they might be improved. These
documents may then be revised based on suggestions from readers and users and
published in a more formal fashion.
CIMMYT is pleased to begin this new series of publications with a set of six documents
developed by Professor Roger Mead of the Applied Statistics Department, University of
Reading, United Kingdom, in cooperation with CIMMYT staff. The first five documents
address various aspects of the use of statistics for on-farm research design and analysis,
and the sixth addresses statistical analysis of intercropping experiments. The documents
provide on-farm research practitioners with innovative information not yet available
elsewhere. Thanks goes out to the following CIMMYT staff for providing valuable input
into the development of this series: Mark Bell, Derek Byerlee, Jose Crossa, Gregory
Edmeades, Carlos Gonzalez, Renee Lafitte, Robert Tripp, Jonathan Woolley.
Any comments on the content of the documents or suggestions as to how they might be
improved should be sent to the following address:
CIMMYT Maize Training Coordinator
Apdo. Postal 6-641
06600 Mexico D.F., Mexico.
Document 6
1. Measurements and Analysis
The first point to recognize is that there is not a single form of statistical analysis which is appropriate to all
forms of intercropping data. Even for a single set of experimental data it will be important to use several
different forms of analysis. For the two components of an intercropping system the data may occur in
different structural forms. In general, data structures from intercropping experiments will be complex with
different forms of yield information available for different subsets of experimental units.
1.1 Valid Comparisons
In considering alternative possibilities for the analysis of data from intercropping experiments it is essential
that the principle of comparing "like with like" is obeyed. If yields are measured in different units, or over
different time periods, or for different species, then in general comparisons will not be valid and should not
be attempted. To illustrate the difficulties and possibilities we consider a set of ten "treatments". Any actual
experiment would be unlikely to include such a diverse set of treatments though there would typically be
several representatives of some of the "treatment types" illustrated. The structure for the ten treatments is
as follows.
Legume Crop Cereal Crop Monetary Relative
Species Yield Species Yield Value Performance
1) I yi r
2) II Z2 r
3) A a3 r3
4) -B b4 r4
5) I Y5 A a. r5 y-/yI + a5/a3
6) I Y6 A a6 r6 y6/y + ar/a3
7) I y7 B b7 r7 V7/y I + b7/b4
8) I y8 B bg rg ys/y I + bs/b4
9) II z9 A as rg zq/z2 + aq/a3
10) 11 zio B blo rio z1o/z2+blo/b4
A comparison is valid only when the units of measurement are identical. Thus it is valid to investigate the
effect of different cereal crops on legume yields of one species (y i, ys, y6. y7. y s or of the other species
(z2, Z9, zio). Similarly the effect of different legume environments on crop yield (a3, a5, a6, aq) or (b4, b7,
bs, blo). The effects of different treatment systems on pairs of yields may be assessed by comparing the
pair (y5, as) with (y6, a6) or (y7, b7) with (y8, b8). Particular combinations of the pair of yields may also be
compared so that (ys/yl + a5/a3) may be compared (y6/yl + a6/a3). However it is not valid to compare
(Y5/YI + as/a3) with (y7/yI + b7/b4) because the divisors are different. In interpretation of these sums of
ratios as "Land Equivalent Ratios" (Willey 1979, Mead and Riley 1981) the sum of ratios is thought of in
terms of land areas required to produce equivalent yields through sole crops. However land areas required
to grow crop A are not comparable with land areas to grow crop B. Comparison of biological efficiency
through LER's cannot be valid for different crop combinations.
The only measure by which all different component combinations can be compared must be a variable,
such as money, to which all component yields can be directly converted, and which has a practical
1.2 The Variety of Forms of Analysis
The only form of analysis which retains all the available information is multivariate. When the
performance of each component crop may be summarised in a single yield then a bivariate analysis of
variance is the most powerful technique available. However only those experimental units for which both
yields may be measured can be included in a bivariate analysis.
Analysis of each crop yield separately is also like y to be useful, though it is important to check that the
variability for monocrop yields is the same as that for intercrop yields. Analysis of crop indices may also
be useful.
2. General Principles of Statistical Analysis
2.1 Analysis of Variance
The initial stage for most analyses of experimental data is the analysis of variance for a single variate, or
measurement. The analysis of variance has two purposes. The first is to provide, from the error mean
square, an estimate of the background variance between the experimental units. This variance estimate is
essential for any further analysis and interpretation. It defines the precision of information about any mean
yields for different experimental treatments. One major requirement often neglected is that the error mean
square must be based on variation between the experimental units to which treatments are applied. If
treatments are applied to plots 10 x 3 m, then the variance estimate used for comparing treatments must be
that which measures the variation between whole plots. Measurements on subplots or on individual plants
are of no value for making comparisons between treatments applied to whole plots.
The second purpose of the analysis of variance is to identify the patterns of variability within the set of
experimental observations. The pattern is assessed through the division of the total sum of squares (SS)
into component sums of squares and the interpretation of the relative sizes of the component mean squares.
To illustrate the simple analysis of variance, and for illustration of other techniques, later in this chapter, I
shall use data from a maize/cowpea (Vigna unguiculata) intercropping experiment conducted by Dr.
Ezumah at IITA, Nigeria. The experimental treatments consisted of three maize varieties, two cowpea
varieties, and four nitrogen levels (0, 40, 80, 120 kg/ha) arranged in three randomized blocks of 24 plots
each. The data for cowpea and maize yields are given in Table 1. The analysis of variance and tables of
mean yields for the cowpea yields are shown in Table 2. The analysis of variance shows that there is very
substantial variation in cowpea yield for the different maize varieties: there is also a clearly significant (5
percent) interaction between cowpea variety and nitrogen level and a nearly significant variation between
mean yields for different nitrogen levels. The tables of means for cowpea yield that should be presented are
therefore for (1) maize varieties and (2) cowpea variety x nitrogen levels, with the mean yields for nitrogen
levels as a margin to the table. The analysis of variance implies strongly that no other means should be
The interpretation indicated by the analysis and mean yields is as follows. Yield of cowpea is substantially
determined by the maize variety grown with the cowpea. Higher cowpea yields are obtained when maize
variety 1 is grown. For cowpea variety B, cowpea yield is reduced as increasing amounts of nitrogen are
applied (presumably because of correspondingly improved maize yield). Yields for cowpea variety A are
not affected in this manner.
2.2 Assumptions in the Analysis of Variance
The interpretation of an analysis of variance and of the subsequent comparisons of treatment means
depends critically on the correctness of three assumptions made in the course of the analysis. If the
assumptions are not valid, the conclusions drawn may also be invalid and, therefore, misleading. Evidence
available from the analyses of intercropping experiments suggests that failure of the assumptions is at least
not less frequent than in monoculture experiments. It is therefore vital that the experimenter deliberately
consider the assumptions before completing the analysis. The three assumptions are:
1 That the variability of results does not vary between treatments
2 That treatment differences are consistent over blocks
3 That observations for any particular treatment for units within a single block would be approximately
normally distributed
Table 1. Cowpea and Maize Yields in intercrop Trial at IITA, Nigeria
Yield (kg/ha)a
Cowpea Nitrogen
variety level I II I1I I I HII I nI m
A NO 259 645 470 523 540 380 585 455 484
A NI 614 470 753 408 321 448 427 305 387
A N2 355 570 435 311 457 435 361 586 208
A N3 609 837 671 459 483 447 416 357 324
B NO 601 707 879 403 308 715 590 490 676
B NI 627 470 657 351 469 602 527 321 447
B N2 608 590 765 425 262 612 259 263 526
B N3 369 499 506 272 421 280 304 295 357
A NO 2121 2675 3162 2254 3628 4069 2395 2975 4576
A NI 3055 3262 3749 3989 3989 4429 4429 4135 4429
A N2 3922 3955 4095 4642 4135 4642 5589 4429 5156
A N3 4129 4129 4022 3975 4789 4282 5990 5336 5663
B NO 2535 2535 2288 4209 3989 2321 2901 4429 3482
B NI 2675 3402 3122 4789 4936 3342 3555 4936 4135
B N2 3855 3815 3535 5083 4496 3702 6023 5296 4069
B N3 3815 4202 3749 5656 5516 5223 5516 5083 5369
aYields grouped by maize variety (1, 2, 3) and planting block (I, II, III).
Source: Data from Dr. Ezumah, ITA, unpublished.
There is an element of subjectivity about the assessment of these assumptions. For a more extensive
discussion the reader is referred to Chap. 7 of Mead and Cumow (1983). In brief, the experimenter should
1 Does it seem reasonable, and do the data appear to confirm that the ranges of values for each
treatment are broadly similar and that there is no trend for treatments giving generally higher yields to
display a correspondingly greater range? In biological material it is more reasonable to suppose that
treatments with a high mean yield also have a rather higher variance of yield, and so an experimenter
should be prepared to recognize this occurrence and to use a transformation of yield before analysis.
2 Are treatment differences similar in the"good" blocks and in the "bad" blocks? Again if the pattern of
bigger differences in better blocks, which might reasonably be expected, is found, then a
transformation of yield is necessary.
3 Do I believe that an approximately normal distribution is a sensible assumption?
Table 2. Analysis of Variance and Tables of Means for Cowpea Data in Intercrop Trial at IITA,
Analysis of variance
Source SS df MS F
Blocks 73,000 2 36,500 2.8
Maize varieties (M) 409,400 2 204.700 15.7a
Cowpea varieties (C) 6,000 1 6,000 0.5
Nitrogen (N) 113,100 3 37,700 2.9
MxC 9,900 2 4.950 0.4
MxN 67,600 6 11.267 0.9
CxN 172,400 3 57.433 4.4b
MxCxN 135,400 6 22,567 1.7
Error 599,300 46 13.000
Table of means cowpeaa yield (kg/ha)
Nitrogen level
Cowpea variety 0 40 80 120 Maize variety Mean
A 482 459 413 511 1 582
B 597 497 479 367 2 430
Mean 539 478 446 439 3 415
SE of difference for N means = 50 SE of difference 43
SE of difference for combinations = 71
aSignificant at 0.1% level
bSignificant at 5% level.
Source: Data from Dr. Ezumah, IITA, unpublished.
For the data in Table 1 a visual inspection reveals no reason to doubt the assumptions. The only peculiarity
of the data is the repetition of some values in the set of maize yields, but since no obvious explanation
could be found the data were used for analysis and interpretation as shown in Table 2.
2.3 Comparisons of Treatment Means
Many sets of experimental results are wasted through an inadequate analysis of the results. In many cases
this results from the use of multiple comparison tests of which the most prevalent, and therefore the one
that causes most damage, is Duncan's multiple range test. The reason that multiple comparison tests lead to
a failure to interpret experimental data properly is that such tests ignore the structure of experimental
treatments and hence fail to provide answers to the questions that prompted the choice of experimental
Two particular situations in which multiple range tests should never be used are for factorial treatment
structures and if the treatments are a sequence of quantitative levels. In the former the results should be
interpreted through examination of main effects and interactions. In the second the use of regression to
describe the pattern of response to varying the level of the quantitative factor should be obligatory. Thus,
for the cowpea yield example, the effect of nitrogen on yield for cowpea variety B can best be summarized
by the regression equation
Yield = 591 1.77 N
where yield and N are both measured in kg/ha. The predicted yields for the four nitrogen levels (0, 40, 80,
120 kg/ha) are 591,520, 449, and 379, which obviously agree very closely with the observed means.
Examples of the failure of experiments to interpret their data properly occur regularly in all agricultural
research journals wherever multiple comparison methods are widely used. Examples of misuse and
discussion of alternative forms of analysis are given by Bryan-Jones and Finney (1983). Morse and
Thompson (1981), and many other authors. The only sensible rule to adopt when analyzing experimental
data is never use multiple range tests or other multiple comparison methods.
2.4 Presentation of Results
The prime consideration in presenting experimental results should be to provide the reader with all
necessary information for a proper interpretation of results, without unnecessary detail. This principle leads
to some particular advice:
1 Tables of mean yields should always be accompanied by standard errors for differences between
mean yields and the degrees of freedom for those standard errors.
2 When multiple levels of analysis are used, as for split plot designs then all the different standard
errors must be given.
3 When the results are presented in graphic form the data should always be shown (plotting mean
yields). A graph showing only a fitted line or curve deprives the reader of the opportunity to assess
the reasonableness of the fitted model.
4 Standard errors are much more effective with tables of means than with graphs where standard errors
are represented by bars.
5 All standard errors or other measures of precision should be defined unambiguously. The statement
below a set of means "standard error = 11.3" is ambiguous because it does not specify if it is for a
mean or a difference of means or, even, for a single value rather than a mean.
3. Bivariate Analysis
3.1 What is a Bivariate Analysis?
A bivariate analysis is a joint analysis of the pairs of yields for two crops intercropped on a set of
experimental plots. The philosophy is that because two yields are measured for each plot. and the yields
will be interrelated, they should be analyzed together. The interrelationship is important since it implies
that conclusions drawn independently from two separate analyses of the two sets of yields may be
misleading. There are two major causes of interdependence of yield of two crops grown on the same plot.
If the competition between the two crops is intense, then it might be expected that on those plots where
crop A performs unusually well, crop B will perform unusually badly and vice versa. This would lead to a
negative background correlation between the two crop yields, quite apart from any pattern of joint variation
caused by the applied treatments. Failure to take this negative correlation into account could lead to high
standard errors of means for each crop analyzed separately, which could mask real differences between
Alternatively it may be that on apparently identical plots, the two crops respond similarly to small
differences between plots producing a positive background correlation. Again looking at separate analyses
for the two crops distorts the assessment of the pattern of variation.
To see how consideration of this underlying pattern of joint random variation is essential to an
interpretation of differences in treatment mean yields some hypothetical data are shown in Fig. 1.
Individual plot yields are shown for two intercrop systems (X and 0), the mean crop yields for the two
systems being identical for three situations. In Fig. I a the pattern of background variation corresponds to a
strongly competitive situation (negative correlation), whereas for Fig. lb there is a positive correlation of
yields over the replicate plots for each treatment. In Fig. Ic there is no correlation between the two crop
yields. In all three cases the comparisons in terms of each crop yield separately would show no strong
evidence of a difference between the two systems. However the joint consideration of the pair of yields
against the background variation shows that the difference between the systems is clearly established in
Fig. la, that Fig. lb suggests strongly that the apparent effect is attributable to random variation, and that in
Fig. Ic the separation of the two systems is rather more clear than could be established by an analysis for
either crop considered alone.
x X 00 0
x 0
x xOX 0
x xx
oX %
x O
x x
x K
Figure 1. Different correlation patterns for yields with the same values of the individual crop yields:
(a) negative correlation, (b) positive correlation, (c) no correlation. The two axes are for the yields of
the two crops. Two intercrop systems give yields represented by x and o.
3.2 The Form of Bivariate Analysis
The calculations for a bivariate analysis are formally identical with those required for covariance analysis.
The difference is that, whereas in covariance analysis there is a major variable and a secondary variable
whose purpose is to improve the precision of comparisons of mean values of the major variable, in a
bivariate analysis the two variables are treated symmetrically. Bivariate analysis of variance consists of an
analysis of variance for XI, analysis of variance for X2, and a third analysis (of covariance) for the
products of Xi and X2. Computationally this third analysis of sums of products is most easily achieved by
performing three analyses of variance for Xi, X2, and Z = X1 + X2. The covariance terms are then
calculated by substracting corresponding SS for XI and for X2 from that for Z and dividing by 2. The
bivariate analysis including the intermediate analysis of variance for Z are given in Table 3 for the
maize/cowpea experiment discussed earlier.
The bivariate analysis of variance, like the analysis of variance, provides a structure for interpretation. In
addition to the sums of squares and products for each component of the design, the table includes an error
mean square line which provides a basis for assessing the importance of the various component sums of
squares and products. The general interpretation of this analysis is quite clear and is essentially similar to
the pattern of analysis of cowpea yield. There are large differences attributable to the different maize
varieties and to the variation of nitrogen level; there is also a suggestion that there may be an interaction
between cowpea variety and nitrogen level.
Table 3. Bivariate Analysis of Variance for Maize/Cowpea Yield Data (0.001 kg/ha) in Intercrop
Maize SS Cowpea SS SS for Sum of
Source df (Xl) (X2) (XI + X2) products F Correlation
Blocks 2 0.29 0.0730 0.247 -0.058 1.75 -0.40
M variety 2 17.52 0.4094 12.665 -2.632 11.90 -0.98
C variety 1 0.03 0.0060 0.062 0.013 0.44 1.00
Nitrogen 3 28.50 0.1131 25.081 -1.766 10.59 -0.98
MxC 2 1.11 0.0099 0.922 -0.099 0.82 -0.95
MxN 6 1.25 0.0676 0.920 -0.199 0.64 0.93
CxN 3 0.24 0.1724 0.152 -0.130 2.40 -0.64
MxCxN 6 1.28 0.1354 1.349 -0.033 1.40 -0.08
Error 46 15.90 0.5993 13.671 -1.414 -0.46
(MS) (0.346) (0.0130) (-0.031)
Total 71 66.13 1.5861 55.080 -6.318
Note: See Table 1
3.3 Diagrammatic Presentation
We have argued earlier that interpreting the patterns of variation in maize and cowpea yields without
allowing for the background pattern of random variation can be misleading. The primary advantage of the
bivariate analysis is that it leads to a simple form of graphic presentation of the mean yields for the pair of
crops making an appropriate allowance for the background correlation pattern. The graphic presentation
uses skew axes for the two yields instead of the usual perpendicular axes. If the yields are plotted on skew
axes with the angle between the axes determined by the error correlation, and if, in addition, the scales of
the two axes are appropriately chosen, then the resulting plot, such as Fig. 2. has the standard error for
comparing two mean yield pairs equal in all directions. The results in Fig. 2 are for the three maize
varieties from the example, and the size of the standard error of a difference between two mean pairs is
shown by the radius of the circle.
Maize Varieties
Figure 2. Bivariate plot of pairs of mean yields for three maize varieties (1,2,3). Maize and cowpea
yields are in kilograms per hectare. (Data from Table 1).
Construction of the skew axes diagram is based on the original papers of Pearce and Gilliver (1978, 1979)
and detailed instructions for construction are given by Dear and Mead (1983. 1984). The form of the
diagram given in Fig. 2 treats the two crops symmetrically, in contrast to the original suggestion of Pearce
and Gilliver, in which one yield axis is vertical and the other is diagonally above or below the horizontal
axis, depending on the sign of the error correlation. A summary of the method for construction of the
symmetric diagram is as follows:
If the error mean squares for the two crops are VI (= 0.346 in the example) and V2 (= 0.0130), and
the covariance is V12 (= -0.0310), then the angle between the axes 0 is defined by
cos 0 =
If the range of values for the two yields Xi and X2 are (Xo, Xi ) and (X2o, X2I) respectively, then
we define two new variables yi and yl,
y1 = Kjxl
V 2 V12X /V = VkXL
Y2 (V VT,/V,)'" V, )
and ranges
Vno = klxlo
Y20 = k2(X V- l)
Y21 = k, X2 VLi)
Plot the four pairs ofy values O(ylo. y2o), B(y1o, y21) and C(yi I. yY2) on standard rectangular axes,
using the same scale for yl and for Y2. The xi axis is constructed by joining the points O and A. the x2
axis by joining O and B. The xl scale is defined by O(xl = xio) and A(xl = xl I); the x2 scale is
defined by (0x2 = x20) and B(x2 = x2i). Further points on both axes may be marked using a ruler and
the two defining points. The rotation of the xi and x2 axes to achieve symmetry can be performed
subjectively or by simple trigonometry. Individual points for pairs of mean yields may be plotted by
first measuring xl along the xl axis, and x2 parallel to the x2 axis. More details of the diagram
construction are given by Dear and Mead (1983, 1984).
The interpretation of the diagrams is extremely straightforward. The results in Fig. 2 show that the
differences among the three maize varieties are important for both maize and cowpea yields, with the
difference between varieties 2 and 3 clearly less than between either variety and variety 1. There is a clear
consistency through the sequence of varieties 1 to 2 to 3, with the increase in maize yield being directly
reflected in a decrease in cowpea yield. The three points fall nearly on a line illustrating the strong relation
between the two crop yields over the three varieties. (Note also that the correlation for maize varieties,
shown in Table 3, is -0.98). Remember that random correlation between the two yields has been allowed
for by the skewness of the axes and the displayed pattern is additional tot he background correlation
The results for nitrogen main effects and the interaction of cowpea variety with nitrogen are shown in Figs.
3 and 4. The four nitrogen levels produce four pairs of mean yields in an almost straight line. The dominant
effect is on the yield of maize which increases consistently with increasing nitrogen. In addition there is a
clear pattern of compensation between the two crop yields with cowpea yield decreasing as maize yield
increases. The pattern of yields for the cowpea variety/nitrogen interaction emphasizes the two effects of
yield increase for one crop and compensation between crops. For variety A the effect of increasing nitrogen
is simply an increases of maize yield, the "line" of the nitrogen level means being almost exactly parallel to
the maize yield axis. In contrast for variety B the dominant effect is the change in the balance of
maize/cowpea yields with the maize yield increasing consistently with increasing nitrogen and the cowpea
yield showing a corresponding decline.
3.4 Significance Testing
There are two forms of test that are useful in bivariate analysis, and these correspond to the t and F tests
used in the analysis of a single variate. We have already mentioned in the discussion of the skew axes plot
that the standard error of a difference is the same in all directions in these diagrams. because of the scaling
of axes which is part of the instruction of the diagram the standard error per observation is 1 (measured in
the units of yl and Y2). The standard error of a mean of n observations is therefore 14n and the standard
error of a difference between two points is 4(2/n.)
Nitrate Levels
Figure 3. Bivariate plot of pairs of mean yields for four nitrogen levels (0, 40, 80, and 120 kg/ha).
Maize and cowpea yields are in kilograms per hectare. (Data from Table 1).
Confidence regions for individual treatment means can be constructed as circles with radius '(2Fln), where
F is the appropriate percentage point of the F distribution on 2 and e degrees of freedom (e is the error
degrees of freedom).
The analogue of the F test in a univariate analysis of variance is also an F test. The basic concept on which
the test is based is the determinant constructed from the two sums of squares and the sum of products.
Suppose that the error SSP are El, E2, and E12, then the determinant is
E1 x E2 E\2
and it reflects both the sizes of El and E2 and the strength of the linear relationship between xl and X2. To
asses the treatment variation for a treatment SSP with values TI, T2, and T12 we calculate a statistic, L,
which compares the determinant of treatment plus error with that for error
(T, + Ez)(T2 + E2) (T12 + E12)2
E1E2 E12
The test of significance then involves comparing
F = (V 1)
The most frequently used value index is that of financial return. Other value indices include protein and dry
matter. The main criticism made specifically of financial indices is that prices fluctuate and hence the ratio
of K\ to K2 may vary considerably. A partial answer to this criticism is to employ several price ratios. Thus
the results for the four treatments discussed earlier in this section might be presented for five price ratios as
Price Ratio for Maize/Cowpea
Treatment 1:1 1:2 1:3 1:4
While some comparison patterns, such as (2 vs. 1) or (2 vs. 3), remain consistent for this range of price
ratios others, such as (1 vs. 3) or (2 vs. 4), do not.
One other form of single measurement comparison which is exactly equivalent to the financial value index
is the crop equivalent. In calculating a crop equivalent, yield of one crop is "converted" into yield
equivalent of the other crop by using the ratio of prices of the two crops. The exact equivalence of crop
equivalent yield to financial index is immediately obvious algebraically but may be perceived clearly also
by considering the four treatments for a 1 : 3 price ratio. This ratio implies that a unit yield of cowpea is
worth 3 units of maize. We can therefore calculate yields as maize equivalents or cowpea equivalents as
Maize equivalent
2653 + 3(458)= 4027
5323 + 3(319) = 6280
4722 = 4722
3(1490) =4470
Cowpea equivalent
458 + 2653/3 = 1342
319 + 5323/3 = 2093
4722/3 = 1574
1490 = 1490
The relative comparisons are identical for the two equivalents.
4.3 Biological Indices of Advantage or Dominance
The most important index of biological advantage is the relative yield total (RYT) introduced by de Wit
and van den Bergh (1965) or land equivalent radio (LER) reviewed by Willey (1979). The index is based
on relating the yield of each crop in an intercrop treatment mixture to the yield of that crop grown as a sole
crop. If the two crop yields in the intercrop mixture are MA, MB, and the yields of the crops grown as sole
crops are SA, SB, then the combined index is
L = +- = LA + LB
SA S8
The interpretation embodied in LER is that L represents the land required for sole crops to produce the
yields achieved in the intercropping mixture. A value of L greater than I indicates an overall biological
advantage of intercropping. The two components of the total index, LA and LB represent the efficiency of
yield production of each crop when grown in a mixture, relative to sole crop performance. For the
maize/cowpea yields treatment 2 may be assessed relative to treatments 3 and 4 to give an LER
L = 5 + 3 = 1.13 + 0.21 = 1.34
Other indices have been proposed as measures of biological performance. There are two different
objectives for which such indices have been proposed. The first is the assessment of the benefit, or overall
advantage, of intercropping, or mixing. The second is the assessment of the relative performance of the two
crops, the concept of dominance or competitiveness. It is important not to confuse these two objectives,
which should be quite separate conceptually.
The RYT or LER is the main index of advantage currently used. The other index which has been used is
the relative crowding coefficient (de Wit, 1960), which can be defined in terms of the LER components as
LA^ LB
1 La 1 LB
The two main indices of dominance are the aggressivity coefficient, introduced by McGilchrist and
Trenbath (1971) defined essentially as
and the competition ratio proposed by Willey and Rao (1980) and defined essentially as
The full definition of each index as originally given involves proportions of the two crops in the mixture.
However, for applications in intercropping, this masks the underlying concepts involved in the ideas of
advantage or dominance. Each of these four indices is based clearly on the LER components LA and LB.
[Indeed since there are only four simple arithmetical operations (+, -, x, t) it could be argued that the set of
possible indices is now complete!] Crucially, however, the components LA and LB are ratios, and the value
of a ratio is determined as much by the divisor as by the number divided. Hence the interpretation of LA
and LB, and therefore of any index based on LA and LB, depends on the choice of divisor.
This question of interpretation is extremely important. and becomes even more important when comparison
of LERs is considered in the next section. For the LER to be interpreted as the efficiency of land use the
sole crop yields, SA and SB must represent some well-defined, achievable, optimal yields. It is therefore
necessary that the choice of sole crop yield used in the calculation of the LER be clearly defined and
justified as appropriate to the objective that the LER is intended to achieve. To illustrate this argument
consider the yields for several intercrop and sole crop treatments in the maize/cowpea experiment. The
mean yields for two maize varieties, two cowpea varieties and two nitrogen levels are shown in Table 4.
If we consider a particular intercrop combination, for example MI C\ No, we could assess the biological
advantage of intercropping as
L = + = 1.03 + 0.44 = 1.47
This is simply interpretable as the benefit in the situation where the only varieties available are M, and C1
and no nitrogen is available. It also implies that the sole crop yields of 2568 and 1036 could not be
improved by modifying the spatial arrangement or the management of the sole crop since we are assessing
the intercrop performance in relation to the land required to produce the same yields by sole cropping. No
one would deliberately use an inefficient method of sole cropping to try to match the intercropping
performance. Suppose the combination MICiNo is now considered. Since the sole crop yield for C1 is
Table 4. A Subset of Yields from the Maize/Cowpea Experiment
Intercrop yields Sole-crop yields
Treatment Maize Cowpea M1 M3 Ci C2
MICINo 2653 458
M3CiNo 3315 508
MIC2No 2453 731 2568 3555 1036 787
M3C2No 3604 585
MICIN3 4093 706
M3C1N2 5663 366 3651 4722 1795 1490
MIC2N3 3922 458
M3C2N3 5323 320
Note: Data from intercrop trial (Table 1).
better than that for C2, the advantage of intercropping might be argued to be overestimated if we compare
MIC2No with MI and C2 for which the LER would be
L = + = 0.96 + 0.93 = 1.89
If we measure MI C2 No against Mt and CI we obtain and LER value
L = + = 0.96 + 0.71 = 1.67
We could go further and argue that if M3 is available as an alternative to MI then we should compare MI
C2 No with the best available varieties, M3 and CI, which could be used as a sole cropping alternative. We
would then have
L + = 0.69 + 0.71 = 1.40
This last L value represents the most stringent assessment of advantage of the intercropping combination
MIC2No and alternative forms of L could all be criticized as presenting an illusory benefit of intercropping
as compared with sole cropping.
What about using sole crop yields for N3 rather than for No? Here the argument becomes more
complicated. It may well be that in the farming situation for which the conclusions drawn are to be
relevant, there is no real possibility of using extra nitrogen as required in N3. The advantage of 1.40 would
then be assessed in the most stringent manner possible for the practical situation considered.
The purpose of this example is not to define rules for calculating LER measures of advantage but to
demonstrate that the choice of divisors for the LER is a matter requiring careful thought. The divisor in
LER calculations cannot be assumed to be obvious, and discussions about LER values when the choice of
divisor is not clearly defined should be treated with suspicion.
One distinction that might usefully be made is between the LER or RYT as a measure of biological
sufficiency of a particular combination without any implications of agronomic benefit and the use of the
LER to assess the greater efficiency of the use of land resources. The former concept developed naturally
from competition studies and is a strictly nonagronomic idea. The latter is an inherently more complex
measure. Perhaps we should use RYT for the non-practical biological concept and LER for the agronomic
4.4 Comparison and Analysis of LER Values
The assessment of advantage of a single intercrop combination requires careful thought. When it is desired
to compare different intercrop treatments using LER values, the need to calculate the LER to produce
meaningful comparisons is accentuated. There are now two problems.
The first is the choice of divisor, and I believe that comparisons of LER values are valid in their practical
interpretation only if the divisors are constant for all the values to be compared. If different divisors are
used for different intercrop treatments then the quantities being compared may be considered as
MAI M MI
= MM +
L MA2 M+ 2
S SA2 SB2
The interpretation of any difference between L1 and L2 cannot be assumed to be the advantage of
intercropping treatment 1 compared with intercropping treatment 2, since the difference could equally well
be caused by differences between sole cropping treatments SBi and SB2 or between SAI and SA2.
Although LER values using different divisors are often compared, the concept that is being used as the
basis for comparison is the vague one of efficiency which is not interpretable in any practically measurable
form of yield difference between different intercropping treatments. We should recognize that such
comparisons are of a theoretical nature only and are not practically useful.
The form of the LER which is the sum of two ratios of yield measurements has prompted concern about the
possibility of using analysis of variance methods for LER values. More generally the question of the
precision and predictability of LER values has been felt by some to be a problem.
The comparison of LER values within an analysis of variance is. I believe, usually valid provided that a
single set of divisors is used over the entire set of intercropping plot values. Some statistical investigations
of the distributional properties of LERs were made by Oyejola and Mead (1981) and Oyejola (1983). They
considered various methods of choice of divisors including the use of different divisors for observations in
different blocks. Allowing divisors to vary between blocks provided no advantage in precision or in the
normal distributional assumptions: variation of divisors between treatments was clearly disadvantageous.
The recommendation arising from these studies is therefore that analysis of LER is generally appropriate,
provided that constant divisors are used, and with the usual caveat that the assumptions for the analysis of
variance for any data should always be checked by examination of the data before, during and after the
The question of precision of LERs and, by implication, their predictability, is an unnecessarily confusing
one. If LERs are being compared within experiments that standard errors of comparison of mean LERs are
appropriate for comparing the effects of different treatments. Experiments are inherently about
comparisons of the treatments included rather than about predictions of performance of a single treatment.
The precision of a single LER value must take into account the variability of the divisors used in
calculating the LER value. However a more appropriate question concerns the variation to be expected
over changing environments and this must be assessed by observation over changing environments. No
single experiment can provide direct information about the variability of results over conditions outside the
scope of the experiment. This, of course, does not imply that single experiments have no value since we
may reasonably expect that the precision of estimation of treatment differences will be informative for the
prediction of the differential effects of treatments.
4.5 Extensions of LER
In the last section it was mentioned that there were two problems in making comparisons of LER values for
different intercropping treatments. The second problem is that the concept of the LER as a measure of
advantage of intercropping assumes that the relative yields of the two crops are those that are required. The
calculation of the land required to achieve, with sole crops, the crop yields obtained from intercropping
makes this assumed ideal of the actual intercropping yields clear. However with two (or more)
intercropping treatments the relative yield performance LA : LB will inevitably vary and hence the
comparison of LER values for two different treatments can be argued to require that two different
assumptions about the ideal proportion LA : LB shall be simultaneously true.
This difficulty led to the proposed "effective LER" of Mead and Willey (1980) which allows modification
of the LER to provide the assessment of advantage of each intercropping treatment at any required ratio X =
LA(LA + LB). The principle is that to modify the achieved proportions of yield from the two crops we
consider a "dilution" of intercropping by sole cropping. The achieved proportion of crop A could be
increased by using the intercropping treatment on part of the land and sole crop A on the remainder, the
land proportions being chosen so as to achieve the required yield proportions. Details of the calculations
are given in Mead and Willey (1980). It is important if the use of a modification of the LER is proposed
that the reason for using the effective LER is clearly understood. It is not primarily a form of practical
adjustment but arises from the philosophical basis of the LER.
It may be that in using the LER as a basis for comparison of different treatments the emphasis is not on the
biological advantage of intercropping but on the combination of yields onto a single scale, in terms of yield
potential. In this view the LER becomes another form of value index, the two values being the reciprocals
of the sole crop yields. When a range of price ratio indices is used, it is almost invariably found that the
ratio of the LER values is well in the center of the price ratio range. The principle of the argument for using
an effective LER is no longer essential but there may still be advantages, in making practical comparisons
or treatments in terms of performance at a particular value of k. There are, however, other possible ways of
modifying the LER, and the most important of these is the calculation of combined yield performance to
achieve a required level of crop yield A. Arguments for, and details of, this alternative modified LER are
given by Reddy and Chetty (1984) and Oyejola (1983).
4.6 Implications for Design
The particular implications to be considered here concem the use of sole crop plots. If the arguments about
the choice of divisors are followed then it will not be necessary to include many sole crop treatments within
the designed experiment. The investigation of the agronomy of monocropping has been extensive and in
most intercropping experiments there should rarely be any need for an experimental investigation of the
optimal form of monocropping. Therefore, there should often be no need for more than a single, sole
cropping treatment for each crop.
The reduction in the number of sole crop plots in intercropping experiments would be of great benefit
because it would enable a greater part of the resources for an experiment on intercropping to be used for
investigating intercropping. Many intercropping experiments which I have seen have used between one-
third and one-half of the plots for sole crops. To some extent this reflects a propensity for continuing to ask
whether intercropping has an advantage, when this is widely established, instead of asking the practically
more important question of how to grow a crop mixture.
It is possible to take the reduction of sole crop treatments further. The analysis in this chapter and the
previous chapter do not require sole crop treatments within the experiment to be treated like other
treatments. For the bivariate analysis no sole crop information is essential though sole crop information
does provide a standard against which to compare the pairs of yields. For the analysis and interpretation of
LERs, estimates of mean yields fo the two sole crops are needed as divisors. However there is no need for
the sole crops to be randomized and grown on plots with the main experiment. Sufficient information for
the calculation and interpretation of LERs can be obtained from sole crop areas alongside the experimental
area. This will tend to improve the precision of the experiment by reducing block sizes and also simplifies
the pattern of plot size.
Bryan-Jones, J., and Finney, D.J. 1983. On an error in "Instructions to Authors," Hort. Sci. 18:279-282.
Dear, K.B.G., and Mead, R. 1983. The use of bivariate analysis techniques for the presentation, analysis
and interpretation of data, in: Statistics in Intercropping. Tech. Rep. 1, Dep. Applied Statistics,
University of Reading, Reading, U.K.
1984. Testing assumptions, and other topics in bivariate analysis, in Statistics in Intercropping,
Tech. Rep. 2, Dep. Applied Statistics, University of Reading, Reading, U.K.
de Wit, C.T., and Van den Bergh, J.P. 1965. Competition among herbage plants, Neth. J. Agric. Sci.
de Wit, C.T. 1960. On competition, Versl. Landbouwk. Onder:ook 66:(8):1-82.
McGilchrist, C.A., and Trenbath, B.R. 1971. A revised analysis of plant competition experiments.
Biometrics 27:659-671.
Mead, R., and Cumow, R.N. 1983. Statistical Methods in Agriculture and Experimental Biology, Chapman
and Hall, London, U.K.
Mead, R., and Riley, J. 1981. A review of statistical ideas relevant to intercropping research (with
discussion), J. Royal Stat. Soc. 144:462-509.
Mead, R., and Willey, R.W. 1980. The concept of a land equivalent ratio and advantages in yields from
intercropping, Exp. Agric. 16:217-228.
Morse, P.M.. and Thompson, B.K. 1981. Presentation of experimental results. Can. J. Plant Sci. 61:799-
Oyejola, B.A. 1983. Some statistical considerations in the use of the land equivalent ratio to assess yield
advantages in intercropping, Ph.D. thesis, University of Reading, Reading. U.K.
Oyejola, B.A., and Mead, R. 1981. Statistical assessment of different ways of calculating land equivalent
ratios (LER), Exp. Agric. 18:125-138.
Pearce, S.C., and Gilliver. B. 1978. The statistical analysis of data from intercropping experiments, J.
Agric. Sci. 91:625-632.
1979. Graphical assessment of intercropping methods. J. Agric. Sci. 93:51-58.
Reddy, M.N., and Chetty, C.K.R. 1984. Stable land equivalent ratio for assessing yield advantage from
intercropping. Exp. Agric. 20:171-77.
Willey, R.W. 1979. Intercropping-its importance and research needs. Parts I and II, Field Crop Abstr.
32:1-10, 73-85.
Willey, R.W., and Rao, M.R. 1980. A competitive ratio for quantifying competition between intercrops.
Erp. Agric. 16:117-125.
I(, MCI, If '111lict )I,, | {"url":"http://ufdc.ufl.edu/UF00080826/00001","timestamp":"2014-04-16T17:08:01Z","content_type":null,"content_length":"68639","record_id":"<urn:uuid:4fd8c640-2f9c-4e54-b187-e87f39149b80>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Get homework help at HomeworkMarket.com
Submitted by
on Mon, 2012-05-28 16:45
due on Mon, 2012-05-28 18:42
answered 1 time(s)
azi.vb is willing to pay $7.00
Show, by applying the limit test, that each of the following is true.
a) The functions f(n)= n(n-1)/2 and g(n)= n^2 grow asymptotically at equal rate.
b) The functions f(n)=log n grow asymptotically at slower rate than g(n)=n.
Show that log (n!) = Θ (nlog n);
Design an algorithm that uses comparisons to select the largest and the
second largest of n elements. Find the time complexity of your algroithm
(expressed using the big-O notation).
Given an a binary array or list of n elements, where each element is either
a 0 or 1, we would like to arrange the elements so that all of those that
are equal to 0's appear first followed by all the elements that are equal
to 1's.
a) Write an algroithm or a function that uses comparisons to arrange the
elements as given above. Do not use any extra arrays in your algorithm.
b) Find the time, T(n), needed by your algorithm in the worst-case and
then express it using the big-O notation.
c) Find the time, T(n), needed by your algorithm in the best-case and
then express it using the big-Ω notation.
d) Find the time, T(n), needed by your algorithm in the average-case
and express it using the big-Θ notation.
Submitted by
on Tue, 2012-06-05 13:18
price: $7.00
Answered all questions
body preview (529 words)
xxx xxxxx by applying the limit xxxxx that xxxx xx the xxxxxxxxx xx xxxxx
xx xxx xxxxxxxxx xxxxx n(n-1)/2 xxx g(n)= n^2 xxxx xxxxxxxxxxxxxx at equal rate.
xxxxx xxxxxxxxx = Limit n(n-1)/2)/n^2
x - ∞ n xx ∞
x = xxxxx xxxxxxxx
n - xxx
= Limit 1/2 x 1/2n
x x xx xxx
= 1/2 x 1/2 * xxxxx xxx
n xx ∞
= xxx x xxx * 0
= 1/2 x xxxxxxxx
xx Limit xxxxxxxxx = xxxxxxxxx F(n) xxx g(n) xxxx xxxx xxxxxxxxxxxxxx xx xxxxx xxxx
b) xxx functions f(n)=log n grow xxxxxxxxxxxxxx xx slower rate than xxxxxxx
xxxxx f(n)/g(n) x Limit xxxxxxxx x xxxxx xxxxxxx By I’Hopital’s xxxx
x xx xxx x xx ∞ x xx ∞
= x
xx Lim f(n)/g(n) = 0, xxxx xxxx asymptotically at slower rate than g(n)
n xx ∞
Q2) Show that log (n!) x Θ (nlog n);
xxxxxxx x Log x + Log 2 x Log 3 + xxxxxxxx Log x
Taking xxxxx xxxxx
<=Log n + Log n + ………………..+ Log n
= x xxx x
x O(n xxx xx
xxxxxxx = xxx x + xxx 2 x Log x + …….+ Log x
Taking lower bound
= xxx x + …..+ xxx xxx x ……..+ log x
= Log xxx + …….+ Log x (taking xxxx xxx half xx xxxxxxx
x Log xxx x xxxxxxxxx Log n/2
x xxx xxx n/2
x Ω xx xxx xx
As Log(n!) upper bound xxx lower bound are xxxx xx Log(n!)= xxxx log n )
Q3)Design an algorithm xxxx uses comparisons xx select xxx largest and the second xxxxxxx xx n
- - - more text follows - - -
Buy this answer Try it before you buy it
Check plagiarism for $2.00 | {"url":"http://www.homeworkmarket.com/content/algorithm-0","timestamp":"2014-04-23T20:46:43Z","content_type":null,"content_length":"51843","record_id":"<urn:uuid:7524f8cc-9672-4bba-8d2b-864eef6dfdae>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quadratic Functions Examples Page 2
We know what we have to find, so let's find it.
1. What are the intercepts?
First, the x-intercepts. We need to find the roots of the quadratic polynomial. If we find them, we can celebrate by drinking a root beer.
We need to find the solutions to the equation
0 = -x^2 + 2x + 3
= -(x^2 – 2x – 3).
This equation factors as
0 = -(x – 3)(x + 1),
so the solutions (and the x-intercepts) are
x = 3, x = -1.
We can graph these points:
The y-intercept is the constant term, 3, so we can graph that also:
2. What is the vertex?
The vertex occurs halfway between the x-intercepts -1 and 3, so at x = 1. When we plug x = 1 in to the quadratic equation, we find
-(1)^2 + 2(1) + 3 = 4,
so the vertex occurs at (1, 4).
3. Does the parabola open upwards or downwards?
Since the coefficient on the x^2 term is negative, the parabola opens downwards.
Putting together all the pieces, we find our graph:
We know this graphing stuff can be infectious, but be careful. We don't want you to get a graph infection. | {"url":"http://www.shmoop.com/functions/quadratic-functions-examples-2.html","timestamp":"2014-04-16T20:45:58Z","content_type":null,"content_length":"38075","record_id":"<urn:uuid:de2bcd07-b877-42dc-a5db-f12c59988410>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
the simplest form of options pricing involves a decision tree which maps possible outcomes at succeeding points in time : probabilities are assigned to each possible outcome, and the value of the
option is determined by the likelihood of each outcome at the time of the option's expiration : the value will depend on several factors, including the number of periods, or time, and the volatility,
or how far the price moves for each node on the decision tree : in effect, this method reduces all possible future outcomes, or choices, to the present moment using the statistical likelihood of each
possible future choice : the underlying, or spot market price, will have an actual value as time progresses, but for the present moment, the options price is the single value of all future movements
of the underlying : so options pricing is a map of future movements : it is equivalent to pythagoras' map of the future movements of harmony based on the overtone series : these maps are derivatives
of the underlying that are environments unto themselves : that is, the option is a market unto itself, traded as an underlying as is the underlying spot commodity : it is circular, that is, options
can be the underlying for further derivatives, options on options : similarly, pythagoras' map is circular, his overtone series divided the octave into 12 increments which created the scale and it is
the evolution of the use of the scale that fulfilled the prophecy of the tonal map | {"url":"http://patteren.com/2007/meta30.html","timestamp":"2014-04-18T05:31:04Z","content_type":null,"content_length":"3803","record_id":"<urn:uuid:e3b139fd-fe3b-49b8-967c-1548b52813b5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
NAG Library
NAG Library Routine Document
1 Purpose
G03EJF computes a cluster indicator variable from the results of
2 Specification
SUBROUTINE G03EJF ( N, CD, IORD, DORD, K, DLEVEL, IC, IFAIL)
INTEGER N, IORD(N), K, IC(N), IFAIL
REAL (KIND=nag_wp) CD(N-1), DORD(N), DLEVEL
3 Description
Given a distance or dissimilarity matrix for
objects, cluster analysis aims to group the
objects into a number of more or less homogeneous groups or clusters. With agglomerative clustering methods (see
), a hierarchical tree is produced by starting with
clusters each with a single object and then at each of
stages, merging two clusters to form a larger cluster until all objects are in a single cluster. G03EJF takes the information from the tree and produces the clusters that exist at a given distance.
This is equivalent to taking the dendrogram (see
) and drawing a line across at a given distance to produce clusters.
As an alternative to giving the distance at which clusters are required, you can specify the number of clusters required and G03EJF will compute the corresponding distance. However, it may not be
possible to compute the number of clusters required due to ties in the distance matrix.
If there are $k$ clusters then the indicator variable will assign a value between $1$ and $k$ to each object to indicate to which cluster it belongs. Object $1$ always belongs to cluster $1$.
4 References
Everitt B S (1974) Cluster Analysis Heinemann
Krzanowski W J (1990) Principles of Multivariate Analysis Oxford University Press
5 Parameters
1: N – INTEGERInput
On entry: $n$, the number of objects.
Constraint: ${\mathbf{N}}\ge 2$.
2: CD(${\mathbf{N}}-1$) – REAL (KIND=nag_wp) arrayInput
On entry
: the clustering distances in increasing order as returned by
Constraint: ${\mathbf{CD}}\left(\mathit{i}+1\right)\ge {\mathbf{CD}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{N}}-2$.
3: IORD(N) – INTEGER arrayInput
On entry
: the objects in dendrogram order as returned by
4: DORD(N) – REAL (KIND=nag_wp) arrayInput
On entry
: the clustering distances corresponding to the order in
5: K – INTEGERInput/Output
On entry
: indicates if a specified number of clusters is required.
then G03EJF will attempt to find
${\mathbf{K}}\le 0$
then G03EJF will find the clusters based on the distance given in
Constraint: ${\mathbf{K}}\le {\mathbf{N}}$.
On exit: the number of clusters produced, $k$.
6: DLEVEL – REAL (KIND=nag_wp)Input/Output
On entry
: if
${\mathbf{K}}\le 0$
must contain the distance at which clusters are produced. Otherwise
need not be set.
Constraint: if ${\mathbf{DLEVEL}}>0.0$, ${\mathbf{K}}\le 0$.
On exit
: if
on entry,
contains the distance at which the required number of clusters are found. Otherwise
remains unchanged.
7: IC(N) – INTEGER arrayOutput
On exit: ${\mathbf{IC}}\left(\mathit{i}\right)$ indicates to which of $k$ clusters the $\mathit{i}$th object belongs, for $\mathit{i}=1,2,\dots ,n$.
8: IFAIL – INTEGERInput/Output
On entry
must be set to
$-1\text{ or }1$
. If you are unfamiliar with this parameter you should refer to
Section 3.3
in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1\text{ or }1$
is recommended. If the output of error messages is undesirable, then the value
is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit
unless the routine detects an error or a warning has been flagged (see
Section 6
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
On entry, ${\mathbf{K}}>{\mathbf{N}}$,
or ${\mathbf{K}}\le 0$ and ${\mathbf{DLEVEL}}\le 0.0$.
or ${\mathbf{N}}<2$.
On entry, CD is not in increasing order,
or DORD is incompatible with CD.
On entry, ${\mathbf{K}}=1$,
or ${\mathbf{K}}={\mathbf{N}}$,
or ${\mathbf{DLEVEL}}\ge {\mathbf{CD}}\left({\mathbf{N}}-1\right)$,
or ${\mathbf{DLEVEL}}<{\mathbf{CD}}\left(1\right)$.
on exit with this value of
the trivial clustering solution is returned.
The precise number of clusters requested is not possible because of tied clustering distances. The actual number of clusters, less than the number requested, is returned in
7 Accuracy
The accuracy will depend upon the accuracy of the distances in
A fixed number of clusters can be found using the non-hierarchical method used in
9 Example
Data consisting of three variables on five objects are input. Euclidean squared distances are computed using
and median clustering performed using
. A dendrogram is produced by
and printed. G03EJF finds two clusters and the results are printed.
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/G03/g03ejf.html","timestamp":"2014-04-21T10:43:54Z","content_type":null,"content_length":"19742","record_id":"<urn:uuid:4c3b4a22-7870-40ed-be53-fa2d40a1fabd>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by Aly
Total # Posts: 214
Evaluate: sigma notation of (3cos k(pi) +1) start value: k=0 end value: k=3
The 20th term of an arithmetic sequence is 27 and the first term is -11. What is the common difference??
PreCalc PLEASE HELP
Find the common ratio, r, for the geometric sequence that has a1= 100 and a8= 25/32
PreCalc PLEASE HELP
Ms Farnum was hired for a new job at a salary of $62,500 and was promised a 5% raise each year. What would Ms.Farnum's salary be, to the nearest dollar, after working there for 6 years?
what is a scenario in which REDUCING the time of impact would be beneficial?
Pre-Calc Help please
Solve for x: log base5 (4x) + log base5 (x) = log base5 (64)
Pre Calc
Solve the following inequality in terms of natural logarithms (ln). (e^6x)+2 is less than or equal to 3.
Thank you!
Solve the following equation for x in terms of ln and e. (2e^3)-6-(16e^-x)=0
Solve for x: log base27 (2x-1) = 4/3
Solve the following inequality in terms of natural logarithms (ln). (e^6x)+2 is less than or equal to 3.
Find N if log base 6 (6^7.8)=N
When was the first contact of the following first nations people of canada: First nations of the Mackenzie and Yukon River Basins Pacific coast first nations Plateau Fist Nations Plains first nations
Lord of the flies questions 1-7 Although it appears that Ralph and jack would become good friends strain is being placed on their friendship. The boys seem to be from two different world, "There was
the brilliant world of hunting, tactics, fierce exhilaration, skill: and ...
social studies
during the early 1800s the United States tried to make peace with other countries in order to grow and develop. give an example of one of those peace efforts and briefly explain what it tried to
joey drives his car 7 kilometers north. he stops for lunch and then drives 5 kilometers south. what distance did he cover?
a baseball is popped straight up. the ball leaves the bat moving at 37.8 m/s. at what time is the ball 20 m above the ground?
physics. again.
a baseball is popped straight up. the ball leaves the bat moving at 37.8 m/s. at what time after being struck is the ball moving at 10 m/s upwards? where is the baseball at this time? answer= 67.9m I
need to know how to get this answer
a baseball is popped straight up. the ball leaves the bat moving at 37.8 m/s. to what height did the ball rise. t=7.72 s the answer is 73 m, but how do you get that answer
physics URGENT
3. Using conceptual meaning of g = Fg/m [N/kg], answer the following: a) What does the spring scale read on the surface of the Moon?
The students sold children's ticket for $30 per ticket and adult tickets for $40 for a total of $930. Write an equation for the scenario and determine the x and y intercept.
1) what does the y intercept on a force vs acceleration graph represent? 2)what is the relationship of mass and acceleration?
PLS HELP, BEEN STUCK FOREVER ON TWO QUESTIONS, THIS IS ONE : There are usually no costs for the first 3 years, but thereafter maintenance is re- quired for restriping, weed control, light
replacement, shoulder repairs, etc. For one section of a particular highway, these costs ...
A building is priced at 125,000. If a down payment of 25,000 is made and a payment of 1,000 every month thereafter is required, how many months will it take to pay for the building? Interest is
charged at a rate of 9% compounded monthly.
Omega Instruments has budgeted $300,000 per year to pay for certain ce- ramic parts over the next 5 years. If the company expects the cost of the parts to increase uniformly according to an arith-
metic gradient of $1 0,000 per year, what is it expecting the cost to be in year...
practice question
How do i go about solving this one? A cash flow sequence starts in year 1 at $3000 and decreases by $200 each year through year 10. (a) Determine the value of the gradient G; (b) determine the amount
of cash flow in year 8; and (c) determine the value of n for the gradient. al...
Pls Help. Not too sure how to do this question. Someone wants to set aside money for their newborn daughters college funds. He estimates she would need 25,000 on her 17, 18, 19 and 20, birthdays. If
he plans to make uniform deposits starting 3 years from now and through her 16...
Anyone for B_0 for both cases? Problem 3) b)?
Sarah was cutting fabric for a quilt. She cut a strip of fabric that was 19 1/8 inches long into 5 equal pieces. When she was finished cutting Sarah had a piece that was 2 1/4 inches long. How long
was each piece that she cut for the quilt?
The elevation of the surface of the Dead Sea is -424.3 meters. In 2005, the height of Mt. Everest was 8,844.33 meters. How much highernwas the summit of Mt. Everest?
lewis structure of HOCH2S-
(16xy^3/-24x^4y)^-3 simplify the rational expression in the form A/B. Single fraction with only positive exponents.
*cubic square root*125h^9s^1/*cubic square root*h^-6s^-1 The expression above equals kh^rs^t where r, the exponent of h, is..? and t, the exponent of s, is...? and k, the leading coefficient is...?
all answers should be rational numbers
Simplify the rational expression in the form A/B. X^-2+X^-4. Express the final result in a single fraction using positive exponents only.
multiply both sides by 2 and add them together... were you given formulas?
You can use the pythagorean theorem (c^2=a^2+b^2, where "c" is the hypotenuse and "a" and "b" are the other sides) to find the length of the other side. Manipulate it to get c^2-a^2=b^2, then take
the square root of the answer to get the length of...
The statement is true, because f'(x) deals with the slope of a tangent line. At any extrema (maximum or minimum point), the tangent line will be horizontal, meaning that it has a slope of 0. Since f'
(x) is the slope of the tangent line, we can say that f'(x)=0 at t...
fixed typo ((sinx + cosx)/(1 + tanx))^2 + ((sinx - cosx)/(1 - cotx))^2 = 1
thanks :)
please help i can't find the answer for this question. i can only manipulate one side of the equation and the end result has to equal the other side. Problem 1. ((1 + sinx)/cosx) + (cosx/(1 + sinx))
= 2secx
Is this the correct equatioin for the reaction of cyclohexene with MnO4- ? C6H10 + MnO4- ----> C6H10(OH)2 + MnO2
Physics Help
I am having trouble with part b myself but for part c the only force is gravity. So F=ma =.0008kg*9.8m/s^2 =0.00784N
I'm stuck on this SAME question.
A skier of mass 71.0 kg is pulled up a slope by a motor-driven cable. How much work is required to pull the skier 62.2 m up a 37.0 slope (assumed to be fric- tionless) at a constant speed of 2.0 m/s?
The acceleration of gravity is 9.81 m/s2 .
One of the by-products of a reaction is 2-methyl-2-butene. A) Suggest a mechanistic pathway for the formation of this alkene. B) How is this by-product removed during the purification?
An average diameter of a star is approximately 1 X 10^9 m and an average diameter of a galaxy is 1 X 10^21 m. Considering the given data, apply a sense of scale calculation to determine if when a
galaxy s diameter is scaled down to the length of Manhattan (21.6 km long), ...
An average diameter of a star is approximately 109 m and an average diameter of a galaxy is 1021 m. Considering the given data, apply a sense of scale calculation to determine if when a galaxy s
diameter is scaled down to the length of Manhattan (21.6 km long), would the ...
Follow this link please! It is an interactive question that I just cannot figure out webwork.math.nau.edu/webwork2/CFisher_136/Continuity/3/?user=aed73&effectiveUser=aed73&key=
thank you! I actually figuredit out right after I submitted this lol
For what value of the constant c is the function f continuous on the interval negative infinity to positive infinity? f(x)={cx2+7x if x<2 x3−cx if x> or =2
For what value of the constant c is the function f continuous on the interval negative infinity to positive infinity? f(x)={cx2+7x if x<2 x3−cx x2 x2
medical billing and coding
which of the following is an expanded option for participation in private health care plans? A-medicare part A. B-Medicare part B. C- medicare advantage or D- medigap
The physics of electric potential energy at one corner of a triangle. Two charges, q1 = +5.0 ìC and q2 = -2.0 ìC are 0.50 m apart. A third charge, q3 = +6.0 ìC is brought from infinity to a spot 0.50
m from both charges so that the three charges form an eq...
The physics of an electron moving from one plate to another. An electron acquires 4.2 x 10-16 J of kinetic energy when it is accelerated by an electric field from Plate A to Plate B. What is the
potential difference between the plates, and which plate is at a higher potential?
The physics of an electron falling through a potential difference. How much kinetic energy will an electron gain (joules) if it falls through a potential difference of 350 V?
The physics of moving a small charge through a potential difference. How much work is needed to move a -8.0 ìC charge from ground to a point whose potential is +75 V?
The physics of electric field between two charges. What is the magnitude and direction of the electric field at a point midway between a -8.0 ìC charge and a +6.0 ìC charge? The -8.0 ìC charge is 4.0
cm to the left of the +6.0 ìC charge. If a charge...
The physics of the DNA molecule. The two strands of the DNA molecule are held together by electrostatic forces. There are four bases which make up the DNA molecule, thymine (T), adenine (A), cytosine
(C), and guanine (G). In two of the bases, thymine and adenine, an oxide ion ...
of the 900 students 585 went to college what percent is that? a)150% b)65% c)90% d)58.5%
THE second one.
One of the moons of Jupiter is Io. The distance between the centre of Jupiter and the centre of Io is 4.22 x 10^8 m. If the force of gravitational attraction between Io and Jupiter is 6.35 x 10^22 N,
what must be the mass of Io?
The physics of satellite motion around Jupiter. A satellite of mass 2.00 x 104 kg is placed in orbit 6.00 x 105 m above the surface of Jupiter. Please refer to the data table for planetary motion
included in this lesson. a) Determine the force of gravitational attraction betwe...
The physics of energy and Earth's moon as a satellite. The moon is an Earth satellite of mass 7.35 x 10^22 kg, whose average distance from the centre of Earth is 3.85 x 10^8 m. a) What is the
gravitational potential energy of the moon with respect to Earth? b) What is the ...
The physics of gravity and satellite motion around Jupiter. A satellite of mass 2.00 x 104 kg is placed in orbit around Jupiter. The mass of Jupiter is 1.90 x 1027 kg. The distance between the
satellite and the centre of Jupiter is 7.24 x 107 m. a) Determine the force of gravi...
The physics of orbital radius and orbital period of a planet. If a small planet were discovered whose orbital period was twice that of Earth, how many times farther from the sun would this planet be?
A roller coaster of mass 1000.0 kg passes point A with a speed of 1.80 m/s? point a= 30m point b= 0m point c= 25m point d= 12m 1. What is the total mechanical energy of the roller coaster at point A?
2. What is the speed of the roller coaster at point B? 3. What is the potenti...
The physics of a golf ball and kinetic energy. When a 0.045 kg golf ball takes off after being hit, its speed is 41 m/s. a) What is the kinetic energy of the ball after it has been hit? b) How much
work is done on the ball by the club? c) Assume that the force of the club acts...
1. The physics of Mars and its two moons. Mars has two moons, Phobos and Deimos (Fear and Panic, the companions of Mars, the god of war). Deimos has a period of 30 h, 18 min and a mean distance from
the centre of Mars of 2.3 x 104 km. If the period of Phobos is 7 h, 39 min, wh...
dont say im ancient im just old
English 101
I forgot to mention what questions I need to answer. 1. With what kind of couples is the speaker comparing himself and his partner. What is his hypothesis about why they are different? 2. There are
other smaller comparisons that appear in the poem. What are they and what to th...
English 101
I am pretty good at writing papers, but poetry is my weakness. I have to read "For C" by Richard Wilbur and I am having the hardest time analyzing it. Please help me!
A 10.00g sample of a mixture of CH4 and C2H4react with O2 at 25C and 1 atm to produce CO2 and water. If the reaction produces 520kJ of heat what is the mass percent of CH4 in the mixture? I don't
want the answer just how to o
A 10.00g sample of a mixture of CH4 and C2H4react with O2
what are you wearing tonight?
math formula?
Mary would like to save $10 000 at the end of 5 years for a future down payment on a car. How much should she deposit at the end of each week in a savings account that pays 12%/a, compounded monthly,
to meet her goal? d. Determine the weekly deposit without technology. Can som...
3. Find the exact value of the following, show all work. a. sin^2 (75^0)/sin45, cos^2, 30^0 b. tan(135) + sin(30)cot(120)
A 7.6 kg chair is pushed across a frictionless floor with a force of 42 N that is applied at an angle of 22° downward from the horizontal. What is the magnitude of the acceleration of the chair?
A giant excavator (used in road construction) can apply a maximum vertical force of 2.25×105 N. If it can vertically accelerate a load of dirt at 0.200 m/s2, what is the mass of that load? Ignore the
mass of the excavator itself.
A giant excavator (used in road construction) can apply a maximum vertical force of 2.25×105 N. If it can vertically accelerate a load of dirt at 0.200 m/s2, what is the mass of that load? Ignore the
mass of the excavator itself.
In moving a standard computer mouse, a user applies a horizontal force of 6.00×10−2 N. The mouse has a mass of 125 g. (a) What is the acceleration of the mouse? Ignore forces like friction that
oppose its motion. (b) Assuming it starts from rest, what is its speed ...
Hi, I just want to make sure I am doing this assignment correctly.The question is: How are medical coding, physician, and payer fees related to the compliance process? I believe that they are related
because the medical coding specialist is required to know what the patient...
Hi, I just want to make sure I am doing this assignment correctly.The question is: How are medical coding, physician, and payer fees related to the compliance process? I believe that they are related
because the medical coding specialist is required to know what the patient...
Paul needs 2 1/4 yards of fabric to make a tablecloth. How many tablecloths can he make from 18 1/2 yards of fabric
45 is 93% of x round to the nearest tenth where necessary.
Yes so should the final answer be 6 or?
round to the nearest tenth if necessary 6 is 98% of x
6 is 98% of x which is 6.122448979 and I have to round to the nearest whole amount where necessary
40,571 is x % of 76,550
-25/30, -10/30, -9/30, 20/30?
haha nvm!! :)
The birth weight of childeren were 7 pounds 2 ounces, 6 pounds 10 ounces, 8 pounds 5 ounces, 7 pounds 9 ounces. Luis estiamted the total weight of the babies at birth was 30 pounds. Which best
describes his estimate? A. More than the actual weight because he rounded to the nea...
thank you!! :) can you answer my math correct or no question that I posted?
Houston - 2.01*
Most populated U.S cities City Population(millions) New york - 8.10 Los Angeles - 3.85 Chicago - 2.86 Houston - 2 01 Between which two cities was the difference in population about 800,000 people? A.
Chicago and Los Angeles B. New york and Los Angeles C. Chicago and Houston D....
A train travels at rate of 40 miles per hour for 3 hours. The train enigineer wants to make a trip back in 2 1/2 hours. Which equation can be used for r , the rate the train has to travel on the trip
back? A. r= 40 x 3/2.5 B. r= 40 x 3 x 2.5 C. r= 40 x 2.5/3 D. r= 40 x 2.5
Tami is cutting an 8 1/2 inch wide piece of paper in strips for an art project. How many 1/2 inch strips can she cut?
A book is 1 3/4 inches thick. There are 8 copies of the same book stacked in a bookstore. How tall is the stack in inches?
A book is 1 3/4 inches thick. There are 8 copies of the same book stacked in a bookstore. How tall is the stack in inches?
Maureen forgets to turn a faucet off. After a minute, 1.5 gallons of water comes out of the faucet. Which equation can bw used to find the number of gallons, g, that comes out of the faucet after 2
hours? A. g= 1.5 x 1/30 B. g=1.5 x 1/120 C. g=1.5 x 120 D. g= 1.5 x 604
math correct or no?
The birth weights of 4 childeren were 7 pounds 2 ounces, 6 pounds 10 ounces, 8 pounds 5 ounces, 7 pounds 9 ounces. Luis estimated the totall wieight of te babies at birth was 30 pounds. Which best
describes his estiamte? A. More than the actual weight because he round to the n...
Pages: 1 | 2 | 3 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Aly","timestamp":"2014-04-16T09:12:11Z","content_type":null,"content_length":"29342","record_id":"<urn:uuid:2b91da85-bfef-4629-8b8e-4bf5a06d33ea>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numerical Methods: Numerical Differentiation
February 25th 2009, 02:20 AM #1
Jan 2009
Numerical Methods: Numerical Differentiation
Hi everyone. I have a homework on numerical differentiation and I just want to know if my answers are correct.
1.) The partial derivative, $f_x(x,y)$ of $f(x,y)$ with respect to x is obtained by holding y fixed and differentiating with respect to x. Similarly, $f_y(x,y)$ is found by holding x fixed and
differentiating with respect to y. The equation below is adapted to partial derivatives:
$f_x(x,y) =\frac{f(x+h,y) - f(x-h,y)}{2h}+O(h^2)<br />$
$f_y(x,y) =\frac{f(x,y+h) - f(x,y-h)}{2h}+O(h^2)<br />$
[The two equations above are denoted as equation (i)]
(a). Let $f(x,y)=\frac{xy}{(x+y)}$. Calculate the approximations to $f_x(2,3)$ and $f_y(2,3)$ using the formulas in (i) with h = 0.1, 0.01, and 0.001. Compare with the values obtained by
differentiating $f(x,y).$
MY ANSWERS:
First, I solved for the derivative of $f(x,y)$ with respect to x. I got:
Solving for $f_x(x,y)$ using equation (i)
h_____ $f(2+h,3)$_ $f(2-h,3)$_____2h______ $f_x(2,3)$
Solving for $f_y(x,y)$
Derivative with respect to y: $f_y(x,y)=\frac{x(x+y)-yx^2}{(x+y)^2}=\frac{2(2+3)-3(2)^2}{(2+3)^2}=-0.08$
Solving for $f_y(x,y)$ using equation (i)
h_____ $f(2,3+h)$_ $f(2,3-h)$_____2h______ $f_y(2,3)$
Am I doing this right??
================================================== =======
2.) The distance $D = D(t)$ traveled by an object is given in the table below:
(a) Find the velocity $V(10)$ by numerical differentiation
(b) Compare your answer with $D(t)=-70+7t+70e^\frac{-t}{10}$.
My Answers:
(a). Is it okay if I use central-difference to solve for the velocity? Or should I use forward of backward difference? If so, should I assume different step values (h)? Using central-difference,
my answer is:
(b). $D(t)=-70+7t+70e^(-t/10)=-70+7(10.0)+70e^\frac{-10.0}{10}=25.752$.
$V(10)=\frac{D(10.0)}{10.0}=\frac{25.752}{10.0}=2.5 752$
Last edited by zeugma; February 25th 2009 at 06:00 AM.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/calculus/75664-numerical-methods-numerical-differentiation.html","timestamp":"2014-04-20T09:39:30Z","content_type":null,"content_length":"36498","record_id":"<urn:uuid:7de8ac56-0392-491e-bd14-dd60551e97c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
DevMaster - game development news, discussions, and resources
Ok this is just thinking aloud… but
d = A sin (alpha); // d is the third side of the triangle formed by A and C, alpha is the angle between A and C
e = B sin (beta); // e is the third side of the triangle formed by B and C, beta is the angle between B and C
c1 = 10 / tan(alpha); // c1 is x coordinate of the intersection point
C - c1 = 10 / tan(beta);
so ….
C = 10/tan(beta) + 10/tan(alpha);
That should be enough to get you started | {"url":"http://devmaster.net/?page=15&tab=replies","timestamp":"2014-04-21T12:09:54Z","content_type":null,"content_length":"53698","record_id":"<urn:uuid:df617975-dcbd-400f-b671-f24087bb56ae>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Mathematical Introduction to Conformal Field Theory
A Mathematical Introduction to Conformal Field Theory Martin Schottenloher is available to download
A Mathematical Introduction to Conformal Field Theory
Martin Schottenloher
Type: eBook
Released: 2008
Publisher: Springer
Page Count: 249
Format: pdf
Language: English
ISBN-10: 3540686258
ISBN-13: 9783540686255
The first part of this book gives a detailed, self-contained and mathematically rigorous exposition of classical conformal symmetry in n dimensions and its quantization in two dimensions.
A Mathematical Introduction to ...
Textbook In particular, the conformal groups are determined and the appearance of the Virasoro algebra in the context of the quantization of two-dimensional conformal symmetry is explained via the
classification of central extensions of Lie algebras and groups. The second part presents several different approaches to conformal field theory and surveys more advanced topics, such as the
representation theory of the Virasoro algebra, conformal symmetry within string theory, an axiomatic approach to Euclidean conformally covariant quantum field theory and a mathematical interpretation
of the Verlinde formula in the context of moduli spaces of holomorphic vector bundles on a Riemann surface. The substantially revised and enlarged second edition makes in particular the second part
of the book more self-contained and tutorial, with many more examples given. Furthermore, two new chapters on Wightman's axioms for quantum field theory and vertex algebras broaden the survey of
advanced topics. An outlook making the connection with recent developments has also been added.
A Mathematical Introduction to Conformal Field Theory
You should be logged in to Download this Document. Membership is Required. Register here
Related Books on A Mathematical Introduction to Conformal Field Theory
Comments (0)
Currently,no comments for this book! | {"url":"http://bookmoving.com/book/a-mathematical-introduction-conformal-field-theory_131511.html","timestamp":"2014-04-20T16:33:39Z","content_type":null,"content_length":"14860","record_id":"<urn:uuid:54373cf8-9b00-4951-a829-a64d6f01804c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lancaster, TX Algebra 1 Tutor
Find a Lancaster, TX Algebra 1 Tutor
...I have experience working in both corporate finance and financial markets, and generally have loved my experience in the field. I am a graduate of the Vlerick Business School and am a
practicing financial trader. Previously I worked in a corporate finance role, working on both finance and general business projects and was able to significantly impact the company’s bottom-line.
11 Subjects: including algebra 1, accounting, German, Microsoft Word
...While in college, I spent 3 years tutoring high school students in math, from algebra to AP Calculus. I also tutored elementary students in reading and spent 6 months homeschooling first and
third grade. When I was in high school, I would help my classmates in every subject from English to government to calculus.
40 Subjects: including algebra 1, chemistry, reading, elementary math
...Thanks for WyzAnt, I could help kids around me in this area, hope I could reach you too. I have always worked with kids, And I have been enjoying working with children. So, please feel free to
let me know if I can help you.
13 Subjects: including algebra 1, calculus, vocabulary, autism
...If you are... A student looking for an extra edge in your classes, or... A junior or senior prepping for the ACT or SAT, or... An adult seeking to express yourself more clearly in the
workplace, then... Email me and let’s get started!I love algebra and have tutored over 45 hours of Algebra 1....
15 Subjects: including algebra 1, reading, algebra 2, geometry
...At Michigan I graduated with a dual degree in mathematics and economics. I am an experienced tutor in all high school/middle school subjects. I began tutoring my friends in high school and it
quickly turned into a job in college while in Ann Arbor!
21 Subjects: including algebra 1, reading, calculus, geometry
Related Lancaster, TX Tutors
Lancaster, TX Accounting Tutors
Lancaster, TX ACT Tutors
Lancaster, TX Algebra Tutors
Lancaster, TX Algebra 2 Tutors
Lancaster, TX Calculus Tutors
Lancaster, TX Geometry Tutors
Lancaster, TX Math Tutors
Lancaster, TX Prealgebra Tutors
Lancaster, TX Precalculus Tutors
Lancaster, TX SAT Tutors
Lancaster, TX SAT Math Tutors
Lancaster, TX Science Tutors
Lancaster, TX Statistics Tutors
Lancaster, TX Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Balch Springs, TX algebra 1 Tutors
Cedar Hill, TX algebra 1 Tutors
Desoto algebra 1 Tutors
Duncanville, TX algebra 1 Tutors
Euless algebra 1 Tutors
Farmers Branch, TX algebra 1 Tutors
Glenn Heights, TX algebra 1 Tutors
Highland Park, TX algebra 1 Tutors
Hurst, TX algebra 1 Tutors
Hutchins algebra 1 Tutors
Ovilla, TX algebra 1 Tutors
Red Oak, TX algebra 1 Tutors
Rockwall algebra 1 Tutors
Watauga, TX algebra 1 Tutors
Wilmer, TX algebra 1 Tutors | {"url":"http://www.purplemath.com/lancaster_tx_algebra_1_tutors.php","timestamp":"2014-04-20T01:49:27Z","content_type":null,"content_length":"24167","record_id":"<urn:uuid:3d1f7ee2-b05d-497a-9b79-60967c290f41>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Straight Line
August 31st 2008, 08:10 AM
The Straight Line
Three lines have the equations 2x+3y-4=0
Show wether these lines are cocurrent or not.
Anyone know how to do this, darn im rubbish at maths (Wait)
August 31st 2008, 08:29 AM
Hello, mattymaths!
Given three lines: . $\begin{array}{cccc}2x+3y-4&=&0 &{\color{blue}[1]}\\ 3x-y-17&=&0 & {\color{blue}[2]}\\ x-3y-10&=&0 &{\color{blue}[3]}\end{array}$
Show whether these lines are concurrent or not.
"Concurrent" means they intersect at one point.
Find the intersection of [1] and [3].
Find the intersection of [2] and [3].
Are they the same point?
August 31st 2008, 09:54 AM
Thanks for the reply!
Would i be right then rearranging the equations [1] and [2] so they look like
and then solving simultaneously?
Then doing the same for [2] and [3]?
August 31st 2008, 10:03 AM
Yes . . . absolutely right!
August 31st 2008, 10:19 AM
I thought that, but i just cant seem to get it right, can you see where im going wrong?
[1] 2x + 3y = 4
[2] 3x -y = 17 X3
[1] 2x + 3y=4
[2] 9x - 3y=51
doesnt seem to work out right, i'm not too sure what ive done wrong (Worried)
SORRY I'VE NOTICED MY INCREDIBLY SILLY MISTAKE!
Anyway i done 2(5)+3y=4
so 10+3y=4
so (5,-2) is this right? i hope so ha!
Thank you very much for the replies btw | {"url":"http://mathhelpforum.com/pre-calculus/47233-straight-line-print.html","timestamp":"2014-04-18T16:01:04Z","content_type":null,"content_length":"6228","record_id":"<urn:uuid:82eb0794-ba5e-4787-aa5d-d35e2d0814b5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining Residues and using Cauchy Integral Formula
December 20th 2012, 11:55 AM
Determining Residues and using Cauchy Integral Formula
I have a problem with this integral:
$\int_{-\infty}^{\infty}dE\frac{1}{E^{2}-\mathbf{p}^{2}-m^{2}+i\epsilon}\: ,\: l^{2}=\mathbf{p}^{2}+m^{2}$
The integration over all energies (arising in the loop function for calculating the scattering), I understand we write the above in this form:
Where ε is small and so the factor arising from it multiplying by itself can be neglected. It seems to evaluate this we can either calculate the residues of the two poles and sum them up and
multiply by 2pi*i or we can use Cauchy Integral's formula - though I think it's the same thing... not really sure.
Our poles are at
$-(l-\frac{i\epsilon}{2l})\: ,(l-\frac{i\epsilon}{2l})$
and we find the residues to be
But I'm not sure how we see this or do this exactly...
Any help is appreciated,
December 20th 2012, 07:58 PM
Re: Determining Residues and using Cauchy Integral Formula
Hey Sekonda.
Did you try expanding out 1/(z-a) * 1/(z-b) into partial fractions and then using the results for residues?
December 21st 2012, 01:24 AM
Re: Determining Residues and using Cauchy Integral Formula
Indeed I did just last night, but I'm glad you've confirmed what I've done is correct. Still confused at why the 1/z contribution is the one we want, will need to look this up - unless someone
can explain!
Thanks chiro, | {"url":"http://mathhelpforum.com/advanced-math-topics/210178-determining-residues-using-cauchy-integral-formula-print.html","timestamp":"2014-04-19T11:17:39Z","content_type":null,"content_length":"6269","record_id":"<urn:uuid:4ed5a081-d260-44aa-9f0d-4d022f631dac>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oakland East Bay Math Circle
How many divisors are there in 72? How do you know this is right? Do you know what a divisor is? Today began with students defining divisors and non-divisors.
Then came the real task at hand: how many divisors are there in a number and how did we know we had them all? We started with trying to find the divisors of 72. We listed out all the possible numbers
we could find, a “proof by exhaustion”, Emily defined. Ok. Now how many divisors are there in 144?
Pablo came up with a theory that we could just double the list, then remove the repeated numbers. This seemed to work, but we all agreed that the list was getting too long — proof by exhaustion was
exhausting — so we put the factors linked together into a table.
We soon had the following: a table with its columns the powers of one of the factors, 3, and the rows with the powers of the other, 2.
The blue numbers are the factors of 72, and the red are the additional divisors of 144.
Can you extend this to quickly come up with d(1000), d(375) or d(10,000) where d(n) is the number of divisors of n?
More next week. In the meantime, here are the challenge questions Emily wrote up.
Can you do the same with numbers that have 3 prime factors such as d(30), d(350), and d(360)?
Skyline Math Circle: Puppies, Pizza, and Paul
How are these three things related? Why are these things related? What are puppies doing at Skyline? And where are the kittens?
Last Tuesday Paul Zeitz led his “Puppies and Kittens” math circle lesson. Puppies and Kittens is a variation of the Nim game also called “take away”. Most of you know it. Puppies and kittens
distinguishes itself from Nim by having two piles rather than one. This makes the game becomes much more complex.
Paul starts by challenging students to play him and letting us know that he is going to beat us every time. And he does. And then promises that we will all be able to beat even those guys that play
chess for money in SF. But before he leads us to a method of mapping out the puppies and kittens game, we begin with a simpler version of take-away. (This is a commonly used problem-solving strategy
— work with a smaller problem first).
The first version of take-away we play is with 16 hypothetical pennies. We are allowed to take away 1 to 4 pennies each turn, and the last person to make a legal move wins. We play this for a while,
and begin exploring strategies. Some people have a few ideas, but no one yet describes a strategy that works with any number of pennies. Paul suggests that we start with even a simpler problem: what
is the winning strategy with 1 penny? 2 pennies? Three pennies? Four? Five? Ah-ha, with 5 pennies you can win, that is if you go second.
You can represent this strategy on a number line. The number circled in blue is the Poisoned position.
From there your opponent cannot stop you from winning. But how can you be sure that you leave your opponent with 5 coins? She must leave you with 6, 7, 8, or 9 coins, you will take away whatever is
needed to leave her with 5, then you win.
Turns out as long as you leave your opponent with a multiple of 5, you will always win.
Now on to Puppies and Kittens. The rules are you can take away any number of puppies, or you can take away any number of kittens, or you can take away equal numbers of both. Start with 12 puppies and
16 kittens.
It quickly becomes clear that if you leave your opponent with 2 of one baby animal and one of the other, you win. But how can you make sure you do that?
Now it gets really fun! Paul creates a graph with one axis as the number of puppies, and the other the number of kittens. He circles the Poison position we know of either 2 puppies and one kitten, or
two kittens and one puppy. All positions one move to finish
are crossed out. Then all moves to the Poison position are crossed out — you don’t want your opponent to be able to get there. An interesting symmetry takes place. And quickly you can see where the
next Poison position is. Here is my quick and rather poor sketch.
I hope this gives you an idea of how we can use graphs to see winning strategies. Try it yourself! See how far you can go.
Where does the pizza fit in?
Lesson Handout:
Four Mathematical Games
A discussion of winning strategies and graphs:
Winning at Puppies and Kittens
There is a relationship between the Golden Ration and winning strategies:
Puppies and Kittens and the Golden Ratio | {"url":"http://oebmc.org/","timestamp":"2014-04-16T13:02:14Z","content_type":null,"content_length":"30713","record_id":"<urn:uuid:0bb19689-f3ca-4103-b110-4a725d48c7e3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Juergen Bierbrauer
Department of Mathematical Sciences Michigan Technological University 1400 Townsend Drive, 226B Fisher Hall Houghton, MI 49931-1295, U.S.A. Phone: (906) 487 3362 Fax: (906) 487 3133 E-mail:
│ │ My book: │
│ │ │
│ │ Introduction to Coding Theory appeared in 2004 │
│ │ │
│ │ Publisher: Chapman & Hall/CRC │
I am on the Editorial Board of Designs, Codes and Cryptography.
Mathematical Interests
My interest is in Discrete Mathematics: coding theory, finite geometries, algebra.
List of publications
Coding Theory ...
A family of highly symmetric codes (coauthors S. Marcugini, F. Pambianco)
(a slightly extended version of what appeared in the IEEE IT Transactions, October 2005)
New codes via the lengthening of BCH codes with UEP codes , (coauthors Yves Edel and Ludo Tolhuizen).
A family of 2-weight codes related to BCH-codes , (coauthor Yves Edel).
New code parameters from Reed Solomon subfield codes , (coauthor Yves Edel).
Extending and lengthening BCH-codes , (coauthor Yves Edel).
Lengthening and the Gilbert-Varshamov bound , (coauthor Yves Edel).
Some codes related to BCH-codes of low dimension , (coauthor Yves Edel).
Inverting construction Y1 , (coauthor Yves Edel).
Parameters for binary codes , (coauthor Yves Edel).
Parameters for ternary codes , (coauthor Yves Edel).
Parameters for quaternary codes , (coauthor Yves Edel).
The structure of some good codes ,
... in particular additive/quantum codes
A geometric non-existence proof of an extremal additive code (coauthors S.Marcugini, F.Pambianco)
Geometric Constructions of Quantum Codes (coauthors D.Bartoli, S.Marcugini, F.Pambianco)
The spectrum of stabilizer quantum codes of distance 3
The geometry of quantum codes (coauthors G. Faina, M. Giulietti, S. Marcugini, F. Pambianco)
Short additive quaternary codes (coauthors E. Edel, G. Faina, S. Marcugini, F. Pambianco)
Additive quaternary codes of small length (coauthors G. Faina, S. Marcugini, F. Pambianco)
Cyclic additive and quantum stabilizer codes
Quantum twisted codes (coauthor Yves Edel).
Direct constructions of additive codes
The theory of cyclic codes and a generalization to additive codes
... and further Applications
Constructing good covering codes for applications in Steganography (coauthor Jessica Fridrich)
Universal hashing and geometric codes
Authentication via algebraic-geometric codes
Dense sphere packings from new codes, (coauthor Yves Edel).
Orthogonal arrays and the like
Construction of orthogonal arrays
Theory of perpendicular arrays (coauthor Yves Edel).
APN functions and crooked functions
Crooked binomials (coauthor Gohar Kyureghyan).
Planar functions and semifields
New semifields, PN and APN functions
New commutative semifields and their nuclei .
Limited bias, weak dependence, covering arrays and cryptologic applications
Constructive asymptotic lower bounds on linear codes and weakly biased arrays (coauthor Holger Schellwat).
Efficient constructions of $\epsilon -$biased arrays, $\epsilon -$dependent arrays and authentication codes (coauthor Holger Schellwat).
Weakly biased arrays, weakly dependent arrays and error-correcting codes (coauthor Holger Schellwat).
Almost independent and weakly biased arrays: efficient constructions and cryptologic applications (coauthor Holger Schellwat).
Galois geometries
A family of caps in projective $4$-space in odd characteristic (coauthor Yves Edel).
Recursive constructions for large caps (coauthor Yves Edel).
$41$ is the largest size of a cap in $PG(4,4)$(coauthor Yves Edel).
The maximal size of a $3$-arc in $PG(2,8).$
Large caps in small spaces (coauthor Yves Edel).
Caps on classical varieties and their projections (coauthors Antonello Cossidente and Yves Edel).
The largest cap in $AG(4,4)$ and its uniqueness (coauthor Yves Edel).
Caps of order $3q^2$ in affine $4$-space in characteristic $2$ (coauthor Yves Edel).
The sporadic $A_7$-geometry and the Nordstrom-Robinson code
Material on (t,m,s)-nets
Construction of digital nets from BCH-codes , (coauthor Yves Edel).
Families of ternary (t,m,s)-nets related to BCH-codes , (coauthor Yves Edel).
Good nets in other characteristics
Coding-theoretic constructions for $(t,m,s)$-nets and ordered orthogonal arrays (coauthors Yves Edel and W. Ch. Schmid)
A family of binary $(t,m,s)$-nets of strength $5,$ (coauthor Yves Edel)
An asymptotic Gilbert-Varshamov bound for tms-nets (coauthor Wolfgang Schmid)
Families of nets of low and medium strength (coauthor Yves Edel)
A little $(t,m,s)$-bibliography
A direct approach to linear programming bounds
Bounds on affine caps (coauthor Yves Edel).
Bounds on orthogonal arrays and resilient functions
A note on the duality of linear programming bounds for orthogonal arrays and codes (coauthors K.Gopalakrishnan, D.R.Stinson).
Orthogonal Arrays, Resilient Functions, Error Correcting Codes and Linear Programming Bounds (coauthors K.Gopalakrishnan, D.R.Stinson).
Combinatorial Designs
An infinite family of $7$-designs
Projective planes, coverings and a network problem (coauthors S. Marcugini and F. Pambianco)
Some friends of Alltop's designs
A family of 4-designs with block-size 9
Lecture Notes
MA5301: Finite groups and fields (Fall 2008)
MA3210: Introduction to Combinatorics
MA 4211: Information Theory/ Data Compression
MA 462: Introduction to Group Theory
Group Theory and Applications (lecture 2005)
Introduction to Group Theory and Applications (lecture 2000)
MA 572,COMBINATORICS II:Permutation groups, designs and enumeration
More information is on Yves Edel's homepage.
Here is the link to MinT, the Salzburg database for net parameters
And here a link to A.E.Brouwer's data base of bounds for linear codes. | {"url":"http://www.math.mtu.edu/~jbierbra/","timestamp":"2014-04-20T08:28:08Z","content_type":null,"content_length":"13409","record_id":"<urn:uuid:32bd9e56-b4f6-4c09-a7f7-c4ab9432d46f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
A note on the semiprimitivity of Ore extensions
Jordan, C. R. and Jordan, D. A. (1976). A note on the semiprimitivity of Ore extensions. Communications in Algebra, 4(7) pp. 647–656.
A well known result on polynomial rings states that, for a given ring
Item Type: Journal Article
Copyright Holders: 1976 Marcel Dekker
ISSN: 1532-4125
Extra Information: MR number MR0404314
Academic Unit/Department: Mathematics, Computing and Technology > Mathematics and Statistics
Item ID: 31530
Depositing User: Camilla Jordan
Date Deposited: 09 Feb 2012 11:37
Last Modified: 09 Feb 2012 11:48
URI: http://oro.open.ac.uk/id/eprint/31530
Share this page:
Actions (login may be required)
View Item
Report issue / request change | {"url":"http://oro.open.ac.uk/31530/","timestamp":"2014-04-17T06:42:32Z","content_type":null,"content_length":"26614","record_id":"<urn:uuid:1d0ca2bd-f338-4ac8-9b13-d63f5a27a580>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
DevMaster - game development news, discussions, and resources
Ok this is just thinking aloud… but
d = A sin (alpha); // d is the third side of the triangle formed by A and C, alpha is the angle between A and C
e = B sin (beta); // e is the third side of the triangle formed by B and C, beta is the angle between B and C
c1 = 10 / tan(alpha); // c1 is x coordinate of the intersection point
C - c1 = 10 / tan(beta);
so ….
C = 10/tan(beta) + 10/tan(alpha);
That should be enough to get you started | {"url":"http://devmaster.net/?page=15&tab=replies","timestamp":"2014-04-21T12:09:54Z","content_type":null,"content_length":"53698","record_id":"<urn:uuid:df617975-dcbd-400f-b671-f24087bb56ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Michael Starbird
The Heart of Mathematics: An invitation to effective thinking
The Heart of Mathematics introduces students to the most important and interesting ideas in mathematics while inspiring them to actively engage in mathematical thinking.
Publisher's Website
Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas
Coincidences fuses a professor’s understanding of the hidden mathematical skeleton of the universe with the sensibility of a stand-up comedian, making life’s big questions accessible and compelling.
Publisher's Website
Number Theory Through Inquiry
By: David C. Marshall, Edward Odell & Michael Starbird
Publisher's Website
Books In Progress
Dr. Starbird is working with two Ph.D. students (Brian Katz and David Paige, respectively) to complete the books on the Introduction to Higher Mathematics and Algebraic Topology. Thus, the notes for
those topics will be updated soon.
Please remember that these versions are just drafts and therefore have errors and weaknesses that will be fixed. | {"url":"http://www.ma.utexas.edu/users/starbird/books.html","timestamp":"2014-04-20T18:25:50Z","content_type":null,"content_length":"5539","record_id":"<urn:uuid:53cf756c-2342-4a4f-b822-82f62f5ca470>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can you interpret this divergent integral?
up vote 0 down vote favorite
In this ArXiv paper by Wilk and Wlodarczyk (published in Physical Review Letters), equation 16 has essentially the following definition of a function: $$\text{f(x)=}\frac{c}{2Dx^2}\exp[\int^x_0 \frac
{\mu}{t^2}+\frac{1-2\alpha}{t} dt]$$
They claim that with the normalization condition
$$\int^\infty_0 f(x)\,dx = 1$$
it becomes (with some variable changes)
I can't understand the integral in the exponential function, because it appears to be infinite. Can you help me by telling how can I solve this type of integral?
mp.mathematical-physics pr.probability st.statistics fa.functional-analysis
1 This question was a repeat of mathoverflow.net/questions/55714. Do not post the same question multiple times - if you think your question should be reopened you can edit it and flag it for
moderator attention, or start a thread on tea.mathoverflow.net. – Zev Chonoles Feb 17 '11 at 16:26
1 SMH was unable to edit the previous question because two user accounts were created. @SMH: you should register, so your work is less volatile. – S. Carnahan♦ Feb 17 '11 at 16:35
2 Can you just email the authors asking what exactly they mean by the function $f$, which---as noted above---is not defined in a traditional sense, at any (!) point? – Mariano Suárez-Alvarez♦ Feb 17
'11 at 17:33
1 Why do you expect mathematics published in physics journals to make sense to mathematicians? – Gerald Edgar Feb 17 '11 at 18:22
2 @Gerald: Indeed. On the other hand, Talagrand... – Did Feb 17 '11 at 18:37
show 5 more comments
1 Answer
active oldest votes
The trouble (as was already explained to you) lies in the starting point $t=0$ of the integral in the exponential. Fortunately, W+W are only interested in steady solutions of equation
(15) of their arXiv preprint and these can be written as the function in their equation (16) provided one replaces the starting point $t=0$ of the integral by any starting point $t=x_0$
with $x_0>0$. Changing $x_0$ only modifies $f$ by a multiplicative factor so the normalisation condition saves the day.
Assuming for example that $x_0=1$, one gets $$f(x)=\frac{c}{x^2}\exp\left(\int^x_1 \left(\frac{\mu}{t^2}+\frac{1-2\alpha}{t}\right) \mathrm{d}t\right)=\frac{c}{x^2}\exp\left(\mu-\frac{\
mu}{x}+(1-2\alpha)\log(x)\right), $$ hence $$ f(x)=c\mathrm{e}^{\mu}x^{-1-2\alpha}\mathrm{e}^{-\mu/x}. $$ This is not W+W's formula (18) so either I made a mistake in this post or there
up vote 8 is a misprint in W+W's preprint. Note that the function $f_{0}$ written in (18) of W+W's preprint and in your question here is not integrable if $\alpha\le2$ because $f_{0}(x)$ behaves
down vote like a multiple of $x^{1-\alpha}$ when $x\to\infty$, hence for such values of $\alpha$, $f_{0}$ cannot be normalized to get a probability density function (even assuming that one got rid
accepted of the problem of the starting point $t=0$ as I explained). The function I obtained above is integrable for every positive $\alpha$ and $\mu$.
If $f$ is the density of the distribution of a random variable $X$, the distribution of $Y=\mu/X$ has density $$ g(y)=c\mathrm{e}^{\mu}\mu^{-2\alpha}y^{2\alpha-1}\mathrm{e}^{-y}, $$ hence
$Y$ is a standard Gamma random variable of exponent $2\alpha$ and $$ c=\mathrm{e}^{-\mu}\mu^{2\alpha}/\Gamma(2\alpha). $$ (As I said before, this question belongs to math.SE.)
All this might mean that (20) in W+W's preprint should be corrected to $q=1+2\tau D$. Or maybe not. – Did Feb 17 '11 at 17:55
I want to thank Didier Piau for the answer. It is really helpful. but I think there is a mistake in g(y). g(y) is proportional to $\y^{2\alpha+1}$. I believe that in W+W's article $\2\
alpha$ must be just $\alpha$ in obtaining the coefficients of the Fokker-Planck eqn (17). so with the point of evaluationg the integrals that you said, I obtain the distribution like
this :$$\text{f(x)=}\frac{1}{\Gamma\text{(}\alpha\text{)}}\mu\text{(}\frac{\mu}{x}\text{)}^{\text{(}\alpha+1\text{)}}\exp\text{(-}\frac{\mu}{x}\text{)}.$$ I have $\alpha+1$ not $\
alpha-1$. – SMH Feb 17 '11 at 19:17
@SMH: No, $y^{2\alpha-1}$ is correct. – Did Feb 17 '11 at 19:22
@Didier: sorry! Can you explain it a little more please? – SMH Feb 17 '11 at 19:45
2 @SMH: The empiricist in me finds rather limited the size of your sample, to reach any valuable conclusion about ISI papers in general. The cynic in me finds rather naive your reaction
about the mathematical accuracy of physics papers. Best wishes for your future studies. – Did Feb 18 '11 at 11:33
show 2 more comments
Not the answer you're looking for? Browse other questions tagged mp.mathematical-physics pr.probability st.statistics fa.functional-analysis or ask your own question. | {"url":"http://mathoverflow.net/questions/55733/can-you-interpret-this-divergent-integral","timestamp":"2014-04-21T07:50:28Z","content_type":null,"content_length":"66118","record_id":"<urn:uuid:8319bd62-6397-4887-8def-59d24f8f018a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Encyclopaedia Index
(a) What is divergence
Because the solution procedures built into PHOENICS are essentially iterative, it is usual that many adjustment cycles must be carried out before the residuals have been reduced sufficiently for the
computation to be terminated. Sometimes however, the residuals in one or more equations start and continue to increase rather than decrease as the sweeps proceed. This phenomenon is known as
"divergence"; and it is often accompanied by the appearance of unphysically large values in some of the dependent variables. This process can be watched at the VDU screen. (See PHENC entries UWATCH
and TSTSWP).
In hydrodynamic problems, it is often the values of the pressure which depart farthest from physically-realistic values; because of this, PHOENICS has a built-in trap which causes the run to be
terminated when excessive values of pressure are encountered. A message explaining this reason for the abandonment is then printed.
Sometimes the problem is not that residuals grow but that they fail to diminish sufficiently as far as has been expected. Often this behaviour is associated with finite-amplitude fluctuations in the
values of some of the dependent variables, which continue indefinitely. Such behaviour may be quite localised, as when a pressure fluctuation close to an aperture connected with the outside causes
inflows and outflows there, which alternate from sweep to sweep.
However, it must also be recognised that, since all computations are subject to round-off error, there is often a limit beyond which no reduction of the residuals is possible. Such computations can
fairly be regarded as fully converged. It is therefore always wise to set SELREF = T and RESFAC = a number which is not unreasonably small (eg 0.01). (See SELREF and RESFAC)
(b) Identifying the cause
When divergence occurs, it is necessary to establish the cause; this if it is not faulty posing of the problem, is usually to be found in the strength of the linkages between two or more sets of
equations, which are being solved in turn rather than simultaneously.
For example, if a problem of free-convection heat transfer is being solved, the two-way interaction between the temperature field and the velocity field, whereby each influences the other, is a
common source of divergence.
Whether it is the source in a particular case is easily established by 'freezing' the temperature field before the divergence has progressed too far, and then seeing if the divergence persists. If it
does not, the velocity-temperature link can be regarded as the source of the divergence; otherwise, the cause must be sought in some other linkage.
How is the 'freezing' to be effected? The simplest means is to under-relax heavily. If this is done through the SATELLITE, an appropriate setting is:
RELAX(H1,FALSDT,1.E-10) , or alternatively
RELAX(H1,LINRLX,1.E-10) .
In this way, one can investigate the contributions to divergence of linkages between: individual velocity components; the volume fractions and the velocity field in a two-phase process; the
chemical-species concentrations in a reacting flow; the pressure and the density in a compressible flow; the turbulence energy and its dissipation rate in a turbulent flow; and many other variables.
If the non-convergence is thought to originate in the linkage between the flow field and a boundary condition, as is often true of the fluctuations mentioned above, it may be beneficial to impose a
LOCAL 'freezing'. This can be effected by defining a 'source' patch for the region of interest, and then setting the third argument of the associated COVAL as a large number such as 1.E10, and the
fourth argument as SAME.
(c) Under-relaxation as a cure for divergence
If freezing by very severe under-relaxation restores convergent behaviour, it is obviously possible that modest under-relaxation will have the same qualitative tendency, while still allowing the
solution to proceed so that all residuals do finally diminish (the residuals of 'frozen' variables, incidentally, do not diminish).
The use of under-relaxation is by far the most common means of securing convergence in practice. If employed indiscriminately, it can lead to waste of computer time; however, when it is applied to
just those equations that have been identified as potential causes of divergence, and in just the amount that is necessary to procure convergence, it is probably the best first-recourse remedy to
PHOENICS is equipped with a device called EXPERT for adjusting under relaxation factors automatically. It cannot be expected to make the correct choices in extremely complex circumstances; but it is
always worth trying. (See PHENC entry EXPERT).
(d) The use of limits
The VARMAX and VARMIN quantities can also be useful in preventing divergence, particularly when this is associated with a poor starting field for the iterative process, which allows some variables at
first to stray outside the physically meaningful range.
VARMAX and VARMIN can be employed in two distinct ways: they may set upper and lower bounds to the values of the dependent variables themselves; or they may set limits to the changes in these values
in a single adjustment cycle. (See PHENC entry VARMIN)
The values of VARMAX and VARMIN can of course be chosen differently for each dependent and auxiliary variable; but there is no built-in way of applying them over restricted regions at present.
However, it is easy for a user who is willing to introduce coding in GROUND to do this for himself; for the functions FN22 and FN23 are available for applying limits to any variable over regions
defined locally by IXF, IXL, IYF, etc.
(e) Linearisation of sources
Source terms can be introduced in GROUND either directly or in a linearised manner. The first involves ascribing a single quantity for each cell, viz the source itself; the second involves ascription
of two quantities, the "coefficient" and "value". (See PHENC entries: COVAL and BOUNDARY CONDITIONS).
Suppose that the source of variable j, S, does depend upon j, so that it is possible to express it in a linearised manner as:
S = S# + (j-j#)*(dS/dj)#
wherein the # appended to a symbol signifies "current best estimate" of its value.
This expresion can be reformulated in terms of CO and VAL (See COVAL) ie as:
S = C*(V-j)
if C = - (dS/dj)#
V = j# + S#/C
If C is positive (ie S decreases as j increases), convergence is promoted by using the linearised-source form for S.
If however C is negative (ie S increases as j increases), use of this form may cause divergence. It is then better to use the first-mentioned mode, ie to employ simply:
S = S#
for otherwise the denominator of the expressions for j may fall through zero, causing unrealistically large values of j to be computed.
(f) The time-dependent approach
Many computer codes for fluid-flow simulation operate wholly in the transient mode, even when it is the ultimately-approached steady state that is of practical interest; and one reason is that, if
the transient development of a flow process is followed realistically, the non-physical values which characterise divergence can hardly occur.
The fully-implicit formulations built into PHOENICS permit the steady state to be simulated more directly, albeit in an iterative way which does have some of the properties of 'time-marching'.
Nevertheless, if convergence proves hard to procure, there is no reason why the user should not, after setting STEADY to F, watch the flow phenomenon develop in time.
Doing so, as well as yielding indirectly the desired steady-state solution, may well reveal the cause of the difficulty encountered by the direct approach. Thus, one feature of the flow may prove to
require very small time steps for its adequate resolution; in the direct approach, small 'false' time steps would correspondingly have been needed.
Of course, if convergence still proves impossible to achieve by way of the time-dependent approach, no matter how small are the time steps which are employed, the making of some fundamental in the
problem set-up should be suspected. After all, if the flow that it i desired to simulate is physically possible, there must be some way o simulating it numerically.
(g) Changing the problem formulation
A recalcitrant divergence problem is sometimes best solved by changing its formulation. For example, high-Mach-number steady-state compressible flows are difficult to simulate when the density is
obtained from the pressure and the temperature, the latter being deduced from the stagnation enthalpy and the kinetic energy.
The reason is that the speeds of approach of the enthalpy and the velocities to their true-solution values may be very different, with the result that densities may be temporarily computed which are
very far from the true-solution values; moreover, the "staggering" of the velocity fields with respect to the scalar storage locations introduces uncertainties as to how the kinetic energy to be
linked with the stagnation enthalpy is to be deduced by interpolation.
When however an equation is solved for the specific enthalpy, and the density is deduced from this quantity and the pressure, this problem may disappear entirely. | {"url":"http://www.cham.co.uk/phoenics/d_polis/d_enc/diverg.htm","timestamp":"2014-04-17T00:54:26Z","content_type":null,"content_length":"11546","record_id":"<urn:uuid:bdecc8a3-00cc-4f7d-a5a3-520e8c6cc8f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Smyth, Clifford - Department of Mathematical Sciences, Carnegie Mellon University
• A Dual Version of Reimer's Inequality and a Proof of Rudich's Conjecture JEFF KAHN #
• 21441, Exam1, 09.22.2003, (revised) solutions Prove your answers. If you quote a theorem you must show that all the
• 21701 HW03 Solutions 1. (a) Let's say a permutation respects A if its first |A| elements are
• 21701 HW 05 solutions (a) By Chebyshev's inequality we have: Pr(X = 0) Pr(|X -EX|
• Discrete Math (21701), Fall 2004 Cliff Smyth (csmyth@andrew), office WEH 6218 MF 11:30-12:30 (or by appt)
• A Dual Version of Reimer's Inequality and a Proof of Rudich's Conjecture
• Research Statement Clifford D. Smyth csmyth@andrew.cmu.edu
• Reimer's inequality and Tardos' conjecture Clifford Smyth #
• Publication List Clifford D. Smyth csmyth@andrew.cmu.edu
• 21701 HW02 Solutions 1. (a) According to construction, a threshold graph consists of the dis-
• 21701 HW 7 solutions (a) Write x y if xi yi for all i [n]. Thus 1A 1B if and only if
• Graph Theory, Spring 2004, (S04-21484) MWF 12:30-1:20PM, BH-A53 Clifford Smyth, csmyth@andrew.cmu.edu, WEH6218, MF 1:30-2:30PM
• 21-484, Spring 2004, Test 1 Solutions 1. (a) For which m and n is Km,n Eulerian? (State the theorem you are
• 21-484, Spring 2004, Test 2 solutions 1. Let the following complete bipartite graph have the following prefer-
• 21-484, Spring 2004, Test 3, Solutions Problem Points Score
• 21441, Exam2, 10.22.2003 (revised) Justify your answers (except on question 1, of course). If you are using a
• 21441, Test 3, (revised) solutions 1. For this problem x, y 0 always. We'll prove the following claim: if
• Clifford D. Smyth csmyth@andrew.cmu.edu
• Equilateral or 1distance sets and Kusner's Clifford Smyth #
• 21701 HW01 Solutions 1. (a) Fix l. We prove the statement by induction on k. The case k = 1
• 21701 HW04 solutions (a) If E, F [d], and x = 1E, y = 1F {0, 1}d
• 21701 HW 6 solutions 1. [3] If there are R red cards and B black cards, let ER,B be the maximum
• Extra Credit Problems: You may hand in a solution at any time. The solution must be correct for it to count (no partial credit), and is then worth | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/20/657.html","timestamp":"2014-04-19T20:11:54Z","content_type":null,"content_length":"10320","record_id":"<urn:uuid:713add10-b918-4d04-9bfa-673c9736923e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bowdon Junction Statistics Tutor
Find a Bowdon Junction Statistics Tutor
...I am however, highly qualified to tutor in Study Skills and Test Preparation for the CRCT, ACT, and SAT. As a tutor for your child, I am dedicated to their academic improvement and success. I
recognize and accept that each child has their own learning style.
47 Subjects: including statistics, reading, English, biology
...At age 16 I coached my fellow students in history, math, and theater. My friends and I won the state History Fair and participated in the national competition in Washington DC. Later I was
awarded an acting scholarship to three different universities.
28 Subjects: including statistics, calculus, GRE, physics
I have 33 years of Mathematics teaching experience. During my career, I tutored any students in the school who wanted or needed help with their math class. I usually tutored before and after
school, but I've even tutored during my lunch break and planning times when transportation was an issue.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I tutor students from elementary school through Algebra II and Trigonometry. I get to know students, their strengths and their challenges. I help them to see how what is new builds on what they
have already learned.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2
...In high school, I graduated as valedictorian, and I have experience tutoring math subjects ranging from pre-algebra to AP Calculus. Additionally, during my time at MIT, I ran a free science
summer camp for local disadvantaged middle school students for two years. In addition to helping my young...
18 Subjects: including statistics, English, algebra 2, calculus | {"url":"http://www.purplemath.com/Bowdon_Junction_Statistics_tutors.php","timestamp":"2014-04-16T13:29:56Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:0d7478de-ddcf-4d8d-a9e4-1f787f6642c1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometric bearing airplane problem
November 13th 2011, 06:20 PM
Trigonometric bearing airplane problem
A plane's air speed is 520 mi/hr. Find the bearing (theta) the plane should fly to get to Seattle and the new speed (r) when the wind is blowing from a bearing of 120degrees with a speed of 120
mi/hr. The bearing from Hawaii to Seattle is 50 degrees and the distance is 2677 mi. Find the time it takes to fly to Seattle with the new plane speed.
November 13th 2011, 06:22 PM
Re: HELP! Trigonometric bearing airplane problem
Have you any ideas on what method to use?
Start by drawing a picture of the scenario described.
November 13th 2011, 07:03 PM
Re: HELP! Trigonometric bearing airplane problem
Hello, hubbabubba590!
I have a question . . .
A plane's air speed is 520 mph.
Find the bearing (theta) the plane should fly to get to Seattle and the new speed (r)
when the wind is blowing from a bearing of 120 degrees with a speed of 120 mph.
The bearing from Hawaii to Seattle is 50 degrees and the distance is 2677 miles.
Find the time it takes to fly to Seattle with the new plane speed.
What "new speed"?
With the given data, we have a 'normal' bearing problem.
S| 120d
N *
| * *
| * 110d *
| 2677 * *
| * * A
| * *
| * *
|50d* *
| * *
H *
The plane flies from $H$ to $A$ at 520 mph.
The wind blows from $A$ to $S$ at 120 mph.
We can find the bearing $(\angle N\!H\!A)$ and the distance $\overline{HA}.$
What purpose does the "new speed" serve?
November 13th 2011, 07:21 PM
Re: HELP! Trigonometric bearing airplane problem
Perhaps it's because the speed and direction of the wind causes the plane to slow down? I'm not quite sure, it's a practice problem my trigonometry teacher gave us. We are currently studying the
applications of vectors in the plane.
November 13th 2011, 07:23 PM
Re: HELP! Trigonometric bearing airplane problem
Was there any other pieces of information?
November 13th 2011, 07:36 PM
Re: HELP! Trigonometric bearing airplane problem
the attatchment is a similar problem with different speeds for the air plane and wind, this is how my teacher wants us to solve it | {"url":"http://mathhelpforum.com/trigonometry/191840-trigonometric-bearing-airplane-problem-print.html","timestamp":"2014-04-18T22:20:46Z","content_type":null,"content_length":"8719","record_id":"<urn:uuid:c080efe0-996f-49fc-86fd-d921009a4647>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: AP Calculus Question
Replies: 7 Last Post: Oct 25, 2013 12:12 PM
Messages: [ Previous | Next ]
Re: AP Calculus Question
Posted: Oct 9, 2013 6:25 PM
We allow students to double up in Precalc and AP Calc (with teacher
recommendation). We have 1 to 3 do this each year. Only in rare cases
(twice in 18 years?) have we allowed students to skip Precalc (for
example, can't fit Precalc into their schedule AND he or she is a very
able student). Even doubling up, they have gaps, and as the AP teacher
every so often I have to tell the doubler(s) they need to come in to
learn a topic they haven't seen yet -- and as students, they have to
be willing to come in for help when needed, even when there isn't a
clear-cut gap but they feel they're not getting something.
Evan Romer
Susquehanna Valley HS
Conklin NY
On Oct 9, 2013, at 6:07PM, luisahaw@aol.com wrote:
> Hi. I know many of you are busy and overwhelmed with the Common
> Core so I do appreciate input or advice.
> I was wondering under what circumstances do other NY schools allow
> students to take Algebra 2 Trig , skip a Pre Calculus course, and
> take AP Calculus either as a junior or senior? My school offers 2
> different Pre Calculus ( I teach the IB one) course so there is a
> math course students can take that really prepares students for
> advanced algebra and allow them to take AP Calc in their senior
> year. There seems to be a debate at my school and, as a pre
> calculus teacher, I am on the loosing end of it . I tend to worry
> often, but with the Common Core, won't their be gaps in concepts
> (such as Trigonometry) to prepare students for the rigor of
> Calculus. Thanks for your input!
> Luisa Duerr
> Binghamton High School
> an IB Diploma School
* To unsubscribe from this mailing list, email the message
* "unsubscribe nyshsmath" to majordomo@mathforum.org
* Read prior posts and download attachments from the web archives at
* http://mathforum.org/kb/forum.jspa?forumID=671 | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2601429&messageID=9296013","timestamp":"2014-04-21T15:38:26Z","content_type":null,"content_length":"26266","record_id":"<urn:uuid:1e9dc02b-d195-465d-8a00-9b3726617d57>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Field and vector space
December 2nd 2009, 02:44 PM #1
Field and vector space
Let F be a field with $q$ elements and let V be a vector space above F with dimension of $n$.
1. How many different basis do V have?
2. How many subspaces of dimension of $k$ do V have?
Last edited by Also sprach Zarathustra; December 2nd 2009 at 03:23 PM.
It is not "who" but "how"...!נו באמת
Anyway: how many elements in V can be the first element in the basis? (all but the zero vector): how many elements can be the second element in a basis? (all but the scalar multiples of the first
one, and there are q multiples like these), etc.
Try now to do the second part by yourself.
Shalom! I am mitbaesh beatzmi...
Can you please write the whole answer for the question.
1. q^n-1 ?
What is MHF expert mean?
No, not the whole answer but some highlights: there are $q^n -1$ choices for the first vector (since zero cannot be part of a basis), then there are $q^n-q$ choices for the second vector (all the
available vectors minus those that are a scalar multiple of the first one), then there are $q^n-q^2$ choices for the third one (all the possible vectors minus all the scalar multiples of the
first two), etc.
All in, there are $(q^n-1)(q^n-q)(q^n-q^2)$...try to end the argument by yourself now.
Ps ...הרבה יותר כיף לגלות את הפתרון בעצמך, אפילו אם זה קשה, מאשר לקבל את הכל מוכן
And what about 2?
no, not the whole answer but some highlights: There are $q^n -1$ choices for the first vector (since zero cannot be part of a basis), then there are $q^n-q$ choices for the second vector (all the
available vectors minus those that are a scalar multiple of the first one), then there are $q^n-q^2$ choices for the third one (all the possible vectors minus all the scalar multiples of the
first two), etc.
All in, there are $(q^n-1)(q^n-q)(q^n-q^2)$...try to end the argument by yourself now.
ps ...הרבה יותר כיף לגלות את הפתרון בעצמך, אפילו אם זה קשה, מאשר לקבל את הכל מוכן
גם אתה מישראל?
יפה יפה... אני מניח שכבר סיימת תואר ראשון?
December 2nd 2009, 03:13 PM #2
Oct 2009
December 2nd 2009, 03:28 PM #3
December 2nd 2009, 03:32 PM #4
December 2nd 2009, 07:15 PM #5
Oct 2009
December 2nd 2009, 07:21 PM #6
Oct 2009
December 3rd 2009, 12:47 AM #7
December 3rd 2009, 03:04 AM #8
Super Member
Aug 2009
December 3rd 2009, 03:59 PM #9
Oct 2009
December 3rd 2009, 04:09 PM #10
Super Member
Aug 2009 | {"url":"http://mathhelpforum.com/advanced-algebra/118067-field-vector-space.html","timestamp":"2014-04-16T04:46:16Z","content_type":null,"content_length":"65756","record_id":"<urn:uuid:6a06f208-0d1e-4284-92b0-973b9e48225b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
FPGA Blog
Hilbert Transform in FPGA
Most of you familiar with signal processing would’ve heard the term Hilbert Transform. We’ll call it HT from here on. There are other transform in DSP like Laplace, Fourier, Z etc which are more
popular & used across multiple domains. Basically transform, as the word suggests is a mathematical process which converts one form of signal to another. Some are used only for purely analytic
purposes (Fourier) while some are used for real-time signal processing like Hilbert.
HT ‘transforms’ the phase of the signal. It performs a phase shift of -90°. Yes you read it right, it is negative 90° phase shift. Generally, it is mentioned as a 90° phase shift without explicitly
mentioning positive or negative shift. The reason is due to the concept of positive & negative frequencies. HT would phase shift positive frequency component by -90° while a negative frequency would
be shifted by +90°.
Where could we use HT?
I’m sure there are multiple uses but the one I’m aware of is Quadrature processing. The term quadrature processing refers to dealing with In-phase, Quadrature-phase signals, i.e I,Q signals. I,Q
signals are used for modulation & demodulation techniques. Quadrature modulation/demodulation offers many advantages over conventional approaches which can be discussed in another post.
For example, SSB demodulation by phasing method uses Hilbert transform to phase shift the incoming modulated signal to -90°.
FPGA Implementation (Xilinx)
Hilbert transformers can be easily implemented in Xilinx FPGA’s using FIR compiler IP. HT coefficients can be generated using Matlab FDA Tool. It is important to note that the Hilbert coefficient
structure should contain alternate zeroes for FIR compiler to infer the filter as a Hilbert Structure.
Snapshot shows Hilbert Transformer using FDA Tool.
Generated Coefficients for Hilbert transformer
As you can see, every alternate coefficient is zero valued, which the FIR compiler infers as a Hilbert Filter structure.
FIR IP Core generates I, Q output (dout_i, dout_q in IP symbol) if the Hilbert structure is inferred properly. The output is -90° phase shifted. HT of cosine is sine and HT of sine is -cosine
HT is predominantly used in quadrature Signal Processing.
FPGA FFT Architectures
This post has deliberately come after a long gap as there are other blogs like the recently launched Programmable Planet which is far superior in content and substance.
Fast Fourier Transforms are almost ubiquitous for anyone dealing with signal processing & communications systems. Time domain analysis doesn’t provide us much required information as signal
processing mostly relies on frequency domain techniques like modulation, up/downconversion, filtering. In short FFT is required in almost every design for either on-line or off-line analysis. For
example, Peak search/scan is generally performed in spectral domain.
Xilinx Fast Fourier Transform IP Core provides 4 architectures. There is obviously a trade-off between speed (performance) & area.
I’ve considered an example of 64k (65536) transform length clocked at 100 MSPS & target data rate of 100 MSPS. Table shows the theoretical latency & resource estimates provided by Xilinx IP core.
From the table, the trade-off vis-a-vis FPGA Architectures is clear.
Random trivia about Computing
We know some computers use 64-bit words.
2^64 is approximately 1.8 x 10^18 - that’s a pretty large number.
So in fact if we started incrementing a 64-bit counter once per second at the beginning of the universe (20 billion yrs ago), the MSB’s of the counter would still be all zeroes.
~ Excerpts from a book on Digital Signal Processing
FPGA DSP Slices
The concept of utilizing FPGAs for DSP operations is fairly well understood, established, and recognized within the signal processing industry.
FPGA’s have DSP slices to implement signal processing functions. The DSP operation most commonly used is Multiply-Accumulate or MAC operation. A MAC block is also used as a building block for more
complex DSP applications like filtering.
FPGA DSP slice essentially implements a MAC operation.
Xilinx calls this slice “XtremeDSP DSP48”. (probably because it handle a maximum of 48-bit addition)
Image illustrates DSP48E used in Virtex-5 devices & its features (Courtesy of Xilinx Inc)
There are 3 variants of DSP slices used in Xilinx FPGA’s- DSP48A, DSP48 & DSP48E.
(Image Courtesy of Xilinx Inc)
Xilinx has a good number of DSP elements in FPGA devices with SX series.
For more info on Comparison of conventional DSP processing vis-a-vis FPGA-based DSP watch this space.
Fascinating Trivia on Decimation
The original meaning of “decimation” was that the Roman general killed one out of every ten centurions, as a way to instill order through fear.
The same concept is used in the concept of Decimation used often in Signal Processing & VLSI terminologies. In signal processing decimation implies downsampling. In decimation we take 1 out of ‘N’
samples where N is the decimation factor. For example if we have a 100 MHz sampled data decimated by a factor of 10, then we take 1 sample for every 10 clocks of 100 MHz so the output is 10 times
slower at 10 MHz. It is also used while describing clock domains. | {"url":"http://fpga-blog.tumblr.com/tagged/DSP","timestamp":"2014-04-17T15:25:58Z","content_type":null,"content_length":"56049","record_id":"<urn:uuid:9c82d4c2-9c2d-4925-a724-1f527138bc73>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4dd02a629fe58b0b744738f7","timestamp":"2014-04-16T04:37:37Z","content_type":null,"content_length":"46457","record_id":"<urn:uuid:0ff1ac30-1a12-49f8-aec9-967b097309a6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
e a
QUANTITATIVE CURRICULUM FOR LIFE SCIENCE STUDENTS - MATLAB FILES DEVELOPED FOR MATH 151-2 Courses
Supported by the National Science Foundation's Undergraduate Course and Curriculum Program through Grant #USE-9150354 to the University of Tennessee, Knoxville
This page contains a listing of all MATLAB Files and copies of the code
of each of them. These were developed for students to use as templates for a
variety of projects, as well as to provide demos in class. Note that all the
below files are designed for use with the DOS version of MATLAB - for use on
Windows,UNIX or Macintosh, simply remove the !c: lines in each file. | {"url":"http://www.tiem.utk.edu/~gross/matlabfiles/quant.lifesci.matlab.html","timestamp":"2014-04-16T22:12:32Z","content_type":null,"content_length":"8997","record_id":"<urn:uuid:cbde674a-56cc-418b-8dab-ac84305242f6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
C: Implicit casting and interger overflowing in the evaluation of expressions
up vote 2 down vote favorite
Lets take the code
int a, b, c;
if ((a + b) > C)
If the values of a and b add to exceed the max value of an int will the integrity of the comparison be compromised? I was thinking that there might be an implicit up cast or overflow bit checked and
factored into the evaluation of this expression.
add comment
5 Answers
active oldest votes
C will do no such thing. It will silently overflow and lead to a possibly incorrect comparison. You can up-cast yourself, but it will not be done automatically.
up vote 8 down
vote accepted
Agree! No self-respecting C compiler will ever do this. SILENTLY OVERFLOW BABY! And we wouldn't have it any other way. This is what's great about C, after all. – James Devlin Sep
8 '08 at 20:19
Overflow is undefined behavior, so one way a compiler could treat it is by silently creating a larger type and evaluating the rest of the expression using that larger type. I know
of no compiler that does, and doing so would only encourage people to write bad code with undefined behavior.. :-) – R.. Nov 26 '10 at 1:46
add comment
A test confirms that GCC 4.2.3 will simply compare with the overflowed result:
#include <stdio.h>
int main()
int a, b, c;
a = 2000000000;
b = 2000000000;
c = 2100000000;
printf("%d + %d = %d\n", a, b, a+b);
if ((a + b) > c)
up vote 2 down vote printf("%d + %d > %d\n", a, b, c);
printf("%d + %d < %d\n", a, b, c);
return 0;
Displays the following:
2000000000 + 2000000000 = -294967296
2000000000 + 2000000000 < 2100000000
add comment
I believe this might be platform specific. Check the C documentation on how overflows are handled...
up vote 0 down vote Ah, yes, and the upcast will not happen automatically...
add comment
See section 2.7, Type Conversions in the K&R book
up vote 0 down vote
add comment
If upcasting doesn't gain you any bits (there's no guarantee that sizeof(long)>sizeof(int) in C), you can use conditions like the ones below to compare and check for
overflow—upcasting is almost certainly faster if you can use it, though.
#if !defined(__GNUC__) || __GNUC__<2 || (__GNUC__==2 && __GNUC_MINOR__<96)
# define unlikely(x) (x)
# define unlikely(x) (__extension__ (__builtin_expect(!!(x), 0)))
/* ----------
* Signed comparison (signed char, short, int, long, long long)
* Checks for overflow off the top end of the range, in which case a+b must
* be >c. If it overflows off the bottom, a+b < everything in the range. */
if(a+b>c || unlikely(a>=0 && b>=0 && unlikely(a+b<0)))
up vote 0 down
vote /* ----------
* Unsigned comparison (unsigned char, unsigned short, unsigned, etc.)
* Checks to see if the sum wrapped around, since the sum of any two natural
* numbers must be >= both numbers. */
if(a+b>c || unlikely(a+b<a))
/* ----------
* To generate code for the above only when necessary: */
if(sizeof(long)>sizeof(int) ? ((long)a+b>c)
: (a+b>c || unlikely(a>=0 && b>=0 && unlikely(a+b<0)))
Great candidates for macros or inline functions. You can pull the "unlikely"s if you want, but they can help shrink and speed up the code GCC generates.
add comment
Not the answer you're looking for? Browse other questions tagged c or ask your own question. | {"url":"http://stackoverflow.com/questions/50525/c-implicit-casting-and-interger-overflowing-in-the-evaluation-of-expressions","timestamp":"2014-04-18T00:36:27Z","content_type":null,"content_length":"79723","record_id":"<urn:uuid:92b7bef2-584d-4644-a4d3-7eeb413e8906>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Points on x-y plots
can also be called:
Ordered pairs
(Cartesian) coordinates
How do I plot points on a graph?
Plotting geologic data in x-y space
Why should I plot points?
Atmospheric Carbon Dioxide levels measured in the atmosphere above Mauna Loa, Hawaii (modified from Keeling and Whorf, 2003).
In the geosciences, we deal with large volumes of data, both observational and measured. This may be in the form of climate data, rock chemistry, elevation measurements, seismic data, etc. We
generally compile data into tables and when we want to know the relationship of one variable to another, one of the easiest ways to do that is to put that data on a plot. Take the table of data to
the right. Just looking at it (you can click on the image to open a bigger version in a new window), can you tell what the general trend in CO[2] values has been for the past 50 years? Has the trend
changed in the past 10 years? How about the last 5 years of record? Does the data vary from month to month? Are there seasonal cycles? So many questions! And they can all be answered with a simple
x-y plot.
Monthly Mauna Loa CO[2] data (table above) from January 2000-December 2006 plotted on a x-y graph showing trends and patterns.
Bivariate (x-y) graphs help us to visualize and categorize large volumes of data without having to sort through cumbersome data tables. Imagine having to look at the Mauna Loa table as pairs of data
(each month for 48 years makes 576 pairs of data!) and trying to figure out the relationship of one variable to another! Or, even worse, a table that isn't organized by date or in numerical
order...It's much easier to see on a graph that CO
has generally increased over the 7 years shown here. You can also see distinct seasonal changes where carbon dioxide is high in May and low in October when the data is plotted on a graph! The rate of
change seems to be pretty constant over the time period shown here.
Where is graphing used in the geosciences?
Geoscientists use graphs to illustrate all kinds of issues in the science. In introductory geoscience courses, you may be asked to plot data in conjunction with units that deal with:
• rock compositions
• topographic maps
• streams and floods
• and almost any topic that might be covered in your course
□ groundwater
□ geologic hazards
□ glacial advance or retreat
□ climate and climate change
□ deserts and dune migration
□ geologic time and radioactive decay
□ earthquakes and seismic data
□ plate tectonics
If you are struggling to remember how to plot points, this page is for you! Below you will find some simple steps for plotting points on an x-y graph and links to pages to help you with the next
Simple rules for plotting points
Any plot or graph that has two axes is an x-y (or bivariate) plot. One axis (generally, the horizontal one) is the "x-axis" and the other (the vertical one) is considered the "y-axis". But, you can
use any variable for either one, all you need is a data set that has two sets of related data. Below there is an example of a set of data points for how basalt melting temperatures change deeper in
the Earth.
Table showing how melting temperature of basalt changes with depth in the Earth. Modified from Tarbuck et al., 2008, Applications and Investigations in Earth Science, 6th edition,.
When we plot data on a graph, there are several steps that you can follow to make sure that you don't forget anything:
1. Make sure that you have two variables to work with (two columns of data). In the case of the table above, the two variables are depth (km) and basalt melting temperature (°C).
2. Decide which of the variable is going to be represented on the x-axis and which will be on the y-axis. In some cases, you will be provided with a graph that has the axes labeled. A general rule
of thumb (and one that many spreadsheet and graphing programs use) is that numbers in the first column of a table will on the x-axis. However, geologists do not always follow this rule, so make
sure you check.
3. Label the axes on your plot and determine the appropriate scale (if the graph is not already labeled).
4. Begin by plotting the first two pairs of numbers (the top row of numbers).
In the case of the basalt melting temperatures, the first two numbers are (0, 1100). In other words, we are going to plot a point at x=0, y=1100. How do we decide where to put a point? Follow
these simple steps:
1. First, find the value for x on the x-axis. In the case of basalt melting temperatures, x = 0; so, find 0 on the x axis.
2. Next, find the y-value - in this case, y=1100, so find 1100 on the y-axis.
3. Your point should be plotted at the intersection of x=0 and y=1100. (If you draw one line up vertically up from x=0 and another line horizontally from y=1100, where they cross is where you
should put your point!
4. Finally, plot the point on your graph at the appropriate spot.
5. Continue to plot pairs of points from the table (in rows) until you have plotted all the points.
6. Your final graph should have the same number of points as pairs of data in your table.
You can download and print a
sheet with the steps on it here (Acrobat (PDF) 35kB Sep10 08).
Some practice problems
When you are comfortable with the steps shown above, you can move on to the plotting points sample problems, which have worked answers. | {"url":"https://serc.carleton.edu/mathyouneed/graphing/plotting.html","timestamp":"2014-04-20T01:39:22Z","content_type":null,"content_length":"34883","record_id":"<urn:uuid:da709e70-96c8-4c1c-b9b7-5a38a7c492cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
Capacitor question
Hoping someone can double check my understanding on a simple capacitor function in a simple DC circuit question. I'm no electronics guy, just trying to bone up a little bit on small circuit design.
If it helps, I'm just trying to understand a really simple cap charge and discharge circuit I made with a switch, an led, a 9v, and a cap.
1- Once a cap is charged, it functions as an 'open' circuit, correct? Any direct current flowing through a cap is 'leakage' and is not intentional by design, correct?
2- In a "perfect" capacitor, electrons never actually cross from plate to plate (or pin to pin), just a charge from field effect, correct? | {"url":"http://forums.prosoundweb.com/index.php/topic,140340.msg1313642.html","timestamp":"2014-04-20T03:13:10Z","content_type":null,"content_length":"37264","record_id":"<urn:uuid:82fe4fe4-214b-4713-9be6-4c2859116316>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the length of the major axis of the following graph? 6 12 8 10
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/505954dbe4b0cc122893426e","timestamp":"2014-04-17T09:47:39Z","content_type":null,"content_length":"45151","record_id":"<urn:uuid:e0bf1422-3733-49af-9e34-c575976098ca>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Danville, CA Math Tutor
Find a Danville, CA Math Tutor
I am a former university professor and have taught in the undergraduate and graduate levels for more than 10 years. In all my years of teaching, I have consistently received "Excellent"
rating in the university SET (Student Evaluation of Teachers). I am a very patient and conscien...
10 Subjects: including algebra 1, algebra 2, chemistry, geometry
I have been tutoring unofficially since third grade. Now over thirty, I can say with confidence that it is the one job in which I am most successful and most myself. Tutoring is an act of
22 Subjects: including SAT math, English, trigonometry, precalculus
...It needs a strong foundation but the concepts carry through the course that is basically differentiation and integration. After learning the basic skills, application becomes very important.
But the depth of understanding in the course by a student leads to a better prepared thinker on a higher level.
13 Subjects: including algebra 1, algebra 2, calculus, geometry
...I've sung in choral performances. Whether I'm explaining the very basics of using a computer or trading shortcuts and new ways to use tools in design software, using computers and the internet
is a prime example of continuous shared learning and teaching. Study skills and organization are criti...
34 Subjects: including algebra 1, English, writing, Spanish
...In addition he has over 30 years experience as a practicing atmospheric scientist and dispersion modeler. The concepts of Linear Algebra are at the heart of (1) numerical methods (used to
develop and evaluate solution techniques that are used by computers to solve large numerical systems such as...
13 Subjects: including discrete math, differential equations, algebra 1, algebra 2
Related Danville, CA Tutors
Danville, CA Accounting Tutors
Danville, CA ACT Tutors
Danville, CA Algebra Tutors
Danville, CA Algebra 2 Tutors
Danville, CA Calculus Tutors
Danville, CA Geometry Tutors
Danville, CA Math Tutors
Danville, CA Prealgebra Tutors
Danville, CA Precalculus Tutors
Danville, CA SAT Tutors
Danville, CA SAT Math Tutors
Danville, CA Science Tutors
Danville, CA Statistics Tutors
Danville, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/danville_ca_math_tutors.php","timestamp":"2014-04-18T08:43:17Z","content_type":null,"content_length":"23810","record_id":"<urn:uuid:4eb54a98-4dc4-4151-b129-ff7985516de4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
If a,b,c and p are rational numbers, and a+bp^1/3+cp^2/3=0 and p is not a perfect cube. prove that a=b=c=0
First proof (requires knowledge in fields extensions): the cubic rational extension $\mathbb{Q}(\sqrt[3]{p})/\mathbb{Q}$ is of degree 3 since $x^3-p$ is the minimal polynomial of $\sqrt[3]{p}$
(why?), and a basis for it is precisely $\{1,\,\sqrt[3]{p},\,\sqrt[3]{p^2}\}\Longrightarrow$ these three elements are linearly independent over $\mathbb{Q}$ , and this is precisely what had to be
proved. Second proof: $a+b\sqrt[3]{p}+c\sqrt[3]{p^2}=0\iff (a+b\sqrt[3]{p})^3=$$-c^3p^2\iff a^3+3a^2b\sqrt[3]{p}+3ab^2\sqrt[3]{p^2}+b^3p=-c^3p^2$$\Longrightarrow \sqrt[3]{p}(a+b\sqrt[3]{p})=\frac{-c^
3p^2-a^3-b^3p}{3ab}=q$ . Note that $q\,,\,-c^3p^2\in\mathbb{Q}$ (I assume $abeq 0$ ; if some of these two numbers is zero I leave to you to do the little changes necessary to fix the proof). Raise to
the 3rd power the expression $\sqrt[3]{p}(a+b\sqrt[3]{p})=q\Longrightarrow p(-c^3p^2)=q^3$ , and now make some order here and deduce a contradiction to p not being a perfect (rational, of course)
cube. Tonio
note that (i assume ; if some of these two numbers is zero i leave to you to do the little changes necessary to fix the proof). would you please help in fixing it?
Assume $a,b,c,p ~eq 0$ $a+ b \sqrt[3]{p} + c \sqrt[3]{p}^2 = 0$ or $a\sqrt[3]{p} + b\sqrt[3]{p}^2 + cp = 0$ Let $x = \sqrt[3]{p} ~, ~ y= \sqrt[3]{p}^2$ We obtain a system of linear equations : $bx+cy
= -a$ $ax + by = -cp$ Since $a,b,c,p$ are rational numbers , we must have ${x,y} \in \mathbb{Q}$ . However, as you mentioned $p$ is not a perfect cube , a contradiction .
When p is not a perfect square, one can use Eisenstein's criterion to prove x^3-p is irreducible. But it's nontrivial (at least I think) to show x^3-p is irreducible in general case, i.e. merely if p
is not a perfect cube of some element in the ground field ( $\mathbb{Q}$ here). And a warning if anyone want to generalize it: it's not true that if q is not a perfect 4-th power, then x^4-q is
irreducible. e.g. if q=-4, then one can factor x^4+4=(x^2-2x-2)(x^2+2x-2) EDIT: OK I forgot that a cubic polynomial is irreducible iff it doesn't have a root, so in this case it's trivial to show x^
3-p is irreducible. | {"url":"http://mathhelpforum.com/number-theory/142813-find.html","timestamp":"2014-04-18T21:53:22Z","content_type":null,"content_length":"47908","record_id":"<urn:uuid:0662f54f-b93f-4e57-a2aa-b6e72ffdf881>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need to create ePub from PDF with equations, diagrams and tables [Archive] - MobileRead Forums
04-24-2013, 07:14 AM
As the title suggests I would like to create ePub from PDF's which I have licensed. Problem is most of these PDF's are Engineering Books, which means they are full of mathematical equations, tables
and diagrams.
I have used Calibre but it ruins the formatting and doesn't display Mathematical equations at all.
I also tried using Adobe professional ---> Turning PDFs to RTF/HTML but in all cases equations and diagrams are badly affected!
Is there a way of dealing with this? I have heard that ePub3 natively supports mathematical equations? Is there an easy solution for conversion?
Any sort of help would be appreciated. Looking forward. | {"url":"http://www.mobileread.com/forums/archive/index.php/t-211472.html","timestamp":"2014-04-17T01:19:39Z","content_type":null,"content_length":"7062","record_id":"<urn:uuid:d386f184-ed69-4657-9513-5e4ecfb116ac>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH VOCAB (GREEN)
1 decimal point _____ Two lines that cross over each other
2 difference _____ Numbers multiplied together
3 place value _____ The value of a digit based upon the place of the digit
4 product _____ A number that uses place value and a decimal point to show
5 complimentary angle _____ Answer to an addition problem
6 intersecting lines _____ Two angles that have a sum that equals 90 degrees
7 perpendicular _____ To rewrite a whole number to the nearest ten, hundred,
8 protractor _____ Answer to a subtraction problem
9 rounding _____ A number that has only two factors
10 decimal _____ The third place value to the right of the decimal point
11 sum _____ An instrument that measures angles.
12 thousandths _____ A dot used to separate the ones and tenths place value
13 composite number _____ A number with three or more factors
14 factors _____ Answer to a multiplication problem
15 prime number _____ Two lines intersecting and making right angles. | {"url":"http://www.armoredpenguin.com/wordmatch/Data/2012.11/1116/11160044.795.html","timestamp":"2014-04-24T12:04:27Z","content_type":null,"content_length":"15936","record_id":"<urn:uuid:0566e5e5-c3e9-4d8d-9bc3-de4f7d4174ff>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discounted Cash Flow
Discounted Cash Flow Valuation:
The Inputs
Aswath Damodaran
The Key Inputs in DCF Valuation
l Discount Rate
– Cost of Equity, in valuing equity
– Cost of Capital, in valuing the firm
l Cash Flows
– Cash Flows to Equity
– Cash Flows to Firm
l Growth (to get future cash flows)
– Growth in Equity Earnings
– Growth in Firm Earnings (Operating Income)
I. Estimating Discount Rates
DCF Valuation
Estimating Inputs: Discount Rates
l Critical ingredient in discounted cashflow valuation. Errors in
estimating the discount rate or mismatching cashflows and discount
rates can lead to serious errors in valuation.
l At an intuitive level, the discount rate used should be consistent with
both the riskiness and the type of cashflow being discounted.
– Equity versus Firm: If the cash flows being discounted are cash flows to
equity, the appropriate discount rate is a cost of equity. If the cash flows
are cash flows to the firm, the appropriate discount rate is the cost of
– Currency: The currency in which the cash flows are estimated should also
be the currency in which the discount rate is estimated.
– Nominal versus Real: If the cash flows being discounted are nominal cash
flows (i.e., reflect expected inflation), the discount rate should be nominal
I. Cost of Equity
l The cost of equity is the rate of return that investors require to make an
equity investment in a firm. There are two approaches to estimating
the cost of equity;
– a dividend-growth model.
– a risk and return model
l The dividend growth model (which specifies the cost of equity to be
the sum of the dividend yield and the expected growth in earnings) is
based upon the premise that the current price is equal to the value. It
cannot be used in valuation, if the objective is to find out if an asset is
correctly valued.
l A risk and return model, on the other hand, tries to answer two
– How do you measure risk?
– How do you translate this risk measure into a risk premium?
What is Risk?
l Risk, in traditional terms, is viewed as a ‘negative’. Webster’s
dictionary, for instance, defines risk as “exposing to danger or hazard”.
The Chinese symbols for risk are reproduced below:
l The first symbol is the symbol for “danger”, while the second is the
symbol for “opportunity”, making risk a mix of danger and
Risk and Return Models
Step 1: Defining Risk
The risk in an investment can be measured by the variance in actual returns around an
expected return
Riskless Investment Low Risk Investment High Risk Investment
E(R) E(R) E(R)
Step 2: Differentiating between Rewarded and Unrewarded Risk
Risk that is specific to investment (Firm Specific) Risk that affects all investments (Market Risk)
Can be diversified away in a diversified portfolio Cannot be diversified away since most assets
1. each investment is a small proportion of portfolio are affected by it.
2. risk averages out across investments in portfolio
The marginal investor is assumed to hold a “diversified” portfolio. Thus, only market risk will
be rewarded and priced.
Step 3: Measuring Market Risk
The CAPM The APM Multi-Factor Models Proxy Models
If there is If there are no Since market risk affects In an efficient market,
1. no private information arbitrage opportunities most or all investments, differences in returns
2. no transactions cost then the market risk of it must come from across long periods must
the optimal diversified any asset must be macro economic factors. be due to market risk
portfolio includes every captured by betas Market Risk = Risk differences. Looking for
traded asset. Everyone relative to factors that exposures of any variables correlated with
will hold this market portfolio affect all investments. asset to macro returns should then give
Market Risk = Risk Market Risk = Risk economic factors. us proxies for this risk.
added by any investment exposures of any Market Risk =
to the market portfolio: asset to market Captured by the
factors Proxy Variable(s)
Beta of asset relative to Betas of asset relative Betas of assets relative Equation relating
Market portfolio (from to unspecified market to specified macro returns to proxy
a regression) factors (from a factor economic factors (from variables (from a
analysis) a regression) regression)
Comparing Risk Models
Model Expected Return Inputs Needed
CAPM E(R) = Rf + β (Rm- Rf) Riskfree Rate
Beta relative to market portfolio
Market Risk Premium
APM E(R) = Rf + Σj=1 βj (Rj- Rf) Riskfree Rate; # of Factors;
Betas relative to each factor
Factor risk premiums
Multi E(R) = Rf + Σj=1,,N βj (Rj- Rf) Riskfree Rate; Macro factors
factor Betas relative to macro factors
Macro economic risk premiums
Proxy E(R) = a + Σj=1..N bj Yj Proxies
Regression coefficients
Beta’s Properties
l Betas are standardized around one.
l If
β=1 ... Average risk investment
β>1 ... Above Average risk investment
β<1 ... Below Average risk investment
β=0 ... Riskless investment
l The average beta across all investments is one.
Limitations of the CAPM
l 1. The model makes unrealistic assumptions
l 2. The parameters of the model cannot be estimated precisely
– - Definition of a market index
– - Firm may have changed during the 'estimation' period'
l 3. The model does not work well
– - If the model is right, there should be
l * a linear relationship between returns and betas
l * the only variable that should explain returns is betas
– - The reality is that
l * the relationship between betas and returns is weak
l * Other variables (size, price/book value) seem to explain differences in
returns better.
Inputs required to use the CAPM -
(a) the current risk-free rate
(b) the expected return on the market index and
(c) the beta of the asset being analyzed.
Riskfree Rate in Valuation
l The correct risk free rate to use in a risk and return model is
o a short-term Government Security rate (eg. T.Bill), since it has no
default risk or price risk
o a long-term Government Security rate, since it has no default risk
o other: specify ->
The Riskfree Rate
l On a riskfree asset, the actual return is equal to the expected return.
l Therefore, there is no variance around the expected return.
l For an investment to be riskfree, i.e., to have an actual return be equal
to the expected return, two conditions have to be met –
– There has to be no default risk, which generally implies that the security
has to be issued by the government. Note, however, that not all
governments can be viewed as default free.
– There can be no uncertainty about reinvestment rates, which implies that it
is a zero coupon security with the same maturity as the cash flow being
Riskfree Rate in Practice
l The riskfree rate is the rate on a zero coupon government bond
matching the time horizon of the cash flow being analyzed.
l Theoretically, this translates into using different riskfree rates for each
cash flow - the 1 year zero coupon rate for the cash flow in year 2, the
2-year zero coupon rate for the cash flow in year 2 ...
l Practically speaking, if there is substantial uncertainty about expected
cash flows, the present value effect of using time varying riskfree rates
is small enough that it may not be worth it.
The Bottom Line on Riskfree Rates
l Using a long term government rate (even on a coupon bond) as the
riskfree rate on all of the cash flows in a long term analysis will yield a
close approximation of the true value.
l For short term analysis, it is entirely appropriate to use a short term
government security rate as the riskfree rate.
l If the analysis is being done in real terms (rather than nominal terms)
use a real riskfree rate, which can be obtained in one of two ways –
– from an inflation-indexed government bond, if one exists
– set equal, approximately, to the long term real growth rate of the economy
in which the valuation is being done.
Riskfree Rate in Valuation
l You are valuing a Brazilian company in nominal U.S. dollars. The
correct riskfree rate to use in this valuation is:
o the U.S. treasury bond rate
o the Brazilian C-Bond rate (the rate on dollar denominated Brazilian
long term debt)
o the local riskless Brazilian Real rate (in nominal terms)
o the real riskless Brazilian Real rate
Measurement of the risk premium
l The risk premium is the premium that investors demand for investing
in an average risk investment, relative to the riskfree rate.
l As a general proposition, this premium should be
– greater than zero
– increase with the risk aversion of the investors in that market
– increase with the riskiness of the “average” risk investment
Risk Aversion and Risk Premiums
l If this were the capital market line, the risk premium would be a
weighted average of the risk premiums demanded by each and every
l The weights will be determined by the magnitude of wealth that each
investor has. Thus, Warren Bufffet’s risk aversion counts more
towards determining the “equilibrium” premium than yours’ and mine.
l As investors become more risk averse, you would expect the
“equilibrium” premium to increase.
Estimating Risk Premiums in Practice
l Survey investors on their desired risk premiums and use the average
premium from these surveys.
l Assume that the actual premium delivered over long time periods is
equal to the expected premium - i.e., use historical data
l Estimate the implied premium in today’s asset prices.
The Survey Approach
l Surveying all investors in a market place is impractical.
l However, you can survey a few investors (especially the larger
investors) and use these results. In practice, this translates into surveys
of money managers’ expectations of expected returns on stocks over
the next year.
l The limitations of this approach are:
– there are no constraints on reasonability (the survey could produce
negative risk premiums or risk premiums of 50%)
– they are extremely volatile
– they tend to be short term; even the longest surveys do not go beyond one
The Historical Premium Approach
l This is the default approach used by most to arrive at the premium to
use in the model
l In most cases, this approach does the following
– it defines a time period for the estimation (1926-Present, 1962-Present....)
– it calculates average returns on a stock index during the period
– it calculates average returns on a riskless security over the period
– it calculates the difference between the two
– and uses it as a premium looking forward
l The limitations of this approach are:
– it assumes that the risk aversion of investors has not changed in a
systematic way across time. (The risk aversion may change from year to
year, but it reverts back to historical averages)
– it assumes that the riskiness of the “risky” portfolio (stock index) has not
changed in a systematic way across time.
Historical Average Premiums for the United
Historical period Stocks - T.Bills Stocks - T.Bonds
Arith Geom Arith Geom
1926-1996 8.76% 6.95% 7.57% 5.91%
1962-1996 5.74% 4.63% 5.16% 4.46%
1981-1996 10.34% 9.72% 9.22% 8.02%
What is the right premium?
What about historical premiums for other
l Historical data for markets outside the United States tends to be sketch
and unreliable.
l Ibbotson, for instance, estimates the following premiums for major
markets from 1970-1990
Country Period Stocks Bonds Risk Premium
Australia 1970-90 9.60% 7.35% 2.25%
Canada 1970-90 10.50% 7.41% 3.09%
France 1970-90 11.90% 7.68% 4.22%
Germany 1970-90 7.40% 6.81% 0.59%
Italy 1970-90 9.40% 9.06% 0.34%
Japan 1970-90 13.70% 6.96% 6.74%
Netherlands 1970-90 11.20% 6.87% 4.33%
Switzerland 1970-90 5.30% 4.10% 1.20%
UK 1970-90 14.70% 8.45% 6.25%
Risk Premiums for Latin America
Country Rating Risk Premium
Argentina BBB 5.5% + 1.75% = 7.25%
Brazil BB 5.5% + 2% = 7.5%
Chile AA 5.5% + 0.75% = 6.25%
Columbia A+ 5.5% + 1.25% = 6.75%
Mexico BBB+ 5.5% + 1.5% = 7%
Paraguay BBB- 5.5% + 1.75% = 7.25%
Peru B 5.5% + 2.5% = 8%
Uruguay BBB 5.5% + 1.75% = 7.25%
Risk Premiums for Asia
Country Rating Risk Premium
China BBB+ 5.5% + 1.5% = 7.00%
Indonesia BBB 5.5% + 1.75% = 7.25%
India BB+ 5.5% + 2.00% = 7.50%
Japan AAA 5.5% + 0.00% = 5.50%
Korea AA- 5.5% + 1.00% = 6.50%
Malaysia A+ 5.5% + 1.25% = 6.75%
Pakistan B+ 5.5% + 2.75% = 8.25%
Phillipines BB+ 5.5% + 2.00% = 7.50%
Singapore AAA 5.5% + 0.00% = 7.50%
Taiwan AA+ 5.5% + 0.50% = 6.00%
Thailand A 5.5% + 1.35% = 6.85%
Implied Equity Premiums
l If we use a basic discounted cash flow model, we can estimate the
implied risk premium from the current level of stock prices.
l For instance, if stock prices are determined by the simple Gordon
Growth Model:
– Value = Expected Dividends next year/ (Required Returns on Stocks -
Expected Growth Rate)
– Plugging in the current level of the index, the dividends on the index and
expected growth rate will yield a “implied” expected return on stocks.
Subtracting out the riskfree rate will yield the implied premium.
l The problems with this approach are:
– the discounted cash flow model used to value the stock index has to be the
right one.
– the inputs on dividends and expected growth have to be correct
– it implicitly assumes that the market is currently correctly valued
Implied Risk Premiums in the US
Implied Risk Premium: U.S. Equities
Implied Premium (%)
Historical and Implied Premiums
l Assume that you use the historical risk premium of 5.5% in doing your
discounted cash flow valuations and that the implied premium in the
market is only 2.5%. As you value stocks, you will find
o more under valued than over valued stocks
o more over valued than under valued stocks
o about as many under and over valued stocks
Estimating Beta
l The standard procedure for estimating betas is to regress stock returns
(Rj) against market returns (Rm) -
Rj = a + b Rm
– where a is the intercept and b is the slope of the regression.
l The slope of the regression corresponds to the beta of the stock, and
measures the riskiness of the stock.
Beta Estimation in Practice
Estimating Expected Returns: September 30,
l Disney’s Beta = 140
l Riskfree Rate = 7.00% (Long term Government Bond rate)
l Risk Premium = 5.50% (Approximate historical premium)
l Expected Return = 7.00% + 1.40 (5.50%) = 14.70%
The Implications of an Expected Return
l Which of the following statements best describes what the expected
return of 14.70% that emerges from the capital asset pricing model is
telling you as an investor?
o This stock is a good investment since it will make a higher return than
the market (which is expected to make 12.50%)
o If the CAPM is the right model for risk and the beta is correctly
measured, this stock can be expected to make 14.70% over the long
o This stock is correctly valued
o None of the above
How investors use this expected return
l If the stock is correctly valued, the CAPM is the right model for risk
and the beta is correctly estimated, an investment in Disney stock can
be expected to earn a return of 14.70% over the long term.
l Investors in stock in Disney
– need to make 14.70% over time to break even
– will decide to invest or not invest in Disney based upon whether they think
they can make more or less than this hurdle rate
How managers use this expected return
l • Managers at Disney
– need to make at least 14.70% as a return for their equity investors to break
– this is the hurdle rate for projects, when the investment is analyzed from
an equity standpoint
l In other words, Disney’s cost of equity is 14.70%.
Beta Estimation and Index Choice
A Few Questions
l The R squared for Nestle is very high and the standard error is very
low, at least relative to U.S. firms. This implies that this beta estimate
is a better one than those for U.S. firms.
o True
o False
l The beta for Nestle is 0.97. This is the appropriate measure of risk to
what kind of investor (What has to be in his or her portfolio for this
beta to be an adequate measure of risk?)
l If you were an investor in primarily U.S. stocks, would this be an
appropriate measure of risk?
Nestle: To a U.S. Investor
Nestle: To a Global Investor
Telebras: The Index Effect Again
Brahma: The Contrast
Beta Differences
BETA AS A MEASURE OF RISK
High Risk
Minupar: Beta = 1.72
Beta > 1
Above-average Risk 9 stocks
Eletrobras: Beta = 1.22
Telebras: Beta = 1.11
Petrobras: Beta = 1.04
Beta = 1
Average Stock
Brahma: Beta = 0.84
CVRD: Beta=0.64
Brahma: Beta=0.50 169 stocks
Beta < 1
Below-average Risk
Government bonds: Beta = 0
Low Risk
The Problem with Regression Betas
l When analysts use the CAPM, they generally assume that the
regression is the only way to estimate betas.
l Regression betas are not necessarily good estimates of the “true” beta
because of
– the market index may be narrowly defined and dominated by a few stocks
– even if the market index is well defined, the standard error on the beta
estimate is usually large leading to a wide range for the true beta
– even if the market index is well defined and the standard error on the beta
is low, the regression estimate is a beta for the period of the analysis. To
the extent that the company has changed over the time period (in terms of
business or financial leverage), this may not be the right beta for the next
period or periods.
Solutions to the Regression Beta Problem
l Modify the regression beta by
– changing the index used to estimate the beta
– adjusting the regression beta estimate, by bringing in information about
the fundamentals of the company
l Estimate the beta for the firm using
– the standard deviation in stock prices instead of a regression against an
– accounting earnings or revenues, which are less noisy than market prices.
l Estimate the beta for the firm from the bottom up without employing
the regression technique. This will require
– understanding the business mix of the firm
– estimating the financial leverage of the firm
l Use an alternative measure of market risk that does not need a
Modified Regression Betas
l Adjusted Betas: When one or a few stocks dominate an index, the betas
might be better estimated relative to an equally weighted index. While
this approach may eliminate some of the more egregious problems
associated with indices dominated by a few stocks, it will still leave us
with beta estimates with large standard errors.
l Enhanced Betas: Adjust the beta to reflect the differences between
firms on other financial variables that are correlated with market risk
– Barra, which is one of the most respected beta estimation services in the
world, employs this technique. They adjust regression betas for
differences in a number of accounting variables.
– The variables to adjust for, and the extent of the adjustment, are obtained
by looking at variables that are correlated with returns over time.
Adjusted Beta Calculation: Brahma
l Consider the earlier regression done for Brahma against the Bovespa.
Given the problems with the Bovespa, we could consider running the
regression against alternative market indices:
Index Beta R squared Notes
Bovespa 0.23 0.07
I-Senn 0.26 0.08 Market Cap Wtd.
S&P 0.51 0.06 Could use ADR
MSCI 0.39 0.04 Could use ADR
l For many large non-US companies, with ADRs listed in the US, the
betas can be estimated relative to the U.S. or Global indices.
Betas and Fundamentals
l The earliest studies in the 1970s combined industry and company-
fundamental factors to predict betas.
l Income statement and balance sheet variables are important predictors
of beta
l The following is a regression relating the betas of NYSE and AMEX
stocks in 1996 to four variables - dividend yield, standard deviation in
operating income, market capitalization and book debt/equity ratio
yielded the following.
BETA = 0. 7997 + 2.28 Std Dev in Operating Income- 3.23 Dividend
Yield + 0.21 Debt/Equity Ratio - .000005 Market Capitalization
Market Cap: measured as market value of equity (in millions)
Using the Fundamentals to Estimate Betas
l To use these fundamentals to estimate a beta for Disney, for instance,
you would estimate the independent variables for Disney
– Standard Deviation in Operating Income = 20.60%
– Dividend Yield = 0.62%
– Debt/Equity Ratio (Book) = 77%
– Market Capitalization of Equity = $ 54,471(in mils)
l The estimated beta for Disney is:
BETA = 0. 7997 + 2.28 (0.206)- 3.23 (0.0062)+ 0.21 (0.77) - .000005
(54,471) = 1.14
l Alternatively, the regression beta could have been adjusted for
differences on these fundamentals.
Other Measures of Market Risk
l Relative Standard Deviation
= Standard Deviation of Firm j / Average Standard Deviation across all
– This approach steers clear of the index definition problems that betas
have, but is based on the implicit assumption that total risk (which is what
standard deviation measures) and market risk are highly correlated.
l Accounting Betas
– If the noise in market data is what makes the betas unreliable, estimates of
betas can be obtained using accounting earnings.
– This approach can be used for non-traded firms as well, but suffers from a
serious data limitation problem.
Relative Volatility
High Risk
Serrano: Rel Vol = 2.20
Beta > 1
Above-average Risk Sifco: Rel Vol = 1.50 83 stocks
Celesc: Rel Vol = 1.25
Usimanas: Rel Vol = 1.11
Acesita: Rel Vol = 1.01
Beta = 1
Average Stock
Telebras: Rel Vol= 0.84
Bradesco: Rel Vol=0.70
Brahma:Rel Vol=0.50 86 stocks
Beta < 1
Below-average Risk
Government bonds: Rel Vol = 0
Low Risk
Estimating Cost of Equity from Relative
Standard Deviation: Brazil
l The analysis is done in real terms
l The riskfree rate has to be a real riskfree rate.
– We will use the expected real growth rate in the Brazilian economy of
approximately 5%
– This assumption is largely self correcting since the expected real growth
rate in the valuation is also assumed to be 5%
l The risk premium used, based upon the country rating, is 7.5%.
– Should this be adjusted as we go into the future?
l Estimated Cost of Equity
– Company Beta Cost of Equity
Telebras 0.87 5%+0.87 (7.5%) = 11.53%
CVRD 0.85 5%+0.85 (7.5%) = 11.38%
Aracruz 0.72 5%+0.72 (7.5%) = 10.40%
Accounting Betas
l An accounting beta is estimated by regressing the changes in earnings
of a firm against changes in earnings on a market index.
∆ EarningsFirm = a + b ∆ EarningsMarket Index
l The slope of the regression is the accounting beta for this firm.
l The key limitation of this approach is that accounting data is not
measured very often. Thus, the regression’s power is limited by the
absence of data.
Estimating an Accounting Beta
Year Change in Disney EPS Change in S&P 500 Earnings
1980 -7.69% -2.10%
1981 -4.17% 6.70%
1982 -17.39% -45.50%
1983 11.76% 37.00%
1984 68.42% 41.80%
1985 -10.83% -11.80%
1986 43.75% 7.00%
1987 54.35% 41.50%
1988 33.80% 41.80%
1989 34.74% 2.60%
1990 17.19% -18.00%
1991 -20.00% -47.40%
1992 26.67% 64.50%
1993 7.24% 20.00%
1994 25.15% 25.30%
1995 24.02% 15.50%
1996 -11.86% 24.00%
The Accounting Beta
l Regressing Disney EPS against S&P 500 earnings, we get:
EarningsDisney = 0.10 + 0.54 ∆ EarningsS&P 500
l The accounting beta for Disney is 0.54.
Accounting Betas: The Effects of Smoothing
l Accountants tend to smooth out earnings, relative to value and market
prices. As a consequence, we would expect accounting betas for most
firms to be
o closer to zero
o less than one
o close to one
o greater than one
Alternative Measures of Market Risk
l Proxy Variables for Risk
– Use variables such as market capitalization as proxies for market risk
– Regression can be used to make relationship between return and these
variables explicit.
l Qualitative Risk Measures
– Divide firms into risk classes
– Assign a different cost of equity for each risk class
Using Proxy Variables for Risk
l Fama and French, in much quoted study on the efficacy (or the lack) of
the CAPM, looked at returns on stocks between 1963 and 1990. While
they found no relationship with differences in betas, they did find a
strong relationship between size, book/market ratios and returns.
l A regression off monthly returns on stocks on the NYSE, using data
from 1963 to 1990:
Rt = 1.77% - 0.0011 ln (MV) + 0.0035 ln (BV/MV)
MV = Market Value of Equity
BV/MV = Book Value of Equity / Market Value of Equity
l To get the cost of equity for Disney, you would plug in the values into
this regression. Since Disney has a market value of $ 54,471 million
and a book/market ratio of 0.30 its monthly return would have been:
Rt = .0177 - .0011 ln (54,471) + 0.0035 (.3) = 0.675% a month or 8.41% a
Korea: Proxies for Risk and Returns
Stock Returns: Low, Medium and High Classes: 1982-1993
MV of Equity
Bottom-up Betas
l The other approach to estimate betas is to build them up from the base,
by understanding the business that a firm is in, and estimating a beta
based upon this understanding.
l To use this approach, we need to
– deconstruct betas, and understand the fundamental determinants of betas
(i.e., why are betas high for some firms and low for others?)
– come up with a way of linking the fundamental characteristics of an asset
with a beta that can be used in valuation.
Determinant 1: Product or Service Type
l The beta value for a firm depends upon the sensitivity of the demand
for its products and services and of its costs to macroeconomic factors
that affect the overall market.
– Cyclical companies have higher betas than non-cyclical firms
– Firms which sell more discretionary products will have higher betas than
firms that sell less discretionary products
Determinant 2: Operating Leverage Effects
l Operating leverage refers to the proportion of the total costs of the firm
that are fixed.
l Other things remaining equal, higher operating leverage results in
greater earnings variability which in turn results in higher betas.
Measures of Operating Leverage
Fixed Costs Measure = Fixed Costs / Variable Costs
l This measures the relationship between fixed and variable costs. The
higher the proportion, the higher the operating leverage.
EBIT Variability Measure = % Change in EBIT / % Change in Revenues
l This measures how quickly the earnings before interest and taxes
changes as revenue changes. The higher this number, the greater the
operating leverage.
The Effects of Firm Actions on Beta
l When Robert Goizueta became CEO of Coca Cola, he proceeded to
move most of the bottling plants and equipment to Coca Cola Bottling,
which trades as an independent company (with Coca Cola as a primary
but not the only investor). Which of the following consequences would
you predict for Coca Cola’s beta?
o Coke’s beta should go up
o Coke’e beta should go down
o Coke’s beta should be unchanged
l Would your answer have been any different if Coca Cola had owned
100% of the bottling plants?
Determinant 3: Financial Leverage
l As firms borrow, they create fixed costs (interest payments) that make
their earnings to equity investors more volatile.
l This increased earnings volatility increases the equity beta
Equity Betas and Leverage
l The beta of equity alone can be written as a function of the unlevered
beta and the debt-equity ratio
βL = βu (1+ ((1-t)D/E)
βL = Levered or Equity Beta
βu = Unlevered Beta
t = Corporate marginal tax rate
D = Market Value of Debt
E = Market Value of Equity
Betas and Leverage: Hansol Paper, a Korean
Paper Company
– Current Beta = 1.03
– Current Debt/Equity Ratio = 950/346=2.74
– Current Unlevered Beta = 1.03/(1+2.74(1-.3)) = 0.35
Debt Ratio D/E Ratio Beta Cost of Equity
0.00% 0.00% 0.35 14.29%
10.00% 11.11% 0.38 14.47%
20.00% 25.00% 0.41 14.69%
30.00% 42.86% 0.46 14.98%
40.00% 66.67% 0.52 15.36%
50.00% 100.00% 0.60 15.90%
60.00% 150.00% 0.74 16.82%
70.00% 233.33% 1.00 18.50%
80.00% 400.00% 1.50 21.76%
90.00% 900.00% 3.00 31.51%
Bottom-up versus Top-down Beta
l The top-down beta for a firm comes from a regression
l The bottom up beta can be estimated by doing the following:
– Find out the businesses that a firm operates in
– Find the unlevered betas of other firms in these businesses
– Take a weighted (by sales or operating income) average of these
unlevered betas
– Lever up using the firm’s debt/equity ratio
l The bottom up beta will give you a better estimate of the true beta
– the standard error of the beta from the regression is high (and) the beta for
a firm is very different from the average for the business
– the firm has reorganized or restructured itself substantially during the
period of the regression
– when a firm is not traded
Decomposing Disney’s Beta
Business Unlevered D/E Ratio Levered Riskfree Risk Cost of
Beta Beta Rate Premium Equity
Creative Content 1.25 22.23% 1.43 7.00% 5.50% 14.85%
Retailing 1.5 22.23% 1.71 7.00% 5.50% 16.42%
Broadcasting 0.9 22.23% 1.03 7.00% 5.50% 12.65%
Theme Parks 1.1 22.23% 1.26 7.00% 5.50% 13.91%
Real Estate 0.7 22.23% 0.80 7.00% 5.50% 11.40%
Disney 1.09 22.23% 1.25 7.00% 5.50% 13.85%
Choosing among Alternative Beta Estimates:
Approach Beta Comments
Regression 1.40 Company has changed significantly
Modified Regression 1.15 Used MSCI as market index
Enhanced Beta 1.14 Fundamental regression has low R2
Accounting Beta 0.54 Only 16 observations
Proxy Variable 0.25* Uses market cap and book/market
Bottom-up Beta 1.25 Reflects current business and
* Estimated from expected return on 8.41%.
Which beta would you choose?
l Given the alternative estimates of beta for Disney, which one would
you choose to use in your valuation?
o Regression
o Modified Regression
o Enhanced Beta
o Accounting Beta
o Proxy Variable
o Bottom-up Beta
Estimating a Bottom-up Beta for Hansol Paper
l Hansol paper, like most Korean firms in 1996, had an extraordinary
amount of debt on its balance sheet. The beta does not reflect this risk
adequately, since it is estimated using the Korean index.
l To estimate a bottom up beta, we looked at paper and pulp firms:
Comparable Firms Average D/E Ratio Unlevered
(# of firms) Beta Beta
Asian Paper & Pulp (5) 0.92 65.00% 0.65
U.S. Paper and Pulp (45) 0.85 35.00% 0.69
Global Paper & Pulp (187) 0.80 50.00% 0.61
Unlevered Beta for Paper and Pulp is 0.61
l Using the current debt equity ratio of 274%, the beta can be estimated:
Beta for Hansol Paper = 0.61 (1 + (1-.3) (2.74)) = 1.78
Estimating Betas: More Examples
Company Approach Used Beta
ABN Amro Comparable Firms 0.99
European Banks
Nestle Bottom-up Firms 0.85
Large, brand name food companies
Titan Watches Regression against BSE 0.94
Checked against global watch manufacturers
Brahma Bottom-up Beta 0.80
Global Beverage Firms
Amazon.com Bottom-up Beta 1.80
Internet Companies (Why not bookstores?)
Measuring Cost of Capital
l It will depend upon:
– (a) the components of financing: Debt, Equity or Preferred stock
– (b) the cost of each component
l In summary, the cost of capital is the cost of each component weighted
by its relative market value.
WACC = ke (E/(D+E)) + kd (D/(D+E))
The Cost of Debt
l The cost of debt is the market interest rate that the firm has to pay on
its borrowing. It will depend upon three components-
(a) The general level of interest rates
(b) The default premium
(c) The firm's tax rate
What the cost of debt is and is not..
• The cost of debt is
– the rate at which the company can borrow at today
– corrected for the tax benefit it gets for interest payments.
Cost of debt = kd = Interest Rate on Debt (1 - Tax rate)
• The cost of debt is not
– the interest rate at which the company obtained the debt it has on its
Estimating the Cost of Debt
l If the firm has bonds outstanding, and the bonds are traded, the yield to
maturity on a long-term, straight (no special features) bond can be
used as the interest rate.
l If the firm is rated, use the rating and a typical default spread on bonds
with that rating to estimate the cost of debt.
l If the firm is not rated,
– and it has recently borrowed long term from a bank, use the interest rate
on the borrowing or
– estimate a synthetic rating for the company, and use the synthetic rating to
arrive at a default spread and a cost of debt
l The cost of debt has to be estimated in the same currency as the cost of
equity and the cash flows in the valuation.
Estimating Synthetic Ratings
l The rating for a firm can be estimated using the financial
characteristics of the firm. In its simplest form, the rating can be
estimated from the interest coverage ratio
Interest Coverage Ratio = EBIT / Interest Expenses
l For Hansol Paper, for instance
Interest Coverage Ratio = 109,569/85,401 = 1.28
– Based upon the relationship between interest coverage ratios and ratings,
we would estimate a rating of B- for Hansol Paper.
l For Brahma,
Interest Coverage Ratio = 413/257 = 1.61
– Based upon the relationship between interest coverage ratios and ratings,
we would estimate a rating of B for Brahma
Interest Coverage Ratios, Ratings and Default
If Interest Coverage Ratio is Estimated Bond Rating Default Spread
> 8.50 AAA 0.20%
6.50 - 8.50 AA 0.50%
5.50 - 6.50 A+ 0.80%
4.25 - 5.50 A 1.00%
3.00 - 4.25 A– 1.25%
2.50 - 3.00 BBB 1.50%
2.00 - 2.50 BB 2.00%
1.75 - 2.00 B+ 2.50%
1.50 - 1.75 B 3.25%
1.25 - 1.50 B– 4.25%
0.80 - 1.25 CCC 5.00%
0.65 - 0.80 CC 6.00%
0.20 - 0.65 C 7.50%
< 0.20 D 10.00%
Examples of Cost of Debt calculation
Company Approach Used Cost of Debt
Disney Rating & Default spread 7% + 0.50% = 7.50%
(in U.S. Dollars)
Hansol Paper Synthetic Rating based upon 12% + 4.25% = 16.25%
Interest coverage ratio (in nominal WN)
Nestle Rating & Default spread 4.25%+0.25%= 4.50%
(in Swiss Francs)
ABN Amro YTM on 10-year straight 5.40% (in NLG)
Titan Watches Recent Borrowing 13.5% (in nominal Rs.)
Brahma Synthetic Rating based upon 5% + 3.25% = 8.25%
interest coverage ratio (in real BR)
Calculate the weights of each component
l Use target/average debt weights rather than project-specific weights.
l Use market value weights for debt and equity.
– The cost of capital is a measure of how much it would cost you to go out
and raise the financing to acquire the business you are valuing today.
Since you have to pay market prices for debt and equity, the cost of capital
is better estimated using market value weights.
– Book values are often misleading and outdated.
Estimating Market Value Weights
l Market Value of Equity should include the following
– Market Value of Shares outstanding
– Market Value of Warrants outstanding
– Market Value of Conversion Option in Convertible Bonds
l Market Value of Debt is more difficult to estimate because few firms
have only publicly traded debt. There are two solutions:
– Assume book value of debt is equal to market value
– Estimate the market value of debt from the book value
– For Disney, with book value of $12.342 million, interest expenses of $479
million, and a current cost of borrowing of 7.5% (from its rating)
(1 − 1
Estimated MV of Disney Debt = (1.075) 12,342
479 + = $11,180
.075 (1.075)3
Estimating Cost of Capital: Disney
l Equity
– Cost of Equity = 13.85%
– Market Value of Equity = $54.88 Billion
– Equity/(Debt+Equity ) = 82%
l Debt
– After-tax Cost of debt = 7.50% (1-.36) = 4.80%
– Market Value of Debt = $ 11.18 Billion
– Debt/(Debt +Equity) = 18%
l Cost of Capital = 13.85%(.82)+4.80%(.18) = 12.22%
Book Value and Market Value
l If you use book value weights for debt and equity to calculate cost of
capital in the United States, and value a firm on the basis of this cost of
capital, you will generally end up
o over valuing the firm
o under valuing the firm
o neither
Estimating Cost of Capital: Hansol Paper
l Equity
– Cost of Equity = 23.57% (with beta of 1.78)
– Market Value of Equity = 23000*15.062= 346,426 Million
– Equity/(Debt+Equity ) = 26.72%
l Debt
– After-tax Cost of debt = 16.25% (1-.3) = 11.38%
– Market Value of Debt = 949,862 Million
– Debt/(Debt +Equity) = 73.28%
l Cost of Capital = 23.57%(.267)+11.38%(.733) = 14.63%
Firm Value , WACC and Optimal Debt ratios
l Objective:
– A firm should pick a debt ratio that minimizes its cost of capital.
l Why?:
– Because if operating cash flows are held constant, minimizing the Cost of
Capital maximizes Firm Value.
Mechanics of Cost of Capital Estimation
1. Estimate the Cost of Equity at different levels of debt:
Equity will become riskier -> Cost of Equity will increase.
2. Estimate the Cost of Debt at different levels of debt:
Default risk will go up and bond ratings will go down as debt goes up -> Cost
of Debt will increase.
3. Estimate the Cost of Capital at different levels of debt
4. Calculate the effect on Firm Value and Stock Price.
Disney: Debt Ratios, Cost of Capital and Firm
Debt Beta Cost of Cov Rating Rate AT WACC Firm Value
Ratio Equity Ratio Rate
0% 1.09 13.00% ∞ AAA 7.20% 4.61% 13.00% $53,842
10%1.17 13.43% 12.44 AAA 7.20% 4.61% 12.55% $58,341
20%1.27 13.96% 5.74 A+ 7.80% 4.99% 12.17% $62,650
30%1.39 14.65% 3.62 A- 8.25% 5.28% 11.84% $66,930
40%1.56 15.56% 2.49 BB 9.00% 5.76% 11.64% $69,739
50%1.79 16.85% 1.75 B 10.25% 6.56% 11.70% $68,858
60%2.14 18.77% 1.24 CCC 12.00% 7.68% 12.11% $63,325
70%2.72 21.97% 1.07 CCC 12.00% 7.68% 11.97% $65,216
80%3.99 28.95% 0.93 CCC 12.00% 7.97% 12.17% $62,692
90%8.21 52.14% 0.77 CC 13.00% 9.42% 13.69% $48,160
l Firm Value = Current Firm Value + Firm Value (WACC(old) - WACC(new))/(WACC(new)-g)
Hansol Paper: Debt Ratios, Cost of Capital and
Firm Value
Debt Ratio Beta Cost of Int. Cov. Rating Interest AT Cost WACC Firm Value
Equity Ratio Rate
0.00% 0.35 14.29% ∞ AAA 12.30% 8.61% 14.29% 988,162 WN
10.00% 0.38 14.47% 6.87 AAA 12.30% 8.61% 13.89% 1,043,287 WN
20.00% 0.41 14.69% 3.25 A+ 13.00% 9.10% 13.58% 1,089,131 WN
30.00% 0.46 14.98% 2.13 A 13.25% 9.28% 13.27% 1,138,299 WN
40.00% 0.52 15.36% 1.51 BBB 14.00% 9.80% 13.14% 1,160,668 WN
50.00% 0.60 15.90% 1.13 B+ 15.00% 10.50% 13.20% 1,150,140 WN
60.00% 0.74 16.82% 0.88 B 16.00% 11.77% 13.79% 1,056,435 WN
70.00% 0.99 18.43% 0.75 B 16.00% 12.38% 14.19% 1,001,068 WN
80.00% 1.50 21.76% 0.62 B- 17.00% 13.83% 15.42% 861,120 WN
90.00% 3.00 31.51% 0.55 B- 17.00% 14.18% 15.92% 813,775 WN
l Firm Value = Current Firm Value + Firm Value (WACC(old) - WACC(new))/(WACC(new)-g)
II. Estimating Cash Flows
DCF Valuation
Steps in Cash Flow Estimation
l Estimate the current earnings of the firm
– If looking at cash flows to equity, look at earnings after interest expenses -
i.e. net income
– If looking at cash flows to the firm, look at operating earnings after taxes
l Consider how much the firm invested to create future growth
– If the investment is not expensed, it will be categorized as capital
expenditures. To the extent that depreciation provides a cash flow, it will
cover some of these expenditures.
– Increasing working capital needs are also investments for future growth
l If looking at cash flows to equity, consider the cash flows from net
debt issues (debt issued - debt repaid)
Earnings Checks
l When estimating cash flows, we invariably start with accounting
earnings. To the extent that we start with accounting earnings in a base
year, it is worth considering the following questions:
– Are basic accounting standards being adhered to in the calculation of the
– Are the base year earnings skewed by extraordinary items - profits or
losses? (Look at earnings prior to extraordinary items)
– Are the base year earnings affected by any accounting rule changes made
during the period? (Changes in inventory or depreciation methods can
have a material effect on earnings)
– Are the base year earnings abnormally low or high? (If so, it may be
necessary to normalize the earnings.)
– How much of the accounting expenses are operating expenses and how
much are really expenses to create future growth?
Three Ways to Think About Earnings
Revenues Revenues * Capital Invested *
- Operating Expenses Operating Margin Pre-tax ROC
= Operating Income = Operating Income = Operating Income
Capital Invested = Book Value of Debt + Book Value of Equity
Pre-tax ROC = EBIT / (Book Value of Debt + Book Value of Equity)
The equity shortcuts would be as follows:
Revenues Revenues * Equity Invested *
- Operating Expenses Net Margin Return on Equity
- Interest Expenses = Net Income = Net Income
= Taxable Income
- Taxes
= Net Income
Dividends and Cash Flows to Equity
l In the strictest sense, the only cash flow that an investor will receive
from an equity investment in a publicly traded firm is the dividend that
will be paid on the stock.
l Actual dividends, however, are set by the managers of the firm and
may be much lower than the potential dividends (that could have been
paid out)
– managers are conservative and try to smooth out dividends
– managers like to hold on to cash to meet unforeseen future contingencies
and investment opportunities
l When actual dividends are less than potential dividends, using a model
that focuses only on dividends will under state the true value of the
equity in a firm.
Measuring Potential Dividends
l Some analysts assume that the earnings of a firm represent its potential
dividends. This cannot be true for several reasons:
– Earnings are not cash flows, since there are both non-cash revenues and
expenses in the earnings calculation
– Even if earnings were cash flows, a firm that paid its earnings out as
dividends would not be investing in new assets and thus could not grow
– Valuation models, where earnings are discounted back to the present, will
over estimate the value of the equity in the firm
l The potential dividends of a firm are the cash flows left over after the
firm has made any “investments” it needs to make to create future
growth and net debt repayments (debt repayments - new debt issues)
– The common categorization of capital expenditures into discretionary and
non-discretionary loses its basis when there is future growth built into the
Measuring Investment Expenditures
l Accounting rules categorize expenses into operating and capital
expenses. In theory, operating expenses are expenses that create
earnings only in the current period, whereas capital expenses are those
that will create earnings over future periods as well. Operating
expenses are netted against revenues to arrive at operating income.
– There are anomalies in the way in which this principle is applied.
Research and development expenses are treated as operating expenses,
when they are in fact designed to create products in future periods.
l Capital expenditures, while not shown as operating expenses in the
period in which they are made, are depreciated or amortized over their
estimated life. This depreciation and amortization expense is a non-
cash charge when it does occur.
l The net cash flow from capital expenditures can be then be written as:
Net Capital Expenditures = Capital Expenditures - Depreciation
The Working Capital Effect
l In accounting terms, the working capital is the difference between
current assets (inventory, cash and accounts receivable) and current
liabilities (accounts payables, short term debt and debt due within the
next year)
l A cleaner definition of working capital from a cash flow perspective is
the difference between non-cash current assets (inventory and accounts
receivable) and non-debt current liabilties (accounts payable
l Any investment in this measure of working capital ties up cash.
Therefore, any increases (decreases) in working capital will reduce
(increase) cash flows in that period.
l When forecasting future growth, it is important to forecast the effects
of such growth on working capital needs, and building these effects
into the cash flows.
Estimating Cash Flows: FCFE
l Cash flows to Equity for a Levered Firm
Net Income
+ Depreciation & Amortization
= Cash flows from Operations to Equity Investors
- Preferred Dividends
- Capital Expenditures
- Working Capital Needs (Changes in Non-cash Working Capital)
- Principal Repayments
+ Proceeds from New Debt Issues
= Free Cash flow to Equity
Estimating FCFE when Leverage is Stable
Net Income
- (1- δ) (Capital Expenditures - Depreciation)
- (1- δ) Working Capital Needs
= Free Cash flow to Equity
δ = Debt/Capital Ratio
For this firm,
– Proceeds from new debt issues = Principal Repayments + d (Capital
Expenditures - Depreciation + Working Capital Needs)
Estimating FCFE: Disney
l Net Income=$ 1533 Million
l Capital spending = $ 1,746 Million
l Depreciation per Share = $ 1,134 Million
l Non-cash Working capital Change = $ 477 Million
l Debt to Capital Ratio = 23.83%
l Estimating FCFE (1997):
Net Income $1,533 Mil
- (Cap. Exp - Depr)*(1-DR) $465.90
Chg. Working Capital*(1-DR) $363.33
= Free CF to Equity $ 704 Million
Dividends Paid $ 345 Million
FCFE and Leverage: Is this a free lunch?
Debt Ratio and FCFE: Disney
0% 10% 20% 30% 40% 50% 60% 70% 80% 90%
Debt Ratio
FCFE and Leverage: The Other Shoe Drops
Debt Ratio and Beta
0% 10% 20% 30% 40% 50% 60% 70% 80% 90%
Debt Ratio
Leverage, FCFE and Value
l In a discounted cash flow model, increasing the debt/equity ratio will
generally increase the expected free cash flows to equity investors over
future time periods and also the cost of equity applied in discounting
these cash flows. Which of the following statements relating leverage
to value would you subscribe to?
o Increasing leverage will increase value because the cash flow effects
will dominate the discount rate effects
o Increasing leverage will decrease value because the risk effect will be
greater than the cash flow effects
o Increasing leverage will not affect value because the risk effect will
exactly offset the cash flow effect
o Any of the above, depending upon what company you are looking at
and where it is in terms of current leverage
Estimating FCFE: Brahma
l Net Income (1996) = 325 Million BR
l Capital spending (1996) = 396 Million
l Depreciation (1996) = 183 Million BR
l Non-cash Working capital Change (1996) = 12 Million BR
l Debt Ratio = 43.48%
l Estimating FCFE (1996):
Earnings per Share 325.00 Million BR
- (Cap Ex-Depr) (1-DR) = (396-183)(1-.4348) = 120.39 Million BR
- Change in Non-cash WC (1-DR) = 12 (1-.4348) = 6.78 Million BR
Free Cashflow to Equity 197.83 Million Br
Dividends Paid 232.00 Million BR
Cashflow to Firm
Claimholder Cash flows to claimholder
Equity Investors Free Cash flow to Equity
Debt Holders Interest Expenses (1 - tax rate)
+ Principal Repayments
- New Debt Issues
Preferred Stockholders Preferred Dividends
Firm = Free Cash flow to Firm =
Equity Investors Free Cash flow to Equity
+ Debt Holders + Interest Expenses (1- tax rate)
+ Preferred Stockholders + Principal Repayments
- New Debt Issues
+ Preferred Dividends
A Simpler Approach
EBIT ( 1 - tax rate)
- (Capital Expenditures - Depreciation)
- Change in Working Capital
= Cash flow to the firm
l The calculation starts with after-tax operating income, where the entire
operating income is assumed to be taxed at the marginal tax rate
l Where are the tax savings from interest payments in this cash flow?
Estimating FCFF: Disney
l EBIT = $5,559 Million Tax Rate = 36%
l Capital spending = $ 1,746 Million
l Depreciation = $ 1,134 Million
l Non-cash Working capital Change = $ 477 Million
l Estimating FCFF
EBIT (1-t) $ 3,558
- Net Capital Expenditures $ 612
- Change in WC $ 477
= FCFF $ 2,469 Million
Estimating FCFF: Hansol Paper
l EBIT (1995) = 109,569 Million WN
l Capital spending (1995) =326,385 Million WN
l Depreciation (1995) = 45,000 Million WN
l Non-cash Working capital Change (1995) = 37,000 WN
l Estimating FCFF (1995)
Current EBIT * (1 - tax rate) = 109,569 (1-.3) =76,698 Million WN
- (Capital Spending - Depreciation) =282,385
- Change in Working Capital = 37,000
Current FCFF = - 242,687 Million WN
Negative FCFF and Implications for Value
l A firm which has a negative FCFF is a bad investment and not worth
o True
o False
l If true, explain why.
l If false, explain under what conditions it can be a valuable firm.
III. Estimating Growth
DCF Valuation
Ways of Estimating Growth in Earnings
l Look at the past
– The historical growth in earnings per share is usually a good starting point
for growth estimation
l Look at what others are estimating
– Analysts estimate growth in earnings per share for many firms. It is useful
to know what their estimates are.
l Look at fundamentals
– Ultimately, all growth in earnings can be traced to two fundamentals -
how much the firm is investing in new projects, and what returns these
projects are making for the firm.
I. Historical Growth in EPS
l Historical growth rates can be estimated in a number of different ways
– Arithmetic versus Geometric Averages
– Simple versus Regression Models
l Historical growth rates can be sensitive to
– the period used in the estimation
l In using historical growth rates, the following factors have to be
– how to deal with negative earnings
– the effect of changing size
Disney: Arithmetic versus Geometric Growth
Year EPS Growth Rate
1990 1.50
1991 1.20 -20.00%
1992 1.52 26.67%
1993 1.63 7.24%
1994 2.04 25.15%
1995 2.53 24.02%
1996 2.23 -11.86%
Arithmetic Average = 8.54%
Geometric Average = (2.23/1.50) (1/6) – 1 = 6.83% (6 years of growth)
l The arithmetic average will be higher than the geometric average rate
l The difference will increase with the standard deviation in earnings
Disney: The Effects of Altering Estimation
Year EPS Growth Rate
1991 1.20
1992 1.52 26.67%
1993 1.63 7.24%
1994 2.04 25.15%
1995 2.53 24.02%
1996 2.23 -11.86%
Taking out 1990 from our sample, changes the growth rates materially:
Arithmetic Average from 1991 to 1996 = 14.24%
Geometric Average = (2.23/1.20)(1/5) = 13.19% (5 years of growth)
Disney: Linear and Log-Linear Models for
Year Year Number EPS ln(EPS)
1990 1 $ 1.50 0.4055
1991 2 $ 1.20 0.1823
1992 3 $ 1.52 0.4187
1993 4 $ 1.63 0.4886
1994 5 $ 2.04 0.7129
1995 6 $ 2.53 0.9282
1996 7 $ 2.23 0.8020
l EPS = 1.04 + 0.19 ( t): EPS grows by $0.19 a year
Growth Rate = $0.19/$1.81 = 10.5% ($1.81: Average EPS from 90-96)
l ln(EPS) = 0.1375 + 0.1063 (t): Growth rate approximately 10.63%
A Test
l You are trying to estimate the growth rate in earnings per share at
Time Warner from 1996 to 1997. In 1996, the earnings per share was a
deficit of $0.05. In 1997, the expected earnings per share is $ 0.25.
What is the growth rate?
o -600%
o +600%
o +120%
o Cannot be estimated
Dealing with Negative Earnings
l When the earnings in the starting period are negative, the growth rate
cannot be estimated. (0.30/-0.05 = -600%)
l There are three solutions:
– Use the higher of the two numbers as the denominator (0.30/0.25 = 120%)
– Use the absolute value of earnings in the starting period as the
denominator (0.30/0.05=600%)
– Use a linear regression model and divide the coefficient by the average
l When earnings are negative, the growth rate is meaningless. Thus,
while the growth rate can be estimated, it does not tell you much about
the future.
The Effect of Size on Growth: Callaway Golf
Year Net Profit Growth Rate
1990 1.80
1991 6.40 255.56%
1992 19.30 201.56%
1993 41.20 113.47%
1994 78.00 89.32%
1995 97.70 25.26%
1996 122.30 25.18%
Geometric Average Growth Rate = 102%
Extrapolation and its Dangers
Year Net Profit
1996 $ 122.30
1997 $ 247.05
1998 $ 499.03
1999 $ 1,008.05
2000 $ 2,036.25
2001 $ 4,113.23
l If net profit continues to grow at the same rate as it has in the past 6
years, the expected net income in 5 years will be $ 4.113 billion.
Propositions about Historical Growth
l Proposition 1: And in today already walks tomorrow.
l Proposition 2: You cannot plan the future by the past
l Proposition 3: Past growth carries the most information for firms
whose size and business mix have not changed during the estimation
period, and are not expected to change during the forecasting period.
l Proposition 4: Past growth carries the least information for firms in
transition (from small to large, from one business to another..)
II. Analyst Forecasts of Growth
l While the job of an analyst is to find under and over valued stocks in
the sectors that they follow, a significant proportion of an analyst’s
time (outside of selling) is spent forecasting earnings per share.
– Most of this time, in turn, is spent forecasting earnings per share in the
next earnings report
– While many analysts forecast expected growth in earnings per share over
the next 5 years, the analysis and information (generally) that goes into
this estimate is far more limited.
l Analyst forecasts of earnings per share and expected growth are
widely disseminated by services such as Zacks and IBES, at least for
U.S companies.
How good are analysts at forecasting growth?
l Analysts forecasts of EPS tend to be closer to the actual EPS than
simple time series models, but the differences tend to be small
Study Time Period Analyst Forecast Error Time Series Model
Collins & Hopwood Value Line Forecasts 31.7% 34.1%
Brown & Rozeff Value Line Forecasts 28.4% 32.2%
Fried & Givoly Earnings Forecaster 16.4% 19.8%
l The advantage that analysts have over time series models
– tends to decrease with the forecast period (next quarter versus 5 years)
– tends to be greater for larger firms than for smaller firms
– tends to be greater at the industry level than at the company level
l Forecasts of growth (and revisions thereof) tend to be highly correlated
across analysts.
Are some analysts more equal than others?
l A study of All-America Analysts (chosen by Institutional Investor)
found that
– There is no evidence that analysts who are chosen for the All-America
Analyst team were chosen because they were better forecasters of
earnings. (Their median forecast error in the quarter prior to being chosen
was 30%; the median forecast error of other analysts was 28%)
– However, in the calendar year following being chosen as All-America
analysts, these analysts become slightly better forecasters than their less
fortunate brethren. (The median forecast error for All-America analysts is
2% lower than the median forecast error for other analysts)
– Earnings revisions made by All-America analysts tend to have a much
greater impact on the stock price than revisions from other analysts
– The recommendations made by the All America analysts have a greater
impact on stock prices (3% on buys; 4.7% on sells). For these
recommendations the price changes are sustained, and they continue to
rise in the following period (2.4% for buys; 13.8% for the sells).
The Five Deadly Sins of an Analyst
l Tunnel Vision: Becoming so focused on the sector and valuations
within the sector that they lose sight of the bigger picture.
l Lemmingitis:Strong urge felt by analysts to change recommendations
& revise earnings estimates when other analysts do the same.
l Stockholm Syndrome(shortly to be renamed the Bre-X syndrome):
Refers to analysts who start identifying with the managers of the firms
that they are supposed to follow.
l Factophobia (generally is coupled with delusions of being a famous
story teller): Tendency to base a recommendation on a “story” coupled
with a refusal to face the facts.
l Dr. Jekyll/Mr.Hyde: Analyst who thinks his primary job is to bring in
investment banking business to the firm.
Propositions about Analyst Growth Rates
l Proposition 1: There if far less private information and far more
public information in most analyst forecasts than is generally claimed.
l Proposition 2: The biggest source of private information for analysts
remains the company itself which might explain
– why there are more buy recommendations than sell recommendations
(information bias and the need to preserve sources)
– why there is such a high correlation across analysts forecasts and revisions
– why All-America analysts become better forecasters than other analysts
after they are chosen to be part of the team.
l Proposition 3: There is value to knowing what analysts are forecasting
as earnings growth for a firm. There is, however, danger when they
agree too much (lemmingitis) and when they agree to little (in which
case the information that they have is so noisy as to be useless).
III. Fundamental Growth Rates
Investment Current Return on
in Existing Investment on Current
X Projects
= Earnings
$ 1000 12% $120
Investment Next Period’s Investment Return on
in Existing Return on in New Investment on Next
X Investment
+ Projects
X New Projects
= Period’s
Investment Change in Investment Return on
in Existing ROI from in New Investment on
X current to next
period: 0%
+ Projects
X New Projects
Change in Earnings
= $ 12
Growth Rate Derivations
In the special case where ROI on existing projects remains unchanged and is equal to the ROI on new projects
Investment in New Projects Change in Earnings
Current Earnings X Return on Investment = Current Earnings
100 $12
120 X 12% = $120
Reinvestment Rate X Return on Investment = Growth Rate in Earnings
83.33% X 12% = 10%
in the more general case where ROI can change from period to period, this can be expanded as follows:
Investment in Existing Projects*(Change in ROI) + New Projects (ROI) Change in Earnings
Investment in Existing Projects* Current ROI = Current Earnings
For instance, if the ROI increases from 12% to 13%, the expected growth rate can be written as follows:
$1,000 * (.13 - .12) + 100 (13%) $23
$ 1000 * .12 = $120
= 19.17%
Expected Long Term Growth in EPS
l When looking at growth in earnings per share, these inputs can be cast as
Reinvestment Rate = Retained Earnings/ Current Earnings = Retention Ratio
Return on Investment = ROE = Net Income/Book Value of Equity
l In the special case where the current ROE is expected to remain unchanged
gEPS = Retained Earningst-1/ NIt-1 * ROE
= Retention Ratio * ROE
= b * ROE
l Proposition 1: The expected growth rate in earnings for a company
cannot exceed its return on equity in the long term.
Estimating Expected Growth in EPS: ABN
l Current Return on Equity = 15.79%
l Current Retention Ratio = 1 - DPS/EPS = 1 - 1.13/2.45 = 53.88%
l If ABN Amro can maintain its current ROE and retention ratio, its
expected growth in EPS will be:
Expected Growth Rate = 0.5388 (15.79%) = 8.51%
Expected ROE changes and Growth
l Assume now that ABN Amro’s ROE next year is expected to increase
to 17%, while its retention ratio remains at 53.88%. What is the new
expected long term growth rate in earnings per share?
l Will the expected growth rate in earnings per share next year be
greater than, less than or equal to this estimate?
o greater than
o less than
o equal to
Changes in ROE and Expected Growth
l When the ROE is expected to change,
gEPS= b *ROEt+1 +{(ROEt+1– ROEt)BV of Equityt)/ROEt (BV of Equityt)}
l Proposition 2: Small changes in ROE translate into large changes in
the expected growth rate.
– Corollary: The larger the existing asset base, the bigger the effect on
earnings growth of changes in ROE.
l Proposition 3: No firm can, in the long term, sustain growth in
earnings per share from improvement in ROE.
– Corollary: The higher the existing ROE of the company (relative to the
business in which it operates) and the more competitive the business in
which it operates, the smaller the scope for improvement in ROE.
Changes in ROE: ABN Amro
l Assume now that ABN’s expansion into Asia will push up the ROE to
17%, while the retention ratio will remain 53.88%. The expected
growth rate in that year will be:
gEPS= b *ROEt+1 + (ROEt+1– ROEt)(BV of Equityt )/ ROEt (BV of Equityt)
= 16.83%
l Note that 1.21% improvement in ROE translates into amost a doubling
of the growth rate from 8.51% to 16.83%.
ROE and Leverage
l ROE = ROC + D/E (ROC - i (1-t))
ROC = (Net Income + Interest (1 - tax rate)) / BV of Capital
= EBIT (1- t) / BV of Capital
D/E = BV of Debt/ BV of Equity
i = Interest Expense on Debt / BV of Debt
t = Tax rate on ordinary income
l Note that BV of Assets = BV of Debt + BV of Equity.
Decomposing ROE: Brahma
l Real Return on Capital = 687 (1-.32) / (1326+542+478) = 19.91%
– This is assumed to be real because both the book value and income are
inflation adjusted.
l Debt/Equity Ratio = (542+478)/1326 = 0.77
l After-tax Cost of Debt = 8.25% (1-.32) = 5.61% (Real BR)
l Return on Equity = ROC + D/E (ROC - i(1-t))
19.91% + 0.77 (19.91% - 5.61%) = 30.92%
Decomposing ROE: Titan Watches
l Return on Capital = 713 (1-.25)/(1925+2378+1303) = 9.54%
l Debt/Equity Ratio = (2378 + 1303)/1925 = 1.91
l After-tax Cost of Debt = 13.5% (1-.25) = 10.125%
l Return on Equity = ROC + D/E (ROC - i(1-t))
9.54% + 1.91 (9.54% - 10.125%) = 8.42%
Expected Growth in EBIT And Fundamentals
l When looking at growth in operating income, the definitions are
Reinvestment Rate = (Net Capital Expenditures + Change in WC)/EBIT(1-t)
Return on Investment = ROC = EBIT(1-t)/(BV of Debt + BV of Equity)
l Reinvestment Rate and Return on Capital
gEBIT = (Net Capital Expenditures + Change in WC)/EBIT(1-t) * ROC
= Reinvestment Rate * ROC
l Proposition 4: No firm can expect its operating income to grow over
time without reinvesting some of the operating income in net capital
expenditures and/or working capital.
l Proposition 5: The net capital expenditure needs of a firm, for a given
growth rate, should be inversely proportional to the quality of its
No Net Capital Expenditures and Long Term
l You are looking at a valuation, where the terminal value is based upon
the assumption that operating income will grow 3% a year forever, but
there are no net cap ex or working capital investments being made
after the terminal year. When you confront the analyst, he contends
that this is still feasible because the company is becoming more
efficient with its existing assets and can be expected to increase its
return on capital over time. Is this a reasonable explanation?
o Yes
o No
l Explain.
Estimating Growth in EBIT: Disney
l Reinvestment Rate = 50%
l Return on Capital =18.69%
l Expected Growth in EBIT =.5(18.69%) = 9.35%
Estimating Growth in EBIT: Hansol Paper
l Net Capital Expenditures = (150,000-45000) = 105,000 Million WN
(I normalized capital expenditures to account for lumpy investments)
l Change in Working Capital = 1000 Million WN
l Reinvestment Rate = (105,000+1,000)/(109,569*.7) = 138.20%
l Return on Capital = 6.76%
l Expected Growth in EBIT = 6.76% (1.382) = 9.35%
A Profit Margin View of Growth
l The relationship between growth and return on investment can also be
framed in terms of profit margins:
l In the case of growth in EPS
Growth in EPS = Retention Ratio * ROE
= Retention Ratio*Net Income/Sales * Sales/BV of Equity
= Retention Ratio * Net Margin * Equity Turnover Ratio
Growth in EBIT = Reinvestment Rate * ROC
= Reinvestment Rate * EBIT(1-t)/ BV of Capital
= Reinvestment Rate * AT Operating Margin * Capital Turnover Ratio
IV. Growth Patterns
Discounted Cashflow Valuation
Stable Growth and Terminal Value
l When a firm’s cash flows grow at a “constant” rate forever, the present
value of those cash flows can be written as:
Value = Expected Cash Flow Next Period / (r - g)
r = Discount rate (Cost of Equity or Cost of Capital)
g = Expected growth rate
l This “constant” growth rate is called a stable growth rate and cannot
be higher than the growth rate of the economy in which the firm
l While companies can maintain high growth rates for extended periods,
they will all approach “stable growth” at some point in time.
l When they do approach stable growth, the valuation formula above
can be used to estimate the “terminal value” of all cash flows beyond.
Growth Patterns
l A key assumption in all discounted cash flow models is the period of
high growth, and the pattern of growth during that period. In general,
we can make one of three assumptions:
– there is no high growth, in which case the firm is already in stable growth
– there will be high growth for a period, at the end of which the growth rate
will drop to the stable growth rate (2-stage)
– there will be high growth for a period, at the end of which the growth rate
will decline gradually to a stable growth rate(3-stage)
Stable Growth 2-Stage Growth 3-Stage Growth
Determinants of Growth Patterns
l Size of the firm
– Success usually makes a firm larger. As firms become larger, it becomes
much more difficult for them to maintain high growth rates
l Current growth rate
– While past growth is not always a reliable indicator of future growth, there
is a correlation between current growth and future growth. Thus, a firm
growing at 30% currently probably has higher growth and a longer
expected growth period than one growing 10% a year now.
l Barriers to entry and differential advantages
– Ultimately, high growth comes from high project returns, which, in turn,
comes from barriers to entry and differential advantages.
– The question of how long growth will last and how high it will be can
therefore be framed as a question about what the barriers to entry are, how
long they will stay up and how strong they will remain.
Stable Growth and Fundamentals
l The growth rate of a firm is driven by its fundamentals - how much it
reinvests and how high project returns are. As growth rates approach
“stability”, the firm should be given the characteristics of a stable
growth firm.
Model High Growth Firms usually Stable growth firms usually
DDM 1. Pay no or low dividends 1. Pay high dividends
2. Have high risk 2. Have average risk
3. Earn high ROC 3. Earn ROC closer to WACC
FCFE/ 1. Have high net cap ex 1. Have lower net cap ex
FCFF 2. Have high risk 2. Have average risk
3. Earn high ROC 3. Earn ROC closer to WACC
4. Have low leverage 4. Have leverage closer to
industry average
The Dividend Discount Model: Estimating
Stable Growth Inputs
l Consider the example of ABN Amro. Based upon its current return on
equity of 15.79% and its retention ratio of 53.88%, we estimated a
growth in earnings per share of 8.51%.
l Let us assume that ABN Amro will be in stable growth in 5 years. At
that point, let us assume that its return on equity will be closer to the
average for European banks of 15%, and that it will grow at a nominal
rate of 5% (Real Growth + Inflation Rate in NV)
l The expected payout ratio in stable growth can then be estimated as
Stable Growth Payout Ratio = 1 - g/ ROE = 1 - .05/.15 = 66.67%
g = b (ROE)
b = g/ROE
Payout = 1- b
The FCFE/FCFF Models: Estimating Stable
Growth Inputs
l To estimate the net capital expenditures in stable growth, consider the
growth in operating income that we assumed for Disney. The
reinvestment rate was assumed to be 50%, and the return on capital
was assumed to be 18.69%, giving us an expected growth rate of
l In stable growth (which will occur 10 years from now), assume that
Disney will have a return on capital of 16%, and that its operating
income is expected to grow 5% a year forever.
Reinvestment Rate = Growth in Operating Income/ROC = 5/16
l This reinvestment rate includes both net cap ex and working capital.
Estimated EBIT (1-t) in year 11 = $ 9,098 Million
Reinvestment = $9,098(5/16) = $2,843 Million
Net Capital Expenditures = Reinvestment - Change in Working Capital11
= $ 2,843m -105m = 2,738m | {"url":"http://www.docstoc.com/docs/45583921/Discounted-Cash-Flow","timestamp":"2014-04-17T20:36:18Z","content_type":null,"content_length":"153770","record_id":"<urn:uuid:d4e80d42-f261-41e1-8ff7-185d23ba285f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
Blind beamforming for non-Gaussian signals
Results 1 - 10 of 404
- Neural Computing Surveys , 2001
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Cited by 1492 (93 self)
Add to MetaCart
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example,
principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation
is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this
paper, we survey the existing theory and methods for ICA. 1
, 2003
"... Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis, aiming at recovering unobserved signals or `sources' from
observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual i ..."
Cited by 390 (4 self)
Add to MetaCart
Blind signal separation (BSS) and independent component analysis (ICA) are emerging techniques of array processing and data analysis, aiming at recovering unobserved signals or `sources' from
observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual independence between the signals. The weakness of the assumptions makes it a powerful
approach but requires to venture beyond familiar second order statistics. The objective of this paper is to review some of the approaches that have been recently developed to address this exciting
problem, to show how they stem from basic principles and how they relate to each other.
- IEEE Trans. on Signal Processing , 1996
"... Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source
separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Eq ..."
Cited by 381 (10 self)
Add to MetaCart
Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source
separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence) . The EASI algorithms are based on the idea
of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the
performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized)
distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the
effectiveness of the proposed ap...
- J. Neurosci. Methods
"... Abstract: We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and
/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event i ..."
Cited by 307 (32 self)
Add to MetaCart
Abstract: We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or
averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus
multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), Independent Component Analysis (ICA) and time/frequency decompositions including
channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to
interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow
users to customize data processing using command history and interactive ‘pop ’ functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write
custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A ‘plug-in ’ facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is
freely available
, 1999
"... An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was
achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a pro ..."
Cited by 202 (21 self)
Add to MetaCart
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by
using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have suband super-Gaussian
regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the
stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to easily separate 20 sources with a
variety of source distributions. Applied to high-dimensional data from electroencephalographic (EEG) recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker
electrical ...
, 1997
"... Separation of sources consists in recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is
available: the linear mixture should be `blindly' processed. This typically occurs in narrow-band array processi ..."
Cited by 201 (6 self)
Add to MetaCart
Separation of sources consists in recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is available:
the linear mixture should be `blindly' processed. This typically occurs in narrow-band array processing applications when the array manifold is unknown or distorted. This paper introduces a new
source separation technique exploiting the time coherence of the source signals. In contrast to other previously reported techniques, the proposed approach relies only on stationary second-order
statistics, being based on a joint diagonalization of a set of covariance matrices. Asymptotic performance analysis of this method is carried out; some numerical simulations are provided to
illustrate the effectiveness of the proposed method. I. Introduction I N many situations of practical interest, one has to process multidimensional observations of the form: x(t) = y(t) + n(t) = As
(t) + n(t); (1) i.e. x...
- IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS , 2003
"... This paper presents an overview of recent progress in the area of multiple-input–multiple-output (MIMO) space–time coded wireless systems. After some background on the research leading to the
discovery of the enormous potential of MIMO wireless links, we highlight the different classes of technique ..."
Cited by 199 (5 self)
Add to MetaCart
This paper presents an overview of recent progress in the area of multiple-input–multiple-output (MIMO) space–time coded wireless systems. After some background on the research leading to the
discovery of the enormous potential of MIMO wireless links, we highlight the different classes of techniques and algorithms proposed which attempt to realize the various benefits of MIMO including
spatial multiplexing and space–time coding schemes. These algorithms are often derived and analyzed under ideal independent fading conditions. We present the state of the art in channel modeling and
measurements, leading to a better understanding of actual MIMO gains. Finally, the paper addresses current questions regarding the integration of MIMO links in practical wireless systems and
"... This article considers high-order measures of independence for the independent component analysis problem and discusses the class of Jacobi algorithms for their optimization. Several
implementations are discussed. We compare the proposed approaches with gradient-based techniques from the algorithmic ..."
Cited by 187 (4 self)
Add to MetaCart
This article considers high-order measures of independence for the independent component analysis problem and discusses the class of Jacobi algorithms for their optimization. Several implementations
are discussed. We compare the proposed approaches with gradient-based techniques from the algorithmic point of view and also on a set of biomedical data.
- IEEE Trans. Signal Processing , 2000
"... Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) non-stationarity of the sources to achieve separation. We introduce two objective
functions based on the likelihood and on mutual information in a simple Gaussian non stationary model and w ..."
Cited by 126 (11 self)
Add to MetaCart
Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) non-stationarity of the sources to achieve separation. We introduce two objective
functions based on the likelihood and on mutual information in a simple Gaussian non stationary model and we show how they can be optimized, off-line or on-line, by simple yet remarkably efficient
algorithms (one is based on a novel joint diagonalization procedure, the other on a Newton-like technique). The paper also includes (limited) numerical experiments and a discussion contrasting
non-Gaussian and non-stationary models. 1. INTRODUCTION The aim of this paper is to develop a blind source separation procedure adapted to source signals with time varying intensity (such as speech
signals). For simplicity, we shall restrict ourselves to the simplest mixture model: X(t) = AS(t) (1) where X(t) = [X 1 (t) XK (t)] T is the vector of observations (at time t), A is a fixed unknown K
K inver...
- SIAM J. Mat. Anal. Appl , 1996
"... . Simultaneous diagonalization of several matrices can be implemented by a Jacobi-like technique. This note gives the required Jacobi angles in close form. Key words. Simultaneous
diagonalization, Jacobi iterations, eigenvalues, eigenvectors, structured eigenvalue problem. AMS subject classificati ..."
Cited by 120 (3 self)
Add to MetaCart
. Simultaneous diagonalization of several matrices can be implemented by a Jacobi-like technique. This note gives the required Jacobi angles in close form. Key words. Simultaneous diagonalization,
Jacobi iterations, eigenvalues, eigenvectors, structured eigenvalue problem. AMS subject classifications. 65F15, 65-04. Introduction. Simultaneous diagonalization of several commuting matrices has
been recently considered in [1], mainly motivated by stability and convergence concerns. Exact or approximate simultaneous diagonalization was also independently introduced as a solution to a
statistical identification problem [2] (see [3] for a later paper in English). The simultaneous diagonalization algorithm described in these papers is an extension of the Jacobi technique: a joint
diagonality criterion is iteratively optimized under plane rotations. The purpose of this note is to complement [1] by giving a close form expression for the optimal Jacobi angles. 1. Jacobi angles
in close form. C... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=29122","timestamp":"2014-04-18T01:43:46Z","content_type":null,"content_length":"39773","record_id":"<urn:uuid:6937cfaa-9574-4627-bbb8-841f30e6a2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
A second look at situational pitching
This is sort of a follow-up to two articles. The seemingly obvious one is is my article from last week, about how situational pitching might affect a pitcher’s ability to prevent (or allow) runs. In
it, I advanced a theory that we could better assess a pitcher’s ability by taking into account how he pitches in particular situations, not just by looking at his overall performance.
How to determine this, however? To do that, I had to lean on a previous article on how well we can predict ERA using current methods. (While referencing that research, I discovered a minor bug that
doesn’t change the conclusions. For more detail, see the References section.)
The test
So I conducted a repeat of the previous test of persistance, if you will, in pitching metrics. I created another metric based on the skeleton of tRA, but more akin to RE24. In other words, every
event was given a different linear weights value depending on the number of outs and the runners on base. That way, a pitcher can be credited for the added value he creates (or fails to create) by
timing his pitching performance based upon the situation.
And… it doesn’t seem to help any:
ERA FIP xFIP tERA tERA24
1 Year 2.32 1.92 1.78 1.95 2.03
Multiyear 1.35 1.13 1.06 1.15 1.20
500+ IP 0.47 0.46 0.46 0.48 0.49
This looks at the Root Mean Square Error of split halves (in other words, even and odd numbered events) from 2003-2008. The first row does split halves for each pitcher by season; the second looks at
splits for a pitcher over the entire five-year sample; the third looks at only those players with at least 500 IP in the first split half.
And… nothing. At no point does looking at situational pitching splits add any predictive value.
At this point, I am required to note that it’s very possible I am simply chasing phantoms. But I’m not so sure just yet.
Another possible approach?
By breaking everything down into 24 base-out states, it’s possible that I’m splitting things too fine; in other words, adding more noise than I am signal. That doesn’t necessarily mean the signal
isn’t there, just that we have to work harder to find it.
(And of course, the harder we work to find the signal, the less meaningful it probably is in the aggregate. But if it is there, there are probably some individual pitchers to whom this skill matters
more than a little.)
Let’s circle back around to the value of a walk for a minute. The value of any event on offense is based upon three things:
1. Getting on
2. Moving other runners over
3. Avoiding an out
By timing walks so that they occur more frequently with first base open, a pitcher can reduce the number of runners they move over. As Peter Jensen pointed out in the comments last week, it’s very
possible that pitchers have no real skill here, as there are simply some times where the situation dictates a walk, regardless of the pitcher’s skill at issuing walks in general. Call these
intentional unintentional walks. A pitcher with an overall low walk rate will issue more of these relative to his total walks, though, even if this is not a “skill” in any sense.
So let’s consider a pitcher’s rate of walks with first base open, compared to his walks in general. How well does that persist? Let’s look at split half correlation, weighted, from 1989 to 1999. And
let’s compare it to the year to year BABIP over that same time period.
There is slightly more correlation between our open-walk rate (.24) than BABIP (.21), in substantially fewer chances (52 walks for the average pitcher in that time period, compared to 470 balls in
play). What this tells us is that there is a stronger talent in controlling one’s situational walk rates than there is in controlling one’s BABIP, but it’s not any easier to pick up this talent due
to the vastly smaller number of observations.
Again, there doesn’t seem to be a whole lot here. Unfortunately, it doesn’t seem that we can discover a whole lot about a pitcher by looking at these sorts of things (as much as I thought and perhaps
hoped there would be). I’m not out, but I’m certainly down, shall we say.
References & Resources
The other note is that yes, I should be testing against RA, not ERA. I recieved a very nasty e-mail informing me of this, as a matter of fact. And I agree with the point in general. So why am I
persisting in my error? Because on this scale, it really doesn’t matter what one I test against. For any individual player, yes, it matters which you use. But when you aggregate like this, the
problems of ERA versus RA really disappear, and most people are more comfortable “thinking in ERA,” if you will.
1. Jon said...
Colin, what’s .01 difference of correlation mean? If I read things correctly, it looks like once you get up to 500 innings, there’s not much difference between any of these measurements. But how
many pitchers qualify for that row?
Leave a Reply Cancel reply | {"url":"http://www.hardballtimes.com/a-second-look-at-situational-pitching/","timestamp":"2014-04-17T22:06:00Z","content_type":null,"content_length":"46947","record_id":"<urn:uuid:89976258-59ee-41e1-8e0c-ab38cc1e0da1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
I know it is probably going to sound outlandish but... where does space finally stop
Where does space finally stop being space. Could there be an end to it if it is a vacuum? If there isn't an end then could there be different spots with universes? Different types of being entirely?
Maybe non atomic or no hands or feet like? Could there be people human or non human out there that would resemble humanity? Does anyone live being the hard finish we have of 100 years and past it?
Could they be technological or horse sword and outhouse? Could they know of wealth we don't or sciences or other things we just can not conceive of? Do you think in our evolution and basic guessing
about how to live that we would impress them? Should we care?
erm its probably outlandish anyway.
Wikipedia actually has a fairly good answer to this:
But basically, yes there is a finite volume of space that holds matter produced in the big bang at the start of the universe.
this confuses me and I will read the wiki article in a bit but if space was space as in nothing there then how did the big bang which produced matter produce space?
Given that c is the maximum speed limit of the universe, the edge of our universe is roughly 13Gigayears (however old the universe actually is) away from us in any direction. Anything past that has
no meaningful definition to us, as we could never, even given infinite time, get past that.
At least as I understand it.
DommiD wrote:
this confuses me and I will read the wiki article in a bit but if space was space as in nothing there then how did the big bang which produced matter produce space?
Space and time are the same thing in our four dimensional Universe. What we have is an expanding bubble of dimensions. You can't exactly ask "what's 10 km outside this bubble", because you can't
measure km without space or time. The metre is defined by how much space is crossed in a unit of time. Without them, you don't have any distance. So even thinking about what can happen (can't happen
without time) outside this bubble (can't be an outside without distance) is just silly.
As the Universe cannot expand infinitely quickly (or everything embedded in spacetime would also be spread out to zero density), it follows that the Universe is finite in volume. If it is finite in
space, it must also be finite in time.
Space isn't nothing, it's our distances and our times. We take them for granted as we can't grab or eat them, but they're even more real than a big solid rock, because that rock is 99.98% nothing and
spacetime is 100% everything.
Frennzy wrote:
Given that c is the maximum speed limit of the universe, the edge of our universe is roughly 13Gigayears (however old the universe actually is) away from us in any direction.
No, the universe is much larger, as that wikipedia page explains:
The region visible from Earth (the observable Universe) is a sphere with a radius of about 47 billion light years...
The current estimate of the Universe's age is 13.72±0.12 billion years old, based on observations of the cosmic microwave background radiation.[39] Independent estimates (based on measurements such
as radioactive dating) agree at 13–15 billion years.[40] The Universe has not been the same at all times in its history; for example, the relative populations of quasars and galaxies have changed and
space itself appears to have expanded. This expansion accounts for how Earth-bound scientists can observe the light from a galaxy 30 billion light years away, even if that light has traveled for only
13 billion years; the very space between them has expanded. This expansion is consistent with the observation that the light from distant galaxies has been redshifted; the photons emitted have been
stretched to longer wavelengths and lower frequency during their journey. The rate of this spatial expansion is accelerating, based on studies of Type Ia supernovae and corroborated by other data.
Since the universe increases it's diameter more rapidly then light can pass through it, the universe itself is much larger then just the age and speed of light would suggest.
My word, co-moving distances!
I hate the connotation (religion) that is going to come off with this statement but... Can scientists figure there is anything greater.
Like, is time here the ultimate of what time is. Or could there be a greater time that coexists with our time?
I don't know whether we'll ever reach the same degree of confidence that we have in, say evolution, but yes, scientists have some interesting ideas about the existence of 9 or 10 dimensions, of which
time is one. There's also a fascination notion that there are an infinite number of universes, in which every possible configuration of mass exists.
I'm halfway through Brian Greene's The Hidden Reality, his latest recap of the current state of string theory. You might want to give it a try; it will answer your questions and raise a lot more you
haven't thought of.
redleader wrote:
Since the universe increases it's diameter more rapidly then light can pass through it, the universe itself is much larger then just the age and speed of light would suggest.
Even then, there's nothing to suggest people at the edge of our observable sphere don't themselves have an observable sphere of the same size, into stuff we will never see.
I don't like these answers. For all we know, the universe is infinite. It could just keep on going. The observable universe, that we can see, is finite. But there's no evidence that it doesn't go on
the universe is infinite
I get the feeling that the term universe and the term infinite are as interchangeable as there goes.
tie wrote:
I don't like these answers. For all we know, the universe is infinite. It could just keep on going. The observable universe, that we can see, is finite. But there's no evidence that it doesn't go on
It would have needed to exist forever in order to extend forever.
This is quite obviously false. You propose infinite spacetime, you can't have the space bit without the time bit.
Hat Monster wrote:
tie wrote:
I don't like these answers. For all we know, the universe is infinite. It could just keep on going. The observable universe, that we can see, is finite. But there's no evidence that it doesn't go on
It would have needed to exist forever in order to extend forever.
This is quite obviously false. You propose infinite spacetime, you can't have the space bit without the time bit.
Hat has it right here. In simplest terms: the universe has existed for finite time, and expands at a finite rate. Its size is therefor finite.
As for the observable universe, it continues to grow in spatial terms, but it actually continues to shrink in matter/energy terms, as objects pass our horizon never to be seen again.
vishnu wrote:
Hat Monster wrote:
tie wrote:
I don't like these answers. For all we know, the universe is infinite. It could just keep on going. The observable universe, that we can see, is finite. But there's no evidence that it doesn't go on
It would have needed to exist forever in order to extend forever.
This is quite obviously false. You propose infinite spacetime, you can't have the space bit without the time bit.
Hat has it right here. In simplest terms: the universe has existed for finite time, and expands at a finite rate. Its size is therefor finite.
As for the observable universe, it continues to grow in spatial terms, but it actually continues to shrink in matter/energy terms, as objects pass our horizon never to be seen again.
You guys are dismissing this in a non-convincing way.
The observable universe is finite. But civilizations at the edge of the observable universe have their own observable regions that extend beyond our observable universe. How far does that go if it's
not infinite? I'm not aware of any way to know.
I don't think anyone's saying the universe expanded from finite to infinite, but how do we know the big bang wasn't infinite? Or if it is finite, how far does the universe go before it wraps back on
itself or whatever?
Megalodon wrote:
The observable universe is finite. But civilizations at the edge of the observable universe have their own observable regions that extend beyond our observable universe. How far does that go if it's
not infinite? I'm not aware of any way to know.
There's no way to just look and see, ala a naive direct measurement, of course. However thanks to the magic of deductive reasoning, we don't always have to resort to such, which is why you're getting
the above explanations.
If the popular version of the creation of the universe is to be accepted (the big bang), the universe came into existence as a finite point containing a finite amount of matter and energy. Enormous,
but finite. It then set about rapidly expanding. From this point of view, it is clear that the universe is finite.
However, some people when talking about the size of the universe are interested in geometry. As in, they want to know whether you can keep traveling in one direction forever. The current thinking is
that the universe is flat*. If this is the case, the universe will continue to expand infinitely - that doesn't mean the universe is infinite though. It means there is no limit on its expansion. It
is of finite (but huge) size, but it will keep getting bigger and bigger without limit.
If you hear a cosmologist say that the universe is infinite, she is probably just being a little sloppy in describing a non-positive curvature. She doesn't mean that there is "stuff" extending out
infinitely in all directions, she means that stuff is allowed to expand out infinitely in all directions. A very important distinction - the former being a statement about the current state, the
latter being a statement about future behavior and the nature of the "end" of the universe.
*(according to WMAP data, within 2% of flat, but of course even slightly positive curvature fundamentally changes the end-game for the universe)
vishnu wrote:
Megalodon wrote:
*(according to WMAP data, within 2% of flat, but of course even slightly positive curvature fundamentally changes the end-game for the universe)
What about negative curvature, whatever that is?
Still expands infinitely. Only positive curvature is closed.
Megalodon wrote:
The observable universe is finite. But civilizations at the edge of the observable universe have their own observable regions that extend beyond our observable universe. How far does that go if it's
not infinite? I'm not aware of any way to know.
Me neither, but seeing how our current models hold the Universe's expansion to be isotropic, it would be reasonable to expect observers at the edge of our own observable region to see a night sky of
similar size and matter distribution.
Apply induction. Asplode brain.
I don't think anyone's saying the universe expanded from finite to infinite, but how do we know the big bang wasn't infinite?
I would think the finite energy of the Cosmic Microwave Background is proof the Big Bang wasn't infinite.
Or if it is finite, how far does the universe go before it wraps back on itself or whatever?
Those people studying the very early universe's inflationary period and its associated (hypothetical) inflaton really need to get cracking.
Megalodon wrote:
The observable universe is finite. But civilizations at the edge of the observable universe have their own observable regions that extend beyond our observable universe. How far does that go if it's
not infinite? I'm not aware of any way to know.
Our observable universe isn't the same as someone else's, this is true, but that someone else also has to exist within spacetime and the cool bit is no matter which way they look, they will see a
younger universe until they're looking so far they see the surface of last scattering - the CMB. We've been at that level for 20 years or so now.
To any observer, the observable universe is a 45-50 billion light year bubble centered on himself. There's no "edge" to it, as the observer sees the universe's history around him.
vishnu wrote:
Still expands infinitely. Only positive curvature is closed.
Which suggests the question, how can a finite universe that isn't closed have everyone see a similar sky in every direction no matter their location without being infinite?
I'm not saying you could travel that distance. Expansion is accelerating, we're talking about points that are moving away from us faster than light speed. But the people living in distant galaxies
have a similar sky, and the people living in galaxies distant to them do too, so we have no idea how far the universe extends.
Apteris wrote:
Me neither, but seeing how our current models hold the Universe's expansion to be isotropic, it would be reasonable to expect observers at the edge of our own observable region to see a night sky of
similar size and matter distribution.
Apteris wrote:
I would think the finite energy of the Cosmic Microwave Background is proof the Big Bang wasn't infinite.
The CMB dates from when the universe became transparent to light. It would look like that either way.
Isotropic matter distribution is a simplifying assumption.
If you think of the big bang as having a shock front, there are certainly unanswered questions about the geometry of spacetime at the front. Clearly the isotropy assumption doesn't hold there.
Unfortunately it isn't something we'll ever get to make direct measurements of. We can only infer that since there was a big bang, there could be a shock front.
Hat Monster wrote:
Our observable universe isn't the same as someone else's, this is true, but that someone else also has to exist within spacetime and the cool bit is no matter which way they look, they will see a
younger universe until they're looking so far they see the surface of last scattering - the CMB. We've been at that level for 20 years or so now.
To any observer, the observable universe is a 45-50 billion light year bubble centered on himself. There's no "edge" to it, as the observer sees the universe's history around him.
Yes yes yes, we get that. This is exactly the answer I was trying to preempt because it doesn't answer the question.
If they have their bubble, and we have ours, there's clearly more universe than we can observe. These regions might be too distant to observe each other, but nobody is going to claim the matter that
generated the CMB we see did anything other than evolve into galaxies and civilizations like ours, or that the civilizations there see a non-similar sky.
So even if they're not mutually observable, the universe either has infinite size (though not infinite density) or has a topology that preserves the "no edges" property without being infinite. Either
way there's a lot more universe than is observable to us.
vishnu wrote:
Isotropic matter distribution is a simplifying assumption. Not a principle or consequence of a theory.
Granted. Yet there's no evidence to contradict it.
vishnu wrote:
If you think of the big bang as having a shock front, there are certainly unanswered questions about the geometry of spacetime at the front. Clearly the isotropy assumption doesn't hold there.
What shape does the shock front have? Is it a plane? The universe would still have to extend in all directions along the plane. I don't think that avoids the issue.
This not being my area of expertise, I hesitate to say too much about the geometry of spacetime at such a shock front, however since we lose isotropy and homogeneity we necessarily lose constant
curvature, so it isn't a simple shape.
Separately, as I recall from differential geometry, it is also possible for a flat universe to be compact - for example it could be a 4-torus. So you could have finite size without a shock front,
Megalodon wrote:
So even if they're not mutually observable, the universe either has infinite size (though not infinite density) or has a topology that preserves the "no edges" property without being infinite. Either
way there's a lot more universe than is observable to us.
Since we were basically at the center of the universe before most expansion happened, isn't most of the volume of the universe still visible to us (at some point in its history at least)?
redleader wrote:
Since we were basically at the center of the universe before most expansion happened, isn't most of the volume of the universe still visible to us (at some point in its history at least)?
As hat says, we can see things within a 45-50 Gly bubble. There's no real way to know the extent of the universe outside of the bubble.
As Megalodon suggests, it is even possible that the universe was infinite in extent prior to said expansion. (If it is infinite now, it had to be then as well)
vishnu wrote:
Separately, as I recall from differential geometry, it is also possible for a flat universe to be compact - for example it could be a 4-torus. So you could have finite size without a shock front,
This is exactly what I'm getting at. And whatever its dimensions, this shape must be much lager than the dimensions of the observable universe. I know we don't have evidence for this other than lower
bounds we could place on the shape's dimensions from the lack of shape we can see in the observable universe, I just wanted to get to the meat of this question as opposed to being stuck on the
generic Discovery channel "size of the observable universe" answer.
redleader wrote:
Since we were basically at the center of the universe before most expansion happened, isn't most of the volume of the universe still visible to us (at some point in its history at least)?
As far as we can tell there is no absolute center, and the extent of the universe is probably much larger than the observable universe. It could go for trillions of light years in every direction,
but the universe isn't old enough, and is expanding too fast, for us to ever be able to see that.
Hat Monster wrote:
tie wrote:
I don't like these answers. For all we know, the universe is infinite. It could just keep on going. The observable universe, that we can see, is finite. But there's no evidence that it doesn't go on
It would have needed to exist forever in order to extend forever.
This is quite obviously false. You propose infinite spacetime, you can't have the space bit without the time bit.
I don't think that's how infinity works.
The universe could start out with infinite size and very high density. It could then expand for a finite time until it reached the density we have now and it would look just like a big bang.
The question of whether the universe is infinite or not (not just unbounded) doesn't appear to have been solved just yet.
EDIT - typo
Megalodon wrote:
This is exactly what I'm getting at. And whatever its dimensions, this shape must be much lager than the dimensions of the observable universe. I know we don't have evidence for this other than lower
bounds we could place on the shape's dimensions from the lack of shape we can see in the observable universe, I just wanted to get to the meat of this question as opposed to being stuck on the
generic Discovery channel "size of the observable universe" answer.
Well, if the universe is a flat closed manifold then the only lower bound I can think of is that we don't seem to be able to observe the same object looking in two different directions - or at least,
not enough objects for there to be identifiable structure. If it is a 4-torus, that sets a lower bound on the minor circumference at ~100 Gly.
If it turns out to be a 4-sphere instead (positive curvature) it must be one hell of a lot bigger, because as yet we can't distinguish the curvature from flat.
http://en.wikipedia.org/wiki/Homology_s ... ogy_sphere
"L'Univers chiffonné" [could be translated to "The not simply connected Universe"] by Jean-Pierre Luminet (2003) was a nice vulgarisation book to approach cosmology general topics, and his hypothesis
in particular too. No idea if it ever was translated in english. <edited: yes it was: "The Wraparound Universe", AK Peters, New York, 2007>
Not sure if in the last years it could have been proved to be wrong.
But as I'm not able to comment on these subjects, i just post links, and run to better read this thread.
So, is the universe just the matter and energy which resulted from the Big Bang, or is the universe also the void into which the Big Bang is expanding?
If a person was transported beyond the "front" of this expansion, what would this person see? Could this person even exist "there"?
nimro wrote:
So, is the universe just the matter and energy which resulted from the Big Bang, or is the universe also the void into which the Big Bang is expanding?
If a person was transported beyond the "front" of this expansion, what would this person see? Could this person even exist "there"?
Our universe is what resulted from the Big Bang. Any "void" that this is expanding in to is outside our universe.
Frennzy wrote:
Given that c is the maximum speed limit of the universe, the edge of our universe is roughly 13Gigayears (however old the universe actually is) away from us in any direction.
That would be the size of the visible universe. Not the entire universe
Which suggests the question, how can a finite universe that isn't closed have everyone see a similar sky in every direction no matter their location without being infinite?
Because the universe is not a three dimensional construct. You are apply three dimensional thinking to a construct with more than 3 dimensions. It doesn't work.
Our universe is what resulted from the Big Bang. Any "void" that this is expanding in to is outside our universe.
This is wrong. The universe is not expanding into anything. You can't even say the universe is expanding, precisely because there is nothing for it to expand into. What is happening is that
everything is rushing apart.
Once again, you are apply three dimensional thinking to a non three dimensional space.
If a person was transported beyond the "front" of this expansion, what would this person see? Could this person even exist "there"?
There is no front. There is no edge.
redleader wrote:
Since we were basically at the center of the universe before most expansion happened, isn't most of the volume of the universe still visible to us (at some point in its history at least)?
there is no center of the universe. Or, equally, every single point in the universe can equally be called the center.
This stuff quickly stops making sense when the universe is conceptualized as a three dimensional space. It's not. It's fun to talk about at college parties, but you pretty soon run into people just
spouting nonsense.
Emkorial wrote:
Because the universe is not a three dimensional construct. You are apply three dimensional thinking to a construct with more than 3 dimensions. It doesn't work.
I know, and discussed exactly that.
Emkorial wrote:
That would be the size of the visible universe. Not the entire universe
It wouldn't be either. The observable universe has a radius of about ~45 Gly, thanks to expansion.
Which suggests the question, how can a finite universe that isn't closed have everyone see a similar sky in every direction no matter their location without being infinite?
Because the universe is not a three dimensional construct. You are apply three dimensional thinking to a construct with more than 3 dimensions. It doesn't work.
He obviously knows that, or he wouldn't be talking about closure. His question is exactly on point, and the answer to it is: it can't.
This is wrong. The universe is not expanding into anything. You can't even say the universe is expanding, precisely because there is nothing for it to expand into. What is happening is that
everything is rushing apart.
That's a pretty pedantic point, and not even really a correct one. Expanding doesn't require anything to "expand into." Expanding is absolutely the commonly accepted terminology. As for whether it is
expanding into anything, there's no way to know. Something that is outside of our universe is by definition not something we can investigate. Is there a four dimensional "space" into which our
universe is embedded? Or is our model of the universe as the three dimensional surface of a four dimensional shape just that - a mathematical model?
There is no front. There is no edge.
I'm not sure this is a resolved question, although you're certainly right that the currently accepted model doesn't have one. Thanks to this discussion I went digging through old journals a bit
yesterday because I remembered reading a paper about a (completely mathematically consistent) model that treated the big bang as a more typical explosion - with a shock front. I didn't find it
though. =/
That's a pretty pedantic point, and not even really a correct one. Expanding doesn't require anything to "expand into." Expanding is absolutely the commonly accepted terminology.
I don't think it's pedantic, it's a pretty salient point.
In order to say spacetime is expanding, you need to have a separate fabric of reality for spacetime to reside in that it can expand into. There isn't (as far as our cosmological models tell us). If
there was, that would open up whole new fields of research.
Spacetime is all there is. It cannot get bigger to inhabit a bigger space, there's nothing bigger for it to expand into. It can simply create space as everything in it moves apart. It's pretty
unintuitive, but it is what it is.
You don't need an ambient space to say that spacetime is expanding.
This thread is full of misinformation.
I love this place (local corner of spacetime). That is all.
So our universe is getting larger (or expanding) [or creating more space within itself thereby giving the illusion of expansion] {wait, is it really that different for laymen understanding?} and
there is, from what we can determine thus far, absolutely nothing outside our universe. Or I still don't understand... | {"url":"http://arstechnica.com/civis/viewtopic.php?p=23660531","timestamp":"2014-04-16T06:46:22Z","content_type":null,"content_length":"112058","record_id":"<urn:uuid:3ce2a680-fdf3-42f0-b9e4-8912061b838b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Function ASIN, ACOS, ATAN
asin number => radians
acos number => radians
atan number1 &optional number2 => radians
Arguments and Values:
number---a number.
number1---a number if number2 is not supplied, or a real if number2 is supplied.
number2---a real.
radians---a number (of radians).
asin, acos, and atan compute the arc sine, arc cosine, and arc tangent respectively.
The arc sine, arc cosine, and arc tangent (with only number1 supplied) functions can be defined mathematically for number or number1 specified as x as in the next figure.
Function Definition
Arc sine -i log (ix+ sqrt(1-x^2) )
Arc cosine (<PI>/2) - arcsin x
Arc tangent -i log ((1+ix) sqrt(1/(1+x^2)) )
Figure 12-14. Mathematical definition of arc sine, arc cosine, and arc tangent
These formulae are mathematically correct, assuming completely accurate computation. They are not necessarily the simplest ones for real-valued computations.
If both number1 and number2 are supplied for atan, the result is the arc tangent of number1/number2. The value of atan is always between -<PI> (exclusive) and <PI> (inclusive) when minus zero is not
supported. The range of the two-argument arc tangent when minus zero is supported includes -<PI>.
For a real number1, the result is a real and lies between -<PI>/2 and <PI>/2 (both exclusive). number1 can be a complex if number2 is not supplied. If both are supplied, number2 can be zero provided
number1 is not zero.
The following definition for arc sine determines the range and branch cuts:
arcsin z = -i log (iz+sqrt(1-z^2))
The branch cut for the arc sine function is in two pieces: one along the negative real axis to the left of -1 (inclusive), continuous with quadrant II, and one along the positive real axis to the
right of 1 (inclusive), continuous with quadrant IV. The range is that strip of the complex plane containing numbers whose real part is between -<PI>/2 and <PI>/2. A number with real part equal to -
<PI>/2 is in the range if and only if its imaginary part is non-negative; a number with real part equal to <PI>/2 is in the range if and only if its imaginary part is non-positive.
The following definition for arc cosine determines the range and branch cuts:
arccos z = <PI>/2- arcsin z
or, which are equivalent,
arccos z = -i log (z+i sqrt(1-z^2))
arccos z = 2 log (sqrt((1+z)/2) + i sqrt((1-z)/2))/i
The branch cut for the arc cosine function is in two pieces: one along the negative real axis to the left of -1 (inclusive), continuous with quadrant II, and one along the positive real axis to the
right of 1 (inclusive), continuous with quadrant IV. This is the same branch cut as for arc sine. The range is that strip of the complex plane containing numbers whose real part is between 0 and
<PI>. A number with real part equal to 0 is in the range if and only if its imaginary part is non-negative; a number with real part equal to <PI> is in the range if and only if its imaginary part is
The following definition for (one-argument) arc tangent determines the range and branch cuts:
arctan z = log (1+iz) - log (1-iz)/(2i)
Beware of simplifying this formula; ``obvious'' simplifications are likely to alter the branch cuts or the values on the branch cuts incorrectly. The branch cut for the arc tangent function is in two
pieces: one along the positive imaginary axis above i (exclusive), continuous with quadrant II, and one along the negative imaginary axis below -i (exclusive), continuous with quadrant IV. The points
i and -i are excluded from the domain. The range is that strip of the complex plane containing numbers whose real part is between -<PI>/2 and <PI>/2. A number with real part equal to -<PI>/2 is in
the range if and only if its imaginary part is strictly positive; a number with real part equal to <PI>/2 is in the range if and only if its imaginary part is strictly negative. Thus the range of arc
tangent is identical to that of arc sine with the points -<PI>/2 and <PI>/2 excluded.
For atan, the signs of number1 (indicated as x) and number2 (indicated as y) are used to derive quadrant information. The next figure details various special cases. The asterisk (*) indicates that
the entry in the figure applies to implementations that support minus zero.
y Condition x Condition Cartesian locus Range of result
y = 0 x > 0 Positive x-axis 0
* y = +0 x > 0 Positive x-axis +0
* y = -0 x > 0 Positive x-axis -0
y > 0 x > 0 Quadrant I 0 < result< <PI>/2
y > 0 x = 0 Positive y-axis <PI>/2
y > 0 x < 0 Quadrant II <PI>/2 < result< <PI>
y = 0 x < 0 Negative x-axis <PI>
* y = +0 x < 0 Negative x-axis +<PI>
* y = -0 x < 0 Negative x-axis -<PI>
y < 0 x < 0 Quadrant III -<PI>< result< -<PI>/2
y < 0 x = 0 Negative y-axis -<PI>/2
y < 0 x > 0 Quadrant IV -<PI>/2 < result< 0
y = 0 x = 0 Origin undefined consequences
* y = +0 x = +0 Origin +0
* y = -0 x = +0 Origin -0
* y = +0 x = -0 Origin +<PI>
* y = -0 x = -0 Origin -<PI>
Figure 12-15. Quadrant information for arc tangent
(asin 0) => 0.0
(acos #c(0 1)) => #C(1.5707963267948966 -0.8813735870195432)
(/ (atan 1 (sqrt 3)) 6) => 0.087266
(atan #c(0 2)) => #C(-1.5707964 0.54930615)
Affected By: None.
Exceptional Situations:
acos and asin should signal an error of type type-error if number is not a number. atan should signal type-error if one argument is supplied and that argument is not a number, or if two arguments are
supplied and both of those arguments are not reals.
acos, asin, and atan might signal arithmetic-error.
See Also:
log, sqrt, Section 12.1.3.3 (Rule of Float Substitutability)
The result of either asin or acos can be a complex even if number is not a complex; this occurs when the absolute value of number is greater than one.
The following X3J13 cleanup issues, not part of the specification, apply to this section:
Copyright 1996-2005, LispWorks Ltd. All rights reserved. | {"url":"http://www.lispworks.com/documentation/lw50/CLHS/Body/f_asin_.htm","timestamp":"2014-04-20T00:46:11Z","content_type":null,"content_length":"12142","record_id":"<urn:uuid:b509f9d9-ea82-4173-81aa-c856cb72c339>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
randomElement :: [a] -> RVar aSource
A random variable returning an arbitrary element of the given list. Every element has equal probability of being chosen. Because it is a pure RVar it has no memory - that is, it "draws with
shuffle :: [a] -> RVar [a]Source
A random variable that returns the given list in an arbitrary shuffled order. Every ordering of the list has equal probability.
shuffleN :: Int -> [a] -> RVar [a]Source
A random variable that shuffles a list of a known length (or a list prefix of the specified length). Useful for shuffling large lists when the length is known in advance. Avoids needing to traverse
the list to discover its length. Each ordering has equal probability. | {"url":"http://hackage.haskell.org/package/random-fu-0.2.4.0/docs/Data-Random-List.html","timestamp":"2014-04-19T00:01:56Z","content_type":null,"content_length":"6426","record_id":"<urn:uuid:5254651a-37f7-479f-8b3e-d3b8a931e09e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exchange with the Future King of Physics
The following is my email correspondence with Joy Christian. It shows how seriously he took my bait and how he quite agreed with my main point, namely that the “Quantum Crackpot Randi Challenge”
should be earnestly attempted.
Somebody who just does not believe in what modern experiments observe, say relativity, is a common crackpot, nothing too bad – could be a nice guy. However, telling people incessantly to have the
model that explains quantum experiments while carefully hiding that he actually does not accept those experiments, that is outright dishonest. Obnoxious his insistence about that other people’s stuff
is all “simple minded”. Bell simple minded? At this point it is not anymore about a nice, honest crackpot, but about somebody who plays the fame game, somebody driven by megalomaniac attention
Since he forwarded my mail to another crackpot on the internet without asking my consent, I just go ahead and post this here. There are long parts of mathematically blown up trivialities (it all
comes down to the formula I = +/-1 and no more!), thus I put in bold font all you need to read. For those too busy, here the summary in two lines:
Joy Christian: “You are of course right to worry that this is not what we observe in experiments.”
Proper Scientists: “CRACKPOT!”
I also put a few comments like so “[Comment: blah blah]” and will at the very end conclude with some remarks.
15 May 2011 06:22
Dear Professor Christian!
Your pointing out that the codomain should be equatorial S2 in S3 is intriguing. However, the very core of your argument, i.e. locality, may benefit from a more convincing exposition; I think this is
proven by your trying for a number of years now to convince people without full success. Are you interested in cooperating to make the core of your argument more accessible to a physics audience?
[Comment: I hope it is perfectly clear to all that I never seriously considered cooperating with that crackpot!]
My idea is the following: Locality means that the photon that arrives at Alice’s detector has all the necessary information locally with it. This local information determines either completely what
Alice is going to measure, or at least the probabilities of what she can measure are determined in such a way that afterwards all expectation values between Alice and Bob will work out correctly. In
other words, the photon can be thought of as having a hidden book or little computer with it, containing all the information necessary (either all infinite data on S2 or in form of a finite amount of
parameters that together with the direction of Alice’s detector axis feed into a formula evaluated by the photon’s little hidden,classical (i.e. non-quantum) computer).
For a physicist, math is always suspect as long as there is no working model. If the photon can have all the information necessary locally with it in a classical computer, it must be possible to make
a working model in a classical computer.
The modeled situation can be very simple: We only consider three different detector angles that can be chosen by Alice and Bob, namely 0, Pi/8, and 3Pi/8, and only consider two photon singlet EPR
states (if alpha=beta, Alice will get +1 when Bob gets -1). This setup is well known, can be understood by any physics student, and it violates Bell’s inequality severely (0.32 > 0.43 is clearly
wrong by a large amount).
Any codomain topology can be simulated in virtual reality plainly by programming the correct relations between the data. Since the violation is numerically large and computer memory is very large
nowadays, the problem of a finite resolution of the S2 or even S3 codomain should not be a problem.
I will write a simple program that draws a complete state (according to your model mu=+/-I) randomly and evaluates the necessary data that Alice’s and Bob’s photons need to take with them. The
photons (i.e. their hidden variables) are send to two other computers via the internet, where other people (Alice and Bob) pick angles 0, Pi/8, or 3Pi/8 independently for every arriving photon as
they wish or randomly chosen by their computer. Say ten thousand EPR pairs are send and evaluated in this way. The statistical correlations of the locally evaluated measurement results are as a last
step compared and the Bell violation is displayed, which is accomplished, if desired, even on a forth computer to make it utterly convincing. All the programs are open source, everybody can check
that the programs do not secretly establish an internet connection between Alice’s and Bob’s computers after the angles are chosen (in other words: there is no cheating via secret non-locality). If
the Bell inequality is violated in this way, it will be a huge confirmation that is going to spread over the internet like a firestorm.
Indeed, this would be so convincing, if I could figure out how to actually extract the measurement results from your A(alpha,mu)=+mu(…) and B(beta,mu)=-mu(…), I would have done it already. However, I
do not see how it can be done [Comment: This is a lie of course, the calculation is very simple.], and given the real valued quaternions and soon, there should not be any problem. In fact, I fear
that if this cannot be done rather soon, it will be likely understood as a clear indication that your argument is wrong in the very core.
I would be very honored indeed if you were willing to cooperate on this so that we can prove locality in a totally convincing manner that everybody can accept even if they do not know Clifford
algebra and that nobody can refute, as everybody would be able to just download it as an internet java application and run it with their friends, seeing the Bell inequality violated in seconds with
today’s computer and connection speeds.
Looking forward to your clarifications and hopefully cooperation
Joy’s Answer:
Hello Sascha,
Thank you for your message. Simulating my model is not so easy. Several people have tried over the years without success. The problem is that the hidden variable "mu" of my model is quite different
from the simple minded hidden variables usually considered. It randomizes the handedness of the entire Euclidean space in which the experiment is supposed to be taking place. I think the best hope
for simulating my model is---not by trying to extract information from "mu"---but by implementing the correct algebraic logic of the model as described in my latest
paper (see the attached). What has to be done is program equations (1) and (2) of the attached paper in such a manner that the fixed and random bivector basis defined by equations (3) and (4) in it
are duly respected. [Comment: This is precisely what I discuss below in the email of the 17th of May; this is where he claims he can win the Quantum Crackpot Randi Challenge.] Note that what is being
randomized in equation (4) are the basis of an entire algebra, which in turn defines the sense of rotation for every single point of the 3-sphere. If you are able to program the mathematics described
in equation (4) of the attached paper, then you might have a chance of success. I am coping this email to Albert Jan, who is also interested in simulating my model. See also his website http://
16 May 2011 10:18
Dear Joy,
Thank you sincerely for your reply. Please allow me to clarify the situation.
1) One may prescribe any topology desired to a simulated domain. [Other’s “simple minded hidden variables” in “Euclidean space in which the experiment is supposed to be taking place” are irrelevant].
2) Local realism implies that a classical computer model must (must!) be possible.
3) Large Bell violation is achieved already by merely considering only the three angles 0, Pi/8, and 3Pi/8. The two choices in eq. (4) of arXiv:1103:1879 [b_j b_k = – d_jk +/– e_jkl b_l] randomizes
the basis somewhat. Lets construct an entirely new basis for every EPR pair and send the information along with the photons. Still, such is all trivial compared to what can be stored and calculated
with an average desktop computer in mere milliseconds. [What do you mean by “If you are able to program the mathematics described in equation (4) …”? Surely you agree that this real quaternion
equation can be put into mathematica for example by any undergraduate.]
Given these three aspects and given that several people failed for years to come up with even the simplest locally realistic Bell violation, your claim of locality is highly suspect. Do not get me
wrong; I am not trying to attack you, but merely wonder whether you fully appreciate that these facts put the ball squarely into your side of the court.
Hand waving about profoundness over other’s “simple minded” variables invites suspicions about skirting around the issue – regardless of whether you like it or not – this is the impression easily
taken away. For example: “What has to be done is program equations (1) and (2) of the attached paper in such a manner that the fixed and random bivector basis defined by equations (3) and (4) in it
are duly respected.” Let me translate this into what it kind of sounds like: “My claim can be seen as clearly correct once somebody comes up with my key missing details in the key equations in such a
manner that everything works out (which however is precisely what cannot happen in case the claim is wrong!).”
[Comment: The following is the program that somebody internet savvy should actually write as a java applet or whatever and put out there as part of the Quantum Crackpot Randi Challenge! I personally
only work with research software that is not so internet friendly.]
So let me suggest again: I can (almost any student can) write a program that fully simulates the experiment with the singlet EPR photon pairs and the three angles. It would have two modes, “QM”and
“Classical”. In QM mode, the violation of Bell’s inequality is assured by non-locality (i.e. communication send from Alice to Bob if Alice measures first and vice versa). This communication is
switched off in Classical mode, where instead “simpleminded” hidden variables are distributed to Alice and Bob, which, as you agree, cannot violate Bell’s inequality. The correct QM statistics would
not result. This would be a very simple program as far as computer programs go. 1000 to 10000 EPR pairs can be simulated in a few seconds; good statistics is no problem.
The implicitly assumed topology, namely Euclidian R-cubed, is reflected in the structure of the hidden variables and the way Alice and Bob pick the three angles. Say, for example, that I wanted to
introduce that the outcome of Alice rotating her crystal by 360 degrees (instead of twice around by a full 4Pi) must count as a separate axis because of the double covering [SO(3)versus SU(2) issue]
resolved by Fermions. No problem: a week maximum and mode “Double Cover” is open source in the internet!
In other words: You claim that the only differences are well defined like for example the codomain topology being S3 instead of “naïve” R-cubed. If so, simply take the program and modify a few
relations in order for your point being proven. If you cannot, it will be understood as a clear indication of that your central claim of locality is wrong.
[Comment: This is the point of my posting the “Quantum Crackpot Randi Challenge” and why you are all sincerely invited to put the program as explained above onto the internet. From now on, anybody
with some crackpot “local QM” theory is cordially invited to either modify that program so that Bell’s inequality is violated or to shut the hell up! (Just like the traditional Randi Challenge!)]
You would be only able to counter this severe attack by explaining clearly why it is that there should be any problem in changing the implicitly assumed topology in the hidden classical data
structure that previously simulates a simple physical experiment perfectly. For example: clearly show how the six directions (instead of three because of double cover) plus two handedness choices
lead to an infinite amount of data on all of S3 in such a finely fractal mapping that no finite resolution via digital computer memory can carry it along and no floating point precision would be able
to calculate it locally.
You spend a number of years on this now. If it is possible at all, it must be possible with that simple example! I fit is possible at all, we should be able to make it work in a few months maximum
for that particular example. You know very well, if even just this one single simple example works to refute non-locality, we can reserve plane tickets to Sweden. Lets get started.
Joy’s Answer:
Hi Sascha,
I like your passion, and I agree with most of what you are saying. My latest paper is only two months old, and no one has tried to simulate the model described in it so far. The previous model with
"mu" was tried by several people, but I cannot be sure whether they did things correctly, because I myself know nothing about computers. I cannot write even one line of code.
The model is quite simple. Alice chooses a direction, say a, and calculates a fixed bivector using the fixed basis defined by equation (3). Then she waits until she receives the hidden variable
lambda, and then calculates the random bivector using the random basis defined by equation (4). She then takes the product of the fixed bivector and the random bivector, as defined in equation (1).
The product gives her a number, +/-1. Bob follows exactly the same procedure for another direction, say b. For a large sample of random bivectors the results of the products obtained by Alice and Bob
are compared, and the correlations are calculated.That is all there is to it. I claim that if this procedure (i.e., this algorithm) is followed correctly, you will see violations of BI locally.
17 May 2011 14:51 [Comment: Here I basically just show that his mathematics is trivial, blown up nonsense.]
Let me see whether I understand you correctly and am not missing something vital:
You say: Alice chooses vector a, and calculates a fixed bivector using the fixed basis.
Practical: a = (0, 1, 0) and the fixed bivector {a_j beta_j} of the fixed basis{beta_1, beta_2, beta_3} is thus beta_2.
You say: She receives lambda, and then calculates the random bivector using the random basis and beta_j beta_k = – delta_jk – lambda epsilon_jkl beta_l.
Practical: lambda = +/–1, thus the random basis is {+/– beta_1, +/– beta_2, +/–beta_3} and the random bivector is +/– beta_2. Not all that random; merely two possible and simple cases.
You say: She then takes the product of the fixed bivector and the random bivector,as defined in equation (1).
Practical: There is a further minus in equation (1) inside {– a_j beta_j} which I did not count towards the fixed bivector {a_j beta_j} in the above, and with this further minus sign the equation
works out to be A = {– beta_2} {+/– beta_2} =–/+ (– delta_22) = +/– 1 exactly as you claim.
You say: Bob follows exactly the same procedure for a direction b.
Practical: b = (0, cos(Pi/8), sin(Pi/8)) and the fixed bivector {b_k beta_k} is calculated using the same (“exactly the same procedure”) fixed basis {beta_1, beta_2, beta_3}.
So, Bob’s fixed bivector is cos(Pi/8) beta_2 + sin(Pi/8) beta_3 (or did I somehow mess up with the epsilon? Please correct this into the notation you would prefer so we do not talk past each other).
His random bivector is something like +/– {b_k beta_k} and thus B = +/– (cos cos beta_2 beta_2 + cos sin …. – sin cos … + sin sin beta_3 beta_3) = ...(please correct if I am totally wrong)
Which together with cos cos + sin sin = 1 results in B = –/+ 1, exactly as you claim.
Surely I did something wrong, because I should not get exactly equation (2), as it would imply that Bob gets every time the exact opposite answer of Alice’s measurement even when the angle is Pi/8.
That is not what is experimentally observed. Nevertheless, even if there is some epsilon and cos sin factors I got wrong, we surely agree on that the resulting B is obviously completely determined by
lambda, and that it is either plus or minus one.
The big problem I have with this is that it allows only two combined outcomes, namely either [A=1 and B=B(b, lambda = +1) with the b as given], or the only other possible outcome, which is [A= –1 and
B(b, lambda = – 1)]. Whatever that B really happens to be in the above calculation, there are only two different cases! This is clearly wrong since if Alice and Bob actually do this experiment at
these angles, they will get all four possibilities [A=1, B=1], [A=1, B= –1],
[A= –1, B=1], and [A= –1, B= –1].
Something here is very wrong. You either mean a different, non-naïve lambda (as it is currently in your last paper, it really is just plus or minus 1), or you mean something like that the fixed basis
is actually also randomly chosen every time freshly again before the next photon arrives, either by the EPR pair or even by Alice and Bob randomly and independently (locally) every time (yet without
them realizing that this is what they actually effectively do).
[Comment: Here in these last three sections I am trying very hard to give him plenty of opportunities to save his face with some sort of copout about that I got the true complexity of the hidden
variables wrong or whatever. This is to trigger him to either start mumbo-jumbo pseudo-science talk right now or admit that I understand his maths perfectly (it is trivial after all).]
I would be very grateful indeed if you could clear up my difficulties here,
Joy’s Answer:
Hi Sascha,
You are doing nothing wrong in your calculations. [Comment: That was a pretty easy catch. Now how will he claim that I do not understand the maths? Sure he will anyways!] You are interpreting some
things incorrectly however. Equations (1) and (2) correspond to independent observations of Alice and Bob. It does not matter what angles Alice and Bob choose. For all angles their results are
exactly opposite, and therefore the product of their result is always -1. You are of course right to worry that this is not what we observe in experiments. [Comment: Closed the trapdoor behind him
and started frying himself. Thanks. This is basically where you can stop reading. From here on it just goes deeper into the crackpot sewer.] Remember, however, that Alice and Bob are not aware of
each other. It is only after all of the data is collected and the results are compared we see the correlations. Alice and Bob themselves only see random outcomes, +1 or -1, for all possible angles.
To understand the correlations and the occurrences of all four possible joint outcomes,+ +, + –, – +, and + +, let me first explain the physical meaning of the bivectors appearing in equations (1)
and (2).
The random bivectors are supposed represent the physical spin:
{ a_k beta_k (lambda) } = spin "up" about the direction a if lambda = +1
= spin "down" about the direction a if lambda = -1
{ b_j beta_j (lambda) } = spin "up" about the direction b if lambda = +1
= spin "down" about the direction b if lambda = -1
The fixed bivectors, {- a_j beta_j }, on the other hand, are supposed to represent
the measuring devices that measure the random bivectors (spins). In other words,
spin { a_j beta_j (lambda) } is measured with respect to {-- a_j beta_j }
and spin { b_j beta_j (lambda) } is measured with respect to {+ b_k beta_k }
The crucial point here is that Alice's spin and Bob's spin are measured with respect to two different measuring devices, {-- a_j beta_j } and {+ b_k beta_k}.
Suppose now we set b = a. Then the two spins will be measured by two oppositely aligned measuring devices; that is to say, the two measuring devices will differ by a minus sign, and we will get the
perfect anti-correlation ( -1 ), as your calculations confirm. This corresponds to the cases + – and – +.
Next, suppose we set b = -- a for the two measuring devices. [Comment: See here how he in typical crackpot manner goes back to what is long since dealt with. The “Quantum Crackpot Randi Challenge” is
specifically about only the three angles including Pi/8, this is 100% clear since email number one. He insists on talking about angles that show no Bell violation anyway, states that the model is
correct there (trivially true), and then claims to have disproved Bell’s argument. Smooth performance of a classical crackpot move.] Then it is clear from the above that the two spins will be
measured by "the same" device {-- a_j beta_j } (they are of course still two separate devices at the two opposite end of the experiment, but they will be mathematically exactly the same). Therefore
in this case the two spins will be the same, and we will have 50/50 chance of either + + or – – , giving the product +1.
What these two cases show is that the joint outcomes of spins depend on the measuring devices used by Alice and Bob. There is nothing nonlocal going on here. It is just that one must use the same
scales on both sides to make any meaningful comparison of the two sets of measurement results (clearly, one cannot use the same amount of Dollars and Euros to buy the same number of bananas). One
must measure the spins with the same "scale" on both sides,and with the same "units."
So how does all these affect the correlations? Well, correlations are in general defined as covariance of the measurement results divided by the product of two standard deviations, and these standard
deviations take care of the "scaling" problem. [Comment: Super characteristic crackpot gobbledygook. Only missing is some infinite fractal singularity or so. He is desperately trying to make me
forget something. But what?] You can check that equations (5) of my paper satisfies both the perfect anti-correlation condition for 0 degrees and the perfect correlation conditions for 180 degrees as
we vary the measurement axes from 0degrees separation to 180 separation. [Comment: Oh, right, that his “model” cannot even model the one angle Pi/8. Sorry, but I did not forget that all my emails are
precisely not about these 0 and 180 degrees angles as they do not violate the Bell inequality anyway.]
So theoretically everything works out fine.
Now you may wonder how to deal with the scaling problem in your simulation. Well, my guess is that somehow you will have to make sure that your program keeps the fixed bivectors (the measuring
devices) completely fixed, and the randomness is introduced only in the random bivectors (i.e., the "spins"). Then, when the results from the two ends are compared, the distribution will be normal
distribution in the units of standard deviations, which are the measuring devices, and that should take care of the "scaling" problem.
20 May 2011 02:40
“For all angles their results are exactly opposite. You are of course right to worry that this is not what we observe in experiments. Remember, however, that Alice and Bob are not aware of each
other. It is only after all of the data is collected and the results are compared we see the correlations.”
Alice and Bob writing down all the outcomes and afterward comparing; that IS the experiment! Analyzing the data, they will find all outcomes, + +, + –, – +, and + + if the relative angle between
directions a and b is Pi/8. What are you trying to say? That the recorded data change inside Bob’s lab-log while he drives to Alice’s place?
[Comment: So, by now you see I am getting a little impatient. How is it that he has still not caught on to what I think about his model?]
Joy’s Answer:
"What are you trying to say? That the recorded data change inside Bob’s lab-log while he drives to Alice’s place?"
Yes! In a sense, that is exactly what is happening. But not in a mysterious way. It is simply a matter of rescaling. Clearly, Alice and Bob must use the same scale to analyze the two sets of data for
their comparison to be meaningful.
What I am saying is nothing new. This is how any statistical distribution of numbers is compared. What is new in my model is that the statistical analysis must also respect the topology of the
3-sphere, because that is the topology of the experiment. When this is done correctly, all four outcomes, + +, + –, – +, and – – , appear correctly. There is no mystery here --- only the
counter-intuitiveness of the topology of the 3-sphere.
-- Joy
23 May 2011 11:44
I see. Scaling – the non-mysterious kind. Well that explains it. So, the angle is Pi/8 and Alice’s lab book says “Photon 1: +1, Photon 2: +1, Photon 3: -1, Photon 4: +1, …” and according to your
local realism, Bob’s lab log shows: “Photon 1: -1, Photon 2: -1, Photon 3: +1, Photon 4: -1, …”
Could you shortly explain how non-mysterious re-scaling while Bob drives to Alice’s place leads to Bob’s lab log changing to read instead “Photon 1: -1, Photon 2: -1, Photon 3: -1, Photon 4: +1, …”
or whatever once he arrived and compares with Alice?
Enough brown semi solids. In good old crackpot manner, he just keeps on writing mumbo-jumbo about “scaling” and the “counter-intuitiveness of the topology of the 3-sphere” without ever even once
acknowledging the question asked, which is clearly for at least three of my mails now one and only one: What ghosts come flying down from the heaven of Crackpotonia to rewrite the measurement results
and Bob’s and Alice’s memories and lab-logs so that his crackpot model’s total anticorrelation is “non-mysteriously” changed into what quantum physics is well known to result in?
The further emails only drove home the already obvious: He does plainly not accept the experimental evidence! But instead of telling people this honestly, he keeps stating that his model can
reproduce all the data. This is not an inch better than creationists’ “arguing”: totally dishonest.
He deceives people with a pseudo profound but basically trivial model that supposedly “explains everything”, including what is observed in experiments, while in fact he does not accept the
experimental observations and thereby excludes himself from the scientific community, for which scientific experiment is the ultimate judge.
The guy is funded by "Foundational Questions Institute (FQXi)", which funds loads of pseudo-science, so maybe no surprise there. But the big question left is: What is such a clearly obnoxious fraud
doing at Oxford University and Perimeter Institute?
By the way: I have NO intention of wasting my time “discussing” with crackpots. Put up or shut up! Means what? Means: Solve the simple “Quantum Crackpot Randi Challenge”, then we can talk.
This is a very interesting exchange and subject (inspite of what I pointed out and Lubos does not wish to accept). The fine details of this or that mathematical construction do not matter if it
cannot agree with experiments. You are quite right about that.
Science advances as much by mistakes as by plans.
Hontas Farmer | 06/02/11 | 17:15 PM
Hi Sasha:
Would you really would like to get a position at FQXi? On the other hand ... two years to pluck one goose at a time ... No, this would be too much masochism (reversed sadism?) for any normal human
being. I'd rather see you get a position at Lavendish, meet your Jim Watson and later have time to write ... er ... Alice&Bob helix ?
alles gute
smo (not verified) | 06/02/11 | 22:00 PM
That just went over my head, sorry. FQXi is a group that was supposed to help fund promising fundamental physics that is not easily otherwise funded (they have no "positions"). It is a great idea but
turned sour: Most of their money goes to well established people who have loads of funding already, but want more for particular crackpot projects that are not promising at all, which is when FQXi
jumps in, like in this case! They are also famous for essay contests asking for novel insights, but they make sure that the winning entries are always pure politically correct and totally useless
feel good drivel.
So, yes, although I do not want to be seen too close to them, I sure would not mind getting some of the huge amount they waste on basically anti-scientific garbage every year, since I am in the very
position that was originally aimed at with the money.
Sascha Vongehr | 06/02/11 | 22:52 PM
Hank Campbell | 06/03/11 | 15:30 PM
From http://fqxi.org/community/forum/topic/1308
Member Joy Christian wrote on Sep. 9, 2012 @ 13:17 GMT
It is far worse, Tom, it is far worse. Mr. Vongehr has resorted to activities that border criminality.
He has twisted my words and even fabricated words in my name. He has simply manufactured some posts and email correspondence as if they were by me and posted them on his blog, with active help of
the proprietor of the Science20 blogs, Mr. Hank Campbell. ...
Joy, I had some place for you in my heart up to now, just like I have for Motl and others with mental problems (we brains three or four standard deviations from the average usually have those), but
you now really want to claim that you did not eagerly have that email conversation with me? You really are despicable! Come at me, give it your best. Those emails are still on the University of
Southern California email server with their tracks. Come at me and I stomp you like the worm you are.
Sascha Vongehr | 09/09/12 | 20:05 PM | {"url":"http://www.science20.com/alpha_meme/blog/exchange_future_king_physics-79613","timestamp":"2014-04-19T09:54:30Z","content_type":null,"content_length":"76744","record_id":"<urn:uuid:3ac969b8-97de-4434-a7a8-e184d05f0e39>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Consider two straight lines L1 (y = 2x +1) and L2 (y=x). a) What is the slope of line L1 in respect to line L2? b) What is the equation of line L1 in respect to line L2?
Best Response
You've already chosen the best response.
Oh, you want the angle between the two lines.
Best Response
You've already chosen the best response.
You need to use the formula to find the angle between two lines. \[\tan \alpha = \frac{m_2-m_1}{1+m_1m_2}\]
Best Response
You've already chosen the best response.
m_2 is typically the greater slope, which is 2 in this case. m_1 is therefore 1.
Best Response
You've already chosen the best response.
Since the slope of a line is defined as the tangent of that line with respect to the x-axis, the tangent of the angle between your two lines should give you the relative slope...which I assume is
what the question wants.
Best Response
You've already chosen the best response.
hmm... ok, I will try it again with this extra knowledge
Best Response
You've already chosen the best response.
right, i think i get it
Best Response
You've already chosen the best response.
I could be misinterpreting your question here - it's late where I am and I'm about to hit the hay.
Best Response
You've already chosen the best response.
BecomeMyFan, did you get the second part? I forgot, sorry. I'm about to go to bed. Hopefully you can lure someone in to finish it...
Best Response
You've already chosen the best response.
full problem is Consider two straight lines L1 (y = 2x +1) and L2 (y=x). a) What is the slope of line L1 in respect to line L2? b) What is the equation of line L1 in respect to line L2? NOTE.
Think that the original xy co-ordinate system has been rotated in respect to the origin so that line L2 defines the new x-axis x´ and y´ is the new y-axis!
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4d9313570b9d8b0bf7c608a9","timestamp":"2014-04-16T19:37:11Z","content_type":null,"content_length":"47409","record_id":"<urn:uuid:0cda14e1-7065-4fce-ad67-ebb5e06d7adb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimization problem - please help!
January 14th 2009, 06:45 PM #1
Oct 2008
Minimization problem - please help!
A 3 feet piece of wire is cut into two pieces and one piece is bent into a square and the other is bent into a equilateral triangle. How should the wire cut so that the total area enclosed is
minimized? (Hint: Area of a square of a side a is a² and area of an equilateral triangle of side a is (radical 3)/4 * a²)
Consider a cut made at x length from one side. Now you have two pieces x and 3-x. To make a square you need to have 4 sides of equal length, so divide the segment by 4 and for the triangle divide
by 3. Now make a general area function for the sum of both areas in terms of x.
January 14th 2009, 06:54 PM #2
MHF Contributor
Oct 2005 | {"url":"http://mathhelpforum.com/calculus/68259-minimization-problem-please-help.html","timestamp":"2014-04-19T08:53:51Z","content_type":null,"content_length":"32523","record_id":"<urn:uuid:e716622d-b86b-4f74-995e-8e8ac75c2b44>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Create a Graph Using a Spreadsheet
Edit Article
Edited by Vivek Kumar Rohra, Anita K. Moore, Rob S, Ben Rubenstein and 25 others
To quickly create a graph in a spreadsheet, perform the following steps.
1. Create a Graph Using a Spreadsheet Step 1 Version 2.jpg
Enter your data into the spreadsheet in a table format.
□ Table Format:
□ Box 1-a is the x-axis (generally put any sort of time on this axis).
□ Box 1-b is y-axis.
□ Information for the x-axis is placed in boxes 2-a to infinity-a.
□ Information for the y-axis is placed in boxes 2-b to infinity-b.
2. Create a Graph Using a Spreadsheet Step 2 Version 2.jpg
Select the cells that contain the information that you want to appear in the bar graph. If you want the column labels and the row labels to show up in the graph, ensure that those are selected
3. Create a Graph Using a Spreadsheet Step 3 Version 2.jpg
Press the F11 button on your keyboard. This will create your bar graph on a "chart sheet." A chart sheet is basically a spreadsheet page within a workbook that is totally dedicated to displaying
your graph.
4. Create a Graph Using a Spreadsheet Step 4 Version 2.jpg
Use the Chart wizard Click insert then chart, if F11 doesn't work. In Gnumeric it won't work. Choose Chart Type.
□ Choose a data range.
□ Choose a data series.
□ Choose chart elements.
5. Create a Graph Using a Spreadsheet Step 5 Version 2.jpg
On the Chart toolbar, which appears after your chart is created, click on the arrow next to the Chart Type button and click on the Bar Chart button..
6. Create a Graph Using a Spreadsheet Step 6 Version 2.jpg
You can also create a pie chart.
• To add more detail to the bar graph click on the Chart Wizard on the Standard toolbar and fill in the necessary information.
• To make a chart title as an element of the graph, click once in the chart area of the graph and click on the Chart Wizard button on the Standard Toolbar. Click Next until you get to Step 3 -
Chart Options. In the Chart Title field, type the chart title and click on Finish.
Things You'll Need
• Computer
• Spreadsheet software such as Microsoft Excel, OpenOffice.org Calc, iWork Numbers or Gnumeric.
• Data that contains categories and numeric figures
Article Info
Thanks to all authors for creating a page that has been read 135,787 times.
Was this article accurate? | {"url":"http://www.wikihow.com/Create-a-Graph-Using-a-Spreadsheet","timestamp":"2014-04-21T12:16:49Z","content_type":null,"content_length":"71676","record_id":"<urn:uuid:137cd2f3-06f5-4b05-b003-3fd14cb86e05>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Negative Binomial Regression
Negative binomial regression is implemented using maximum likelihood estimation. The traditional model and the rate model with offset are demonstrated, along with regression diagnostics.
Traditional Model
Negative binomial regression is a type of generalized linear model in which the dependent variable 1]:
where 1] derives this parametrization as a Poisson-gamma mixture, or alternatively as the number of failures before the
The traditional negative binomial regression model, designated the NB2 model in [1], is
where the predictor variables
Given a random sample of
Designating the 2), we can then write the distribution (1) as
We estimate
and the log-likelihood function is
The values of
Example 1: Traditional Model with Simulated Data
We will use Mathematica to replicate some examples given by Hilbe [1], who uses R and Stata. We start with simulated data generated with known regression coefficients, then recover the coefficients
using maximum likelihood estimation. We will generate a sample of 2), using
Now we define and maximize the log-likelihood function (3), obtaining the estimates of GeneralizedLinearModelFit, while 1].
But we arbitrarily set all starting values to 1.0 and successfully find the correct estimates.
Define two helper functions.
Next, we find the standard errors of the estimates. The standard errors are the square roots of the diagonal elements of the variance-covariance matrix
Then we find the Hessian and
Finally, these are our standard errors.
We can now print a table of the results: the estimates of the coefficients, their standard errors, and the Wald
We see that in each case the confidence interval has captured the population parameter.
Traditional Model for Rates, Using Offset
If the dependent variable
Since 2) above:
which can also be written as:
This last term, offset. So in our log-likelihood function, instead of replacing
Then we proceed as before, maximizing the new log-likelihood function in order to estimate the parameters.
Example 2: Traditional Model with Offset for the Titanic Data
The Titanic survival data, available from [2] and analyzed in [1] using R and Stata, is summarized in Table 1, with crew members deleted.
Why did fewer first-class children survive than second class or third class? Was it because first-class children were at extra risk? No, it was because there were fewer first-class children on board
the Titanic in the first place. So we do not want to model the raw number (4) we need
We set up the design matrix, with indicators 1 for adults and males, and using indicator variables for second class and third class, which means first class will be a reference.
Then we set up the dependent variable and the offset.
We define the log-likelihood (5).
Now we maximize it to find the coefficients.
Then we find the standard errors of the coefficients.
And again we can print a table of the results.
But perhaps more useful for interpretation of the coefficients would be the Incidence Rate Ratio (IRR) for each variable, which is obtained by exponentiating each coefficient. For example, out of a
sample of 4), will be
which we estimate to be 1]), while a confidence interval for IRR is found by exponentiating the confidence interval for the coefficient. Thus we obtain the following.
We do not need IRR for
The confidence interval for the variable class2 contains 1.0, consistent with the lack of significance of its coefficient, and indicating that the survival rate of second-class passengers was not
significantly different than that of first-class passengers. We will address this after computing some model assessment statistics and residuals.
Model Assessment
Various types of model fit statistics and residuals are readily computed. We use definitions given in [1]; alternate definitions exist and would require only minor changes.
Commonly used model fit statistics include the log-likelihood, deviance, Pearson chi-square dispersion, Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC).
We already have the log-likelihood
where 5), and
The Pearson chi-square dispersion statistic is given by
We compute these for the Titanic data above and display them.
These model assessment statistics are most useful when compared to those of a competing model, which we pursue in the next section after computing residuals.
The raw residuals are of course 6).
These residuals can be standardized by dividing by
Here are the unstandardized residuals for the Titanic data.
And here are the leverages and the standardized residuals.
Hilbe recommends plotting the Standardized Pearson residuals versus
We have two Standardized Pearson residuals that are not within the range class2 was not significant. Perhaps the model will be improved if we remove class2. All that is required is to remove class2
from the design matrix class2 removed.
We set up design matrix
Comparing to the full model, we see that the assessment statistics have improved (they are smaller, indicating a better fit), and the Standardized Pearson residuals with high leverages are within the
recommended boundaries. It appears that the model has been improved by dropping class2.
The traditional negative binomial regression model (NB2) was implemented by maximum likelihood estimation without much difficulty, thanks to the maximization command and especially to the automatic
computation of the standard errors via the Hessian.
Other negative binomial models, such as the zero-truncated, zero-inflated, hurdle, and censored models, could likewise be implemented by merely changing the likelihood function.
The author acknowledges suggestions and assistance by the editor and the referee that helped to improve this article.
[1] J. Hilbe, Negative Binomial Regression, 2nd ed., New York: Cambridge University Press, 2011.
[2] “JSE Data Archive.” Journal of Statistics Education. (Nov 19, 2012) www.amstat.org/publications/jse/jse_data_archive.htm.
M. L. Zwilling, “Negative Binomial Regression,” The Mathematica Journal, 2013. dx.doi.org/10.3888/tmj.15-6.
About the Author
Michael L. Zwilling
Department of Mathematics
University of Mount Union
1972 Clark Avenue
Alliance, OH 44601 | {"url":"http://www.mathematica-journal.com/2013/06/negative-binomial-regression/","timestamp":"2014-04-20T14:20:20Z","content_type":null,"content_length":"49239","record_id":"<urn:uuid:8cf6de19-1922-401a-9b58-65bbf388ff12>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Godel's incompleteness: can the diagonalization be done in computation rather than logic?
up vote 8 down vote favorite
I have always had trouble understanding Godel's proof of his first incompleteness theorem, because the diagonalization part is done on the logical side, which is unfamiliar to me, rather than on the
computational side, which I find more familiar. I decided to replicate the proof but do as much of the work as possible in terms of computation, rather than logic.
Consider the Turing machine $H$ which when run on input $x$:
• (firstly a triviality) checks whether $x = \lceil \phi \rceil$ for some two-place predicate $\phi$ and if not then loops forever (or indeed it could do anything – I don't believe this part of the
argument is important!)
• searches for proofs in PA of
□ $\phi(\lceil\phi\rceil, 1)$ and
□ $\lnot\phi(\lceil\phi\rceil, 1)$ [A]
• if it finds the former first, it halts writing $0$ on its tape
• if it finds the latter first, it halts writing $1$ on its tape
• if there is a proof of neither, it loops forever
Since computable functions are expressible in PA, there is a two-place predicate $h$ such that $H(x) = y$ implies $\vdash h(\underline{x}, \underline{y})$ and $H(x) \not= y$ implies $\vdash \lnot h(\
underline{x}, \underline{y})$ [B].
Then what is $H(\lceil h \rceil)$?
• if $H(\lceil h \rceil) = 0$ then $\vdash h(\lceil h \rceil, 1)$ (by definition of $H$) and $\vdash \lnot h(\lceil h \rceil, 1)$ by definition of $h$
• if $H(\lceil h \rceil) = 1$ then $\vdash \lnot h(\lceil h \rceil, 1)$ (by definition of $H$) and $\vdash h(\lceil h \rceil, 1)$ by definition of $h$
• if $H(\lceil h \rceil)$ is anything else, including a non-terminating computation, then neither $\vdash h(\lceil h \rceil, 1)$ nor $\vdash \lnot h(\lceil h \rceil, 1)$
My conclusion is that either PA is inconsistent (first two possibities) or incomplete (third possibility).
My first question is: is this a correct proof that PA is either inconsistent or incomplete? I don't want to be misled by a simple misunderstanding!
My second question is: if indeed it is a correct proof, why are the common expositions not done this way? It seems much more easy to do the diagonalization part of the argument in the world of Turing
machines and computation than in the world of wffs and logic. Moreover no weakening to $\omega$-consistency is necesary.
[A] In wffs, $1$ is my abbreviation for $s(0)$ of course.
[B] I use $\vdash \psi$ to mean there is a proof in PA of $\psi$. Perhaps this is more usually denoted $\vdash_{PA} \psi$, but I wanted to conserve space and typing!
1 At a quick glance, this seems good; however, note that showing "computable functions are expressible in PA" is not trivial. – Noah S Jun 4 '12 at 12:44
3 Just a quick comment about "no weakening to $\omega$-consistency is necessary": Your algorithm includes a step where it looks at which of two (intuitively contradictory) things happens first.
That's exactly the idea behind Rosser's improvement of Gödel's proof, reducing the hypothesis from $\omega$-consistency to mere consistency. (I conjecture that this familiar computational idea was
the motivation for Rosser's proof.) – Andreas Blass Jun 4 '12 at 14:14
You might also like to look at the discussion here: scottaaronson.com/blog/?p=710 including the first comment by Amit Sahai. – Ryan O'Donnell Jun 4 '12 at 14:21
@Andreas: very nice to know that, thanks! – Tom Ellis Jun 4 '12 at 14:27
Regarding your claim that "it is much easier to do the diagonalizations part of the argument in the world of...computability than in the world of...logic", I would say that most logicians think of
the syntactic diagonalization as essentially a computational argument, since one verifies that certain formal operations such as substitutions are expressible. Your argument still has this aspect,
but you have hidden away the details in the black-box claim that every computable function is representable. – Joel David Hamkins Jun 4 '12 at 14:29
show 1 more comment
1 Answer
active oldest votes
There are indeed many proofs of the incompleteness theorem, and when I teach my introductory graduate logic course, we usually give at least four or five different proofs as they arise
naturally with the different topics. Here are a few sketches of different proof methods that commonly arise:
Traditional proof via the fixed point lemma. This is the traditional proof via the fixed-point lemma, by which we know that for each formula $\varphi(x)$ there is a sentence $\psi$ such that
$PA\vdash\psi\leftrightarrow\varphi(\psi)$. With this lemma, one makes the Goedel sentence $\psi$ that asserts that $\psi$ is not provable in PA. It follows that indeed it is not provable in
PA, and hence it is true and unprovable.
Via the halting problem. This is the proof via the halting problem, and uses some similar ideas to your proposed proof. First, one proves that the halting problem is not decidable by any
computable procedure. Next, one argues that there must be a Turing machine $p$ which does not halt, but we cannot prove it in PA, since otherwise we can solve the halting problem as follows:
given a Turing machine program, we simulate it during the day, waiting to see if it halts, and at night, we search for a proof that it doesn't halt. If all instances of non-halting were
provable, we would thereby solve the halting problem.
Via the existence of undecidable problems. Indeed, if there is any computably undecidable problem that is expressible in arithmetic, then there must be instances that cannot be settled in PA
(or any othere computably axiomatizable theory), since otherwise by searching for proofs we would be able to decide the problem. This observation is at the core of the connection between two
common uses of the word "undecidable". Namely, any c.e. set $A$ that is not (computably) decidable must admit infinitely many natural numbers $n$ such that $n\notin A$ but this is not
up vote provable in your favorite formal system, since otherwise $A$ would be decidable. So the computably undecidable sets are saturated with non-instances that are undecidable in your formal
21 down system.
Via Tarski's theorem. Tarski's theorem is that there is no arithmetic predicate true of exactly the Goedel codes of true arithmetic assertions. That is, arithmetic truth is not
arithmetically definable. This is a generalization of the incompleteness theorem, since provability is arithmetically definable, and so it cannot coincide with truth. One can prove Tarski's
theorem in a variety of ways, including the fixed point lemma, or the non-collapse of the arithmetic hierarchy, among others.
Via the non-collapse of the arithmetic hierarchy. It is not difficult to prove the existence of a universal $\Sigma_1^0$-set of natural numbers. This is analogous to the arithmetization of
syntax in the traditional syntactic proof or to the existence of universal machines used in the halting problem proof. Once one knows that there are universal $\Sigma_1^0$ sets, it follows
by an easy diagonalization that $\Sigma^0_1\neq\Pi^0_1$ and the arithmetic hierarchy does not collapse. This immediately implies Tarski's theorem, which as we've said implies that
provability (which is arithmetic) does not coincide with truth.
By the end of my course, it often becomes a running joke that every new theorem also implies the incompleteness theorem, and I usually try to make those conclusions explicit. So there are
indeed many other proof methods besides the main ones I mention above, and perhaps other users will post them. It is generally much easier to find proofs of the first incompleteness theorem
than the second, but by now there are also interesting proofs of the second incompleteness theorem from various directions.
+1 I'd like to learn more about the ways to prove the second incompleteness theorem. Right now I know only the traditional proof which uses the Gödel sentence "I am not provable" obtained
by the fixed point theorem. Where can I read more about other proof ideas? – Johannes Hahn Jun 4 '12 at 15:16
Johannes, although most of these different proofs are folklore knowledge in the area, I am sorry that I don't know a specific text that presents them all. – Joel David Hamkins Jun 4 '12 at
add comment
Not the answer you're looking for? Browse other questions tagged lo.logic or ask your own question. | {"url":"http://mathoverflow.net/questions/98763/godels-incompleteness-can-the-diagonalization-be-done-in-computation-rather-th?sort=votes","timestamp":"2014-04-20T01:45:58Z","content_type":null,"content_length":"66445","record_id":"<urn:uuid:7dbf7d21-645b-4611-870c-032aece65341>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implementation of Conjugate Gradient for solving large linear systems
• Type:
• Status: Closed
• Priority:
• Resolution: Fixed
• Affects Version/s: 0.5
This patch contains an implementation of conjugate gradient, an iterative algorithm for solving large linear systems. In particular, it is well suited for large sparse systems where a traditional QR
or Cholesky decomposition is infeasible. Conjugate gradient only works for matrices that are square, symmetric, and positive definite (basically the same types where Cholesky decomposition is
applicable). Systems like these commonly occur in statistics and machine learning problems (e.g. regression).
Both a standard (in memory) solver and a distributed hadoop-based solver (basically the standard solver run using a DistributedRowMatrix a la DistributedLanczosSolver) are included.
There is already a version of this algorithm in taste package, but it doesn't operate on standard mahout matrix/vector objects, nor does it implement a distributed version. I believe this
implementation will be more generically useful to the community than the specialized one in taste.
This implementation solves the following types of systems:
Ax = b, where A is square, symmetric, and positive definite
A'Ax = b where A is arbitrary but A'A is positive definite. Directly solving this system is more efficient than computing A'A explicitly then solving.
(A + lambda * I)x = b and (A'A + lambda * I)x = b, for systems where A or A'A is singular and/or not full rank. This occurs commonly if A is large and sparse. Solving a system of this form is used,
for example, in ridge regression.
In addition to the normal conjugate gradient solver, this implementation also handles preconditioning, and has a sample Jacobi preconditioner included as an example. More work will be needed to build
more advanced preconditioners if desired.
MAHOUT-499 Implement LSMR in-memory
relates to
MAHOUT-772 Refactor Matrix/Vector implementation with linear operators
Based on r1092853.
Patch consists of 11 new files in 4 new directories under both core/ and math/. No changes to existing code.
Includes 3 new unit tests. All unit tests pass.
Jonathan Traupman
added a comment -
Based on r1092853. Patch consists of 11 new files in 4 new directories under both core/ and math/. No changes to existing code. Includes 3 new unit tests. All unit tests pass.
This is a cool thing. What do you think of it in relationship to the LSMR code I have up on github as part of [DEL:MAHOUT-525:DEL] ?
Ted Dunning
added a comment -
Jonathan, This is a cool thing. What do you think of it in relationship to the LSMR code I have up on github as part of MAHOUT-525 ?
I'd have to dig a little deeper to get a full understanding of all that your doing with the SGD regression stuff. (BTW, I think you mean [DEL:MAHOUT-529:DEL]?) Broadly speaking, though, I'd say the
two patches are complementary. Conjugate gradient is just a iterative method for solving linear systems. Regression is one obvious application, but linear systems come up a lot in a whole range of
algorithms, making CG a fairly general building block.
As a linear system solver, the big advantage of CG over e.g. the Cholesky
As for a linear regression implementation using CG compared to one using SGD, it would be hard for me to reach any conclusions without comparing the two approaches head to head on the same data. CG
would probably gain some benefit from being easily parallelizable, but the individual updates in SGD seem very fast and lightweight, so any speed advantage to CG would probably only come up for truly
massive datasets. The SGD implementation in your patch also has a lot of regularization support that a simple CG implementation of LMS would lack (ridge regression i.e. L2 regularization comes for
free, but L1 is considerably harder). I'm also unaware of how one would do the automatic validation/hyperparameter tuning using CG that your SGD implementation does.
Jonathan Traupman
added a comment -
Ted- I'd have to dig a little deeper to get a full understanding of all that your doing with the SGD regression stuff. (BTW, I think you mean MAHOUT-529 ?) Broadly speaking, though, I'd say the two
patches are complementary. Conjugate gradient is just a iterative method for solving linear systems. Regression is one obvious application, but linear systems come up a lot in a whole range of
algorithms, making CG a fairly general building block. As a linear system solver, the big advantage of CG over e.g. the Cholesky decomposition is a) being iterative, it's very easy to adapt it to map
/reduce for very large datasets and b) for matrices of the form (cI + A), where A is of rank k, CG will typically run in O(kn^2) time instead of O(n^3). CG also has a few disadvantages, namely that
for full rank matrices, it requires n^3 multiplies compared to IIRC n^3/3 for Cholesky – the same asymptotic performance, but that constant factor difference can add up in the real world. Another
large disadvantage is that if you are solving a collection of linear systems, i.e. AX = B, where X and B are both matrices instead of vectors, you have to run a separate CG solver for each of the k
columns of X for a total O(kn^3) runtime. Traditional matrix decomposition methods are usually O(n^3) to do the decomposition, but only O(n^2) to solve the system using the decomposed matrix, so you
can solve a collection of k systems in O(n^3 + kn^2). As for a linear regression implementation using CG compared to one using SGD, it would be hard for me to reach any conclusions without comparing
the two approaches head to head on the same data. CG would probably gain some benefit from being easily parallelizable, but the individual updates in SGD seem very fast and lightweight, so any speed
advantage to CG would probably only come up for truly massive datasets. The SGD implementation in your patch also has a lot of regularization support that a simple CG implementation of LMS would lack
(ridge regression i.e. L2 regularization comes for free, but L1 is considerably harder). I'm also unaware of how one would do the automatic validation/hyperparameter tuning using CG that your SGD
implementation does. FWIW, I also have an implementation of the Cholesky decomposition, which I've been meaning to Mahout-ize and submit when I can find the time to do it.
This all sounds good. There is a point of confusion, I think.
I'd have to dig a little deeper to get a full understanding of all that your doing with the SGD regression stuff. (BTW, I think you mean [DEL:MAHOUT-529:DEL]?) Broadly speaking, though, I'd say
the two patches are complementary. Conjugate gradient is just a iterative method for solving linear systems. Regression is one obvious application, but linear systems come up a lot in a whole
range of algorithms, making CG a fairly general building block.
I was talking about the LSMR implementation. It is an iterative sparse solver similar to LSQR, but with better convergence properties. Like your code, it requires only a forward product. I should
pull out a separate patch and attach it here.
Ted Dunning
added a comment -
Jonathan, This all sounds good. There is a point of confusion, I think. I'd have to dig a little deeper to get a full understanding of all that your doing with the SGD regression stuff. (BTW, I think
you mean MAHOUT-529 ?) Broadly speaking, though, I'd say the two patches are complementary. Conjugate gradient is just a iterative method for solving linear systems. Regression is one obvious
application, but linear systems come up a lot in a whole range of algorithms, making CG a fairly general building block. I was talking about the LSMR implementation. It is an iterative sparse solver
similar to LSQR, but with better convergence properties. Like your code, it requires only a forward product. I should pull out a separate patch and attach it here.
As for a linear regression implementation using CG compared to one using SGD, it would be hard for me to reach any conclusions without comparing the two approaches head to head on the same data.
CG would probably gain some benefit from being easily parallelizable, but the individual updates in SGD seem very fast and lightweight, so any speed advantage to CG would probably only come up
for truly massive datasets. The SGD implementation in your patch also has a lot of regularization support that a simple CG implementation of LMS would lack (ridge regression i.e. L2
regularization comes for free, but L1 is considerably harder). I'm also unaware of how one would do the automatic validation/hyperparameter tuning using CG that your SGD implementation does.
The other big difference, btw, is that all of our parallel approaches require at least one pass through the data. The SGD stuff can stop early and often only needs a small fraction of the input to
converge. That gives sub-linear convergence time in terms of input size (which sounds whacky, but is real). Any approach that needs to read the entire data set obviously can't touch that scaling
Offsetting this is the idea that if we don't need all the data for a given complexity of model, then we probably don't want to stop but would rather just have a more complex model. This is where the
non-parametric approaches come in. The would give simple answers with small inputs and more nuanced answers with large data.
Ted Dunning
added a comment -
As for a linear regression implementation using CG compared to one using SGD, it would be hard for me to reach any conclusions without comparing the two approaches head to head on the same data. CG
would probably gain some benefit from being easily parallelizable, but the individual updates in SGD seem very fast and lightweight, so any speed advantage to CG would probably only come up for truly
massive datasets. The SGD implementation in your patch also has a lot of regularization support that a simple CG implementation of LMS would lack (ridge regression i.e. L2 regularization comes for
free, but L1 is considerably harder). I'm also unaware of how one would do the automatic validation/hyperparameter tuning using CG that your SGD implementation does. The other big difference, btw, is
that all of our parallel approaches require at least one pass through the data. The SGD stuff can stop early and often only needs a small fraction of the input to converge. That gives sub-linear
convergence time in terms of input size (which sounds whacky, but is real). Any approach that needs to read the entire data set obviously can't touch that scaling factor. Offsetting this is the idea
that if we don't need all the data for a given complexity of model, then we probably don't want to stop but would rather just have a more complex model. This is where the non-parametric approaches
come in. The would give simple answers with small inputs and more nuanced answers with large data.
OK, yeah, I think I misunderstood which code you were talking about.
I assume this is the reference for the LSMR stuff: http://www.stanford.edu/group/SOL/reports/SOL-2010-2R1.pdf
I'll have to take some time to digest it, but based on a quick skim it looks like both LSMR and LSQR are more or less mathematically equivalent to CG applied to least squares regression, but with
better convergence and numeric properties with inexact arithmetic.
BTW, do you have any links to a SGD bibliography or other list of resources on it? From what I've seen in the code and some of your comments, it looks like a cool technology that I'd like to know
more about.
Jonathan Traupman
added a comment -
OK, yeah, I think I misunderstood which code you were talking about. I assume this is the reference for the LSMR stuff: http://www.stanford.edu/group/SOL/reports/SOL-2010-2R1.pdf I'll have to take
some time to digest it, but based on a quick skim it looks like both LSMR and LSQR are more or less mathematically equivalent to CG applied to least squares regression, but with better convergence
and numeric properties with inexact arithmetic. BTW, do you have any links to a SGD bibliography or other list of resources on it? From what I've seen in the code and some of your comments, it looks
like a cool technology that I'd like to know more about.
Also, can you point me to the specific branch and path in your repo to the LSMR implementation? I poked around but couldn't readily find it.
Jonathan Traupman
added a comment -
Also, can you point me to the specific branch and path in your repo to the LSMR implementation? I poked around but couldn't readily find it.
Ted Dunning
added a comment -
See https://github.com/tdunning/LatentFactorLogLinear/tree/lsmr As I mentioned, I will try rebasing back to trunk to get a clean patch set you can apply there.
Here is the LSMR implementation in a form that can be applied to trunk.
Does this help you out?
Ted Dunning
added a comment -
Jonathan, Here is the LSMR implementation in a form that can be applied to trunk. Does this help you out?
What would you think about adapting this code so that we actually define some special matrix types that do
the A'A x, (A + lambda I) x and (A'A + lambda I) x products efficiently.
That would allow all product only methods like LSMR, CG and in-memory random projection for SVD to work on
all of these cases directly without having the extra convenience methods.
What do you think?
Ted Dunning
added a comment -
Jonathan, What would you think about adapting this code so that we actually define some special matrix types that do the A'A x, (A + lambda I) x and (A'A + lambda I) x products efficiently. That
would allow all product only methods like LSMR, CG and in-memory random projection for SVD to work on all of these cases directly without having the extra convenience methods. What do you think?
DistributedRowMatrix already does A'A x (it's the timesSquared() method), and I've long thought about doing the other two as well (for easy PageRank computation).
Jake Mannix
added a comment -
DistributedRowMatrix already does A'A x (it's the timesSquared() method), and I've long thought about doing the other two as well (for easy PageRank computation).
I think it's a good idea to do this and would be happy to make it happen. Probably won't be before the weekend with my schedule unfortunately.
The only problem the current times()/timesSquared() implementation in DistributedRowMatrix is that each algorithm that uses it needs a flag to determine whether to use times() or timesSquared(). If
we created a few subclasses of DistributedRowMatrix for the A'A, (A + lambda * I), etc. cases, we could have them just expose the times() method, implementing it as appropriate through calls to the
superclass methods.
Jonathan Traupman
added a comment -
I think it's a good idea to do this and would be happy to make it happen. Probably won't be before the weekend with my schedule unfortunately. The only problem the current times()/timesSquared()
implementation in DistributedRowMatrix is that each algorithm that uses it needs a flag to determine whether to use times() or timesSquared(). If we created a few subclasses of DistributedRowMatrix
for the A'A, (A + lambda * I), etc. cases, we could have them just expose the times() method, implementing it as appropriate through calls to the superclass methods.
What flag? Doing (A'A)x vs. (A)x is pretty fundamental to the algorithm. Why would you prefer to switch choices of which to do based on class type? I can see how it would make the logic inside of
something like LanczosSolver simpler (you just always do "times(vector)", instead of "if symmetric, do times(), else do timesSquared()"), but if you really want to do it generally, what you want is a
mini MapReduce paradigm: define a method which takes a pair of functions:
Vector DistributedRowMatrix#mapRowsWithCombiner(Vector input, Function<Vector, Vector> rowMapper, Function<Vector, Vector> resultReducer)
{ // for each row, emit Vectors: rowMapper.apply(input, row) // combine/reduce: for each intermediateOutputRow, output <- resultReducer.apply(intermediateOutputRow, output) return output; }
But I'm not sure this has the right level of generality (too general? Not general enough?).
I'm definitely not convinced that overloading times() to mean many different things is really wise. Having it mean (A)x vs. (A + lambda I)x could totally be fine, however. It's defining the matrix to
have a extremely compressed way of showing it's diagonal.
Jake Mannix
added a comment -
What flag? Doing (A'A)x vs. (A)x is pretty fundamental to the algorithm. Why would you prefer to switch choices of which to do based on class type? I can see how it would make the logic inside of
something like LanczosSolver simpler (you just always do "times(vector)", instead of "if symmetric, do times(), else do timesSquared()"), but if you really want to do it generally, what you want is a
mini MapReduce paradigm: define a method which takes a pair of functions: Vector DistributedRowMatrix#mapRowsWithCombiner(Vector input, Function<Vector, Vector> rowMapper, Function<Vector, Vector>
resultReducer) { // for each row, emit Vectors: rowMapper.apply(input, row) // combine/reduce: for each intermediateOutputRow, output <- resultReducer.apply(intermediateOutputRow, output) return
output; } But I'm not sure this has the right level of generality (too general? Not general enough?). I'm definitely not convinced that overloading times() to mean many different things is really
wise. Having it mean (A)x vs. (A + lambda I)x could totally be fine, however. It's defining the matrix to have a extremely compressed way of showing it's diagonal.
I prefer a class that implements a virtual matrix.
I have a patch that I will attach shortly that illustrates this.
Ted Dunning
added a comment -
I prefer a class that implements a virtual matrix. I have a patch that I will attach shortly that illustrates this.
Here is an alternative to a switch in the solver. We would use
s.solve(new SquaredMatrix(A, lambda), b)
to solve A'A + lambda I
Ted Dunning
added a comment -
Here is an alternative to a switch in the solver. We would use s.solve(new SquaredMatrix(A, lambda), b) to solve A'A + lambda I
Yes, I'm talking about the logic inside of things like LanczosSolver. Ted pointed out that we have a number of algorithms that use matrix/vector multiply as a fundamental operation but that have
special case code to handle certain common matrix forms. It's a bit redundant to have these cases in each algorithm. It also means that every time you create new special form matrix, you have to
modify each of these algorithms to handle that form.
I don't think we're suggesting overloading what times() means. Rather, we're suggesting having subclasses of DistributedRowMatrix (or possibly separate implementations of VectorIterable) for special
form matrices whose internal representations may be done in a more efficient manner. E.g. define a "SquaredDistributedMatrix" class that represents a matrix of the form B = A'A. All the operations,
including times() mathematically mean exactly what they should: B.times
Jonathan Traupman
added a comment -
Yes, I'm talking about the logic inside of things like LanczosSolver. Ted pointed out that we have a number of algorithms that use matrix/vector multiply as a fundamental operation but that have
special case code to handle certain common matrix forms. It's a bit redundant to have these cases in each algorithm. It also means that every time you create new special form matrix, you have to
modify each of these algorithms to handle that form. I don't think we're suggesting overloading what times() means. Rather, we're suggesting having subclasses of DistributedRowMatrix (or possibly
separate implementations of VectorIterable) for special form matrices whose internal representations may be done in a more efficient manner. E.g. define a "SquaredDistributedMatrix" class that
represents a matrix of the form B = A'A. All the operations, including times() mathematically mean exactly what they should: B.times means Bx. Under the hood, it will be implemented as A'(Ax) because
it's more efficient, but that implementation detail shouldn't matter or be exposed to an algorithm that's just interested in doing a matrix/vector multiply. Likewise for (A + lambda * I) or (A'A +
lamdba * I) or (A'A + B'B) or band-diagonal matrices. The specific implementation of the times() method takes care of the representational details so that any algorithm that accepts one of these
matrix types can operate on any of them.
Sorry for posting a patch that is different in function and which shadows the older patch. The latest one I posted
has just the virtual matrix stuff. The earlier one that Jonathan posted has his solver code as well.
So are we ready to integrate the three strands of development here (LSMR, CG and VirtualMatrix?)
Or do we need to think and talk a bit more?
Ted Dunning
added a comment -
All, Sorry for posting a patch that is different in function and which shadows the older patch. The latest one I posted has just the virtual matrix stuff. The earlier one that Jonathan posted has his
solver code as well. So are we ready to integrate the three strands of development here (LSMR, CG and VirtualMatrix?) Or do we need to think and talk a bit more?
Ok, you guys have convinced me (esp Ted's version, with the virtual matrix idea, but it's basically the same thing, because DRM just wraps an HDFS file). We'll either need to throw
UnsupportedOperationException for a lot of methods, or else define a nice simple super-interface to VectorIterable which only defines Vector times(Vector input), and maybe VectorIterable times
(VectorIterable m).
Jake Mannix
added a comment -
Ok, you guys have convinced me (esp Ted's version, with the virtual matrix idea, but it's basically the same thing, because DRM just wraps an HDFS file). We'll either need to throw
UnsupportedOperationException for a lot of methods, or else define a nice simple super-interface to VectorIterable which only defines Vector times(Vector input), and maybe VectorIterable times
(VectorIterable m).
In fact, I'll say that I'd love to have a chance to kill the oh-so-poorly-named timesSquared() method. If we just had a virtual SquaredMatrix whose times() method was the implementation we currently
have, that would be awesome.
Jake Mannix
added a comment -
In fact, I'll say that I'd love to have a chance to kill the oh-so-poorly-named timesSquared() method. If we just had a virtual SquaredMatrix whose times() method was the implementation we currently
have, that would be awesome.
I should have some time this weekend to pull all this stuff together into an integrated patch with the virtual matrix code, conj. gradient, and LSMR.
A few questions, though:
□ with the virtual matrices, do we want two classes for A + lambda * I and A'A + lambda * I, or 4 classes for each of A, A'A, A + lambda * I, and A'A + lambda * I?
□ should we make the virtual matrices subclasses of AbstractMatrix as in Ted's patch or implementations of VectorIterable like the current DistributedRowMatrix?
□ what should we do with timesSquared() and the isSymmetric flag in DistributedLanczos solver? Remove it? Mark it deprecated? Leave it unchanged?
□ is several separate patches for the different pieces preferred or is one big patch easier?
Jonathan Traupman
added a comment -
I should have some time this weekend to pull all this stuff together into an integrated patch with the virtual matrix code, conj. gradient, and LSMR. A few questions, though: with the virtual
matrices, do we want two classes for A + lambda * I and A'A + lambda * I, or 4 classes for each of A, A'A, A + lambda * I, and A'A + lambda * I? should we make the virtual matrices subclasses of
AbstractMatrix as in Ted's patch or implementations of VectorIterable like the current DistributedRowMatrix? what should we do with timesSquared() and the isSymmetric flag in DistributedLanczos
solver? Remove it? Mark it deprecated? Leave it unchanged? is several separate patches for the different pieces preferred or is one big patch easier?
added a comment -
Integrated in Mahout-Quality #769 (See https://builds.apache.org/hudson/job/Mahout-Quality/769/ ) MAHOUT-672 - the forgotten files
with the virtual matrices, do we want two classes for A + lambda * I and A'A + lambda * I, or 4 classes for each of A, A'A, A + lambda * I, and A'A + lambda * I?
I think 2. It is easy enough to handle the lambda = 0 case with another constructor. I think that A and A' A are fundamentally different, however.
should we make the virtual matrices subclasses of AbstractMatrix as in Ted's patch or implementations of VectorIterable like the current DistributedRowMatrix?
Or make two variants?
what should we do with timesSquared() and the isSymmetric flag in DistributedLanczos solver? Remove it? Mark it deprecated? Leave it unchanged?
Deprecating it would be nice. Jake should have a major vote.
is several separate patches for the different pieces preferred or is one big patch easier?
One big one is much easier to deal with.
Ted Dunning
added a comment -
with the virtual matrices, do we want two classes for A + lambda * I and A'A + lambda * I, or 4 classes for each of A, A'A, A + lambda * I, and A'A + lambda * I? I think 2. It is easy enough to
handle the lambda = 0 case with another constructor. I think that A and A' A are fundamentally different, however. should we make the virtual matrices subclasses of AbstractMatrix as in Ted's patch
or implementations of VectorIterable like the current DistributedRowMatrix? Or make two variants? what should we do with timesSquared() and the isSymmetric flag in DistributedLanczos solver? Remove
it? Mark it deprecated? Leave it unchanged? Deprecating it would be nice. Jake should have a major vote. is several separate patches for the different pieces preferred or is one big patch easier? One
big one is much easier to deal with.
Yes, a SquaredMatrix and a PlusIdentityMultipleMatrix (? ugly name) would be enough, if composable properly.
We might need two variants, sadly. Maybe we should migrate VectorIterable to some other abstract base class (get rid of interface, for previously discussed interface/abstract class reasons), and give
it a better name. Maybe that would make it easier to not have two variants? It would be a class which just has the times(Vector) and times(Matrix) methods, and that's almost it? (numRows/numCols too,
I guess).
As for LanczosSolver, please check out the patch for [DEL:MAHOUT-319:DEL]. The api for solve is most likely changing anyways. And I'm in favor of just removing timesSquared() and isSymmetric, not
marking deprecated. Still pre-1.0-days, folks!
Jake Mannix
added a comment -
Yes, a SquaredMatrix and a PlusIdentityMultipleMatrix (? ugly name) would be enough, if composable properly. We might need two variants, sadly. Maybe we should migrate VectorIterable to some other
abstract base class (get rid of interface, for previously discussed interface/abstract class reasons), and give it a better name. Maybe that would make it easier to not have two variants? It would be
a class which just has the times(Vector) and times(Matrix) methods, and that's almost it? (numRows/numCols too, I guess). As for LanczosSolver, please check out the patch for MAHOUT-319 . The api for
solve is most likely changing anyways. And I'm in favor of just removing timesSquared() and isSymmetric, not marking deprecated. Still pre-1.0-days, folks!
Sounds like removing the timesSquared method is moving ahead of deprecating. I would prefer to remove it, but would defer to anybody who had an objection.
Ted Dunning
added a comment -
PlusIdentityMultipleMatrix DiagonalOffsetMatrix? Sounds like removing the timesSquared method is moving ahead of deprecating. I would prefer to remove it, but would defer to anybody who had an
I thought about calling it DiagonalOffsetMatrix, but these are a proper subset of multiples of the identity.
Jake Mannix
added a comment -
I thought about calling it DiagonalOffsetMatrix, but these are a proper subset of multiples of the identity.
For that matter, why not call it DiagonalOffsetMatrix and just have the identity case be a special case?
Ted Dunning
added a comment -
OK. IdentityOffsetMatrix? For that matter, why not call it DiagonalOffsetMatrix and just have the identity case be a special case?
Got a bit of this done this evening, but ran into a roadblock: I had to separate out the new VirtualMatrix interface from VectorIterable. VirtualMatrix contains the times() method while
VectorIterable keeps the iterator() and iterateAll() methods. I had to do this because there's no efficient way to iterate the rows of a squared A'A matrix without actually constructing the full
product matrix.
However, when I started looking at porting the Lanczos solver to use the new VirtualMatrix type, the core algorithm translates fine but the getInitialVector() routine, which relies on an iteration
through the data matrix, presents difficulties.
The easiest path through this impasse that I can see would be defining the new DistributedSquaredMatrix (which implements A'A) to also implement VectorIterable, but have it iterate over its
underlying source matrix A, rather than the product matrix. This would preserve the current behavior of Lanczos solver albeit at the expense of an iterator on DistributedSquaredMatrix that doesn't
make a great deal of sense in a more general context. This solution would probably not work for the diagonal offset case, because it's unclear how to transform the iterated rows using the offset. We
could always define a "IterableVirtualMatrix" interface that extends both VectorIterable and VirtualMatrix for algorithms, like Lanczos, that require it, though I'm still bothered by the weird
iterator semantics.
The other possible solution I considered would be to add the times(VirtualMatrix m) method to the VirtualMatrix interface, then rewriting the getInitialVector() routine in terms of fundamental matrix
operations: for the symmetric case, scaleFactor looks to be trace(A*A) and the initial vector looks to be A times a vector of all ones. The asymmetric case is mathematically different, but I don't
know enough about Lanczos to fully understand why. Unfortunately, implementing matrix multiplication with virtual matrices may be hard, or at the very least computationally expensive.
Jonathan Traupman
added a comment -
Got a bit of this done this evening, but ran into a roadblock: I had to separate out the new VirtualMatrix interface from VectorIterable. VirtualMatrix contains the times() method while
VectorIterable keeps the iterator() and iterateAll() methods. I had to do this because there's no efficient way to iterate the rows of a squared A'A matrix without actually constructing the full
product matrix. However, when I started looking at porting the Lanczos solver to use the new VirtualMatrix type, the core algorithm translates fine but the getInitialVector() routine, which relies on
an iteration through the data matrix, presents difficulties. The easiest path through this impasse that I can see would be defining the new DistributedSquaredMatrix (which implements A'A) to also
implement VectorIterable, but have it iterate over its underlying source matrix A, rather than the product matrix. This would preserve the current behavior of Lanczos solver albeit at the expense of
an iterator on DistributedSquaredMatrix that doesn't make a great deal of sense in a more general context. This solution would probably not work for the diagonal offset case, because it's unclear how
to transform the iterated rows using the offset. We could always define a "IterableVirtualMatrix" interface that extends both VectorIterable and VirtualMatrix for algorithms, like Lanczos, that
require it, though I'm still bothered by the weird iterator semantics. The other possible solution I considered would be to add the times(VirtualMatrix m) method to the VirtualMatrix interface, then
rewriting the getInitialVector() routine in terms of fundamental matrix operations: for the symmetric case, scaleFactor looks to be trace(A*A) and the initial vector looks to be A times a vector of
all ones. The asymmetric case is mathematically different, but I don't know enough about Lanczos to fully understand why. Unfortunately, implementing matrix multiplication with virtual matrices may
be hard, or at the very least computationally expensive. Thoughts?
I think we need to kill VectorIterable, and replace it with something like "LinearOperator", which just has:
Vector times(Vector)
LinearOperator times(LinearOperator)
LinearOperator transpose()
int domainDimension() // ie numCols
int rangeDimension() // ie numRows
and no iterator methods.
getInitialVector() doesn't need to be implemented the way it is. LanczosSolver uses the iterator to calculate a good starting vector, but it doesn't need to: DistributedLanczosSolver just takes the
vector of all 1's (normalized), and that works great in practice. Let's just change the behavior of LanczosSolver to do this as well, skipping on the iteration.
Before you get too involved with this refactoring on trunk, Jonathan, you should be careful: as I mentioned above, you're likely going to conflict with my changes for [DEL:MAHOUT-319:DEL]. They're
API changes to LanczosSolver's core solve() method.
Jake Mannix
added a comment -
I think we need to kill VectorIterable, and replace it with something like "LinearOperator", which just has: Vector times(Vector) LinearOperator times(LinearOperator) LinearOperator transpose() int
domainDimension() // ie numCols int rangeDimension() // ie numRows and no iterator methods. getInitialVector() doesn't need to be implemented the way it is. LanczosSolver uses the iterator to
calculate a good starting vector, but it doesn't need to: DistributedLanczosSolver just takes the vector of all 1's (normalized), and that works great in practice. Let's just change the behavior of
LanczosSolver to do this as well, skipping on the iteration. Before you get too involved with this refactoring on trunk, Jonathan, you should be careful: as I mentioned above, you're likely going to
conflict with my changes for MAHOUT-319 . They're API changes to LanczosSolver's core solve() method.
For on-the-fly collaboration, I've cloned apache/mahout.git on GitHub, and applied my [DEL:MAHOUT-319:DEL] patch to it, and will be continuing on there throughout the week. Clone me there and we can
avoid collisions.
Jake Mannix
added a comment -
For on-the-fly collaboration, I've cloned apache/mahout.git on GitHub, and applied my MAHOUT-319 patch to it, and will be continuing on there throughout the week. Clone me there and we can avoid
OK, sounds good. The VirtualMatrix stuff I've written so far looks a lot like the signature for LinearOperator you described. I can rename it easily enough, and "LinearOperator" has a good ring to
I looked over the mahout-319 patch and I don't think the conflicts will be too bad. Mostly it will just be replacing VectorIterable -> LinearOperator in a bunch of places. I'll clone your github repo
to do the work off that. If mahout-319 is going out soon, we should probably just back this work up behind it, since I don't think there's any urgency to it.
I'll make the changes to getInitialVector() as you suggest, that should be easy.
Jonathan Traupman
added a comment -
OK, sounds good. The VirtualMatrix stuff I've written so far looks a lot like the signature for LinearOperator you described. I can rename it easily enough, and "LinearOperator" has a good ring to
it. I looked over the mahout-319 patch and I don't think the conflicts will be too bad. Mostly it will just be replacing VectorIterable -> LinearOperator in a bunch of places. I'll clone your github
repo to do the work off that. If mahout-319 is going out soon, we should probably just back this work up behind it, since I don't think there's any urgency to it. I'll make the changes to
getInitialVector() as you suggest, that should be easy.
OK. Here is a patch combining all the stuff that we've been talking about with this issue:
□ LinearOperators and refactor of various algorithms to use LinearOperators instead of VectorIterables
□ Conjugate gradient solver from the original patch
□ Ted's LSMR implementation, refactored to use LinearOperators
The patch is from git diff, so you'll need to use "patch -p1" to apply it to trunk in svn.
I've tested that it applies successfully to a copy of trunk checked out from SVN. Everything compiles and tests all pass.
Jonathan Traupman
added a comment -
OK. Here is a patch combining all the stuff that we've been talking about with this issue: LinearOperators and refactor of various algorithms to use LinearOperators instead of VectorIterables
Conjugate gradient solver from the original patch Ted's LSMR implementation, refactored to use LinearOperators The patch is from git diff, so you'll need to use "patch -p1" to apply it to trunk in
svn. I've tested that it applies successfully to a copy of trunk checked out from SVN. Everything compiles and tests all pass.
On Mahout, we don't have the magic application of patches that Hadoop and Zookeeper have. If we did, then using git diff --no-index would prevent the need for -p1 and would make the scripts work. As
I mentioned, this doesn't matter here, but is very nice on those other projects.
Ted Dunning
added a comment -
Jonathan, On Mahout, we don't have the magic application of patches that Hadoop and Zookeeper have. If we did, then using git diff --no-index would prevent the need for -p1 and would make the scripts
work. As I mentioned, this doesn't matter here, but is very nice on those other projects.
Canceling the old stuff uploaded to this issue.
OK, here's a combined patch in SVN format. Applied it to a clean trunk w/o problems. All tests pass.
Jonathan Traupman
added a comment -
OK, here's a combined patch in SVN format. Applied it to a clean trunk w/o problems. All tests pass.
Use the "mahout-672-combined.patch" attachment and ignore the rest.
Jonathan Traupman
added a comment -
Use the "mahout-672-combined.patch" attachment and ignore the rest.
Sorry I haven't updated this issue in a while – got very busy with work, etc.
Anyway, this is a new patch based against the latest trunk. All the unit tests pass and it should be ready to go with the LSMR, conjugate gradient, and linear operator stuff.
Jonathan Traupman
added a comment -
Sorry I haven't updated this issue in a while – got very busy with work, etc. Anyway, this is a new patch based against the latest trunk. All the unit tests pass and it should be ready to go with the
LSMR, conjugate gradient, and linear operator stuff.
This is a big patch and I'm not qualified to review it. But I noticed a few small issues. Look at DistributedRowMatrix for instance – these changes should not be applied. There are also some spurious
whitespace and import changes.
It's also a pretty big patch. Are there more opportunities for reuse? thinking of AbstractLinearOperator for instance.
Sean Owen
added a comment -
This is a big patch and I'm not qualified to review it. But I noticed a few small issues. Look at DistributedRowMatrix for instance – these changes should not be applied. There are also some spurious
whitespace and import changes. It's also a pretty big patch. Are there more opportunities for reuse? thinking of AbstractLinearOperator for instance.
Yes, it is a big a patch...wasn't always this way. Here's a brief summary of this came to be the monster it is:
□ originally, this was just an implementation of a conjugate gradient solver for linear systems that worked with either normal or distributed matrices.
□ Ted mentioned that he had some mostly completed LSMR code that did very similar stuff and asked if I could integrate it with this patch, which I did.
□ a long discussion between Ted, Jake and me ensued about how a lot of algorithms (e.g. CG, LSMR, Lanczos SVD) all used the same concept of a matrix that could be multiplied by a vector but that
didn't need row-wise or element-wise access to the data in the matrix. After much back and forth, we settled on the name "LinearOperator."
The LinearOperator stuff is disruptive to the code base, but it does provide some nice functionality, too. For example, you can implement things like preconditioners, diagonal offsets (e.g. ridge
regression) and other transformations to the data efficiently using linear operators without needing to either actually modify your underlying data set or add the functionality to the specific
algorithm you're using. This was the original motivation behind it, since I had included a diagonal offset option for low-rank matrices in my CG code but that wasn't in Ted's LSMR implementation. We
decided that it might be better to put this in some common place that all similar algorithms could use for free. Since all matrices are linear operators, but the converse isn't true, it was decided
that Matrix should be a subclass of LinearOperator, not the other way around.
One thing I'm not 100% comfortable with is the parallel interface and class hierarchies (i.e. LinearOperator, Matrix vs. AbstractLinearOperator, AbstractMatrix). I'd like to see the interfaces go
away in favor of abstract classes, but I don't recall us reaching any consensus on this.
I have some time to work on this stuff now (after a long busy spell), so if you can send me specific issues (e.g. the DistributedRowMatrix stuff you mentioned) I'll try to take a look at it.
Jonathan Traupman
added a comment -
Yes, it is a big a patch...wasn't always this way. Here's a brief summary of this came to be the monster it is: originally, this was just an implementation of a conjugate gradient solver for linear
systems that worked with either normal or distributed matrices. Ted mentioned that he had some mostly completed LSMR code that did very similar stuff and asked if I could integrate it with this
patch, which I did. a long discussion between Ted, Jake and me ensued about how a lot of algorithms (e.g. CG, LSMR, Lanczos SVD) all used the same concept of a matrix that could be multiplied by a
vector but that didn't need row-wise or element-wise access to the data in the matrix. After much back and forth, we settled on the name "LinearOperator." The LinearOperator stuff is disruptive to
the code base, but it does provide some nice functionality, too. For example, you can implement things like preconditioners, diagonal offsets (e.g. ridge regression) and other transformations to the
data efficiently using linear operators without needing to either actually modify your underlying data set or add the functionality to the specific algorithm you're using. This was the original
motivation behind it, since I had included a diagonal offset option for low-rank matrices in my CG code but that wasn't in Ted's LSMR implementation. We decided that it might be better to put this in
some common place that all similar algorithms could use for free. Since all matrices are linear operators, but the converse isn't true, it was decided that Matrix should be a subclass of
LinearOperator, not the other way around. One thing I'm not 100% comfortable with is the parallel interface and class hierarchies (i.e. LinearOperator, Matrix vs. AbstractLinearOperator,
AbstractMatrix). I'd like to see the interfaces go away in favor of abstract classes, but I don't recall us reaching any consensus on this. I have some time to work on this stuff now (after a long
busy spell), so if you can send me specific issues (e.g. the DistributedRowMatrix stuff you mentioned) I'll try to take a look at it.
May I suggest that a redo of Matrices include a solution to the double-dispatch problem?
In this case, there are many variations of the exact code to apply for Matrix :: Vector operations, and way too many uses of instanceof.
Also, the LinearOperator suite is big enough to be its own patch.
Lance Norskog
added a comment -
May I suggest that a redo of Matrices include a solution to the double-dispatch problem? Double Dispatch In this case, there are many variations of the exact code to apply for Matrix :: Vector
operations, and way too many uses of instanceof . Also, the LinearOperator suite is big enough to be its own patch.
I removed all the matrix/vector changes and linear operator stuff from this patch, so this code just implements the conjugate gradient and LSMR solvers using the 0.5 standard linear algebra stuff.
I'll create a new issue for the linear operators and other linear algebra refactoring. I'm not sure when I'll have the time to work on it, but I'll try to implement the suggestions made here.
Since I'd like to get the linear operator stuff out sooner rather than later, I did not add the code for the A'A and (A + kI) cases back to the CG implementation. So for now, the CG solver will only
work for symmetric pos. def. matrices.
Jonathan Traupman
added a comment -
I removed all the matrix/vector changes and linear operator stuff from this patch, so this code just implements the conjugate gradient and LSMR solvers using the 0.5 standard linear algebra stuff.
I'll create a new issue for the linear operators and other linear algebra refactoring. I'm not sure when I'll have the time to work on it, but I'll try to implement the suggestions made here. Since
I'd like to get the linear operator stuff out sooner rather than later, I did not add the code for the A'A and (A + kI) cases back to the CG implementation. So for now, the CG solver will only work
for symmetric pos. def. matrices.
Folks – what's the status on this? It's been sitting about for a few months. Is this going to go in for 0.6 or should we retire it?
Sean Owen
added a comment -
Folks – what's the status on this? It's been sitting about for a few months. Is this going to go in for 0.6 or should we retire it?
The currently attached patch was good to go when I uploaded it a few months back. I can verify this weekend that it still applies cleanly. I took out all the linear operator stuff per request, but
the current patch has working implementations of CG and LSMR for symmetric positive definite matrices.
The ability to operate on other matrices (e.g. of the form A'A) is no longer in this patch/issue because it relies on the linear operator stuff, but there is no reason to wait for that to get this
core functionality.
Jonathan Traupman
added a comment -
The currently attached patch was good to go when I uploaded it a few months back. I can verify this weekend that it still applies cleanly. I took out all the linear operator stuff per request, but
the current patch has working implementations of CG and LSMR for symmetric positive definite matrices. The ability to operate on other matrices (e.g. of the form A'A) is no longer in this patch/issue
because it relies on the linear operator stuff, but there is no reason to wait for that to get this core functionality.
Thank you for this work. It would be very interesting to have solver like this.
One question I have is, for a mapreduce version, how many Map reduce steps this would require? And secondly, does it use any interestingly sized side information in mapppers or reducers?
I skimmed the patch and it mentions it may take up to numcols iterations assuming fixed lambda – how do these iterations map into mapreduce steps?
Thank you.
Dmitriy Lyubimov
added a comment -
Jonathan, Thank you for this work. It would be very interesting to have solver like this. One question I have is, for a mapreduce version, how many Map reduce steps this would require? And secondly,
does it use any interestingly sized side information in mapppers or reducers? I skimmed the patch and it mentions it may take up to numcols iterations assuming fixed lambda – how do these iterations
map into mapreduce steps? Thank you.
Here is an updated patch that applies to the current trunk. It compiles and all tests pass.
The only change from the July 25 patch is to remove a call to the now removed Vector.addTo() method.
Jonathan Traupman
added a comment -
Here is an updated patch that applies to the current trunk. It compiles and all tests pass. The only change from the July 25 patch is to remove a call to the now removed Vector.addTo() method.
Basically, at the core of the CG solver is a matrix/vector multiply, so we get the map/reduce implementation by using a DistributedRowMatrix instead of an in-memory matrix. Since we do one matrix/
vector multiply per iteration, it will require one map/reduce job per iteration, which somewhat limits its performance – there's a large range of data sizes that could benefit from distributed
computation but that get bogged down by Hadoop's slow job setup/teardown. Essentially, we're looking at many of the same tradeoffs we have with the distributed Lanczos decomposition stuff.
The mappers and reducers do not use much memory for side storage. I'm not super familiar with the guts of DistributedRowMatrix, but I believe the only side information it needs is the vector it is
multiplying the matrix by, so a couple of MB max.
If Mahout ever adopts something like Giraph, we can probably make a lot of these distributed iterative linear algebra algorithms a lot more efficient.
Jonathan Traupman
added a comment -
Dmitriy- Basically, at the core of the CG solver is a matrix/vector multiply, so we get the map/reduce implementation by using a DistributedRowMatrix instead of an in-memory matrix. Since we do one
matrix/vector multiply per iteration, it will require one map/reduce job per iteration, which somewhat limits its performance – there's a large range of data sizes that could benefit from distributed
computation but that get bogged down by Hadoop's slow job setup/teardown. Essentially, we're looking at many of the same tradeoffs we have with the distributed Lanczos decomposition stuff. The
mappers and reducers do not use much memory for side storage. I'm not super familiar with the guts of DistributedRowMatrix, but I believe the only side information it needs is the vector it is
multiplying the matrix by, so a couple of MB max. If Mahout ever adopts something like Giraph, we can probably make a lot of these distributed iterative linear algebra algorithms a lot more
Is it possible to multiply by many vectors at once to accelerate the convergence here by essentially exploring in multiple directions at once?
Ted Dunning
added a comment -
Jonathan, Is it possible to multiply by many vectors at once to accelerate the convergence here by essentially exploring in multiple directions at once?
Hey Jonathan,
I gather you replaced a.addTo(b) with b.assign(a, Functions.PLUS)? If so, then all will be well.
Jake Mannix
added a comment -
Hey Jonathan, I gather you replaced a.addTo(b) with b.assign(a, Functions.PLUS)? If so, then all will be well.
Reply to Ted's and Jake's comments:
> Is it possible to multiply by many vectors at once to accelerate the convergence here by essentially exploring in multiple directions at once?
This might be possible, but I don't think it's just an easy tweak. At iteration i, we compute the conjugate gradient, then move up it to a local min and repeat. We don't know the direction we're
going to go at i+1 until after we've finished iteration i. To do what you suggest would mean moving along a vector that's different than the gradient, trying to collapse multiple CG steps into one.
I'd imagine there's some literature on this (if not, might be a fruitful avenue of research).
However, my gut reaction is 1) this won't just be a simple modification to this algorithm and 2) the technique used to approximate the gradient at i+1 while still on iteration i might involve
operations that will make a distributed version more difficult/impossible. E.g. I've seen that the LSMR solver often converges in fewer steps than the CG one, but it's doing enough additional stuff
with the data matrix that creating a map/reduce version would be a lot more work than just using a DistributedRowMatrix.
> I gather you replaced a.addTo(b) with b.assign(a, Functions.PLUS)? If so, then all will be well.
Yes, that's all I did.
Anyway, I'd really like to reach some closure on this issue. These two algorithms aren't the end-all be-all of linear system solvers, but I think they're useful in their current form and can be a
foundation for further work.
Jonathan Traupman
added a comment -
Reply to Ted's and Jake's comments: > Is it possible to multiply by many vectors at once to accelerate the convergence here by essentially exploring in multiple directions at once? This might be
possible, but I don't think it's just an easy tweak. At iteration i, we compute the conjugate gradient, then move up it to a local min and repeat. We don't know the direction we're going to go at i+1
until after we've finished iteration i. To do what you suggest would mean moving along a vector that's different than the gradient, trying to collapse multiple CG steps into one. I'd imagine there's
some literature on this (if not, might be a fruitful avenue of research). However, my gut reaction is 1) this won't just be a simple modification to this algorithm and 2) the technique used to
approximate the gradient at i+1 while still on iteration i might involve operations that will make a distributed version more difficult/impossible. E.g. I've seen that the LSMR solver often converges
in fewer steps than the CG one, but it's doing enough additional stuff with the data matrix that creating a map/reduce version would be a lot more work than just using a DistributedRowMatrix. > I
gather you replaced a.addTo(b) with b.assign(a, Functions.PLUS)? If so, then all will be well. Yes, that's all I did. Anyway, I'd really like to reach some closure on this issue. These two algorithms
aren't the end-all be-all of linear system solvers, but I think they're useful in their current form and can be a foundation for further work.
> Anyway, I'd really like to reach some closure on this issue. These two algorithms aren't the end-all be-all of linear system solvers, but I think they're useful in their current form and can be a
foundation for further work.
I'll try out the patch now!
Jake Mannix
added a comment -
> Anyway, I'd really like to reach some closure on this issue. These two algorithms aren't the end-all be-all of linear system solvers, but I think they're useful in their current form and can be a
foundation for further work. I'll try out the patch now!
Regarding doing many multiplications at once, I did some of the math just now and it looks like you can build a solver that does this sort of thing, but the resulting algorithm really begins to look
more like the stochastic projection for SVD than like CG.
Let's get this in place.
Ted Dunning
added a comment -
Regarding doing many multiplications at once, I did some of the math just now and it looks like you can build a solver that does this sort of thing, but the resulting algorithm really begins to look
more like the stochastic projection for SVD than like CG. Let's get this in place.
Basically, at the core of the CG solver is a matrix/vector multiply, so we get the map/reduce implementation by using a DistributedRowMatrix instead of an in-memory matrix. Since we do one matrix
/vector multiply per iteration, it will require one map/reduce job per iteration, which somewhat limits its performance – there's a large range of data sizes that could benefit from distributed
computation but that get bogged down by Hadoop's slow job setup/teardown. Essentially, we're looking at many of the same tradeoffs we have with the distributed Lanczos decomposition stuff.
Thank you, Jonathan.
Yeah, so i figured. that's my concern. That's the Achilles' heel of much of distributed stuff in Mahout. I.e. space of iterations (I feel) must be very close to O(1), otherwise it severely affects
stuff. Even using side information is not that painful it seems compared to iteration growth. That severely decreases pragmatical use.
My thinking is that we need to keep algorithms we recommend accountable to some standard. My understanding is that there's similar problem with ALS WR implementation right now. I.e. we can have it in
the codebase but Sebastien stops short of recommending it to folks on the list.
That's kind of the problem i touched recently : Mahout stuff is different in a sense that it requires deeper investigation of best parallelization strategy than just let's throw-it-at-it approach.
Even with matrix multiplications ther're notions that decrease computation time tenfold compared to DRM approach for some cases which are less than general but in practice are suprisingly common
(example of such are multiplication steps in SSVD). Hadoop sorting is not as inexpensive as its pitch may suggest. And tenfold is sort of interesting...it's a difference between 10 hours and 1 hour.
Anyway. I am ok with committing this with @Maturity(Experimental) until we confirm running time on some input. I will probably even have to check this out soon, i may have a use case for it soon.
Dmitriy Lyubimov
added a comment -
Basically, at the core of the CG solver is a matrix/vector multiply, so we get the map/reduce implementation by using a DistributedRowMatrix instead of an in-memory matrix. Since we do one matrix/
vector multiply per iteration, it will require one map/reduce job per iteration, which somewhat limits its performance – there's a large range of data sizes that could benefit from distributed
computation but that get bogged down by Hadoop's slow job setup/teardown. Essentially, we're looking at many of the same tradeoffs we have with the distributed Lanczos decomposition stuff. Thank you,
Jonathan. Yeah, so i figured. that's my concern. That's the Achilles' heel of much of distributed stuff in Mahout. I.e. space of iterations (I feel) must be very close to O(1), otherwise it severely
affects stuff. Even using side information is not that painful it seems compared to iteration growth. That severely decreases pragmatical use. My thinking is that we need to keep algorithms we
recommend accountable to some standard. My understanding is that there's similar problem with ALS WR implementation right now. I.e. we can have it in the codebase but Sebastien stops short of
recommending it to folks on the list. That's kind of the problem i touched recently : Mahout stuff is different in a sense that it requires deeper investigation of best parallelization strategy than
just let's throw-it-at-it approach. Even with matrix multiplications ther're notions that decrease computation time tenfold compared to DRM approach for some cases which are less than general but in
practice are suprisingly common (example of such are multiplication steps in SSVD). Hadoop sorting is not as inexpensive as its pitch may suggest. And tenfold is sort of interesting...it's a
difference between 10 hours and 1 hour. Anyway. I am ok with committing this with @Maturity(Experimental) until we confirm running time on some input. I will probably even have to check this out
soon, i may have a use case for it soon.
> That's the Achilles' heel of much of distributed stuff in Mahout. I.e. space of iterations (I feel) must be very close to O(1), otherwise it severely affects stuff. Even using side information is
not that painful it seems compared to iteration growth. That severely decreases pragmatical use.
I think it's a bit extreme to say we need to have nearly O(1) Map-reduce passes to be useful. Lots of iterative stuff requires quite a few passes before convergence (as you say: Lanczos and LDA both
fall into this realm), yet it's just the price you have to pay sometimes.
This may be similar.
Jonathan, what size inputs have you run this on, with what running time in comparison to the other algorithms we have? From what I can see, this looks good to commit as well.
Jake Mannix
added a comment -
> That's the Achilles' heel of much of distributed stuff in Mahout. I.e. space of iterations (I feel) must be very close to O(1), otherwise it severely affects stuff. Even using side information is
not that painful it seems compared to iteration growth. That severely decreases pragmatical use. I think it's a bit extreme to say we need to have nearly O(1) Map-reduce passes to be useful. Lots of
iterative stuff requires quite a few passes before convergence (as you say: Lanczos and LDA both fall into this realm), yet it's just the price you have to pay sometimes. This may be similar.
Jonathan, what size inputs have you run this on, with what running time in comparison to the other algorithms we have? From what I can see, this looks good to commit as well.
> Jonathan, what size inputs have you run this on, with what running time in comparison to the other algorithms we have? From what I can see, this looks good to commit as well.
The largest dataset I've run it on was a synthetic set of about the same size as the old distributed Lanczos test matrix (before it was made smaller to speed up the tests). I ended up using a smaller
matrix in the test because it was taking too long to run. I haven't tested it on truly huge matrices, but I can do that at some point if people are interested.
As for comparisons to other algorithms, both CG and LSMR ran faster than the QR decomp/solve that's already in Mahout on most of the test inputs I was playing with. They also ran faster than a
Cholesky decomp/solve that I had implemented but haven't submitted. I don't have numbers on me right now, though.
Between the two, LSMR often converges faster than CG, and when it does, it's faster. For problems where they both take the same number of steps, performance is about the same, with CG sometimes being
a bit quicker since it does less computation per iteration. The big advantage of the CG implementation is that it can use m/r, so should be scalable to larger matrices.
Jonathan Traupman
added a comment -
> Jonathan, what size inputs have you run this on, with what running time in comparison to the other algorithms we have? From what I can see, this looks good to commit as well. The largest dataset
I've run it on was a synthetic set of about the same size as the old distributed Lanczos test matrix (before it was made smaller to speed up the tests). I ended up using a smaller matrix in the test
because it was taking too long to run. I haven't tested it on truly huge matrices, but I can do that at some point if people are interested. As for comparisons to other algorithms, both CG and LSMR
ran faster than the QR decomp/solve that's already in Mahout on most of the test inputs I was playing with. They also ran faster than a Cholesky decomp/solve that I had implemented but haven't
submitted. I don't have numbers on me right now, though. Between the two, LSMR often converges faster than CG, and when it does, it's faster. For problems where they both take the same number of
steps, performance is about the same, with CG sometimes being a bit quicker since it does less computation per iteration. The big advantage of the CG implementation is that it can use m/r, so should
be scalable to larger matrices.
I've run the test suite and all tests are passing on my box too, so I'm going to commit this later today if nobody objects.
Jake Mannix
added a comment -
I've run the test suite and all tests are passing on my box too, so I'm going to commit this later today if nobody objects.
I think it's a bit extreme to say we need to have nearly O(1)
These may mean quite different things in practice. the devil is in the details. by ~O(1) i meant ok, if in practice it grows so slow that it's enought to process several billion rows (m) of input by
having 40 iterations, that's perhaps still 'nearly' O(1) in my definiton.
I just said that i seem to have gleaned in the javadoc explanation that with this patch
num of iterations ~ n (num of columns) so if for million columns it means a million iterations, that's probably not cool. On the other side, conversion will unlikely require a million. Then what? I
kind of still did not get a clear clarifications on the estimate of num iterations (except that it is not exactly O(1)).
Dmitriy Lyubimov
added a comment -
I think it's a bit extreme to say we need to have nearly O(1) These may mean quite different things in practice. the devil is in the details. by ~O(1) i meant ok, if in practice it grows so slow that
it's enought to process several billion rows (m) of input by having 40 iterations, that's perhaps still 'nearly' O(1) in my definiton. I just said that i seem to have gleaned in the javadoc
explanation that with this patch num of iterations ~ n (num of columns) so if for million columns it means a million iterations, that's probably not cool. On the other side, conversion will unlikely
require a million. Then what? I kind of still did not get a clear clarifications on the estimate of num iterations (except that it is not exactly O(1)).
I agree that having things that scale not as numDocs or numFeatures is pretty critical, if we're talking about map-reduce passes. This code doesn't fall into that trap, from what I can see.
As I've heard no objections, I committed this (revision 1188491). Thanks Jonathan!
Jake Mannix
added a comment -
I agree that having things that scale not as numDocs or numFeatures is pretty critical, if we're talking about map-reduce passes. This code doesn't fall into that trap, from what I can see. As I've
heard no objections, I committed this (revision 1188491). Thanks Jonathan!
committed at svn revision 1188491
Integrated in Mahout-Quality #1118 (See https://builds.apache.org/job/Mahout-Quality/1118/)
[DEL:MAHOUT-672:DEL] on behalf of jtraupman
jmannix : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1188491
Files :
• /mahout/trunk/core/src/main/java/org/apache/mahout/math/hadoop/solver
• /mahout/trunk/core/src/main/java/org/apache/mahout/math/hadoop/solver/DistributedConjugateGradientSolver.java
• /mahout/trunk/core/src/test/java/org/apache/mahout/math/hadoop/solver
• /mahout/trunk/core/src/test/java/org/apache/mahout/math/hadoop/solver/TestDistributedConjugateGradientSolver.java
• /mahout/trunk/core/src/test/java/org/apache/mahout/math/hadoop/solver/TestDistributedConjugateGradientSolverCLI.java
• /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver
• /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/ConjugateGradientSolver.java
• /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/JacobiConditioner.java
• /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/LSMR.java
• /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/Preconditioner.java
• /mahout/trunk/math/src/test/java/org/apache/mahout/math/solver
• /mahout/trunk/math/src/test/java/org/apache/mahout/math/solver/LSMRTest.java
• /mahout/trunk/math/src/test/java/org/apache/mahout/math/solver/TestConjugateGradientSolver.java
added a comment -
Integrated in Mahout-Quality #1118 (See https://builds.apache.org/job/Mahout-Quality/1118/ ) MAHOUT-672 on behalf of jtraupman jmannix : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&
rev=1188491 Files : /mahout/trunk/core/src/main/java/org/apache/mahout/math/hadoop/solver /mahout/trunk/core/src/main/java/org/apache/mahout/math/hadoop/solver/DistributedConjugateGradientSolver.java
/mahout/trunk/core/src/test/java/org/apache/mahout/math/hadoop/solver /mahout/trunk/core/src/test/java/org/apache/mahout/math/hadoop/solver/TestDistributedConjugateGradientSolver.java /mahout/trunk/
core/src/test/java/org/apache/mahout/math/hadoop/solver/TestDistributedConjugateGradientSolverCLI.java /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver /mahout/trunk/math/src/main/java/
org/apache/mahout/math/solver/ConjugateGradientSolver.java /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/JacobiConditioner.java /mahout/trunk/math/src/main/java/org/apache/mahout/
math/solver/LSMR.java /mahout/trunk/math/src/main/java/org/apache/mahout/math/solver/Preconditioner.java /mahout/trunk/math/src/test/java/org/apache/mahout/math/solver /mahout/trunk/math/src/test/
java/org/apache/mahout/math/solver/LSMRTest.java /mahout/trunk/math/src/test/java/org/apache/mahout/math/solver/TestConjugateGradientSolver.java | {"url":"https://issues.apache.org/jira/browse/MAHOUT-672?focusedCommentId=13135034&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel","timestamp":"2014-04-18T01:27:38Z","content_type":null,"content_length":"273399","record_id":"<urn:uuid:9e8655fa-3f4d-46ee-8eec-14a632d78928>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Strøm-type" model structure on chain complexes?
up vote 8 down vote favorite
The Quillen model structure on spaces has weak equivalences given by the weak homotopy equivalences and the fibrations are the Serre fibrations. The cofibrations are characterized by the lifting
property, but in the end they are those inclusions which are built up by cell attachments (or are retracts of such things).
The Strøm model structure on spaces has weak equivalences = the homotopy equivalences and fibrations the Hurewicz fibrations. Cofibrations the closed inclusions which satisfy the homotopy extension
The projective model structure on non-negatively graded chain complexes over $\Bbb Z$ has fibrations given by the degreewise surjections and weak equivalences given by the quasi-isomorphisms.
Cofibrations are given by the degree-wise split inclusions such that the quotient complex is degree-wise free.
From the above, it would appear to me that the projective model structure on chain complexes is analogous to the Quillen model structure on spaces.
Is there a model structure on (non-negatively graded) chain complexes over $\Bbb Z$ for which the weak equivalences are the chain homotopy equivalences?
(Extra wish: I want cofibrations in the projective model structure to be a sub-class of the cofibrations in the model structure answering the question. Conjecturally, they should be the inclusions
satisfying the chain homotopy extension property.)
at.algebraic-topology model-categories
1 There is the paper "The homotopy category of chain complexes is a homotopy category" by Golasiński, Marek; Gromadzki, Grzegorz, ams.org/mathscinet-getitem?mr=713138. It treats all complexes, not
necessarily bounded on any side but should give what you want. Unfortunately, the paper seems not to be available online, so I cannot check whether your extra wish is fulfilled in that structure
without doing some thinking. – Theo Buehler Jan 19 '11 at 12:54
4 This seems to be the same as this question: mathoverflow.net/questions/42755/… Denis-Charles Cisinski provided a positive answer. – Neil Strickland Jan 19 '11 at 12:55
1 Thanks Neil, that does what I want. – John Klein Jan 19 '11 at 13:00
+1! Very good question and answer! – Harry Gindi Jan 19 '11 at 14:27
add comment
2 Answers
active oldest votes
There are several other papers, I think earlier ones, that cover this.
[32] M. Cole. The homotopy category of chain complexes is a homotopy category. Preprint (1990's)
[29] J. Daniel Christensen and Mark Hovey. Quillen model structures for relative homological algebra. Math. Proc. Cambridge Philos. Soc., 133(2):261–293, 2002.
[121] R. Schw¨anzl and R. M. Vogt. Strong cofibrations and fibrations in enriched categories. Arch. Math. (Basel), 79(6):449–462, 2002.
up vote 7 down
vote accepted The references are from the book ``More concise algebraic topology: localizations, completions, and model categories'', by Kate Ponto and myself. It will be published this year by the
University of Chicago Press. It includes and compares three natural model structures on spaces and chain complexes, the Strom model structure, the Quillen model structure, and Cole's
mixed model structures:
33] Michael Cole. Mixing model structures. Topology Appl., 153(7):1016–1032, 2006.
2 +1 mainly out of excitement for the new book! – Dylan Wilson Feb 4 '11 at 4:06
Thanks Peter. That's probably a definitive list. – John Klein Feb 4 '11 at 11:58
add comment
Very recently a paper appeared on the arxiv by Barthel-May-Riehl which addresses this question in a very complete way. It discusses the three model structures on DG-algebras (answering the
OPs question and covering the mixed model structure as well), then goes on to give six model structures on DG modules over a DGA. This paper generalizing the references in Peter May's
up vote 2 answer above and gives model category foundations for some classical constructions in differential graded algebra.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology model-categories or ask your own question. | {"url":"http://mathoverflow.net/questions/52508/strom-type-model-structure-on-chain-complexes","timestamp":"2014-04-21T09:58:07Z","content_type":null,"content_length":"61779","record_id":"<urn:uuid:1e198e4c-72a9-4fa4-8fc1-fb5bdfd265da>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
angular momentum
In classical physics, the angular momentum of a system is the momentum associated with its rotational motion. It is defined as the system's moment of inertia multiplied by its angular velocity.
In quantum mechanics, a system's total angular momentum is the sum of the angular momentum from its rotational motion (called orbital angular momentum) and its spin.
The term "baryon" refers to any particle in the Standard Model that is made of three quarks. Murray Gell-Mann arranged the baryons into a periodic table-like structure according to their baryon
number and strangeness (see Unit 1, Fig. 1). Protons and neutrons are the most familiar baryons.
beta decay
Beta decay is a type of radioactive decay in which a beta particle (electron or positron) is emitted together with a neutrino. Beta decay experiments provided the first evidence that neutrinos
exist, which was unexpected theoretically at the time. Beta decay proceeds via the weak interaction.
A boson is a particle with integer, rather than half-integer, spin. In the Standard Model, the force-carrying particles such as photons are bosons. Composite particles can also be bosons. Mesons
such as pions are bosons, as are ^4He atoms. See: fermion, meson, spin.
charge conjugation
Charge conjugation is an operation that changes a particle into its antiparticle.
chiral symmetry
A physical theory has chiral symmetry if it treats left-handed and right-handed particles on equal footing. Chiral symmetry is spontaneously broken in QCD.
In QCD, color is the name given to the charge associated with the strong force. While the electromagnetic force has positive and negative charges that cancel one another out, the strong force has
three types of color, red, green, and blue, that are canceled out by anti-red, anti-green, and anti-blue.
Compton scattering
Compton scattering is the scattering of photons from electrons. When Arthur Compton first explored this type of scattering experimentally by directing a beam of electrons onto a target crystal,
he found that the wavelength of the scattered photons was longer than the wavelength of the photons incident on the target, and that larger scattering angles were associated with longer
wavelengths. Compton explained this result by applying conservation of energy and momentum to the photon-electron collisions.
cross section
A cross section, or scattering cross section, is a measure of the probability of two particles interacting. It has units of area, and depends on the initial energies and trajectories of the
interacting particles as well as the details of the force that causes the particles to interact.
electromagnetic interaction
The electromagnetic interaction, or electromagnetic force, is one of the four fundamental forces of nature. Maxwell first understood at the end of the 19th century that the electric and magnetic
forces we experience in daily life are different manifestations of the same fundamental interaction. In modern physics, based on quantum field theory, electromagnetic interactions are described
by quantum electrodynamics or QED. The force-carrier particle associated with electromagnetic interactions is the photon.
A fermion is a particle with half-integer spin. The quarks and leptons of the Standard Model are fermions with a spin of 1/2. Composite particles can also be fermions. Baryons, such as protons
and neutrons, and atoms of the alkali metals are all fermions. See: alkali metal, baryon, boson, lepton, spin.
In general, a field is a mathematical function that has a value (or set of values) at all points in space. Familiar examples of classical fields are the gravitational field around a massive body
and the electric field around a charged particle. These fields can change in time, and display wave-like behavior. In quantum field theory, fields are fundamental objects, and particles
correspond to vibrations or ripples in a particular field.
In particle physics, the flavor of a particle is a set of quantum numbers that uniquely identify the type of particle it is. The quark flavors are up, down, charm, strange, top, and bottom. The
lepton flavors are electron, muon, tau, and their corresponding neutrinos. A particle will have a flavor quantum number of +1 in its flavor, and its antiparticle has a quantum number of -1 in the
same flavor. For example, an electron has electron flavor +1, and a positron has electron flavor of -1.
force carrier
In quantum field theory, vibrations in the field that correspond to a force give rise to particles called force carriers. Particles that interact via a particular force do so by exchanging these
force carrier particles. For example, the photon is a vibration of the electromagnetic field and the carrier of the electromagnetic force. Particles such as electrons, which have negative
electric charge, repel one another by exchanging virtual photons. The carrier of the strong force is the gluon, and the carrier particles of the weak force are the W and Z bosons. Force carriers
are always bosons, and may be either massless or massive.
Gluons are particles in the Standard Model that mediate strong interactions. Because gluons carry color charge, they can participate in the strong interaction in addition to mediating it. The
term "gluon" comes directly from the word glue, because gluons bind together into mesons.
The graviton is the postulated force carrier of the gravitational force in quantum theories of gravity that are analogous to the Standard Model. Gravitons have never been detected, nor is there a
viable theory of quantum gravity, so gravitons are not on the same experimental or theoretical footing as the other force carrier particles.
Gravity is the least understood of the four fundamental forces of nature. Unlike the strong force, weak force, and electromagnetic force, there is no viable quantum theory of gravity.
Nevertheless, physicists have derived some basic properties that a quantum theory of gravity must have, and have named its force-carrier particle the graviton.
Group is a mathematical term commonly used in particle physics. A group is a mathematical set together with at least one operation that explains how to combine any two elements of the group to
form a third element. The set and its operations must satisfy the mathematical properties of identity (there is an element that leaves other group elements unchanged when the two are combined),
closure (combining any two group elements yields another element in the group), associativity (it doesn't matter in what order you perform a series of operations on a list of elements so long as
the order of the list doesn't change), and invertability (every operation can be reversed by combining the result with another element in the group). For example, the set of real numbers is a
group with respect to the addition operator. A symmetry group is the set of all transformations that leave a physical system in a state indistinguishable from the starting state.
The term hadron refers to the Standard Model particle made of quarks. Mesons and baryons are classified as hadrons.
Handedness, also called "chirality," is a directional property that physical systems may exhibit. A system is "right handed" if it twists in the direction in which the fingers of your right hand
curl if your thumb is directed along the natural axis defined by the system. Most naturally occurring sugar molecules are right handed. Fundamental particles with spin also exhibit chirality. In
this case, the twist is defined by the particle's spin, and the natural axis by the direction in which the particle is moving. Electrons produced in beta-decay are nearly always left handed.
Heisenberg uncertainty principle
The Heisenberg uncertainty principle states that the values of certain pairs of observable quantities cannot be known with arbitrary precision. The most well-known variant states that the
uncertainty in a particle's momentum multiplied by the uncertainty in a particle's position must be greater than or equal to Planck's constant divided by 4
Higgs mechanism
The Higgs mechanism, named for Peter Higgs but actually proposed independently by several different groups of physicists in the early 1960s, is a theoretical framework that explains how
fundamental particles acquire mass. The Higgs field underwent a phase transition as the universe expanded and cooled, not unlike liquid water freezing into ice. The condensed Higgs field
interacts with the different massive particles with different couplings, giving them their unique masses. This suggests that particles that we can measure to have various masses were massless in
the early universe. Although the Higgs mechanism is an internally consistent theory that makes successful predictions about the masses of Standard Model particles, it has yet to be experimentally
verified. The clearest signature of the Higgs mechanism would be the detection of a Higgs boson, the particle associated with vibrations of the Higgs field.
In the terminology of particle physics, a jet is a highly directed spray of particles produced and detected in a collider experiment. A jet appears when a heavy quark is produced and decays into
a shower of quarks and gluons flying away from the center of the collision.
kinetic energy
Kinetic energy is the energy associated with the motion of a particle or system. In classical physics, the total energy is the sum of potential and kinetic energy.
Large Hadron Collider (LHC)
The Large Hadron Collider (LHC) is a particle accelerator operated at CERN on the outskirts of Geneva, Switzerland. The LHC accelerates two counter-propagating beams of protons in the 27 km
synchrotron beam tube formerly occupied by Large Electron-Positron Collider (LEP). It is the largest and brightest accelerator in the world, capable of producing proton-proton collisions with a
total energy of 14 TeV. Commissioned in 2008–09, the LHC is expected to find the Higgs boson, the last undiscovered particle in the Standard Model, as well as probe physics beyond the Standard
The Large Electron-Positron Collider (LEP) is a particle accelerator that was operated at CERN on the outskirts of Geneva, Switzerland, from 1989 to 2000. LEP accelerated counterpropagating beams
of electrons and positrons in a 27 km diameter synchrotron ring. With a total collision energy of 209 GeV, LEP was the most powerful electron-positron collider ever built. Notably, LEP enabled a
precision measurement of the mass of W and Z bosons, which provided solid experimental support for the Standard Model. In 2000, LEP was dismantled to make space for the LHC, which was built in
its place.
The leptons are a family of fundamental particles in the Standard Model. The lepton family has three generations, shown in Unit 1, Fig. 1: the electron and electron neutrino, the muon and muon
neutrino, and the tau and tau neutrino.
A macroscopic object, as opposed to a microscopic one, is large enough to be seen with the unaided eye. Often (but not always), classical physics is adequate to describe macroscopic objects, and
a quantum mechanical description is unnecessary.
The term meson refers to any particle in the Standard Model that is made of one quark and one anti-quark. Murray Gell-Mann arranged the leptons into a periodic-table-like structure according to
their electric charge and strangeness (see Unit 1, Fig. 1). Examples of mesons are pions and kaons.
Nambu-Goldstone theorem
The Nambu-Goldstone theorem states that the spontaneous breaking of a continuous symmetry generates new, massless particles.
Newton's law of universal gravitation
Newton's law of universal gravitation states that the gravitational force between two massive particles is proportional to the product of the two masses divided by the square of the distance
between them. The law of universal gravitation is sometimes called the "inverse square law." See: universal gravitational constant.
nuclear fission
Nuclear fission is the process by which the nucleus of an atom decays into a lighter nucleus, emitting some form of radiation. Nuclear fission reactions power nuclear reactors, and provide the
explosive energy in nuclear weapons.
nuclear fusion
Nuclear fusion is the process by which the nucleus of an atom absorbs other particles to form a heavier nucleus. This process releases energy when the nucleus produced in the fusion reaction is
not heavier than iron. Nuclear fusion is what powers stars, and is the source of virtually all the elements lighter than iron in the universe.
Parity is an operation that turns a particle or system of particles into its mirror image, reversing their direction of travel and physical positions.
In physics, the term phase has two distinct meanings. The first is a property of waves. If we think of a wave as having peaks and valleys with a zero-crossing between them, the phase of the wave
is defined as the distance between the first zero-crossing and the point in space defined as the origin. Two waves with the same frequency are "in phase" if they have the same phase and therefore
line up everywhere. Waves with the same frequency but different phases are "out of phase." The term phase also refers to states of matter. For example, water can exist in liquid, solid, and gas
phases. In each phase, the water molecules interact differently, and the aggregate of many molecules has distinct physical properties. Condensed matter systems can have interesting and exotic
phases, such as superfluid, superconducting, and quantum critical phases. Quantum fields such as the Higgs field can also exist in different phases.
Planck's constant
Planck's constant, denoted by the symbol h, has the value 6.626 x 10^-34 m^2 kg/s. It sets the characteristic scale of quantum mechanics. For example, energy is quantized in units of h multiplied
by a particle's characteristic frequency, and spin is quantized in units of h/2
potential energy
Potential energy is energy stored within a physical system. A mass held above the surface of the Earth has gravitational potential energy, two atoms bound in a molecule have chemical potential
energy, and two electric charges separated by some distance have electric potential energy. Potential energy can be converted into other forms of energy. If you release the mass, its
gravitational potential energy will be converted into kinetic energy as the mass accelerates downward. In the process, the gravitational force will do work on the mass. The force is proportional
to the rate at which the potential energy changes. It is common practice to write physical theories in terms of potential energy, and derive forces and interactions from the potential.
Any quantum system in which a physical property can take on only discrete values is said to be quantized. For instance, the energy of a confined particle is quantized. This is in contrast to a
situation in which the energy can vary continuously, which is the case for a free particle.
quantum electrodynamics
Quantum electrodynamics, or QED, is the quantum field theory that describes the electromagnetic force. In QED, electromagnetically charged particles interact by exchanging virtual photons, where
photons are the force carried of the electromagnetic force. QED is one of the most stringently tested theories in physics, with theory matching experiment to a part in 10^12.
quantum field theory (QFT)
Quantum field theory, or QFT, is a generalization of quantum mechanics capable of describing relativistic particles. It is currently the standard mathematical formalism used in particle physics,
as well as certain areas of condensed matter and atomic physics. In QFT, fields rather than particles are the fundamental objects. Particles correspond to vibrations of these fields. This
formulation puts particles and forces on equal footing, as both are described by fields. An interaction between two particles, which are vibrations in the field that correspond to that type of
particle, proceeds through the exchange of the particle that corresponds to a vibration in the field associated with the force. For example, electrons are vibrations of the electron field, and
photons are vibrations of the electromagnetic field. When two electrons repel, they exchange photons.
A relativistic particle is traveling close enough to the speed of light that classical physics does not provide a good description of its motion, and the effects described by Einstein's theories
of special and general relativity must be taken into account.
relativistic limit
In general, the energy of an individual particle is related to the sum of its mass energy and its kinetic energy by Einstein's equation E^2 = p^2c^2 + m^2c^4, where p is the particle's momentum,
m is its mass, and c is the speed of light. When a particle is moving very close to the speed of light, the first term (p^2c^2) is much larger than the second (m^2c^4), and for all practical
purposes the second term can be ignored. This approximation—ignoring the mass contribution to the energy of a particle—is called the "relativistic limit."
Rutherford scattering
The term Rutherford scattering comes from Ernest Rutherford's experiments that led to the discovery of the atomic nucleus. Rutherford directed a beam of alpha particles (which are equivalent to
helium nuclei) at a gold foil and observed that most of the alpha particles passed through the foil with minimal deflection, but that occasionally one bounced back as if it had struck something
In classical physics, space and time are considered separate things. Space is three-dimensional, and can be divided into a three-dimensional grid of cubes that describes the Euclidean geometry
familiar from high-school math class. Time is one-dimensional in classical physics. Einstein's theory of special relativity combines the three dimensions of space and one dimension of time into a
four-dimensional grid called "spacetime." Spacetime may be flat, in which case Euclidean geometry describes the three space dimensions, or curved. In Einstein's theory of general relativity, the
distribution of matter and energy in the universe determines the curvature of spacetime.
spontaneous symmetry breaking
Spontaneous symmetry breaking is said to occur when the theory that describes a system contains a symmetry that is not manifest in the ground state. A simple everyday example is a pencil balanced
on its tip. The pencil, which is symmetric about its long axis and equally likely to fall in any direction, is in an unstable equilibrium. If anything (spontaneously) disturbs the pencil, it will
fall over in a particular direction and the symmetry will no longer be manifest.
strong interaction
The strong interaction, or strong nuclear force, is one of the four fundamental forces of nature. It acts on quarks, binding them together into mesons. Unlike the other forces, the strong force
between two particles remains constant as the distance between them grows, but actually gets weaker when the particles get close enough together. This unique feature ensures that single quarks
are not found in nature. True to its name, the strong force is a few orders of magnitude stronger than the electromagnetic and weak interactions, and many orders of magnitude stronger than
In the theory of supersymmetry, every Standard Model particle has a corresponding "sparticle" partner with a spin that differs by 1/2. Superpartner is the general term for these partner
particles. The superpartner of a boson is always a fermion, and the superpartner of a fermion is always a boson. The superpartners have the same mass, charge, and other internal properties as
their Standard Model counterparts. See: supersymmetry.
Supersymmetry, or SUSY, is a proposed extension to the Standard Model that arose in the context of the search for a viable theory of quantum gravity. SUSY requires that every particle have a
corresponding superpartner with a spin that differs by 1/2. While no superpartner particles have yet been detected, SUSY is favored by many theorists because it is required by string theory and
addresses other outstanding problems in physics. For example, the lightest superpartner particle could comprise a significant portion of the dark matter.
symmetry transformation
A symmetry transformation is a transformation of a physical system that leaves it in an indistinguishable state from its starting state. For example, rotating a square by 90 degrees is a symmetry
transformation because the square looks exactly the same afterward.
virtual particle
A virtual particle is a particle that appears spontaneously and exists only for the amount of time allowed by the Heisenberg uncertainty principle. According to the uncertainty principle, the
product of the uncertainty of a measured energy and the uncertainty in the measurement time must be greater than Planck's constant divided by 4
weak interaction
The weak interaction, or weak force, is one of the four fundamental forces of nature. It is called "weak" because it is significantly weaker than both the strong force and the electromagnetic
force; however, it is still much stronger than gravity. The weak changes one flavor of quark into another, and is responsible for radioactive decay. | {"url":"http://www.learner.org/courses/physics/unit/unit_gloss.html?unit=2","timestamp":"2014-04-21T02:08:00Z","content_type":null,"content_length":"66063","record_id":"<urn:uuid:08e21d3d-dbef-40ab-ad26-495d419262c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Ask Dr. Math Archives: Elementary Place Value
This page:
place value
Dr. Math
See also the
Dr. Math FAQ:
rounding numbers
T2T FAQ:
place value help
3D and higher
Golden Ratio/
Fibonacci Sequence
Number Sense/
About Numbers
large numbers
place value
prime numbers
square roots
Word Problems
Browse Elementary Place Value
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Place value questions.
Rounding numbers.
Scientific notation.
Significant figures/digits.
If a length of 5.738km is rounded to 5.7km, what is the accuracy?
Why can't you just subtract the two times given in a word problem and get the time elapsed?
Why do we line up decimal points for addition and subtraction but not for division and multiplication?
Give the power of ten that the decimal must be multiplied by to eliminate the decimal point in the numbers: 3.825; 6.91; 19.207.
In the last 5 or 10 years, I have found an increase in the difficulty students have with place value and numeration and rounding. Is there some strategy, exercise, or drill to get these concepts
across to kids, some manipulatives that could be used?
What is a million seconds in weeks, days, hours, minutes, and seconds? What does unit conversion have to do with number bases?
What is the units digit of 6100?
On the one hand, 390 is the closest multiple of 10 to 389.512; on the other hand, it is the closest whole number to 389.512.
Is 1.9949 rounded to 2 decimal places 2.00 or 1.99?
Accurate measurement: What are significant digits? When do we use them? Why?
I came across a question asking for an answer with no more than "2 significant digits" - can you please explain what a significant digit is?
In which place is the digit 6 in the number 3164297 ?
Is it true that in France where we in America use decimals in math, they use commas, and where we use commas they use decimals?
Put 1000 $1 bills into 10 envelopes in such a way that someone can ask you for any amount of money from $1 to $1000 and you can give it to him through a combination of the envelopes.
Can you explain why some bit combinations for 2421 are invalid? For example, to represent decimal 5 (five) in 2421, the correct code is 1011. But why can't we represent decimal 5 in 2421 as 0101,
because if we add the weights as 0*2 + 1*4 + 0*2 + 1*1, we get 5 in decimal.
What types of problems exemplify using order of magnitude in estimation and compensation in estimation at the fourth grade level?
I have to estimate 32 - 15 by rounding the numbers to the nearest 10. I know 32 goes to 30, and rounding rules say 15 should go to 20. But that makes my estimated answer 10, when the real answer
is 17. If I round the 15 to 10, I get an estimate of 20, which is closer to 17. Why does following the rounding rule give me a worse estimate?
An explanation of using borrowing to calculate 12 5/11 minus 2 6/11 is extended to borrowing when subtracting times, liquid measures, and other wholes and parts in general.
My physics class is having a fair amount of trouble with significant digits.
Do commas change the value of a number?
The length of each day of the year changes by about 0.000000002 s. Write a length of time that could be rounded up to this figure.
I don't understand how to convert 50 millimeters into centimeters. Do I multiply 50 by 10 or divide it by 10? How do I decide?
When converting a negative mixed number into an improper fraction, it seems like we ignore the rules for integer addition. For example, to convert -4 1/7 we think -4 1/7 = -4 * 7 + 1 = -28 + 1 =
-27/7. But the correct answer is -29/7. How does -28 + 1 make -29?
I'm tutoring a 6th-grade student, and I'm having a hard time explaining to her why you sometimes need to put a zero in the tenths place, in the quotient.
Correct 0.099 to 1 significant figure. Which one of the following is the answer, 0.1 or 0.10? Why?
What do the different digits in a base 10 number mean?
An interesting look at how the idea of a 'decimal' number is widely misunderstood and the word itself is used incorrectly.
A discussion of some alternatives to the traditional rounding technique.
I have to answer a division problem and then round the quotient to the nearest hundredth...
Can you explain how to move the decimal place when you divide by powers of 10?
On internet communities it is common to reduce figures using 'k' as a shortcut; for example 2k is used instead of 2000. I was under the impression that a number after the k (such as 2k4) meant
2400, and that to get 2004 in this system you'd need to write 2k004. But I've also seen 2k4 used for 2004. Which is correct? Does 2k4 mean 2004 or 2400?
Use front-end estimation to estimate each sum: 428 + 219; 374 + 425.
What is a valid estimation? Alternate ways of rounding off; significant digits.
When estimating sums and differences, I was taught to round each number to the greatest place value position of the smallest number. Is that correct, and can you show me how that works?
Do you think my teacher means to round off after I get the answer, or before I add?
On our testing we have questions about estimating answers and rounding answers. The two methods give different answers. How are the two methods different?
What does it mean to write numbers in 'standard form' and 'expanded form'?
I'm confused about how to determine how many significant digits a given number has. Can you explain it?
How do I convert numbers from one base to another without converting to a base 10 equivalent first?
My math class is working on front end estimation, and I just don't get it. Can you explain it to me?
Page: 1 2 3 [next>] | {"url":"http://mathforum.org/library/drmath/sets/elem_place_value.html?start_at=1&num_to_see=40&s_keyid=40229794&f_keyid=40229795","timestamp":"2014-04-21T05:24:50Z","content_type":null,"content_length":"22031","record_id":"<urn:uuid:96bd70ba-b960-4944-97d9-402e6b91fd0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
The PDF file you selected should load here if your Web browser has a PDF reader plug-in installed (for example, a recent version of Adobe Acrobat Reader).
Alternatively, you can also download the PDF file directly to your computer, from where it can be opened using a PDF reader. To download the PDF, click the Download link below.
If you would like more information about how to print, save, and work with PDFs, Highwire Press provides a helpful Frequently Asked Questions about PDFs.
Download this PDF file Fullscreen Fullscreen Off
This work is licensed under a
Creative Commons Attribution 3.0 License | {"url":"http://www.emis.de/journals/EJP-ECP/article/view/2858/2283.html","timestamp":"2014-04-16T07:15:52Z","content_type":null,"content_length":"26572","record_id":"<urn:uuid:73bbd49d-3aaf-4f45-8705-6f1f664c48ac>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 1992 [00072]
[Date Index] [Thread Index] [Author Index]
Re: Would you post this Mathematica question to news group for me, please?
• To: MathGroup at yoda.physics.unc.edu
• Subject: Re: Would you post this Mathematica question to news group for me, please?
• From: keiper
• Date: Mon, 6 Jul 92 08:14:31 CDT
andrew at ll.mit.edu (Andrew Hecht) writes:
> I have a problem with mathematica, version 2.0 (on NeXT): When I
> perform the following iteration using numbers with high precision, I
> get erroneous results. For example, with
> a = 3.75000000000000000000000000000000000000000000000...
> t = a/2
> and using the following iteration technique:
> x = t; Do[Print[x]; x = a x - x^2 ,{140}] ; N[x,80]
> the results for the 135th through 140th iteration are
> 3.0, 2.0, 0.0, 0.0
Yes, that's one of the problems with significance arithmetic. All
uncertainties in the values in any expression are (and must be) treated
as independent. When you do arithmetic with uncertain numbers you
get an answer containing only those digits which are known to be correct
based on the assumption of independence of all uncertainties. In
many iterations the uncertainties are not at all independent and in fact
they tend to cancel each other. A more sophisticated error analysis
would reveal just how well or ill conditioned the iteration may be, but
such analysis would be completely impractical if built into the basic
arithmetic of Mathematica. (It would be possible to do by giving a serial
number to a symbolic representation for each rounding error throughout
the calculation, but the result would be huge and unusable.)
Since it is not practical to build error analysis into basic arithmetic
and the overly pessimistic results produced by the repeated use of the
triangle inequality are often not acceptable you have to somehow prevent
the degradation of precision that occurs with the default arithmetic.
In Version 2.1 you can set the global variable $MinPrecision to some positive
value and then every result will have at least that precision. NOTE HOWEVER
DO YOUR OWN ERROR ANALYSIS TO KNOW HOW MANY DIGITS ARE CORRECT. It is
highly recommended that you only change $MinPrecision and $MaxPrecision
within the confines of a Block[ ] statement to prevent the effects of their
changes interfering with all of your calculations. (Yes, that's Block[ ].
Do not use Module[ ] for this because it won't work.)
Prior to Version 2.1 about all you can do is use SetPrecision[ ] to expliciily
force the precision of each calculation in the iteration to be a certain
Jerry B. Keiper
keiper at wri.com | {"url":"http://forums.wolfram.com/mathgroup/archive/1992/Jul/msg00072.html","timestamp":"2014-04-16T21:57:23Z","content_type":null,"content_length":"36642","record_id":"<urn:uuid:0108ab54-50a6-46ce-bb82-fb0c21de959e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
July 2 - 6, 2007
Applied Probability Summary
Monday - Friday, July 2 - 6, 2007
The first week of the Applied Probability group was a study in contrast. David Levin is leading the Undergraduate Faculty Program through an introduction of Mixing Times, building on his work with
Markov Chains. Of course, that presupposes a knowledge of Markov Chains -- so while the first hour is interesting, the second hour, when we meet on our own, allows us to go back over the highly
theoretical and abstract concepts and put them through more practical examples.
Day 1
You know you're in a Markov Chain when the only thing that matters is what state you were in immediately previous and not what happened before.
Graph theory isn't just for bridges in Konigsberg - they can also be used for weather prediction (provided you have the appropriate probability distribution).
Day 2
Irreducible and Aperiodic situations. The first term means you can get anywhere you want, the second term means there are no set patterns in your ways to get there.
Day 3
Day 3 was our best yet. We looked at a very simple graph and determined the transition matrix and equilibrium row vector. Then we got bold and made the graph a multigraph (more then one edge between
two vertices). Then, we got bolder and made the graph directional. Then we ran out of time (fortunately, because we were in over our heads!)
Day 4
The week wrapped up with Day 4; we tried to make heads or tails out of the idea of couplings. We're still not sure we're comfortable with our understanding but we're going back in next week in an
effort to get it solidified.
The group thus far has been challenged by the material but with Darryl's calm guidance and the group's enthusiasm and willingness to share we're making sense out of it.
Back to Journal Index
PCMI@MathForum Home || IAS/PCMI Home
© 2001 - 2013 Park City Mathematics Institute
IAS/Park City Mathematics Institute is an outreach program of the School of Mathematics
at the Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540
Send questions or comments to: Suzanne Alejandre and Jim King
With program support provided by Math for America
This material is based upon work supported by the National Science Foundation under Grant No. 0314808.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | {"url":"http://mathforum.org/pcmi/hstp/sum2007/wg/probability/journal/week1.html","timestamp":"2014-04-19T04:55:06Z","content_type":null,"content_length":"3874","record_id":"<urn:uuid:013bd5cc-a82c-4ec4-b188-0d18137dc14a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Somerdale, NJ Trigonometry Tutor
Find a Somerdale, NJ Trigonometry Tutor
...My tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring. If you don't get that grade, I will
refund your money, minus any commission I paid to this website. Please note that I only tutor college stude...
11 Subjects: including trigonometry, calculus, statistics, precalculus
...After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management
since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university.
13 Subjects: including trigonometry, calculus, algebra 1, geometry
...Previously, I completed undergraduate work at North Carolina State University for a degree in Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the
chance to break down barriers and make math accessible for students that are struggling with aspects of mat...
22 Subjects: including trigonometry, calculus, geometry, statistics
...Students I tutor are mostly college-age, but range from middle school to adult. As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter
this fundamental math subject every day in my professional life. I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping.
9 Subjects: including trigonometry, calculus, physics, geometry
...Mathematical logic is a subfield of mathematics. This includes, but is not limited to, set theory, proofs (such as in geometry) and model theory. Much of the SAT test includes testing the
students' reasoning and logic skills.
22 Subjects: including trigonometry, geometry, statistics, GED
Related Somerdale, NJ Tutors
Somerdale, NJ Accounting Tutors
Somerdale, NJ ACT Tutors
Somerdale, NJ Algebra Tutors
Somerdale, NJ Algebra 2 Tutors
Somerdale, NJ Calculus Tutors
Somerdale, NJ Geometry Tutors
Somerdale, NJ Math Tutors
Somerdale, NJ Prealgebra Tutors
Somerdale, NJ Precalculus Tutors
Somerdale, NJ SAT Tutors
Somerdale, NJ SAT Math Tutors
Somerdale, NJ Science Tutors
Somerdale, NJ Statistics Tutors
Somerdale, NJ Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Ashland, NJ trigonometry Tutors
Barrington, NJ trigonometry Tutors
Clementon trigonometry Tutors
Echelon, NJ trigonometry Tutors
Haddon Heights trigonometry Tutors
Hi Nella, NJ trigonometry Tutors
Laurel Springs, NJ trigonometry Tutors
Lawnside trigonometry Tutors
Magnolia, NJ trigonometry Tutors
Runnemede trigonometry Tutors
Stratford, NJ trigonometry Tutors
Tavistock, NJ trigonometry Tutors
Voorhees trigonometry Tutors
Voorhees Kirkwood, NJ trigonometry Tutors
Voorhees Township, NJ trigonometry Tutors | {"url":"http://www.purplemath.com/Somerdale_NJ_Trigonometry_tutors.php","timestamp":"2014-04-20T19:32:55Z","content_type":null,"content_length":"24505","record_id":"<urn:uuid:743e7d73-470a-46f5-8b9d-396eb38b17e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - "anti" black hole
Yes but instead it has a barrier, nothing can enter it from the outside, not even light.
If one posutlates a point negative gravitational mass, an effective potential analysis (such as the one at
shows that it would require infinite energy to reach r=0. One could come as close to the central point mass as desired, but never reach it with a finite "energy at infinity".
But if one postualtes a distributed, rather than a point, mass, this issue goes away. A distributed negative gravitational mass would have a metric that was well-behavied everywhere, and one could
reach the center of it without any special problems.
So there wouldn't be any event horizon, nor would there be any singularity, or any difficulty reaching r=0, with a finite negative mass of nonzero volume.
Negative mass has a lot of other problems though. Thermodynamically, for instance, it's a real mess. One would expect particles with negative mass to gain negative energy from their surroundings, for
instance, heating up the surroundings while the negative mass particles gain "negative energy". | {"url":"http://www.physicsforums.com/showpost.php?p=1130600&postcount=9","timestamp":"2014-04-16T16:04:49Z","content_type":null,"content_length":"8649","record_id":"<urn:uuid:56a3f965-2dca-4ba3-926a-3b700acf3f67>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume 75,
Volume 75, Issue 2, 15 July 1981
View Description Hide Description
The molecular‐beam, laser–rf, double‐resonance technique has been used to make high‐precision measurements of the spin–rotation and hyperfineinteractions in the X ^2Σ (v = 0) electronic ground
state of Ca^79Br and Ca^81Br. The spin–rotation interaction is found to have a strong N dependence. The Frosch–Foley magnetic hyperfine parameters b and c and the electric–quadrupole hfs
parameter e q Q are determined for both molecules.
View Description Hide Description
The joint effect of direct and hydrodynamicinteractions on the dynamic structure S (k,t) of a solution of rigid macromolecules is examined. The initial slope dS/dt and initial curvature d^2 S/dt^
2 of S(k,t) are obtained. The reference frame correction of Kirkwood e t a l. [J. Chem. Phys. 33, 1505 (1960)] is shown to be wave‐vector dependent. Contrary to some previous results, we argue
that the initial slope of S(k,t) is partly due to direct interparticle interactions rather than being due entirely to free‐particle Brownian motion.
View Description Hide Description
This paper and paper I in this series [P.H. Berens and K.R. Wilison, J. Chem. Phys. 74, 4872 (1981)] indicate that infrared and Raman rotational and fundamental vibrational–rotational spectra of
dense systems (high pressure gases, liquids, and solids) are essentially classical, in that they can be computed and understood from a basically classical mechanical viewpoint, with some caveats
for features in which anharmonicity is important, such as the detailed shape of Q branches. It is demonstrated here, using the diatomic case as an example, that ordinary, i.e., nonresonant, Raman
band contours can be computed from classical mechanics plus simple quantum corrections. Classical versions of molecular dynamics, linear response theory, and ensemble averaging, followed by
straightforward quantum corrections, are used to compute the pure rotational and fundamental vibration–rotational Raman band contours of N[2] for the gas phase and for solutions of N[2] in
different densities of gas phase Ar and in liquid Ar. The evolution is seen from multiple peaked line shapes characteristic of free rotation in the gas phase to single peaks characteristic of
hindered rotation in the liquid phase. Comparison is made with quantum and correspondence principle classical gas phase spectral calculations and with experimental measurements for pure N[2] and
N[2] in liquid Ar. Three advantages are pointed out for a classical approach to infrared and Raman spectra. First, a classical approach can be used to compute the spectra of complex molecular
systems, e.g., of large molecules, clusters, liquids,solutions, and solids. Second, this classical approach can be extended to compute the spectra of nonequilibrium and time‐dependent systems,
e.g., infrared and Raman spectra during the course of chemical reactions. Third, a classical viewpoint allows experimental infrared and Raman spectra to be understood and interpreted in terms of
atomic motions with the considerable aid of classical models and of our well‐developed classical intuition.
View Description Hide Description
Using the principles of quantum electrodynamics, the theory of two‐, three‐, and four‐photon absorption in polyatomic gases and liquids is developed. Expressions are derived for the rates of
single‐frequency absorption from plane polarized, circularly polarized, and unpolarized light. It is shown that for n‐photon absorption with n?3, the rate for unpolarized radiation is in each
case expressible as a linear combination of the rates for plane polarized and circularly polarized light; no such relationship exists for four‐photon absorption. For each multiphoton process, it
is demonstrated how the fullest information about the symmetry properties of excited states can be derived by a simple linear processing of the results from experiments with different
polarizations. A detailed examination of the selection rules is made, based on a reduction of the molecular transition tensor into irreducible components, and a new classification scheme is
introduced to assist with the interpretation of experimental results. Finally, it is shown that the theory may also be applied to resonance‐enhanced multiphoton ionizationspectroscopy.
View Description Hide Description
Optical absorption and emission spectra are reported for the cubic Cs[2]NaHoCl[6] elpasolite system. Detailed energy and intensity analyses of the high‐resolution, variable‐temperature spectra
allow characterization of the crystal field energy level structure associated with eleven of the Ho^3+ term levels. The term levels included in these analyses are ^5 I [8], ^5 I [7], ^5 F [5], ^5
F [4], ^5 S [2], ^5 F [3], ^5 F [2], ^3 K [8], ^5 G [6], ^5 G [5], and ^3 K [7]. Intensity calculations are reported for both the pure magnetic dipole transitions and the vibronically induced
electric dipole transitions associated with the ν[3](t [1u ]), ν[4](t [1u ]), and ν[6](t [2u ]) vibrational modes of the octahedral (O [ h ]) HoCl[6] ^3− chromophoric moiety of the Cs[2]NaHoCl[6]
system. The electric dipole intensity model used in these calculations includes contributions from both the s t a t i c‐c o u p l i n g and d y n a m i c‐c o u p l i n g Ho^3+‐ligand interaction
mechanisms. Excellent agreement between observed and calculated intensities is found, and the theoretically calculated intensity results proved crucial to our detailed analysis and assignment of
the observed spectra.
View Description Hide Description
The emission and excitation spectra of the Eu^3+luminescence and the energy transfer phenomena in single crystals of Gd[1−x ]Eu[ x ]Al[3]B[4]O[12] (0<x?1) have been investigated using
time‐resolved site‐selection techniques. In addition, the luminescence properties of single crystals and powders have been compared. The results show that in these compounds there are three major
types of Eu^3+ sites. Most Eu^3+ ions are on the regular crystallographic sites, but a relatively high percentage of these ions are at nonregular (Al^3+ and interstitial) sites. The crystals of
EuAl[3]B[4]O[12] show quenching at lower temperatures than do the powder samples. This behavior can be explained with the help of a model which takes into account diffusion‐limited migration
within the regular Eu^3+ system and quenching by transfer to Mo^3+ impurities. These impurities are incorporated into the crystals unintentionally during their growth. The concentration quenching
in the single crystals is quite strong and can be explained by means of the percolation concept. In the powder samples, however, the concentration and temperature quenching had less influence due
to the relatively low concentration of the quenching centers in these samples. In all samples there was evidence of direct transfer from ’’nonregular’’ Eu^3+ ions to ’’regular’’ ions. The
critical transfer distance was derived.
View Description Hide Description
The effect of pressure on the emission of 3‐hydroxyflavone has been studied in isobutanol, glycerol, and two mixtures of these solvents. Below a viscosity of ∼50 P there is an increase in the
fraction of emission from the tautomeric state with pressure. At higher viscosities the fraction of tautomeric emission decreased. At viscosities less than ∼50 P the lifetimes obtained from the
two emission peaks were the same, indicating that the two excited states were in equilibrium. At higher viscosities, the lifetimes differed, indicating that the distribution was kinetically
controlled. In hexamethylnonane (HMN) only tautomer emission was observed even at very high viscosities; also, only tautomer emission occurs in a rigid polyisobutylene film. A few measurements on
7‐hydroxyflavone are briefly discussed.
View Description Hide Description
Proton and fluorine magnetic resonance absorption, and spin–lattice relaxation timemeasurements, have been carried out on crystalline ammonia–boron trifluoride complex in the temperature ranges
from 4.2 to 300 K in the continuous wave method, and from 89 to 373 K using the pulse technique. The second moment results indicated that the BF[3] motion is hindered at ∼200 K, as previously
reported, while the NH[3] group is considerably mobile at 77 K but an additional motion occurs at 4.2 K. The magnetization decay of ^1H and ^19F after a 180° pulse showed intrinsically
nonexponential behavior. The T [1] data, taken as the initial decay time constant, are compared with approximate theoretical estimates including three dissimilar spins. For the ^19F relaxation at
high temperature, a set of three differential rate equations was reduced to a pair of two differential equations in which the effect of the third atom is taken into account in the diagonal
coefficients. The temperature dependence was almost completely predicted by this treatment. Three dipolar contributions ^19F–^19F, ^19F–^11B, and ^19F–^1H were found to be 1.244, 2.420, and
0.222×10^9 s^−2 for the relaxation constants, respectively. A discrepancy between theory and experiment was observed in the ^1H relaxation data at high temperature and for ^19F at low
temperature, both of which can not be explained by assuming a two‐dissimilar‐spins system and require the consideration of magnetic polarization in the third atom.
View Description Hide Description
A perturbation theory approach is used to analyze the rovibronic structure of the three‐photon resonant absorption of ammonia. The vibronic selection rules are presented in a convenient pyramid
mnemonic in terms of irreducible representations of the rotation group and of the D [3h ] point group. The analysis is presented for each possible situation, namely, all three photons are
different, two photons are identical and one different, and all three photons are identical. The experimentally important case of three identical photons implies only two polarizations and two
vibronic tensor elements for each symmetry type. With both the B̃ and C̃′ systems of ammonia one expects a ratio of 5 to 2 in the absorbance for circularly versus linearly polarized light for the N
, O, S, and T branches. For the P, Q, and R branches one expects a 21 to 4 ratio of the weight‐1 versus weight‐3 tensor contributions to the C̃′ system in linearly polarized light (21 to 6 for B̃).
Weight‐1 terms are absent with circularly polarized light. These predictions are confirmed experimentally. The complete set of rotational line strength factors for three‐photon absorption are
given in algebraic form for the first time. The combination of vibronic factors, rotational line strengths, and statistical weights allows one to predict the three‐photon absorptionspectrum. Such
calculated spectra are presented for ammonia at 45 K and they compare very favorably with the experimental spectra for both the B̃ system (a perpendicular band) and for the C̃′ system (a parallel
band) in both linearly and circularly polarized light.
View Description Hide Description
The avoided‐crossing molecular‐beam method for studying normally forbidden (ΔK≠O) transitions in symmetric tops has been applied to fluoroform (CF[3]H) and fluoroform‐d (CF[3]D), thus marking the
extension of the method to systems which are n o t near spherical rotors. In order to reach the high electric fields required while still retaining the necessarily narrow linewidth, the electric
resonance spectrometer has been equipped with a new pair of Stark plates capable of providing electric fields up to about 20 kV/cm with a homogeneity of 1 part in 10^5 over a length of 3 cm. The
anticrossing (J,K) = (1,0)↔(1,±1) has been studied for both CF[3]H and CF[3]D. In each case, the rotational constantC [0] along the symmetry axis has been obtained to 0.002%. From anticrossing
spectra observed in combined electric and magnetic fields, the signs of the rotational g factors g[∥] and g [⊥] have been shown to be negative. From a conventional molecular beam study for each
isotopic species, a value of the permanent electric dipole moment accurate to 60 ppm was determined and improved values of g [∥] and g [⊥] were obtained. The direction of the electric dipole
moment is shown by two methods to be +HCF[3]−. A brief discussion of the difficulties in these methods is given.
View Description Hide Description
Rotational laser emission by HF has been observed at 33 frequencies between 325 and 1250 cm^−1 from the flash photolysis (1.2 μsec FWHM) of vinyl fluoride and of 1,1‐difluorethylene. The
transitions lie within the v = 0 to v = 5 manifolds and range from J ^″→J ^′ = 8→7 to 31→30. Increasing the atomic weight or the partial pressure of the inert buffer gas (He, Ne, or Ar) raises
the gain of nearly all transitions, showing that collisional relaxation processes are active in pumping the laser emission. The high gains displayed by both precursors in the J = 14→13
transitions for the v = 0,1,2, and 3 manifolds indicate that V→Renergy transfer is pumping molecules into the v ^′, J = 14 state from the near‐resonant v ^′+1, J = 2, 3, and 4 states. In a
similar way, the highest J transitions J = 31→30 to 28→27 with v = 0 and/or 1, are best explained by V→Renergy transfer from near‐resonant low‐J states from much higher vibrational manifoldsv ^′
= 4, 5, and 6. This would imply collision‐induced multiquantum energy transfer with large Δv (up to Δv = 5) and large ΔJ (up to ΔJ = 26) or a rapid succession of steps with smaller Δv and ΔJ. In
contrast, the high gains displayed by the J = 10→9 transitions in the v = 0, 1, and 2 manifolds are best explained in terms of R→T relaxation from a uniform nascent population. While there are
indications that the nascent rotational distributions provided by these photoeliminations probably furnish population to high J states, the gain patterns indicate that the V→R and R→T energy
relaxation processes are strongly influential, the former surely involving multiquantum steps with large ΔJ and probably with Δv>1 as well.
View Description Hide Description
Polarized Raman spectra of ZnCl[2] were obtained in the liquid phase near the melting point (T = 598 K), and in the glassy phase (T = 293 K). Measurements were performed down to a very low
frequency shift (2 cm^−1) from the exciting line. Our analysis of the Raman data provides an interpretation of the collision‐like contribution in terms of a structural relaxation time in the
picosecond range, while the phonon‐like contribution gives an effective Raman density of states. These results are also discussed in terms of existing structureal models.
View Description Hide Description
Multiple‐scattering model calculations of cross sections for dipole transitions from all occupied orbitals of BF[3] to excited bound states and continuum states of electron kinetic energy <30 eV
are presented. The photoelectron angular distribution asymmetry parameters β are also given for all occupied orbitals. The boron and fluorine K‐shell calculations are in qualitative agreement
with, and provide a clear interpretation of, the measured spectra. Two shape resonances are found in the low energy continuum: one of a ^′ [1] symmetry, the other of e′ symmetry. The resonances
are found to be due to trapping of p waves on the fluorine atoms. This atomic localization as well as the dominance of low‐l partial waves outside the molecule put these shape resonances in a
class distinct from those observed in diatomic molecules.
View Description Hide Description
Three isotopic species of a weakly bound T‐shaped π complex formed between acetylene and HCl have been observed through the assignment of their rotational spectra by the use of a pulsed,
Fourier‐transform spectrometer with a high pressure gas being pulsed into an evacuated Fabry–Perot cavity. The spectra clearly show the complex to be a nearly prolate asymmetric top (κ = −0.9898)
with the HCl on the a‐inertial axis and perpendicular to acetylene and with the hydrogen atom pointing towards the middle of the acetylene triple bond. The chlorine is situated, on average, 3.699
Å from the edge of the acetylene molecule. The following molecular constants have been obtained for the C[2]H[2], H^35Cl isotopic species: The rotational constants are A = 36084(838), B =
2481.065(6), and C = 2308.602(6) MHz; the nuclear quadrupole coupling constants are χ[ a a ] = −54.342(4), χ[ b b ] = 26.862(5), and χ[ c c ] = 27.480(9) MHz, and the rotational centrifugal
distortion constants are D [ J ] = 7.9(4) and D [ J K ] = 497(8) kHz.
View Description Hide Description
The rotational spectrum of the ArClCN van der Waals complex has been assigned using pulsed Fourier transformmicrowave spectroscopy in a Fabry‐Perot cavity with a pulsed supersonic nozzle as the
molecular source. ArClCN is T‐shaped and the data were fit to the Watson rotational parameters and an exact expression for the Cl and N nuclear quadrupole coupling. The spectroscopic constants
for Ar^35ClCN are A′′ = 6152.5411(21) MHz, B′′ = 1577.0362(8) MHz, C′′ = 1246.7514(6) MHz, τ[1] = −576.0(2) kHz, τ[2] = −97.28(8) kHz, τ[ a a a a ] = −234.8(17) kHz, τ[ b b b b ] = −55.97(10)
kHz, τ[ c c c c ] = −21.60(7) kHz, χ^Cl [ a a ] = 37.9468(23) MHz, χ^Cl [ b b ] = −79.5239(20) MHz, χ^N [ a a ] = 1.6403(22) MHz, and χ^N [ b b ] = −3.4571(20) MHz. The centrifugal distortion
constants are used to derive the intramolecular force field and a normal coordinate analysis is performed. The Cl nuclear quadrupole coupling tensor indicates that the field gradients in ClCN are
slightly perturbed upon complex formation but not enough to proscribe their use in structural determinations of weakly bound complexes.
View Description Hide Description
Rotational level populations of N[2] were measured downstream from the skimmer in beams of pure N[2] and in mixtures of N[2] with He, Ne, and Ar expanded from room temperature nozzles. The range
of p [0] D was from 5 to 50 Torrcm. The formation of dimers and higher condensates of beam species was monitored during the runs. The effect of condensationenergy release on rotational
populations and parallel temperatures was readily observed. Two different methods for evaluating the rotational population distributions were compared. One method is based on a dipole‐excitation
model and the other on an excitation matrix obtained empirically. Neither method proved clearly superior. Both methods indicated nonequilibrium rotational populations for all of our room
temperature nozzle expansion conditions. Much of the nonequilibrium character appears to be due to the behavior of the K = 2 and K = 4 levels, which may be accounted for in terms of the
rotational energy level spacing. In particular, the overpopulation of the K = 4 level is explained by a near‐resonant transfer of rotational energy between molecules in the K = 6 and K = 0
states, to give two molecules in the K = 4 state. Rotational and vibrational temperatures were determined for pure N[2]beams from nozzles heated up to 1700°K. The heated nozzle experiments
indicated a 40% increase in the rotational collision number between 300 and 1700°K.
View Description Hide Description
The electron‐excited gas phase carbon and oxygen Auger spectra of CO and CO[2] are compared to the spectra calculated by a one‐electron theory. Calculated Auger transition intensities and
energies are generally in good agreement with experiment. The disagreement between certain calculated intensities and experiment, however, illustrates those cases where configuration interaction
can be expected to provide significant corrections to the one‐electron theoretical results. In addition, a comparison of calculated to experimental final state binding energies reveals the
existence, in covalent molecules, of localized two‐hole final states in which two holes are always on the same site. A simple two‐electron theory predicts where such states will occur in the
Auger spectra of homonuclear diatomic molecules.
View Description Hide Description
We discuss two methods, one of them new, for recovering level‐specific differential cross sections in crossed molecular beams experiments from the Doppler profiles of line shapes observed by
laser induced fluorescence. The angular resolutions of the two methods are compared and shown to be complementary. An experiment using both methods can have moderately good angular resolution at
all scattering angles. In the first method, which has previously been demonstrated experimentally, the Dopper profile is taken with the laser beam parallel to the relative velocity of the
collision system. Good angular resolution is obtained between π/4 and 3π/4. In the second method, which is proposed here, the Doppler profile is taken with the laser beam perpendicular to this
relative velocity, and the best angular resolution is obtained in the regions 0 to π/4 and 3π/4 to π. This method requires an integral transform to recover the cross section from the Doppler
profile. A practical implementation of this transform is presented along with a numerical example showing its relative insensitivity to noise in the profile.
View Description Hide Description
The reaction of Si^+ with H[2]O to form SiOH^+ has a measured rate constant of 2.3(−10) cm^3s^−1 at 300 K. This is the major loss of Si^+ ions in the Earth’s atmosphere above ∼90 km. The
three‐body reaction of Si^+ with O[2] (in He) produces stable SiO[2] ^+ ions with a rate constant of 1(−29) cm^6s^−1 at 300 K. The endothermic binary reaction of Si^+ with O[2] to produce SiO^+
has been measured from threshold to ∼2 eV. The association of Si^+ with O[2] is the dominant loss process for Si^+ ions below ∼90 km in the Earth’s atmosphere and leads to siliconoxidation to SiO
[2] since a large fraction of the SiO[2] ^+ ions are produced in excited states which charge transfer with O[2] before relaxation. The reaction of SiO^+ ions with H[2] (and D[2]) is found to have
a thermal energy rate constant of 3.2(−10) cm^3s^−1 [and 2.0(−10) cm^−3s^−1] to produce SiOH^+ (and SiOD^+). This process has been suggested as a step in SiO production in the interstellar
medium. The proton affinity of SiO is found to be 8.1±0.7 eV and the dissociation energy of SiO^+ to be 4.8 eV and definitely less than 4.9 eV.
View Description Hide Description
We compare the results of a classical model for the laser enhancement of the H+LiF→Li+HF reaction to accurate quantum mechanical results. Structure in the reaction probability as a function of
collision energy below the fieldfree threshold predicted classically by Orel and Miller is found to be less pronounced in the quantum results. A much better understanding of the various causes of
the observed maxima and also of the reliability of this classical model are obtained as a result of these calculations. A fundamental inconsistency in the classical model introduced when the
Langer modification is made which can lead to unphysical results is discussed and a procedure for correcting this inconsistency is presented. Improved results are obtained. | {"url":"http://scitation.aip.org/content/aip/journal/jcp/75/2","timestamp":"2014-04-23T16:19:41Z","content_type":null,"content_length":"182186","record_id":"<urn:uuid:e89dff40-7dd3-4bd3-8683-fbb1c81765d5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dialog Box
The MPC Controller block has the following parameter groupings:
MPC controller
You must provide an mpc object that defines your controller using one of the following methods:
● Enter the name of an mpc object in the MPC Controller edit box. This object must be present in the base workspace.
Clicking Design opens the MPC design tool where you can modify the controller settings in a graphical environment. For example, you can:
○ Import a new prediction model.
○ Change horizons, constraints, and weights.
○ Evaluate MPC performance with a linear plant.
○ Export the updated controller to the base workspace.
To see how well the controller works for the nonlinear plant, run a closed-loop Simulink simulation.
● If you do not have an existing mpc object in the base workspace, leave the MPC controller field empty and, with the MPC Controller block connected to the plant, click Design. This action
constructs a default mpc controller by obtaining a linearized model from the Simulink diagram at the default operating point. Continue your controller design in the MPC design tool.
To use this design approach, you must have Simulink Control Design™ software.
Initial controller state
Specifies the initial controller state. If this parameter is left blank, the block uses the nominal values that are defined in the Model.Nominal property of the mpc object. To override the default,
create an mpcstate object in your workspace that represents the initial state, and enter its name in the field.
Required Inports
At each control instant, the mo signal must contain the current output variable measurements. Let n[ym] be the number of measured output variables (MO) defined in your predictive controller. If n
[ym]=1, connect a scalar signal to the mo inport. Otherwise, connect a row or column vector signal containing n[ym] real, double-precision elements.
At each control instant, the ref signal must contain the current reference values (targets or setpoints) for the n[y] output variables (ny = n[ym]+ number of unmeasured outputs). You have the
option to specify future reference values (previewing).
The ref signal must be size N by n[y], where is the number of time steps for which you are specifying reference values and p is the prediction horizon. Each element must be a real
double-precision number. The ref dimension must not change from one control instant to the next.
When N=1, you cannot preview. To specify future reference values, choose N such that to enable previewing. Doing so usually improves performance via feedforward information. The first row
specifies the n[y] references for the first step in the prediction horizon (at the next control interval k=1), and so on for N steps. If N<p, the last row designates constant reference values to
be used for the remaining p-N steps.
For example, suppose n[y]=2 and p=6. At a given control instant, the signal connected to the controller's ref inport is
[2 5 ← k=1
2 6 ← k=2
2 7 ← k=3
2 8] ← k=4
The signal informs the controller that:
● Reference values for the first prediction horizon step (k=1) are 2 and 5.
● The first reference value remains at 2, but the second increases gradually.
● The second reference value becomes 8 at the beginning of the fourth step (k=4) in the prediction horizon.
● Both values remain constant at 2 and 8 respectively for steps 5–6 of the prediction horizon.
mpcpreview shows how to use reference previewing in a specific case. For calculation details on the use of the reference signal, see Optimization Problem.
Required Outports
Manipulated Variables
The mv outport provides a signal defining the manipulated variables, which are to be implemented in the plant. The controller updates its mv outport by solving a quadratic program at each control
instant. The elements are real, double-precision values.
Optional Inports
Measured disturbance
Add an inport (md) to which you can connect a measured disturbance signal.
Your measured disturbance signal (MD) must be size Nxn[md], where is the number of measured disturbances defined in your Model Predictive Controller and is the number of time steps for which the MD
is known. Each element must be a real, double-precision number. The signal dimensions must not change from one control instant to the next.
If N=1 you cannot preview. At each control instant, the MD signal must contain the most recent measurements at the current time k=0 (as a row vector, length n[md]). The controller assumes that the
MDs remain constant at their current values for the entire prediction horizon.
If you are able to predict future MD values, choose N such that to enable previewing. Doing so usually improves performance via feedforward. In this case, the first row must contain the n[md] current
values at k=0, and the remaining rows designate variations over the next N-1 control instants. If N<p+1, the last row designates constant MD values to be used for the remaining p+1-N steps of the
prediction horizon.
For example suppose n[md]=2 and p=6. At a given control instant, the signal connected to the controller's md inport is:
[2 5 ← k=0
2 6 ← k=1
2 7 ← k=2
2 8] ← k=3
This signal informs the controller that:
● The current MDs are 2 and 5 at k=0.
● The first MD remains at 2, but the second increases gradually.
● The second MD becomes 8 at the beginning of the step 3 (k=3) in the prediction horizon.
● Both values remain constant at 2 and 8 respectively for steps 4–6 of the prediction horizon.
mpcpreview shows how to use MD previewing in a specific case.
For calculation details, see Prediction Model and QP Matrices.
Externally supplied MV signals
Add an inport (ext.mv), which you can connect to the actual manipulated variables (MV) used in the plant. The block uses these to update its internal state estimates. For example, suppose the actual
signals saturate at physical limits or the MV is under manual control. In both cases, feeding the actual value back to the MPC Controller block can improve performance significantly, because the
prediction model's state estimates are updated more accurately.
The following example shows how a manual switch may override the controller's output. Also see Turning Controller Online and Offline with Bumpless TransferTurning Controller Online and Offline with
Bumpless Transfer.
Do not connect this option to leave the ext.mv inport unconnected. In either case, the model predictive controller assumes that the plant uses the MV signals sent by the MPC Controller block. In the
preceding example, the external MV signal always provides the model predictive controller that the control signal actually used in the plant. Otherwise, the model predictive controller's internal
state estimate would be inaccurate.
│ Note The MPC Controller block is a discrete-time block with sampling time inherited from the MPC object. The MPC block has direct feedthrough from measured outputs (mo), output references │
│ (ref), and measured disturbances (md) to MPC-manipulated variables (mv). There is no direct feedthrough from externally supplied manipulated variables (ext.mv) to MPC-manipulated variables │
│ (mv). │
Input and output limits
Add inports (umin,umax,ymin,ymax ), which you can connect to run-time constraint signals. If this check box is not selected, the block uses the constant constraint values stored within its mpc
object. Example connections appear in the following model. See Varying Input and Output ConstraintsVarying Input and Output Constraints for an example of using this option.
Each unconnected limit inport, such as ymin in the following model, is treated as an unbounded signal. The corresponding constraint settings in the mpc object must also be unbounded. For connected
limit inports, such as ymax, the signals must be finite and the corresponding variables in the mpc object must also be bounded.
All constraint signals connected to the block must be finite. Also, you cannot change the number or identity of constrained and unconstrained variables. For example, if your mpc object specifies that
your first MV has a lower bound, you must supply a umin signal for it.
Optimization enabling switch
Add an inport (QP Switch) whose input specifies whether the controller performs optimization calculations. If the input signal is zero, the controller behaves normally. If the input signal becomes
nonzero, the MPC Controller block turns off the controller's optimization calculations and sets the controller output to zero. These actions save computational effort when the controller output is
not needed, such as when the system has been placed in manual operation or another controller has taken over. The controller, however, continues to update its internal state estimate in the usual
way. Thus, it is ready to resume optimization calculations whenever the QP Switch signal returns to zero.
If you select this option, the mask automatically selects the Externally supplied MV signal option. Connect this option to the current MV value in the plant. Otherwise, there would be a "bump" each
time the QP Switch signal reactivates optimization.
Optional Outports
Optimal cost
Add an outport (cost) that provides the calculated optimal cost (scalar) of the quadratic program during operation. The computed value is an indication of controller performance. If the controller is
performing well, the value is low. However, if the optimization problem is infeasible, this value is meaningless. (See qp.status.)
Optimal control sequence
Add an outport (mv.seq) that provides the controller's computed optimal MV sequence for the entire prediction horizon from k=0 to k=p-1. If n[u] is the number of MVs and p is the length of the
prediction horizon, this signal is a p by n[u] matrix. The first row represents k=0 and duplicates the block's MV outport.
The following block diagram (from Analysis of Control Sequences Optimized by MPC on a Double Integrator SystemAnalysis of Control Sequences Optimized by MPC on a Double Integrator System) illustrates
how to use this option. The diagram shows how to collect diagnostic data and send it to the To Workspace2 block, which creates the variable, useq, in the workspace. Run the example to see how the
optimal sequence evolves with time.
Optimization status
Add an outport (qp.status) that allows you to monitor the status of the QP solver.
If a QP problem is solved successfully at a given control interval, the qp.status output returns the number of QP solver iterations used in computation. This value is a finite, positive integer and
is proportional to the time required for the calculations. Thus, a large value means a relatively slow block execution at this time interval.
The QP solver may fail to find an optimal solution for the following reasons:
● qp.status = 0 — The QP solver cannot find a solution within the maximum number of iterations specified in the mpc object.
● qp.status = -1 — The QP solver detects an infeasible QP problem. See Monitoring Optimization Status to Detect Controller FailuresMonitoring Optimization Status to Detect Controller Failures for
an example where a large, sustained disturbance drives the OV outside its specified bounds.
● qp.status = -2 — The QP solver has encountered numerical difficulties in solving a severely ill-conditioned QP problem.
For all the previous three failure modes, the MPC block holds its mv output at the most recent successful solution. In a real-time application, you can use status indicator to set an alarm or take
other special action.
The next diagram shows how to use the status indicator to monitor the MPC Controller block in real time. See Monitoring Optimization Status to Detect Controller FailuresMonitoring Optimization Status
to Detect Controller Failures for more details.
Online Tuning Inports
A controller intended for real-time applications should have "knobs" you can use to tune its performance when it operates with the real plant. This group of optional inports serves that purpose.
The diagram shown below displays the MPC Controller block's three tuning knobs. In this simulation context, the knobs are being tuned by prestored signals (the ywt, duwt, and ECRwt variables in the
From Workspace blocks). In practice, you would connect a knob or similar manual adjustment.
Weights on plant outputs
Add an inport (y.wt) whose input is a vector signal defining a nonnegative weight for each controlled output variable (OV). This signal overrides the MPCobj.Weights.OV property, which establishes the
relative importance of OV reference tracking.
For example, if the preceding controller defined 3 OVs, the signal connected to the y.wt inport should be a vector with 3 elements. If the second element is relatively large, the controller would
place a relatively high priority on making OV(2) track the r(2) reference signal. Setting a y.wt signal to zero turns off reference tracking for that OV.
If you do not connect a signal to the y.wt inport, the block uses the OV weights specified in your MPC object, and these values remain constant.
Weights on manipulated variables rate
Add an inport (du.wt), whose input is a vector signal defining nu nonnegative weights, where nu is the number of manipulated variables (MVs). The input overrides the MPCobj.Weights.MVrate property
stored in the mpc object.
For example, if your controller defines four MVs and the second du.wt element is relatively large, the controller would use relatively small changes in the second MV. Such move suppression makes the
controller less aggressive. However, too much suppression makes it sluggish.
If you do not connect a signal to the du.wt inport, the block uses the MVrate weights property specified in your mpc object, and these values remain constant.
Weight on overall constraint softening
Add an inport (ECR.wt), whose input is a scalar nonnegative signal that overrides the MPC Controller block's MPCobj.Weights.ECR property. This inport has no effect unless your controller object
defines soft constraints whose associated ECR values are nonzero.
If there are soft constraints, increasing the ECR.wt value makes these constraints relatively harder. The controller then places a higher priority on minimizing the magnitude of the predicted
worst-case constraint violation.
You may not be able to avoid violations of an output variable constraint. Thus, increasing the ECR.wt value is often counterproductive. Such an increase causes the controller to pay less attention to
its other objectives and does not help reduce constraint violations. You usually need to tune ECR.wt to achieve the proper balance in relation to the other control objectives.
Signal Attributes and Block Sample Time
Output data type
Specify the data type of the manipulated variables (MV) as one of the following:
● double — Double-precision floating point (default).
● single — Single-precision floating point.
You specify the output data type as single if you are implementing the model predictive controller on a single-precision target.
For an example of double- and single-precision simulation and code generation for an MPC controller, see Simulation and Code Generation Using Simulink Coder.
To view the port data types in a model, in the Simulink Editor, select Display > Signals & PortsPort Data Types. For more information, see Display Port Data Types.
Block uses inherited sample time (-1)
Use the sample time inherited from the parent subsystem as the MPC Controller block's sample time.
Inheriting the sample time allows you to conditionally execute the MPC Controller block inside the Function-Call Subsystem or Triggered Subsystem blocks. For an example, see Using MPC Controller
Block Inside Function-Call and Triggered Subsystems.
│ Note: When you place an MPC controller inside a Function-Call Subsystem or Triggered Subsystem block, you must execute the subsystem at the controller's design sample rate. You may see │
│ unexpected results if you use an alternate sample rate. │
To view the sample time of a block, in the Simulink Editor, select Display > Sample Time. Select Colors, Annotations, or All. For more information, see View Sample Time Information. | {"url":"http://www.mathworks.com/help/mpc/ref/mpccontroller.html?nocookie=true","timestamp":"2014-04-21T01:00:23Z","content_type":null,"content_length":"62704","record_id":"<urn:uuid:f635bd87-dd16-44c4-a6cc-fba2e125e3cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are non-PL manifolds CW-complexes?
up vote 11 down vote favorite
Can every topological (not necessarily smooth or PL) manifold be given the structure of a CW complex?
I'm pretty sure that the answer is yes. However, I have not managed to find a reference for this.
@algori : I thought you had posted an (important sounding) comment? Why did you delete it? – A grad student Aug 27 '10 at 4:48
3 It turns out that my first comment was a bit wrong. Here are the slides of A. Ranicki's talk in Orsay. www.maths.ed.ac.uk/~aar/slides/orsay.pdf It says on p. 5 there that a compact manifold of
dimension other than 4 is a CW complex. There is a related conjecture that says that each closed manifold of dimension $\geq 5$ is homeomorphic to a polyhedron (there are 4-manifolds for which
this is false). See arxiv.org/pdf/math/0212297. I'm not sure what if anything is known about the noncompact case. – algori Aug 27 '10 at 4:50
Update: recent work of Davis, Fowler, and Lafont front.math.ucdavis.edu/1304.3730 shows that in every dimension ≥6 there exists a closed aspherical manifold that is not homeomorphic to a
simplicial complex. – Lee Mosher May 1 '13 at 16:10
Hatcher's Algebraic Topology p. 529 has a paragraph answering this question very clearly for compact manifolds (not including results in 2013 of course). However his references are to two long
dense books, without page specification. – hsp Sep 3 '13 at 15:47
add comment
2 Answers
active oldest votes
Kirby and Siebenmann's paper "On the triangulation of manifolds and the Hauptvermutung" Bull AMS 75 (1969) is the standard reference for this, I believe.
up vote 8 The result is that compact topological manifolds have the homotopy-type of CW-complexes, to be precise.
down vote
I think the fact that they have the homotopy type of a CW complex is due to Milnor (it is in his paper about spaces homotopy equivalent to CW complexes). Do Kirby-Siebenmann just prove
this, or do they prove that all compact manifolds are homeomorphic to CW complexes? Also, how about the noncompact case? – A grad student Aug 27 '10 at 4:08
But I thought the question was whether each has the "homeomorphism type" of a CW complex. – Dev Sinha Aug 27 '10 at 4:22
It's been a while since I've looked at that Milnor paper -- I suspect maybe he's arguing that manifolds have the homotopy-type of countable CWs, while Kirby-Siebenmann deal with
compact manifolds and finite CWs. ? – Ryan Budney Aug 27 '10 at 4:23
@Ryan : Yes, I think that is what Milnor proved (it's also been a long time since I looked at it). – A grad student Aug 27 '10 at 4:27
1 @Ryan, the open problem is not whether any compact manifold is homeomorphic to a CW complex (this was proved by Kirby-Siebenmann). The open problem is whether it has a
(non-combinatorial) triangulation. @grad student, whatever is known in the noncompact case must be Kirby-Siebenmann's book. – Igor Belegradek Aug 27 '10 at 13:15
show 1 more comment
See http://arxiv.org/abs/math/0609665
up vote 3 down vote
4 That manifold isn't 2nd countable. Like most mathematicians, I only care about manifolds that are Hausdorff and 2nd countable. – A grad student Aug 27 '10 at 3:56
12 I hope that the fact that you only care about those does not preclude you from enjoying learning about the rest. – Mariano Suárez-Alvarez♦ Aug 27 '10 at 4:12
add comment
Not the answer you're looking for? Browse other questions tagged gt.geometric-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/36838/are-non-pl-manifolds-cw-complexes?sort=newest","timestamp":"2014-04-19T05:00:59Z","content_type":null,"content_length":"67200","record_id":"<urn:uuid:82b3c05e-458d-4918-96fd-99fe315886a2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Multilevel Time Series Question
Anonymous posted on Tuesday, March 12, 2002 - 5:05 am
I have a rather complex data set that I'm going to examine.
1. Survey data were collected at three distinct time periods of an event (pre-, mid-, and post-event - approximately 9 months from beginning to end) from one cohort. Each time period the sampling
frame was sampled and about 700 respondents completed the survey all three times. An additional 500+/- responded to the survey, of which some may have completed surveys for two of the three time
periods (I have not received the database yet so I do not know a more accurate figure).
2. The respondents are members of distinct subgroups and group level data were collected to allow for conceptually relevant multi-level analysis.
3. At time one, group level variables (3 variables) and one individual predictor variable were collected. At time two, individual predictor variables (4 variables) and outcome variables (3 variables)
were collected in addition to group level variables described. At time three the same variables as time two were collected.
1. I would like to use as much of the time one thru three data as possible. Is it possible to use SEM and Mplus to model the missing values for those respondents who did not answer all three time
periods. Is there a reference you are aware of that provides a model for such a procedure.
2. Each of the variables, I conjecture, are latent factors rather than manifest indicators (although, composite scores for the variables can be derived by summing the values for each variable). Do I
have too many latent varibles to adequately model this or should I consider a SEM path analysis strategy using manifest variables?
3. I'm not certain how to include covariates (two) in a SEM model? please clarify
Thanks in advance
Steve Lewis
bmuthen posted on Tuesday, March 12, 2002 - 9:39 am
This type of analysis should not be problematic. It sounds like you have longitudinal data with missingness, where the interest is not in growth over time, but rather a path analysis model with
variables on both individual and group levels. Missing data can be handled in SEM using standard ML estimation under MAR (see missing data references under the Reference section on this web site);
here you use all available data. I am not clear on the groups you mention - have you sampled the groups so that this could be considered a random effect (such as sampling schools), or are they fixed?
The former leads to multilevel modeling and the latter to multiple-group or MIMIC modeling using covariates (for basic SEM concepts, see Bollen and other ref's under References on this web site).
Mplus 2.02 cannot handle multilevel SEM with missing data, but this is forthcoming in version 2.1. Latent variables typically need at least 2 manifest indicators, and preferrably more.
Anonymous posted on Wednesday, March 27, 2002 - 2:47 pm
Thank you for responding so quickly. I finally received the database and am now looking at the patterns of missingness and have questions about minimum number of observations to obtain unbiased
estimates using a FIML estimation for missing values. Here is how the database is arrayed:
Completed T1 & T2 = 178
Completed T2 & T3 = 482
Completed T1 & T3 = 132
Completed all 3 = 186
Completed only 1 time = 3,789
I've done some reading on missing values and ML but have not run across acceptable missingness levels.
Finally, The groups are random & unbalanced (military units) so I plan to test the data using a multilevel path analysis. I've used Mplus for CFA and basic SEM models so i'm venturing into new
territory and trying to aviod as many mistakes as possible.
Thanks in advance,
Steve Lewis
Linda K. Muthen posted on Thursday, March 28, 2002 - 9:16 am
Good references for acceptable missingness levels would be the Little and Rubin and Shafer books that are referenced on our website. You will have computational difficulties if you have more than 90%
missing. Information about this is given in the coverage output. However, with a large percentage missing, the analysis relies very strongly on model and missing data assumptions
The current version of Mplus cannot combine multilevel and missing. Version 2.1 which is due out in a few months (a free update for Version 2 users) will allow this.
Alicia Merline posted on Friday, April 07, 2006 - 3:01 pm
I want to estimate a multilevel model where individuals are nested within couples and the dependent variable is measured repeatedly and the main predictor of interest is measured repeatedly as well.
There are no latent components to my model.
I am not interested in estimating change over time. Rather, I would like to estimate the overall relation between X and Y, but take into consideration the non-independence of my data
I have looked through the manual, the handouts from the one week Mplus training and documents you provide on your website. The only examples of multilevel repeated measures modeling I can find
estimate latent curves.
On page 72 of your publication “Multilevel modeling with latent variables using Mplus” There is a model estimating the intercept and slope in math scores, but data on attendance are available at all
4 time points. Using this example, what if we wanted to know the effect of attendance on math scores in any given year? How would the model be altered so that we would be estimating the association
between attendance and math scores?
Here is my stab at some syntax using your example on page 7 :
VARIABLE: NAMES ARE cohort id school weight math7 math8 math9 math10 att7 att8
Att9 att10 gender mothed homres;
USEOBS: (gender EQ 1 AND cohort EQ 2);
MISSING = ALL (999);
USEVAR = math7-math10 att7-10 mothed homers;
Cluster = school;
ANALYSIS: TYPE = TWOLEVEL;
ESTIMATOR = MMUL;
Math7 ON att7; within individual time-varying
Math8 ON att8;
Math9 ON att9;
Math10 ON att10;
Math7 ON mothed; within individual time-invariant
Math8 ON mothed;
Math9 ON mothed;
Math10 ON mothed;
I can use the widelong command to change math7-10 to an across-time “math”, but I have no way to “telling” Mplus that the repeated observations are not independent. Is there a way to estimate a
repeated measures TWOLEVEL model without making the latent slope the DV?
Bengt O. Muthen posted on Friday, April 07, 2006 - 5:37 pm
You can do this in a couple of different ways. Growth modeling is not needed; it is ok to consider only a regression of y on x. One way is to use a multivariate approach to indviduals within couples
(since there are only 2), taking care of the couple correlation, and let the time dependence be handled by Type = Complex. This means that you would say
cluster = couple;
and have your data arranged as
y1 y2 x1 x2
where the subscript refers to person within couple for a given time point. So you give the data in long form wrt time - each couple has as many rows as there are time point. The number of rows a
couple has is their "cluster size". In this way, Type = Complex computes SEs that take the correlatedness over time within couple into account.
Your model statement would be:
y1 on x1;
y2 on x2;
where the x's are correlated by default and so are the residuals of the y's.
Bengt O. Muthen posted on Friday, April 07, 2006 - 6:17 pm
Just to add a clarification, your variable list would be:
Names = couple y1 y2 x1 x2;
Usev = y1 y2 x1 x2;
and your data set would then have the same couple value for the rows of that couple (the repeated measures on y and x).
Alicia Merline posted on Monday, April 10, 2006 - 12:15 pm
Thanks, your response cleared things very well for me.
Alicia Merline posted on Wednesday, April 12, 2006 - 11:10 am
Just a follow-up. If I combine husbands and wives into one line of data, I will be modeling men and women seperately, so I will not be estimating any effect of gender on the DV. Because I have to
estimate the regressions seperately for each time I would not get a time-independent estimate of the regression Y ON X.
What I am interested in is the effect of marital status on mental health. Because there is no invervention in this study and because the points of data collection don't have developmental
significance, I'd like an overall estimate of the association between marital status and depression, irrespective of time. Because the data are nested in individuals, who are nested in couples, I
feel the data require a multilevel model. Otherwise, this would simply be a single logistic regression. But I feel I cannot ignore the nestedness here.
It is looking like I can't do what I'm trying to do if I use MPLUS.
I suppose there is another option if I select only cases where no remarriage takes place after divorce and then align the data among divorced individuals such that all respondents are married at
times 1 and 2 and divorced at times 3 through 5 (data from non-divorced people would be left as is). Then I could estimate a piecewise lcm at the WITHIN level and regress the intercepts and slopes on
the within and between subjects variables. This would allow me to do things like see if couple-level characteristics relate to slope before divorce differently than they relate to slope after
divorce. If I do this, is there also a way to compare intercept 1 to intercept 2 and slope 1 to slope 2?
Thanks again,
Bengt O. Muthen posted on Thursday, April 13, 2006 - 10:47 am
Your first paragraph suggests that you misunderstood my recommendation. You don't do the regression separately for each time. My suggestion implies that you do get a time-independent estimate of the
regression of y on x - you get only one intercept and one slope. You do take into account the nestedness of the data by using Type = Complex.
You are right that my suggestion estimates separate regressions for men and women, so allowing both different intercepts and slopes.
You don't need to do multilevel modeling to take nestedness into account. But you can do multilevel modeling in Mplus if that is what you want.
If this is unclear, let me know how I can help clarify further.
ide katrine birkeland posted on Sunday, April 15, 2012 - 11:58 pm
Dear Muthen(s).
I am turning to Mplus because of my need for a complex analysis. I'd like to test for mediation and moderation (in separate analyses) in a longitudinal dataset consisting of three waves. I would like
to do this using multilevel, in order to investigate both changes within and between subjects. I have been searching for papers that show examples of how to this and preferably perhaps even something
even more hands on, but have not been successful. Do you have any reading tips or online courses where I can learn more? Perhaps also a syntax or example to practise with?
Thank you in advance.
Linda K. Muthen posted on Monday, April 16, 2012 - 8:51 am
I think what you want is the model in Example 9.3 at three time points.
ide katrine birkeland posted on Tuesday, April 17, 2012 - 2:06 am
Excellent, thank you!
ide katrine birkeland posted on Tuesday, April 24, 2012 - 2:06 am
Dear Prof.
I apologize in advance for any errors in my interpretation of my current problems.
I have now read more on the suggested literature and see how example 9.3 might apply to my mediated model. I only have a couple of questions.
My mediated research model is with continuous variables (except time and person). Thus, I wonder if having u as a dependent variable does not makes sense in this case. It seems more corrects that y
is the DV, x2 is the M and x1 is the IV. If I then understand Bollen & Curran (2006) correct, time should then be regressed (or multiplied with?) on all the included variables. Further, I have to
admit, I am not sure if u should represent time or person, and this goes for w as well. I see that w is meant to represent the cluster level covariate, but since this is longitudinal all my variables
are at the same level, thus I’m confused.
My next question pertains to a moderated research model. This is based on the same dataset, where time is within person, and the moderator variable is time-variant like the rest of the variables.
Again, if I read B&C correctly, time should then be multiplied with all the variables, how to describe this in the syntax and how to interpret the results is very difficult to understand.
Thank you very much in advance.
Linda K. Muthen posted on Tuesday, April 24, 2012 - 8:06 am
I see now that you don't have clustering except for time. In this case, you don't need multilevel modeling. When data are in a wide multivariate format, you have a single-level model. The
multivariate analysis takes care of clustering due to repeated measures.
ide katrine birkeland posted on Wednesday, April 25, 2012 - 11:37 pm
OK, thank you very much. I'm still not sure how to calculate the moderating variable and write syntax to test for moderation with longitudinal data, though. I've been investigating the examples in
ch. 6, but I can't seem to find anything similar. Any suggestions? I apologize for my ignorance and thank you very much for you time.
Linda K. Muthen posted on Thursday, April 26, 2012 - 1:46 pm
Moderation can be tested for categorical moderators using multiple group analysis. If a parameter is not equal across groups, this is an interaction. It can also be tested by including an interaction
in the regression:
y ON x1 x2 x1x2;
ide katrine birkeland posted on Monday, April 30, 2012 - 1:50 am
Thank you very much. How do you then include the time variable in this? I am familiar with moderation, but have never done it with a longitudinal data set. My hypotheses is whether and how x1x2
moderates the x1-y relationship over time. It is theorethically relevant to investigate this both within and between subjects. My intial thought was to do a triple interaction in multilevel, x1x2t,
however, it seems problematic as t is then a categorical variable.
Here's an idea I had, but I see that it's just a start. Can I write the syntax something like this?
ix1 sx2 | x1@0 x1@1 x1@2;
iy sy | y@0 y@1 y@2;
ix1x2 sx1x2 | x1x2@0 x1x2@1 x1x2@2;
iy ON x1;
sy ON x1;
ix1x2 ON x1 XWITH y;
sx1x2 ON x1 XWITH y;
ide katrine birkeland posted on Monday, April 30, 2012 - 1:53 am
*Continues from post above:
...or maybe something like this?
ix1 sx2 | x1@0 x1@1 x1@2;
iy sy | y@0 y@1 y@2;
ix1x2 sx1x2 | x1x2@0 x1x2@1 x1x2@2;
iy ON x1 x1x2;
sy ON x1 x1x2;
Linda K. Muthen posted on Monday, April 30, 2012 - 2:13 pm
Please keep your posts to one window.
I am confused about your model. The | symbol for growth is used for wide format data where you would have variables x1, x2, and x3. It seems to me you have long format data because you show only x1
three times. See Example 9.16 of the user's guide. Then you would create an interaction with time using the DEFINE statement, for example,
int = time*moderator;
And then in the MODEL command, you would have:
s1 | y ON time;
s2 | y ON int;
ide katrine birkeland posted on Wednesday, May 02, 2012 - 3:10 am
Thank you very much for you patience, and my apologies for the double posting.
Actually, I have both long and wide versions of the data set, so with regards to your comment I am now using the long format of my data.
Using Example 9.16, I have tried to create a new syntax.
When I run it I get the following error: *** ERROR in MODEL command
Unknown variable: INT
Below is parts of the syntax I ran:
Usevariable = OPce POSxOP CY ID time;
Cluster = ID;
DEFINE: int = time*POSxOP;
ANALYSIS: Type = Twolevel Random;
s1 | CY ON time;
s2 | CY ON int;
CY ON OPce;
CY s1 s2 ON OPce POSxOP;
CY WITH s1 s2;
Linda K. Muthen posted on Wednesday, May 02, 2012 - 6:26 am
Any new variable created in DEFINE that is used in the analysis must be put at the end of the USEVARIABLES list.
ide katrine birkeland posted on Thursday, May 10, 2012 - 12:30 am
Dear Prof. Muthen,
Thank you, I am now able to run the full model, but I get the following message:
THE RESIDUAL CORRELATION BETWEEN INT AND TIME IS 0.999
TO A LARGE VALUE.
Of course there is logic in time and int being correlated as int is defined by time*moderator. However, I do not understand why the correlation is so high, in SPSS, the correlation is -.058**.
Second, I'm am unsure how I set the "ALGORITHM=EM AND MCONVERGENCE
TO A LARGE VALUE."
Thank you for your help.
Linda K. Muthen posted on Thursday, May 10, 2012 - 11:19 am
Please send the output and your license number to support@statmodel.com.
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=12&page=171","timestamp":"2014-04-18T15:58:10Z","content_type":null,"content_length":"57424","record_id":"<urn:uuid:282c004b-037c-44e8-94d6-938e37b24721>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using the Inverse Trigonometric Functions - Problem 2
Every once in a while you will see a problem like this on your homework; simplify secant squared of inverse tangent of x. You are going to get some kind of algebraic expressions as a result of this
problem, and they're sort of interesting because it shows you that compositions of trig functions, and inverse trig functions have an algebraic interpretation.
Well, let's make a little substitution. Let's call this theta, and remember the inverse trig functions give you an angle. If theta equals inverse tangent of x, then by the definition of inverse
tangent, x equals tangent theta. So what I want to do in order to understand this relationship better, I'm going to draw a right triangle and I'm going to label one of the acute angles theta.
I want to label the sides so that tangent of theta equals x. You can label them however you like, but it's best to label them in the best way possible. Remember that tangent is the side opposite
theta over this side adjacent theta. So I could label the sides x and 1, and if I do then tangent theta equals x over 1, that works. And for now I'll just call my hypotenuse h.
Now I have the secant squared of theta that I have to calculate. Well let's look at this picture, what's the secant of theta? Remember that secant is one over cosine, so let's first find the cosine
of theta. Now cosine is side adjacent over hypotenuse, it's 1 over h. That means secant is h, and secant squared is h². Now what is h?
We could use the Pythagorean theorem in order to get a value for h, 1² plus x² equals h², and the we substitute, secant squared theta equals 1 plus x². 1² plus x² is 1 plus x², and then we realize
that we're pretty much done. Theta is inverse tangent of x, secant squared of inverse tangent x equals 1 plus x², and that's what we wanted to show.
We wanted to find an algebraic interpretation for secant squared of inverse tangent x and it is 1 plus x² and that's it.
The key to a problem like this is to draw a right triangle that has the relationship that you need. Make a substitution for the inverse trig function inside. You can always call it theta, or some
other Greek letter, just remember that inverse trig functions always give you angles.
So you can set up a right triangle with that angle in it and then label the triangle appropriately, in this case we needed to make the triangle have tangent theta equal x. So it's kind of interesting
that two trig functions and an inverse trig function can give you 1 plus x².
secant tangent inverse tangent | {"url":"https://www.brightstorm.com/math/precalculus/advanced-trigonometry/using-the-inverse-trigonometric-functions-problem-2/","timestamp":"2014-04-17T13:05:57Z","content_type":null,"content_length":"65210","record_id":"<urn:uuid:752c1ecb-4b9f-43c2-b8fa-d54d86b34cd0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
This bouncing ball model illustrates the concept of entropy change as a result of the redistribution of energy in a system to available microstates. The system in this case is a simulation of a
rubber ball composed of a number of particles that are bound together with Hookian springs, with particular equilibrium length and spring constants. The ball of particles starts above the ground and
when allowed to fall under the influence of gravity rebounds off the ground. Energy is conserved in this process, nevertheless the height of the ball of particles decreases with each bounce as the
initial potential energy of the system is transferred to disordered vibrations of the internal particles. External mechanical energy, such as potential energy and ordered kinetic energy is
transferred to internal potential energy of the springs and disordered kinetic energy. It is possible to investigate how the rate at which this energy is redistributed relates to factors such as the
strength of the springs and number of particles.
powered by NetLogo
view/download model file: Bouncing_Ball.nlogo
Particles are created randomly in a small area near the top of the world view. In the simulation each particle interacts pair-wise with every other particle via a Hooke's law F=-kx force, where x is
the distance between the two interacting particles minus the equilibrium length of the spring. (Note: the particles do not interact in any other way, which means that they can pass through each other
if they move fast enough to overcome the repulsion from the springs). The motion is simulated numerically based on Newton's second law F=ma. In order to have precision with the simulation and to
conserve energy we use a fourth order Runge Kutta, rather than Euler's method. (Euler's method tends to result in energy loss over time due to numerical errors). There are options to include a number
of other forces in the simulation. The initial default is to add a damping force that serves to gradually dampen out the motion so that the particles can reach an equilibrium configuration. When this
equilibrium is established, damping can be turned off and gravity turned on to allow the ball to fall. It is also possible to have both damping and gravity at the same time, and neither. The elastic,
gravitational and kinetic energy of each particle is calculated at each time step. The internal energy is set equal to the total elastic energy of the particles and the kinetic energy that is in
excess of the center of mass kinetic energy. The total energy is the total gravitational potential energy and the center of mass kinetic energy.
HOW TO USE IT
Choose the number of particles, the rest length of the springs connecting them and the spring constant and the click setup. The particles are in a group near the top of the screen. Now click go. The
default mode is "damping", in order that the particles will reach an equilibrium configuration corresponding to a ball of particles. When the particles have reached equilibrium change the mode to
"gravity". Now gravity acts on the particles and there is no damping. Total energy is conserved, but as the ball bounces the distribution of energy between internal and external energy shifts, with
the result that the ball of particles does not return to its original height.
It is possible to change the mode so that both damping and gravity apply or neither applies. The later mode is interesting to try out if you want to see how a large number of particles with springs
interact. It is possible to add additional particles by using the add-particle button and clicking on the screen. Be aware that if you add particles a long way from the other particles you will be
adding a large amount of elastic energy to the system, which may result in particles disappearing from the screen.
Notice that when the ball of particles bounces external energy is always converted to internal energy. The bounce allows other energy states (namely vibrations of the particles) to become available.
Notice also that external energy is never totally lost, as long as damping and gravity are not both on at the same time.
Try changing the spring constant and rest length of the spring and see how this changes how quickly the ball of particles looses its external energy to internal energy. You can make a quantitative
comparison using the behavior space feature of NetLogo.
Initially, with damping on, the particles settle into a configuration with symmetry depending on the number of particles. Try starting with a low number of particles and then adding particles. You
should observe definite symmetry breaking transitions at particular numbers of particles.
With about 25 particles try putting gravity and damping on. The ball of particles will reach terminal velocity and settle on the ground in a configuration. Observe how the configuration changes with
shape with different values of k.
Allow about 25 particles to come to an equilibrium configuration with damping on. Then turn the spring constant off. Now turn on gravity only and observe the particles falling. Because there is no
internal interaction the particles gradually loose their cohesion, due to the difference in time it takes for the different parts of the ball to drop during each bounce.
It would be interesting to change the interaction force to be a more realistic force like the Lenard-Jones 6-12 potential, which is Hookian in a small range around the equilibrium length, but
diminishes for larger extensions and becomes very large for high compression.
See the other Entropy Models in this series
Copyright 2006 David McAvity
This model was created at the Evergreen State College, in Olympia Washington
as part of a series of applets to illustrate principles in physics and biology.
Funding was provided by the Plato Royalty Grant.
The model may be freely used, modified and redistributed provided this copyright is included and it not used for profit.
Contact David McAvity at mcavityd@evergreen.edu if you have questions about its use.
globals [g b dt k external-energy internal-energy total-energy gravity? damping? zero-point]
turtles-own[x y vx vy ax ay xtemp ytemp vxtemp vytemp kinetic-energy elastic-energy
kx1 kx2 kx3 ky1 ky2 ky3 jx1 jx2 jx3 jy1 jy2 jy3]
to setup
set dt 0.01
set mode "damping"
ask patches [ if pycor < (5 + min-pycor) [set pcolor green ]]
;ask patches with [pycor = 10 ][set pcolor white]
crt particle-number [;set color blue
set shape "circle"
set x (random-float (2 * rest-length + 1)) - rest-length
set y (random-float (2 * rest-length + 1)) - rest-length + 10
set vx 0
set vy 0
set ax 0
set ay 0
set elastic-energy 0
set kinetic-energy 0
setxy x y]
to go
if mode = "damping" [set gravity? false set damping? true]
if mode = "gravity" [set gravity? true set damping? false]
if mode = "neither" [set gravity? false set damping? false]
if mode = "both" [set gravity? true set damping? true]
ifelse gravity? [set g 9.81][set g 0]
ifelse damping? [set b 2.0][set b 0 ]
set k spring-constant
;; Here we solve the equations of motion using Runge-Kutta
ask turtles [find-acceleration ]
ask turtles [find-k1]
ask turtles [find-acceleration ]
ask turtles [find-k2]
ask turtles [find-acceleration ]
ask turtles [find-k3]
ask turtles [find-acceleration-and-energy ]
ask turtles [update-coordinates]
if mode = "damping" [set zero-point internal-energy]
if gravity? [ do-plots ]
ask turtles [set elastic-energy 0 ]
; find new velocity and position of particles based on Runge Kutta.
to update-coordinates
let kx4 vx
let ky4 vy
let jx4 (ax - b * vx )
let jy4 (ay - b * vy - g)
set vx vxtemp + (dt / 6)*(jx1 + (2 * jx2) + (2 * jx3) + jx4)
set vy vytemp + (dt / 6)*(jy1 + (2 * jy2) + (2 * jy3) + jy4)
set x xtemp + (dt / 6)*(kx1 + (2 * kx2) + (2 * kx3) + kx4)
set y ytemp + (dt / 6)*(ky1 + (2 * ky2) + (2 * ky3) + ky4)
set ax 0
set ay 0
set kinetic-energy (vx * vx + vy * vy) / 2
;; If particle are found below the green ground then give them positive velocity. This process conserves
;; energy and corresponds to a perfectly elastic collision with the ground.
if y < 5 + min-pycor [ set vy abs vy ]
;; Here we hide particles that have moved off screen. They are still part of the simulation, and
;; if they return on screen they become visible again.
ifelse (y > max-pycor) or (y < min-pycor)
[ set hidden? true
setxy x max-pycor]
[set hidden? false
setxy x y ]
;; The following three procedures are steps in the Runge Kutta process.
to find-k1
set xtemp x
set ytemp y
set vxtemp vx
set vytemp vy
set kx1 vx
set ky1 vy
set jx1 (ax - b * vx )
set jy1 (ay - b * vy - g)
set vx vxtemp + jx1 * (dt / 2)
set vy vytemp + jy1 * (dt / 2)
set x xtemp + kx1 * (dt / 2)
set y ytemp + ky1 * (dt / 2)
set ax 0
set ay 0
to find-k2
set kx2 vx
set ky2 vy
set jx2 (ax - b * vx )
set jy2 (ay - b * vy - g)
set vx vxtemp + jx2 * (dt / 2)
set vy vytemp + jy2 * (dt / 2)
set x xtemp + kx2 * (dt / 2)
set y ytemp + ky2 * (dt / 2)
set ax 0
set ay 0
to find-k3
set kx3 vx
set ky3 vy
set jx3 (ax - b * vx )
set jy3 (ay - b * vy - g)
set vx vxtemp + jx3 * dt
set vy vytemp + jy3 * dt
set x xtemp + kx3 * dt
set y ytemp + ky3 * dt
set ax 0
set ay 0
;; Using Newton's second law F=ma, find the acceleration of each particle based on the interaction with
;; each of the other particles. The mass is set to 1.
to find-acceleration
ask turtles with [self != myself]
let xrel x - ( [x] of myself )
let yrel y - ( [y] of myself )
let d sqrt( (xrel * xrel) + (yrel * yrel) ) ;; distance between particles
if d = 0 [set d 0.0001] ;; This avoids divide by zero errors
;; when particles are on top of each other
let extension (rest-length - d)
set ax ax + ( k * extension ) * xrel / d
set ay ay + ( k * extension ) * yrel / d ]
;; This procedure is almost the same as above, but computes the eleastic potentail energy of the particles
;; as well as the acceleration. The energy stored in a spring is kx^2/2, but since each spring is
;; connects two particles we add kx^2/4 for each particle
to find-acceleration-and-energy
ask turtles with [self != myself]
let xrel x - ( [x] of myself )
let yrel y - ( [y] of myself )
let d sqrt( (xrel * xrel) + (yrel * yrel) )
if d = 0 [set d 0.0001]
let extension (rest-length - d)
set ax ax + ( k * extension ) * xrel / d
set ay ay + ( k * extension ) * yrel / d
set elastic-energy elastic-energy + k * ( extension * extension ) / 4 ]
;; computes total graviational energy, kinetic energy of particles and elastic energy and then separates
;; these into internal and external energy.
to find-energy
let total-kinetic-energy sum [kinetic-energy] of turtles
let x-cm mean [x] of turtles
let y-cm mean [y] of turtles
let vx-cm mean [vx] of turtles
let vy-cm mean [vy] of turtles
let kinetic-cm (count turtles )*((vx-cm * vx-cm) + (vy-cm * vy-cm)) / 2
let total-elastic-energy sum [elastic-energy] of turtles
set internal-energy total-elastic-energy + total-kinetic-energy - kinetic-cm
set external-energy (9.81 * (sum [y + max-pycor - 5] of turtles) + kinetic-cm )
set total-energy ( internal-energy + external-energy )
;; This procedure allows the user to add a new particle to the system at anytime by clicking
;; the mouse down. To avoid adding multiple particle there is a sligh delay after a particle is added.
to add-particle
if mouse-down? [
crt 1 [set color yellow
set shape "circle"
set x mouse-xcor
set y mouse-ycor
set vx 0
set vy 0
set ax 0
set ay 0
setxy x y ]
wait 0.2]
;; plot energies
to do-plots
set-current-plot-pen "total"
plot total-energy - zero-point
set-current-plot-pen "internal"
plot internal-energy - zero-point
set-current-plot-pen "external"
plot external-energy | {"url":"http://academic.evergreen.edu/m/mcavityd/NetLogo/Bouncing_Ball.html","timestamp":"2014-04-25T05:11:26Z","content_type":null,"content_length":"43981","record_id":"<urn:uuid:afdb2665-53bb-4a23-b316-758a899556b8>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
Velocity Reviews - Beginniner Algorithm Question
On Jun 28, 8:39*am, Patricia Shanahan <p...@acm.org> wrote:
> Ray Leon wrote:
> > Dear Sirs or Madam:
> > I have completed what I believe to be a correct answer to an algorithm. The
> > orignal question was posed as an incomplete algorithm, whereas I was to
> > complete it and move on to a flowchart of the same. I appreciate any help.
> > The link to the picture is here:
> >
> > Thank You,
> > Ray Leon
> > popeye...@qwest.net
> It appears from the instructor's note quoted in the web page that you
> have an instructor who has given carefully chosen help: "Without giving
> you too much, can you see where I'm going with this?". If you have
> doubts about your algorithm, you should discuss them with the instructor.
> For future reference, it is very, very important to indicate what the
> code is supposed to do. Also, there are a lot of ways of organizing and
> indenting if-then-else structures. It doesn't matter much which you
> choose, but you should not mix them in any one piece of code.
> Patricia- Hide quoted text -
> - Show quoted text -
I believe I have figured out the algorithm and flowchart for A = 3, B
= 1, C = 2 but doesn't this alogrithm,and flowchart work for any
combination of three values assigned to a, b, or c? | {"url":"http://www.velocityreviews.com/forums/printthread.php?t=622744","timestamp":"2014-04-16T22:34:46Z","content_type":null,"content_length":"10504","record_id":"<urn:uuid:16bc8f9a-392a-4e7f-b1ee-697db2e90eb2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Freeman Dyson and William Press' minirevolution in game theory
Prisoners facing a dilemma recommended not to cooperate any longer
Fred Singer (*1924) has pointed out an interesting Physics arXiv Blog's review of a new preprint by Adami and Hintze that mainly builds on the important March 2012 paper
Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent (PNAS)
Edge.org review by William Poundstone (with interviews)
and that, aside from more important things to be discussed momentarily, challenges the stereotypes that creative scientists and math thinkers should be below 30 or 40. Fred is almost 88 years old and
you may think that he would refer to somewhat younger people's research but you would be wrong. The paper above was written by William Press (*1948) and Freeman Dyson (*1923). ;-)
Researchers in game theory are currently fixing the holes in their lore. If you want to excite the community of game theorists, the 2012 data suggest that the optimum age for you could be 89 years or
64 years. ;-) What happened?
Consider the prisoner's dilemma i.e. the following game or situation.
Notorious climate alarmists Al Gore (A) and Bill McKibben (B) are finally arrested and each of them is allowed to either snitch or remain silent. If both men remain silent, both of them only get one
year for a lesser crime. If one of them snitches, he is freed but the other one gets 6 years in prison. But snitching isn't a universal recipe for freedom: if both snitch, each of them gets 3 years
in the jail.
(The original problem was talking about months, not years, in the prison but given my choice of the names, the timing looked preposterous so I had to change it to years, too.)
If Al and Bill communicate in advance, it's collectively better for them to agree to remain silent: when they "cooperate" (with one another), they only get one year. In that way, they get 2 years in
total while any snitching would mean that they get 6 years in total. However, if you can't communicate with your accomplice, it's better to avoid the maximum prison term by snitching – by "defecting"
(effectively trying to hurt the other guy). In that way, you get 0 or 3 years, depending on your partner's decision, which is better than 6 or 1 year. So at least the arithmetic average makes it
better to snitch.
But is it the relevant average? If you knew something about the other fearmonger, your decision could be different and perhaps better. You don't have to spend the same time in the jail as he will,
after all. But in the one-round game, you don't really know anything about the thinking or strategy of the other guy so the solution is ambiguous and depends on the priors.
Things become more interesting and complex if Al and Bill and arrested many times because in that case, you can no longer claim that you know nothing about the other villain's behavior. What is the
best strategy for Bill McKibben if he's arrested together with Al Gore many times?
Until this year, it's been a part of the widely believed game theorists' lore that the best strategy for Bill McKibben is to copy the decision of Al Gore from the investigation of the previous crime
(a "tit-for-tat" or fair strategy); in this description, I am not assuming anything about Al Gore's Al Gore Rhythm. What happens if Bill copies Al in this way? They asymptotically serve the same
total prison term.
Why? Use the equations \(a_i=0,1\) and \(b_i=0,1\) when the \(i\)-the decision by Al or Bill is "silent" or "snitch", respectively. The difference between the total prison terms is 3 years times the
sum \(\sum_i (a_i-b_i)\) because if their responses are the same, they get the same prison term. However, \(\sum_i(a_i)-\sum_i(b_i)\) vanishes up to the first and last term if \(b_{i+1}=a_i\) – the
sums are just shifted.
For example, if Al is silent all the time, Bill will also be silent and each of them will get 1 year for each crime. If Al snitches all the time, so will Bill and each of them will get 3 years for
each of the many crimes. If Bill notices that Al is always silent, it would be better for Bill to snitch all the time and remain free. So why did we say he should remain silent? It's because Al would
notice he's getting 6 years all the time and he would (probably) adapt and modify his strategy, perhaps, after some time. So at some moment, Al's behavior becomes "hard to decipher".
Such an answer (Bill should just delay Al's answer) may have looked natural because of the symmetry that "should" be respected in some way, people thought, and because of computer models that
provided people with some "evidence" that this strategy is optimal. Well, all this evidence was just a sloppy collection of prejudices. In Spring 2012, Press and Dyson found an ingenious strategy
that is better. With this strategy, the villain may decide about his former partner's overall score; or set an "extortionate" linear relationship between both men's scores.
Yes, as far as I know, it's the first paper ever published in PNAS that, when properly formulated, envisions a repeated prison term for the two notorious climate alarmists.
This is at least a minirevolution in game theory because it changes a slogan upside down. The widely believed optimum strategy used to have this slogan:
To maximize your freedom, don't be too clever, don't be too unfair. Mediocre egalitarian folks will prevail.
Press and Dyson found out that it's better to be clever – and unfair in a clever way – after all. ;-) In other words, you may fool evolution and you may be better off with unfair strategies. (Karl
Sigmund and Martin Nowak pointed out that a more accurate adjective than "evolutionary" for these strategies would be "adaptive" but I will keep on using the inaccurate adjective "evolutionary"
here.) Dyson apparently did the maths for the paper, Press did the game theory.
The paper was quickly followed by another PNAS paper (mostly a review, with one table of simulation results) by
Alexander Stewart and Joshua Plotkin
(full text here, 2-page PDF). I know Joshua as my ex-fellow Junior Fellow.
Let me say what the insight is in different words. People would believe that the best strategies would be the "evolutionary strategies" – trying what works and what doesn't and choosing the better
strategies from the trials. Press and Dyson showed that such strategies may be beaten by someone who knows more sophisticated maths. Their superior strategy may be expressed by setting a particular
determinant to zero; it is a zero-determinant strategy. Graphically, all these strategies lie on a plane.
So the pragmatists, empiricists, crowd-sources may seem clever enough but there can be cleverer folks who may assure the pragmatists about their inferiority. When a pragmatist is facing such a
superior player, his evolutionary strategy will lead him to conclude that it's best for himself to admit his own inferiority and accept strategies that, despite their relative quality (within the
evolutionary strategies), keep the pragmatist in the jail for a longer time. ;-)
William Press realized the basic "qualitative insights" that something could be totally different than believed when he was trying to do computer modeling of such things. His computer, assuming that
your own strategy always affects your own score, was crashing at various points. In some parameter space, the events were located on a plane! It was good news because the crashing of the computer
model actually proved that the assumption was wrong. If you play against a zero-determinant-equipped former accomplice, your total score doesn't depend on your strategy at all!
Dyson and Press have also showed that if your foe only remembers a certain number of recent rounds, there is no reason for you to have a greater memory: this can't improve your strategies.
The zero-determinant strategy itself depends on two parameters, \(\chi\) and \(\phi\). If \(\chi=1\), it becomes a "fair" strategy that leads to the same score of both players; the tit-for-tat is a
special example for \(\chi=1\) and \(\phi\), a less intuitive parameter (let me call it an axion), set to the extreme value of its a priori allowed interval. Different values of \(\phi\) for \(\chi=1
\) lead to other cooperative strategies that differ from tit-for-tat by subtle nuances only.
However, you may pick \(\chi\gt 1\) which means that you will have an advantage over your (new) foe. You may also be "generous" to your (new) foe and pick \(\chi\lt 1\). In that case, he will be
better off. If both players are using the zero-determinant strategy, each of them may set the foe's score but not his own. In such a situation, they may even agree on "enforceable treaties".
In the edge.org interview, Press says most of the things. Dyson recommends you the
Stewart-Plotkin paper
. The table 1 listing different strategies is quite impressive. It shows that the score is a decreasing function of the wins (for different strategies). Computer models were wrong, cooperation loses,
and defection wins. It's kind of cute that these proved yet surprising conclusions are characteristically associated with Freeman Dyson, the world's most achieved non-PhD physicist, a maverick, and a
critic of the climate models. ;-)
In the new paper, Adami and Hintze study those things in some detail and ask whether the zero-determinant strategies are evolutionary stable. Their being evolutionary stable would mean that no other
strategy could start to spread and overtake a large population that was using the zero-determinant strategies at the beginning.
They find out that the zero-determinant strategies are not evolutionary stable. It's pretty much because they don't perform well against each other which gives an advantage to the champions of other
strategies. Also, the generic evolution of the strategies drives them away from the zero-determinant subclass. There are lots of surprisingly complex ideas – the role of camouflaging in the stability
of such strategies, time scales over which the knowledge of your foe is an advantage, and so on. But this blog entry was supposed to be just an introduction.
snail feedback (6) :
Hi Lubos,I learned about the prisoners dilemma in a book by Hofstadter who also wrote the famous Gödel, Escher, Bach.
I am surprised that there is a new solution and less coorparation should be the key. Could it be that their are different ways around how one defines the problem?. From the way you talk about the
problem I get the idea that the goal is "winning" which means less years in prison than your opponent. But actually this would be not your goal in real life. In real life the goal would be least
numbers of years in prison for yourself only. For this to signal coorparation is always key because otherwise you may easily spiral into a loop of mutual retaliation. From an evolutionary point
of view it is important that evolution works on the level of the single individuum and also on the level of the species or tribe. Too much individual competition and your species may suffer.
What difference is there between the higgs field and the aether ? Anyone ?
Well, they could inject a regulator.
I have some other issues in that mathematicians are still injecting certain prejudices and assumptions about behavior that are not consistent with human behavior or even quantum physics (e.g.
interim communication is not a required for consistent cooperation) but hey what the heck, classical modeling without complex components is surely the way the world works right?
Hi! The luminiferous aether of the 19th century was supposed to be composed of some building blocks similar to atoms - in fact, famous physicists had created a working model of an eather out of
wheels and gears.
Because of this composition, the aether would choose a preferred reference frame - one in which the average velocity of its building blocks is zero. In all other reference frames, there would be
an "aether wind".
However, the Higgs field isn't created out of any localized building blocks and it therefore chooses no preferred reference frame. Even though the vacuum condensation value of the field is
nonzero, this "condensate" preserves the Lorentz symmetry. It follows that the special relativity continues to hold even in the presence of the Higgs condensate - the laws are Lorentz-symmetric.
For this reason, the vacuum with the Higgs condensate may still be considered "empty". After all, it's a matter of convention whether we call the vacuum expectation value h=0 or h=v.
You may "generalize" the word aether so that it includes Lorentz-invariant entities such as the Higgs field with a vev - but that wasn't the expectation of the 19th century advocates of the
concept of the aether.
And another point, after reading the paper, it should be pointed out to Dyson that plays can only occur at the rate of the slowest member, so if players have finite lifespan, the max number of
plays is dictated by the member with the lowest score (so they control the rate of play simply because they are the one's out of the game most of the time). So if the evolutionary player is
consistently losing, then they are also consistently controlling the rate of play. So there is a feedback mechanism that places a level of control on the ZD player. So while over some number of
plays the ZD player can achieve a higher score, within some time envelope, where players have finite life, then the ZD player can still be beaten since they need multiple plays to understand the
evolutionary players behavior. So the players could run out of time before the ZD player ever figured out what his opponent is actually up to. Dyson even almost states this sort of dilemma in
para 2 on page 4 of his own paper. Not to mention, the assumption is that the evolutionary player is actual intelligent enough to try to optimize their strategy, an it would take some number of
iterations for ZD to realize that possibility.
Only years for Bill and Al? Shouldn't it be decades? | {"url":"http://motls.blogspot.com/2012/08/freeman-dyson-and-william-press.html","timestamp":"2014-04-17T21:23:22Z","content_type":null,"content_length":"209131","record_id":"<urn:uuid:fd786599-b3e5-496a-b224-f5fd9bb4ea6c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gradient of parabola and specific max of y with 4 unknowns
April 5th 2008, 04:22 PM #1
Apr 2008
Gradient of parabola and specific max of y with 4 unknowns
i am needing help with a Maths equation that i cannot figure out. i need the gradient at x=45 to equal 3.21609 (y'=-25.593*cos(pi/25x - 4pi/5)). i also need the max of the parabola to equal
(x,55) where x is unknown. a , b and c are not given. it is really bugging me because i have wasted about 20 pages and still cannot find it.
if i have missed something please ask me.
where y'=0, x=-B/2A and y=(-AB^2)/(4A^2)+C
therefore 55 + (AB^2)/(4A^2)=C
y = Ax^2 +BX +C
y'= 2Ax + B
therefore at x = 45
3.21609 - 90A=B
using these i altered the equations but got nowhere. is the problem there (equations) if anyone can help?
equation found
if it helps anyone, i set a new axis to (0,0) and solved for y=AX^2 where y= drop in my case -40.383 (14.617-55). i let its y' = 3.21609 and got x = in terms of A. used that in y=AX^2 and got A.
(3.21609^2)/4A = -40.383, a's cancel. then used y=AX^2 + BX +C and y' = 2AX + B where x was 45 and A was found value. this got B. then i used y=AX^2 + BX +C where y = 55 and x = -B/2A and got
final equation = -0.064x^2 + 8.98X -259.773
I am sorry
this is incoherent what aer you looking for?...what is $a$ in $ax^2+bx+c$?
i am looking for a, b and c in parabola general equation. if you know a bit about parabola's im sure that if you read the whole thing you will understand what i am saying.
i had to get the gradient of the derivitive to equal that of another equation, and have a maximum of a particular y-value. that was all i was given. no x value for the max, no a,b or c for the
general equation...nothing!
i am needing help with a Maths equation that i cannot figure out. i need the gradient at x=45 to equal 3.21609 (y'=-25.593*cos(pi/25x - 4pi/5)). i also need the max of the parabola to equal
(x,55) where x is unknown. a , b and c are not given. it is really bugging me because i have wasted about 20 pages and still cannot find it.
if i have missed something please ask me.
where y'=0, x=-B/2A and y=(-AB^2)/(4A^2)+C
therefore 55 + (AB^2)/(4A^2)=C
y = Ax^2 +BX +C
y'= 2Ax + B
therefore at x = 45
3.21609 - 90A=B
using these i altered the equations but got nowhere. is the problem there (equations) if anyone can help?
I'm sorry... I also have trouble understanding your actual question. From what I can tell...
* You want the derivative of y'=-25.593*cos(pi/25x - 4pi/5) when x = 45 to equal 3.21609
* You also want the local maximum point of a parabola to be (x, 55)
The problem here is we are confused at what you are actually trying to do, could you please post the questions and given information ONLY.
Note: This may help. The general equations for a parabola are:
General form: $y=a(x-h)^2+k$
Turning Point form: $y=ax^2+bx+c$
April 5th 2008, 06:02 PM #2
Apr 2008
April 5th 2008, 06:53 PM #3
April 6th 2008, 12:12 AM #4
Apr 2008
April 6th 2008, 01:31 AM #5
Mar 2008 | {"url":"http://mathhelpforum.com/pre-calculus/33316-gradient-parabola-specific-max-y-4-unknowns.html","timestamp":"2014-04-17T03:55:48Z","content_type":null,"content_length":"42303","record_id":"<urn:uuid:04624947-f99e-4969-b2ec-ed9256b0bc4e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kim, Yong Jung - Division of Applied Mathematics, Korea Advanced Institute of Science and Technology
• Lecture note on convection and diffusion by Yong-Jung Kim
• Dates 2011.02.22~2011.02.26 Place NationalInstituteforMathematicalSciences,Daejeon,Korea
• , routine . conner kick
• Probing Nanotribological and Electrical Properties of Organic Molecular Films with Atomic Force Microscopy
• Quick GuideWindows 7 Wi-Fi set up POSTECH Wireless Network Connection Setup Guide for Windows 7
• Reconstruction of conductivity using the dual-loop method with one injection current in MREIT This article has been downloaded from IOPscience. Please scroll down to see the full text article.
• MAS501 Analysis for Engineers Spring 2011
• Proof. Let f(x) = xn - -x -1, F(x) = (x -1)f(x) = xn+1
• MAS091 25.100 3:1:3 , 10:30-11:45
• Asymptotic agreement of moments and higher order contraction in the Burgers equation
• The way to Department of Mathematical Sciences, KAIST, Daejeon, Korea. (The lecture room is #1501 in the building E6.)
• 790-784 San 31. Hyoja-dong Nam-gu Pohang Gyungbuk Korea Math Dept. TEL +82-54-279-8030 FAX +82-54-279-2799 WEB :http://math.postech.ac.kr
• Development and Evaluation of 3-D SiP with Vertically Interconnected Through Silicon Vias (TSV) Dong Min Jang', Chunghyun Ryul*, Kwang Yong Lee2, Byeong Hoon Cho',
• Diffusion beyond Fick's law: theory for a general Brownian motion
• J. Math. Pures Appl. 86 (2006) 4267 www.elsevier.com/locate/matpur
• Relative Newtonian potentials of radial functions and asymptotics in nonlinear
• 1. : , , I, II, III 2. : .
• IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 17, NO. 1, FEBRUARY 2001 85 Near-Time-Optimal Trajectory Planning for Wheeled
• J. Differential Equations 192 (2003) 202224 Asymptotic behavior of solutions to scalar
• CVGIP: IMAGE UNDERSTANDING Vol. 59, No. 2. March, pp. 171-182, 1994
• Proof. It is enough to show that n N, ci R (i = 1, , n) satisfying i=1 ci = 0,
• Copyright by SIAM. Unauthorized reproduction of this article is prohibited. SIAM J. MATH. ANAL. c 2009 Society for Industrial and Applied Mathematics
• INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS Inverse Problems 19 (2003) 12131225 PII: S0266-5611(03)61611-4
• Preliminaries Vinberg's Results Choi's Results Known Examples New Results From now on Deformation spaces of projective structures
• Long time asymptotics in a logistic model with a non-Fickian heterogeneous diffusion
• Reconstruction of conductivity using the dual-loop method with one injection current in MREIT This article has been downloaded from IOPscience. Please scroll down to see the full text article.
• Tipping Point Analysis of SIR Model in Social Networks with Heterogeneous Contact Rates August 5, 2011
• 478 IEEE TRANSACTIONS ON COMPONENTS AND PACKAGING TECHNOLOGIES, VOL. 31, NO. 2, JUNE 2008 Reliability and Failure Analysis of Lead-Free
• Mathematical Modelling and Numerical Analysis ESAIM: M2AN Modelisation Mathematique et Analyse Numerique Vol. 35, No 3, 2001, pp. 463480
• Invariance Property of a Conservation Law without Convexity
• KAIST Geometric Topology Fair 1 Central Functions and SL(2, )-Character Varieties
• KAIST Geometric Topology Fair 1 Generators of SL(2, )-Character Varieties of
• IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 18, NO. 3, MARCH 1999 231 Statistical Textural Features for Detection of
• Capacity Bounds for Two-Way Relay Channels Wooseok Nam, Sae-Young Chung, and Yong H. Lee
• COLOR EXTENDED VISUAL CRYPTOGRAPHY USING ERROR DIFFUSION InKoo Kang, Gonzalo R. Arce
• Information Spreading in Complex Networks Applied Algorithm Lab
• Computational Fluid Dynamics Evaluation of Good Combustion Performance in Waste Incinerators
• Exceptional Dehn fillings July 9, 2007
• J. Differential Equations 199 (2004) 269289 An Oleinik-type estimate for a
• Proof. Let x = xi/2i be a binary notation of x. Now we want to compute d(2k-1
• Introduction Trace Diagrams and Their Properties
• Introduction to R Applied Algorithm Lab.
• Explicit solutions to a convection-reaction equation and defects of numerical schemes q
• IEEE TRANSACTIONS ON SYSTEMS, MAN. AND CYBERNETICS, VOL. 25, NO. 6, JUNE 1995 985 REFERENCES A Fuzzy Approach to Elevator Group Control System
• This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research
• Connectedness gives a unified generalization of Oleinik or Aronson-Benilan type one-sided
• C. R. Acad. Sci. Paris, Ser. I 341 (2005) 157162 http://france.elsevier.com/direct/CRASS1/
• An improved theory for regenerative pump performance
• J. Differential Equations 244 (2008) 4051 www.elsevier.com/locate/jde
• Master's Thesis A study on trace functions of closed curves
• ARMA manuscript No. (will be inserted by the editor)
• Accommodation Information in Pohang Name Room Type Price(including Taxes) Distance to POSTECH
• Announcements: The 8th KAIST Geometric Topology Fair
• Doctoral Program Feb. 2011 -Present Department of Mathematical Sciences
• The Central Function Basis Trace Diagrams and Representation Theory
• 2011. 11 23 (social media)
• A direction to Yong Jung Kim's office at Department of Mathematical Sciences, KAIST, Daejeon(), Korea Or Hotel Spapia.
• Noncovalent functionalization of graphene with end-functional polymers Eun-Young Choi,ab
• On the Rate of Convergence and Asymptotic Profile of Solutions to
• ARMA manuscript No. (will be inserted by the editor)
• Channel Adaptive CQI Reporting Schemes for UMTS High-Speed Downlink Packet Access
• Main Results Background Classical Results Modern point of view Infinitesimal & Actual Deformations Deformations of Hyperbolic Coxeter Orbifolds
• IEEE Wireless Communications December 2007 951536-1284/07/$20.00 2007 IEEE ACCEPTED FROM OPEN CALL
• Damage Detection in Composite Plates by Using an Enhanced Time Reversal Method
• Data Analysis using R Applied Algorithm Lab.
• Ecological diffusion for biological organisms under heterogeneous Brownian movements
• Manuscript submitted to Website: http://AIMsciences.org AIMS' Journals
• A self-similar viscosity approach for the Riemann problem in isentropic gas dynamics
• A characterization of cones in the projective space
• A Time-Interleaved Flash-SAR Architecture for High Speed A/D Conversion
• 1047-7047/00/1101/0083$05.00 1526-5536 electronic ISSN
• IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 51, NO. 8, AUGUST 2004 1241 The Effect of the Discharge Aging Process on the
• Person Wide Web: Active Location based Web Service Architecture using Wireless Infrastructure
• Identification of RFID Tags in Dynamic Framed Slotted ALOHA
• IEEE MICROWAVE AND WIRELESS COMPONENTS LETTERS, VOL. 14, NO. 9, SEPTEMBER 2004 443 Balanced Topology to Cancel Tx
• Solution of KAIST POW 2012-1 Hun-Min, Park
• Boron-doped amorphous diamondlike carbon as a new p-type window material in amorphous silicon p-i-n solar cells
• A Comparison of Binarization Methods for Historical Archive Documents J. He, Q. D. M. Do*, A. C. Downton and J. H. Kim*
• Adaptive Handoff Algorithms for Dynamic Traffic Load Distribution in 4G Mobile Networks
• Proof. For any n numbers x1, x2, , xn satisfying iA xi = 0 for all nonempty proper subset A of {1, 2, , n}, define the function fn
• Problem of the Week 2012-1 February 14, 2012
• 1. : , , I, II, III 2. : .
• IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. 55, NO. 6, JUNE 2007 1363 A Wideband CMOS Variable Gain Amplifier
• Problem of the Week 2012-2 February 20, 2012
• Jong-Hwan Kim, Hong-Kook Chae, Jeong-Yul Jeon, and Seon-Woo Lee his article proposes a novel evolutionary algorithm, called
• Development of a three-axis hybrid mesh isolator using the pseudoelasticity of a shape memory alloy
• Computers and Concrete, Vol. 1, No. 1 (2004) 77-98 77 Cracking behavior of RC shear walls
• January 19, 2012 ROSAEC-HKUST CSE Joint Workshop
• A STATE-SPACE MODELING OF NON-IDEAL DC-DC CONVERTERS C. T.Rim , G. B. Joung and G. H . Cho
• Sum of squares Minjae Park | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/25/112.html","timestamp":"2014-04-20T14:08:40Z","content_type":null,"content_length":"23363","record_id":"<urn:uuid:154e2910-f7a9-4e4f-b8a7-9c2f0af638f3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
34,117pages on
this wiki
Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social |
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |
Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory
This article needs rewriting to enhance its relevance to psychologists..
Please help to improve this page yourself if you can..
File:Integral example.png
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
Integration is an important concept in mathematics, specifically in the field of calculus and, more broadly, mathematical analysis. Given a function ƒ of a real variable x and an interval [a, b] of
the real line, the integral
$\int_a^b f(x)\,dx \, ,$
is defined informally to be the net signed area of the region in the xy-plane bounded by the graph of ƒ, the x-axis, and the vertical lines x = a and x = b.
The term "integral" may also refer to the notion of antiderivative, a function F whose derivative is the given function ƒ. In this case it is called an indefinite integral, while the integrals
discussed in this article are termed definite integrals. Some authors maintain a distinction between antiderivatives and indefinite integrals.
The principles of integration were formulated independently by Isaac Newton and Gottfried Leibniz in the late seventeenth century. Through the fundamental theorem of calculus, which they
independently developed, integration is connected with differentiation: if ƒ is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of ƒ is known,
the definite integral of ƒ over that interval is given by
$\int_a^b f(x)\,dx = F(b) - F(a)\, .$
Integrals and derivatives became the basic tools of calculus, with numerous applications in science and engineering. A rigorous mathematical definition of the integral was given by Bernhard Riemann.
It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated
notions of integral began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two
or three variables, and the interval of integration [a, b] is replaced by a certain curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece
of a surface in the three-dimensional space. Integrals of differential forms play a fundamental role in modern differential geometry. These generalizations of integral first arose from the needs of
physics, and they play an important role in the formulation of many physical laws, notably those of electrodynamics. Modern concepts of integration are based on the abstract mathematical theory known
as Lebesgue integration, developed by Henri Lebesgue.
Pre-calculus integration
Integration can be traced as far back as ancient Egypt, circa 1800 BC, with the Moscow Mathematical Papyrus demonstrating knowledge of a formula for the volume of a pyramidal frustum. The first
documented systematic technique capable of determining integrals is the method of exhaustion of Eudoxus (circa 370 BC), which sought to find areas and volumes by breaking them up into an infinite
number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes and used to calculate areas for parabolas and an approximation to the area of a
circle. Similar methods were independently developed in China around the 3rd Century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by
Chinese father and son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.^[1] That same century, the Indian mathematician Aryabhata used a similar method in order to find the
volume of a cube.^[2]
The next major step in integral calculus came in the 11th century, when the Iraqi mathematician, Ibn al-Haytham (known as Alhazen in Europe), devised what is now known as "Alhazen's problem", which
leads to an equation of the fourth degree, in his Book of Optics. While solving this problem, he performed an integration in order to find the volume of a paraboloid. Using mathematical induction, he
was able to generalize his result for the integrals of polynomials up to the fourth degree. He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned
with any polynomials higher than the fourth degree.^[3] Some ideas of integral calculus are also found in the Siddhanta Shiromani, a 12th century astronomy text by Indian mathematician Bhāskara II.
The next significant advances in integral calculus did not begin to appear until the 16th century. At this time the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay
the foundations of modern calculus. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation.
Newton and Leibniz
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between
integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus
allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it
allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
Formalizing integrals
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigor. Bishop Berkeley memorably attacked infinitesimals as "the ghosts of departed quantities".
Calculus acquired a firmer footing with the development of limits and was given a suitable foundation by Cauchy in the first half of the 19th century. Integration was first rigorously formalized,
using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered, to which Riemann's
definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and
Lebesgue's approaches, were proposed.
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with $\dot{x}$ or $x'\,\!$, which Newton
used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
The modern notation for the indefinite integral was introduced by Gottfried Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, "∫", from an elongated letter
"s", standing for summa (Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the
French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231). In so-called modern Arabic mathematical notation, which aims at pre-university levels of
education in the Arab world and is written from right to left, an inverted integral symbol 22px is used (W3C 2006).
Terminology and notation
If a function has an integral, it is said to be integrable. The function for which the integral is calculated is called the integrand. The region over which a function is being integrated is called
the domain of integration. If the integral does not have a domain of integration, it is considered indefinite (one with a domain is considered definite). In general, the integrand may be a function
of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense.
The simplest case, the integral of a real-valued function f of one real variable x on the interval [a, b], is denoted by
$\int_a^b f(x)\,dx .$
The ∫ sign, an elongated "s", represents integration; a and b are the lower limit and upper limit of integration, defining the domain of integration; f is the integrand, to be evaluated as x varies
over the interval [a,b]; and dx is the variable of integration. In correct mathematical typography, the dx is separated from the integrand by a space (as shown). Some authors use an upright d (that
is, $\mathrm{d}x$ instead of dx).
The variable of integration dx has different interpretations depending on the theory being used. For example, it can be seen as strictly a notation indicating that x is a dummy variable of
integration, as a reflection of the weights in the Riemann sum, a measure (in Lebesgue integration and its extensions), an infinitesimal (in non-standard analysis) or as an independent mathematical
quantity: a differential form. More complicated cases may vary the notation slightly.
Integrals appear in many practical situations. Consider a swimming pool. If it is rectangular, then from its length, width, and depth we can easily determine the volume of water it can contain (to
fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations
may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements.
File:Integral approximations.svg
Approximations to integral of √x from 0 to 1, with ■ 5 right samples (above) and ■ 12 left samples (below)
To start off, consider the curve y = f(x) between x = 0 and x = 1, with f(x) = √x. We ask:
What is the area under the function f, in the interval from 0 to 1?
and call this (yet unknown) area the integral of f. The notation for this integral will be
$\int_0^1 \sqrt x \, dx \,\!.$
As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat
less. Decreasing the width of the approximation rectangles shall give a better result; so cross the interval in five steps, using the approximation points 0, ^1⁄[5], ^2⁄[5], and so on to 1. Fit a box
for each step using the right end height of each curve piece, thus √^1⁄[5], √^2⁄[5], and so on to √1 = 1. Summing the areas of these rectangles, we get a better approximation for the sought integral,
$\textstyle \sqrt {\frac {1} {5}} \left ( \frac {1} {5} - 0 \right ) + \sqrt {\frac {2} {5}} \left ( \frac {2} {5} - \frac {1} {5} \right ) + \cdots + \sqrt {\frac {5} {5}} \left ( \frac {5} {5}
- \frac {4} {5} \right ) \approx 0.7497.\,\!$
Notice that we are taking a sum of finitely many function values of f, multiplied with the differences of two subsequent approximation points. We can easily see that the approximation is still too
large. Using more steps produces a closer approximation, but will never be exact: replacing the 5 subintervals by twelve as depicted, we will get an approximate value for the area of 0.6203, which is
too small. The key idea is the transition from adding finitely many differences of approximation points multiplied by their respective function values to using infinitely fine, or infinitesimal
As for the actual calculation of integrals, the fundamental theorem of calculus, due to Newton and Leibniz, is the fundamental link between the operations of differentiating and integrating. Applied
to the square root curve, f(x) = x^1/2, it says to look at the antiderivative F(x) = ^2⁄[3]x^3/2, and simply take F(1) − F(0), where 0 and 1 are the boundaries of the interval [0,1]. (This is a case
of a general rule, that for f(x) = x^q, with q ≠ −1, the related function, the so-called antiderivative is F(x) = (x^q+1)/(q + 1).) So the exact value of the area under the curve is computed formally
$\int_0^1 \sqrt x \,dx = \int_0^1 x^{\frac{1}{2}} \,dx = \int_0^1 d \left({\textstyle \frac 2 3} x^{\frac{3}{2}}\right) = {\textstyle \frac 2 3}.$
The notation
$\int f(x) \, dx \,\!$
conceives the integral as a weighted sum, denoted by the elongated "s", of function values, f(x), multiplied by infinitesimal step widths, the so-called differentials, denoted by dx. The
multiplication sign is usually omitted.
Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a limit of weighted sums, so that the dx suggested the limit of a
difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the Lebesgue integral, which is founded on an ability
to extend the idea of "measure" in much more flexible ways. Thus the notation
$\int_A f(x) \, d\mu \,\!$
refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here A denotes the region of integration.
Differential geometry, with its "calculus on manifolds", gives the familiar notation yet another interpretation. Now f(x) and dx become a differential form, ω = f(x)dx, a new differential operator d
, known as the exterior derivative appears, and the fundamental theorem becomes the more general Stokes' theorem,
$\int_{A} \bold{d} \omega = \int_{\part A} \omega , \,\!$
from which Green's theorem, the divergence theorem, and the fundamental theorem of calculus follow.
More recently, infinitesimals have reappeared with rigor, through modern innovations such as non-standard analysis. Not only do these methods vindicate the intuitions of the pioneers, they also lead
to new mathematics.
Although there are differences between these conceptions of integral, there is considerable overlap. Thus the area of the surface of the oval swimming pool can be handled as a geometric ellipse, as a
sum of infinitesimals, as a Riemann integral, as a Lebesgue integral, or as a manifold with a differential form. The calculated result will be the same for all.
Formal definitions
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other
definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
Riemann integral
Main article: Riemann integral
File:Integral Riemann sum.png
Integral approached as Riemann sum based on tagged partition, with irregular sampling positions and widths (max in red). True value is 3.76; estimate is 3.648.
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [a,b] be a closed interval of the real line; then a tagged partition of [a,
b] is a finite sequence
$a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!$
File:Riemann sum convergence.png
Riemann sums converging as intervals halve, whether sampled at ■ right, ■ minimum, ■ maximum, or ■ left.
This partitions the interval [a,b] into i sub-intervals [x[i−1], x[i]], each of which is "tagged" with a distinguished point t[i] ∈ [x[i−1], x[i]]. Let Δ[i] = x[i]−x[i−1] be the width of sub-interval
i; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, max[i=1…n] Δ[i]. A Riemann sum of a function f with respect to such a tagged partition is
defined as
$\sum_{i=1}^{n} f(t_i) \Delta_i ;$
thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The
Riemann integral of a function f over the interval [a,b] is equal to S if:
For all ε > 0 there exists δ > 0 such that, for any tagged partition [a,b] with mesh less than δ, we have
$\left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| < \epsilon.$
When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the
Riemann integral and the Darboux integral.
Lebesgue integral
Main article: Lebesgue integration
The Riemann integral is not defined for a wide range of functions and situations of importance in applications (and of interest in theory). For example, the Riemann integral can easily integrate
density to find the mass of a steel beam, but cannot accommodate a steel ball resting on it. This motivates other definitions, under which a broader assortment of functions is integrable (Rudin 1987
). The Lebesgue integral, in particular, achieves great flexibility by directing attention to the weights in the weighted sum.
The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a,b] is its width, b − a, so that the Lebesgue integral
agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
To exploit this flexibility, Lebesgue integrals reverse the approach to the weighted sum. As Folland (1984, p. 56) puts it, "To compute the Riemann integral of f, one partitions the domain [a,b] into
subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f".
One common approach first defines the integral of the indicator function of a measurable set A by:
$\int 1_A d\mu = \mu(A)$.
This extends by linearity to a measurable simple function s, which attains only a finite number, n, of distinct non-negative values:
\begin{align} \int s \, d\mu &{}= \int\left(\sum_{i=1}^{n} a_i 1_{A_i}\right) d\mu \\ &{}= \sum_{i=1}^{n} a_i\int 1_{A_i} \, d\mu \\ &{}= \sum_{i=1}^{n} a_i \, \mu(A_i) \end{align}
(where the image of A[i] under the simple function s is the constant value a[i]). Thus if E is a measurable set one defines
$\int_E s \, d\mu = \sum_{i=1}^{n} a_i \, \mu(A_i \cap E) .$
Then for any non-negative measurable function f one defines
$\int_E f \, d\mu = \sup\left\{\int_E s \, d\mu\, \colon 0 \leq s\leq f\text{ and } s\text{ is a simple function}\right\};$
that is, the integral of f is set to be the supremum of all the integrals of simple functions that are less than or equal to f. A general measurable function f, is split into its positive and
negative values by defining
\begin{align} f^+(x) &{}= \begin{cases} f(x), & \text{if } f(x) > 0 \\ 0, & \text{otherwise} \end{cases} \\ f^-(x) &{}= \begin{cases} -f(x), & \text{if } f(x) < 0 \\ 0, & \text{otherwise} \end
{cases} \end{align}
Finally, f is Lebesgue integrable if
$\int_E |f| \, d\mu < \infty , \,\!$
and then the integral is defined by
$\int_E f \, d\mu = \int_E f^+ \, d\mu - \int_E f^- \, d\mu . \,\!$
When the measure space on which the functions are defined is also a locally compact topological space (as is the case with the real numbers R), measures compatible with the topology in a suitable
sense (Radon measures, of which the Lebesgue measure is an example) and integral with respect to them can be defined differently, starting from the integrals of continuous functions with compact
support. More precisely, the compactly supported functions form a vector space that carries a natural topology, and a (Radon) measure can be defined as any continuous linear functional on this space;
the value of a measure at a compactly supported function is then also by definition the integral of the function. One then proceeds to expand the measure (the integral) to more general functions by
continuity, and defines the measure of a set as the integral of its indicator function. This is the approach taken by Bourbaki (2004) and a certain number of other authors. For details see Radon
Other integrals
Although the Riemann and Lebesgue integrals are the most important definitions of the integral, a number of others exist, including:
Properties of integration
• The collection of Riemann integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of
$f \mapsto \int_a^b f \; dx$
is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination
is the linear combination of the integrals,
$\int_a^b (\alpha f + \beta g)(x) \, dx = \alpha \int_a^b f(x) \,dx + \beta \int_a^b g(x) \, dx. \,$
• Similarly, the set of real-valued Lebesgue integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the
Lebesgue integral
$f\mapsto \int_E f d\mu$
is a linear functional on this vector space, so that
$\int_E (\alpha f + \beta g) \, d\mu = \alpha \int_E f \, d\mu + \beta \int_E g \, d\mu.$
• More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact
topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
$f\mapsto\int_E f d\mu, \,$
that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special
cases arise when K is R, C, or a finite extension of the field Q[p] of p-adic numbers, and V is a finite-dimensional vector space over K, and when K=C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalisation for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the
approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See (Hildebrandt 1953)
for an axiomatic characterisation of the integral.
Inequalities for integrals
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
• Upper and lower bounds. An integrable function f on [a, b], is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f(x) ≤ M for all x in [a, b]. Since the lower
and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
$m(b - a) \leq \int_a^b f(x) \, dx \leq M(b - a).$
• Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
$\int_a^b f(x) \, dx \leq \int_a^b g(x) \, dx.$
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b].
• Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then
$\int_c^d f(x) \, dx \leq \int_a^b f(x) \, dx.$
$(fg)(x)= f(x) g(x), \; f^2 (x) = (f(x))^2, \; |f| (x) = |f(x)|.\,$
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
$\left| \int_a^b f(x) \, dx \right| \leq \int_a^b | f(x) | \, dx.$
Moreover, if f and g are both Riemann-integrable then f ^2, g ^2, and fg are also Riemann-integrable, and
$\left( \int_a^b (fg)(x) \, dx \right)^2 \leq \left( \int_a^b f(x)^2 \, dx \right) \left( \int_a^b g(x)^2 \, dx \right).$
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable
functions f and g on the interval [a, b].
• Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|^p and |g|^q are also
integrable and the following Hölder's inequality holds:
$\left|\int f(x)g(x)\,dx\right| \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} \left(\int\left|g(x)\right|^q\,dx\right)^{1/q}.$
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
• Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|^p, |g|^p and |f + g|^p are also Riemann integrable and the following Minkowski
inequality holds:
$\left(\int \left|f(x)+g(x)\right|^p\,dx \right)^{1/p} \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} + \left(\int \left|g(x)\right|^p\,dx \right)^{1/p}.$
An analogue of this inequality for Lebesgue integral is used in construction of L^p spaces.
In this section f is a real-valued Riemann-integrable function. The integral
$\int_a^b f(x) \, dx$
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x[0] ≤ x[1] ≤ . . . ≤ x[n] = b whose values x[i] are
increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [x[i], x[i+1]] where an interval with a higher index lies to the right of one
with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
• Reversing limits of integration. If a > b then define
$\int_a^b f(x) \, dx = - \int_b^a f(x) \, dx.$
This, with a = b, implies:
• Integrals over intervals of length zero. If a is a real number then
$\int_a^a f(x) \, dx = 0.$
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One
reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that:
• Additivity of integration on intervals. If c is any element of [a, b], then
$\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.$
With the first convention the resulting relation
\begin{align} \int_a^c f(x) \, dx &{}= \int_a^b f(x) \, dx - \int_c^b f(x) \, dx \\ &{} = \int_a^b f(x) \, dx + \int_b^c f(x) \, dx \end{align}
is then well-defined for any cyclic permutation of a, b, and c.
Instead of viewing the above as conventions, one can also adopt the point of view that integration is performed on oriented manifolds only. If M is such an oriented m-dimensional manifold, and M' is
the same manifold with opposed orientation and ω is an m-form, then one has (see below for integration of differential forms):
$\int_M \omega = - \int_{M'} \omega \,.$
Fundamental theorem of calculus
Main article: Fundamental theorem of calculus
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original
function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be
Statements of theorems
$F(x) = \int_a^x f(t)\, dt.$
then F is continuous on [a, b]. If f is continuous at x in [a, b], then F is differentiable at x, and F′(x) = f(x).
• Second fundamental theorem of calculus. Let f be a real-valued integrable function defined on a closed interval [a, b]. If F is a function such that F′(x) = f(x) for all x in [a, b] (that is, F
is an antiderivative of f), then
$\int_a^b f(t)\, dt = F(b) - F(a).$
• Corollary. If f is a continuous function on [a, b], then f is integrable on [a, b], and F, defined by
$F(x) = \int_a^x f(t) \, dt$
is an anti-derivative of f on [a, b]. Moreover,
$\int_a^b f(t) \, dt = F(b) - F(a).$
Improper integrals
Main article: Improper integral
File:Improper integral.svg
The improper integral
$\int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} = \pi$
has unbounded intervals for both domain and range.
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these
conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity.
$\int_{a}^{\infty} f(x)dx = \lim_{b \to \infty} \int_{a}^{b} f(x)dx$
If the integrand is only defined or finite on a half-open interval, for instance (a,b], then again a limit may provide a finite result.
$\int_{a}^{b} f(x)dx = \lim_{\epsilon \to 0} \int_{a+\epsilon}^{b} f(x)dx$
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases,
limits are required at both endpoints, or at interior points.
Consider, for example, the function $\tfrac{1}{(x+1)\sqrt{x}}$ integrated from 0 to ∞ (shown right). At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though
the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of $\tfrac{\pi}{6}$. To integrate from 1 to ∞, a
Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, $\tfrac{\pi}{2} - 2\arctan \tfrac{1}{\sqrt{t}}$. This has a finite limit as t goes to
infinity, namely $\tfrac{\pi}{2}$. Similarly, the integral from ^1⁄[3] to 1 allows a Riemann sum as well, coincidentally again producing $\tfrac{\pi}{6}$. Replacing ^1⁄[3] by an arbitrary positive
value s (with s < 1) is equally safe, giving $-\tfrac{\pi}{2} + 2\arctan\tfrac{1}{\sqrt{s}}$. This, too, has a finite limit as s goes to zero, namely $\tfrac{\pi}{2}$. Combining the limits of the two
fragments, the result of this improper integral is
\begin{align} \int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} &{} = \lim_{s \to 0} \int_{s}^{1} \frac{dx}{(x+1)\sqrt{x}} + \lim_{t \to \infty} \int_{1}^{t} \frac{dx}{(x+1)\sqrt{x}} \\ &{} = \lim_{s \
to 0} \left( - \frac{\pi}{2} + 2 \arctan\frac{1}{\sqrt{s}} \right) + \lim_{t \to \infty} \left( \frac{\pi}{2} - 2 \arctan\frac{1}{\sqrt{t}} \right) \\ &{} = \frac{\pi}{2} + \frac{\pi}{2} \\ &{} =
\pi . \end{align}
This process is not guaranteed success; a limit may fail to exist, or may be unbounded. For example, over the bounded interval 0 to 1 the integral of $\tfrac{1}{x^2}$ does not converge; and over the
unbounded interval 1 to ∞ the integral of $\tfrac{1}{\sqrt{x}}$ does not converge.
It may also happen that an integrand is unbounded at an interior point, in which case the integral must be split at that point, and the limit integrals on both sides must exist and must be bounded.
\begin{align} \int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} &{} = \lim_{s \to 0} \int_{-1}^{-s} \frac{dx}{\sqrt[3]{x^2}} + \lim_{t \to 0} \int_{t}^{1} \frac{dx}{\sqrt[3]{x^2}} \\ &{} = \lim_{s \to 0} 3
(1-\sqrt[3]{s}) + \lim_{t \to 0} 3(1-\sqrt[3]{t}) \\ &{} = 3 + 3 \\ &{} = 6. \end{align}
But the similar integral
$\int_{-1}^{1} \frac{dx}{x} \,\!$
cannot be assigned a value in this way, as the integrals above and below zero do not independently converge. (However, see Cauchy principal value.)
Multiple integration
File:Volume under surface.png
Double integral as volume under a surface.
Integrals can be taken over regions other than intervals. In general, an integral over a set E of a function f is written:
$\int_E f(x) \, dx.$
Here x need not be a real number, but can be another suitable quantity, for instance, a vector in R^3. Fubini's theorem shows that such integrals can be rewritten as an iterated integral. In other
words, the integral can be calculated by integrating one coordinate at a time.
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of
two variables represents the volume of the region between the surface defined by the function and the plane which contains its domain. (The same volume can be obtained via the triple integral — the
integral of a function in three variables — of the constant function f(x, y, z) = 1 over the above-mentioned region between the surface and the plane.) If the number of variables is higher, then the
integral represents a hypervolume, a volume of a solid of more than three dimensions that cannot be graphed.
For example, the volume of the cuboid of sides 4 × 6 × 5 may be obtained in two ways:
$\iint_D 5 \ dx\, dy$
of the function f(x, y) = 5 calculated in the region D in the xy-plane which is the base of the cuboid. For example, if a rectangular base of such a cuboid is given via the xy inequalities 2 ≤ x
≤ 7, 4 ≤ y ≤ 9, our above double integral now reads
$\int_2^7 \int_4^9 \ 5 \ dx\, dy$
From here, integration is conducted with respect to either x or y first; in this example, integration is first done with respect to x as the interval corresponding to x is the inner integral.
Once the first integration is completed via the $F(b) - F(a)$ method or otherwise, the result is again integrated with respect to the other variable. The result will equate to the volume under
the surface.
$\iiint_\mathrm{cuboid} 1 \, dx\, dy\, dz$
of the constant function 1 calculated on the cuboid itself.
Line integrals
Main article: Line integral
A line integral sums together elements along a curve.
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively.
These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed
curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on
the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler
integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force multiplied by distance
may be expressed (in terms of vector quantities) as:
$W=\vec F\cdot\vec d$;
which is paralleled by the line integral:
$W=\int_C \vec F\cdot d\vec s$;
which sums up vector components along a continuous path, and thus finds the work done on an object moving through a field, such as an electric or gravitational field
Surface integrals
Main article: Surface integral
File:Surface integral illustration.png
The definition of surface integral relies on splitting the surface into small surface elements.
A surface integral is a definite integral taken over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be
integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface
elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that we have a fluid flowing through S, such
that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, we need to take the dot product of v with
the unit surface normal to S at each point, which will give us a scalar field, which we integrate over the surface:
$\int_S {\mathbf v}\cdot \,d{\mathbf {S}}.$
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the
classical theory of electromagnetism.
Integrals of differential forms
Main article: differential form
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology and tensors. The modern notation for the differential form, as well as the idea of the
differential forms as being the wedge products of exterior derivatives forming an exterior algebra, was introduced by Élie Cartan.
We initially work in an open set in R^n. A 0-form is defined to be a smooth function f. When we integrate a function f over an m-dimensional subspace S of R^n, we write it as
$\int_S f\,dx^1 \cdots dx^m.$
(The superscripts are indices, not exponents.) We can consider dx^1 through dx^n to be formal objects themselves, rather than tags appended to make integrals look like Riemann sums. Alternatively, we
can view them as covectors, and thus a measure of "density" (hence integrable in a general sense). We call the dx^1, …,dx^n basic 1-forms.
We define the wedge product, "∧", a bilinear "multiplication" operator on these elements, with the alternating property that
$dx^a \wedge dx^a = 0 \,\!$
for all indices a. Note that alternation along with linearity implies dx^b∧dx^a = −dx^a∧dx^b. This also ensures that the result of the wedge product has an orientation.
We define the set of all these products to be basic 2-forms, and similarly we define the set of products of the form dx^a∧dx^b∧dx^c to be basic 3-forms. A general k-form is then a weighted sum of
basic k-forms, where the weights are the smooth functions f. Together these form a vector space with basic k-forms as the basis vectors, and 0-forms (smooth functions) as the field of scalars. The
wedge product then extends to k-forms in the natural way. Over R^n at most n covectors can be linearly independent, thus a k-form with k > n will always be zero, by the alternating property.
In addition to the wedge product, there is also the exterior derivative operator d. This operator maps k-forms to (k+1)-forms. For a k-form ω = f dx^a over R^n, we define the action of d by:
${\bold d}{\omega} = \sum_{i=1}^n \frac{\partial f}{\partial x_i} dx^i \wedge dx^a.$
with extension to general k-forms occurring linearly.
This more general approach allows for a more natural coordinate-free approach to integration on manifolds. It also allows for a natural generalisation of the fundamental theorem of calculus, called
Stokes' theorem, which we may state as
$\int_{\Omega} {\bold d}\omega = \int_{\partial\Omega} \omega \,\!$
where ω is a general k-form, and ∂Ω denotes the boundary of the region Ω. Thus in the case that ω is a 0-form and Ω is a closed interval of the real line, this reduces to the fundamental theorem of
calculus. In the case that ω is a 1-form and Ω is a 2-dimensional region in the plane, the theorem reduces to Green's theorem. Similarly, using 2-forms, and 3-forms and Hodge duality, we can arrive
at Stokes' theorem and the divergence theorem. In this way we can see that differential forms provide a powerful unifying view of integration.
Computing integrals
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. It proceeds like this:
1. Let f(x) be the function of x to be integrated over a given interval [a, b].
2. Find an antiderivative of f, that is, a function F such that F' = f on the interval.
3. Then, by the fundamental theorem of calculus, provided the integrand and integral have no singularities on the path of integration,
$\int_a^b f(x)\,dx = F(b)-F(a).$
Note that the integral is not actually the antiderivative, but the fundamental theorem allows us to use antiderivatives to evaluate definite integrals.
The difficult step is often finding an antiderivative of f. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many
techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include:
Even if these techniques fail, it may still be possible to evaluate a given integral. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the
resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer
G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite
sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
Symbolic algorithms
Main article: Symbolic integration
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over
the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or
tedious tasks, including integration. Symbolic integration presents a special challenge in the development of such systems.
A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. For instance, it is known
that the antiderivatives of the functions exp ( x^2), x^x and sin x /x cannot be expressed in the closed form involving only rational and exponential functions, logarithm, trigonometric and inverse
trigonometric functions, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in elementary functions. Differential Galois theory
provides general criteria that allow one to determine whether the antiderivative of an elementary function is elementary. Unfortunately, it turns out that functions with closed expressions of
antiderivatives are the exception rather than the rule. Consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function.
On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may be still be possible to decide whether the antiderivative of a given function can be expressed using
these blocks and operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica and other computer algebra systems,
does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions of physics (like the Legendre
functions, the hypergeometric function, the Gamma function and so on). Extending the Risch-Norman algorithm so that it includes these functions is possible but challenging.
Most humans are not able to integrate such general formulae, so in a sense computers are more skilled at integrating highly complicated formulae. Very complex formulae are unlikely to have
closed-form antiderivatives, so how much of an advantage this presents is a philosophical question that is open for debate.
Numerical quadrature
Main article: numerical integration
The integrals encountered in a basic calculus course are deliberately chosen for simplicity; those found in real applications are not always so accommodating. Some integrals cannot be found exactly,
some require special functions which themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical
methods for approximating integrals, which today use floating point arithmetic on digital electronic computers. Many of the ideas arose much earlier, for hand calculations; but the speed of
general-purpose computers like the ENIAC created a need for improvements.
The goals of numerical integration are accuracy, reliability, efficiency, and generality. Sophisticated methods can vastly outperform a naive method by all four measures (Dahlquist & Björck 2008;
Kahaner, Moler & Nash 1989; Stoer & Bulirsch 2002). Consider, for example, the integral
$\int_{-2}^{2} \tfrac15 \left( \tfrac{1}{100}(322 + 3 x (98 + x (37 + x))) - 24 \frac{x}{1+x^2} \right) dx ,$
which has the exact answer ^94⁄[25] = 3.76. (In ordinary practice the answer is not known in advance, so an important task — not explored here — is to decide when an approximation is good enough.) A
“calculus book” approach divides the integration range into, say, 16 equal pieces, and computes function values.
Spaced function values
x −2.00 −1.50 −1.00 −0.50 0.00 0.50 1.00 1.50 2.00
f(x) 2.22800 2.45663 2.67200 2.32475 0.64400 −0.92575 −0.94000 −0.16963 0.83600
x −1.75 −1.25 −0.75 −0.25 0.25 0.75 1.25 1.75
f(x) 2.33041 2.58562 2.62934 1.64019 −0.32444 −1.09159 −0.60387 0.31734
File:Numerical quadrature 4up.png
Numerical quadrature methods: ■ Rectangle, ■ Trapezoid, ■ Romberg, ■ Gauss
Using the left end of each piece, the rectangle method sums 16 function values and multiplies by the step width, h, here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is
not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an
approximation of 3.76001. However 2^18 pieces are required, a great computational expense for so little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision
becomes an obstacle.
A better approach replaces the horizontal tops of the rectangles with slanted tops touching the function at the ends of each piece. This trapezium rule is almost as easy to calculate; it sums all 17
function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate.
Furthermore, only 2^10 pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy.
Romberg's method builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by T(h[0]), T(h[1]), and so on, where h[k+1]
is half of h[k]. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful
idea is to interpolate a polynomial through the approximations, and extrapolate to T(0). With this method a numerically exact answer here requires only four pieces (five function values)! The
Lagrange polynomial interpolating {h[k],T(h[k])}[k=0…2] = {(4.00,6.128), (2.00,4.352), (1.00,3.908)} is 3.76+0.148h^2, producing the extrapolated value 3.76 at h = 0.
Gaussian quadrature often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two x positions, ±^2⁄[√3], then double each value and sum to
get the numerically exact answer. The explanation for this dramatic success lies in error analysis, and a little luck. An n-point Gaussian method is exact for polynomials of degree up to 2n−1. The
function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.)
Shifting the range left a little, so the integral is from −2.25 to 1.75, removes the symmetry. Nevertheless, the trapezoid method is rather slow, the polynomial interpolation method of Romberg is
acceptable, and the Gaussian method requires the least work — if the number of points is known in advance. As well, rational interpolation can use the same trapezoid evaluations as the Romberg method
to greater effect.
Quadrature method cost comparison
Method Trapezoid Romberg Rational Gauss
Points 1048577 257 129 36
Rel. Err. −5.3×10^−13 −6.3×10^−15 8.8×10^−15 3.1×10^−15
Value $\textstyle \int_{-2.25}^{1.75} f(x)\,dx = 4.1639019006585897075\ldots$
In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular
Gauss–Kronrod quadrature formulas. Symmetry can still be exploited by splitting this integral into two ranges, from −2.25 to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More broadly,
adaptive quadrature partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most.
This brief introduction omits higher-dimensional integrals (for example, area and volume calculations), where alternatives such as Monte Carlo integration have great importance.
A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For
example, improper integrals may require a change of variable or methods that can avoid infinite function values; and known properties like symmetry and periodicity may provide critical leverage.
See also
External links
• Function Calculator from WIMS
• Mathematical Assistant on Web online calculation of integrals, allows to integrate in small steps (includes also hints for next step which cover techniques like by parts, substitution, partial
fractions, application of formulas and others, powered by Maxima (software))
Online books | {"url":"http://psychology.wikia.com/wiki/Integral","timestamp":"2014-04-17T16:27:38Z","content_type":null,"content_length":"210532","record_id":"<urn:uuid:d13a5541-3226-4619-b8c8-ae3fb9b2d318>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
gauravsaggis1 @ PaGaLGuY
Set 163:-
my ans is......
set 146
no clue for ques 1......
somebody please explain
pavan's share shud b=82.5 rs
distance travelled by car in 1/2 hr= dist travelled by bus in 1 hr
so speed ratio=sc/sb=2
total distance travelled by both in whole travel=2X80=160km
distance share covered by car =2/3of 160 in 80 mins
distance travelled by car in 1/2 hr= dist travelled by bus in 1 hr
so speed ratio=sc/sb=2
total distance travelled by both in whole travel=2X80=160km
distance share covered by car =2/3of 160
(Photo credit: Bud Caddell) The concepts of Set Theory are applicable not only in Quant, Data Interpretation and Logical Reasoning but also in solving syllogism questions. Let us first understand the
basics of the Venn Diagram before we move on to the concept of maximum and minimum. A large number of students ...
185 79 Comments
@ravihanda In a school of 100 students each of them play....
how much water is in large container... more data is required.....
i think.
wats the ans????
is it 2..
can solve it by remainder theorm.....
the milk present initially shud be 30 ltr..
correct me if m wrong. | {"url":"http://www.pagalguy.com/u/gauravsaggis1","timestamp":"2014-04-18T00:15:08Z","content_type":null,"content_length":"112550","record_id":"<urn:uuid:14f4ab29-4f49-4209-bb48-cca0ddb2e268>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: How to embed Hamming space to Hilbert one with minimum distortion?
Replies: 0
How to embed Hamming space to Hilbert one with minimum distortion?
Posted: Mar 23, 2010 6:30 PM
Good day!
Hamming space is trivial linear space under $\mathbb{F}_2$-field. Is
it possible to embed Hamming space $\mathbb{B}^n$ to Hilbert one with
translating linear variety under operations from $\mathbb{F}_2$ to
linear variety under new field (may be $\mathbb{R}$) operations with
some small distortion? This problem arised becuase I have to have a
dot product that unifies a distance and norm (this is false for
Hamming space because of modulo 2 operations).
Thank you very much! | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2055478","timestamp":"2014-04-20T04:22:26Z","content_type":null,"content_length":"14007","record_id":"<urn:uuid:705a228a-dcf6-4bb6-9d4a-a5ac89656794>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: TimeAdaptive Algorithms for Synchronization \Lambda
Rajeev Alur y Hagit Attiya z Gadi Taubenfeld x
We consider concurrent systems in which there is an unknown upper bound on
memory access time. Such a model is inherently different from asynchronous model
where no such bound exists, and also from timingbased models where such a bound
exists and is known a priori. The appeal of our model lies in the fact that while it
abstracts from implementation details, it is a better approximation of real concurrent
systems compared to the asynchronous model. Furthermore, it is stronger than the
asynchronous model enabling us to design algorithms for problems that are unsolvable
in the asynchronous model.
Two basic synchronization problems, consensus and mutual exclusion, are investi
gated in a shared memory environment that supports atomic read/write registers. We
show that \Theta(\Delta log \Delta
log log \Delta ) is an upper and lower bound on the time complexity of consen
sus, where \Delta is the (unknown) upper bound on memory access time. For the mutual
exclusion problem, we design an efficient algorithm that takes advantage of the fact that
some upper bound on memory access time exists. The solutions for both problems are
even more efficient in the absence of contention, in which case their time complexity is
a constant. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/648/3755219.html","timestamp":"2014-04-18T22:41:19Z","content_type":null,"content_length":"8508","record_id":"<urn:uuid:daae0eaf-5de6-4ef2-8c2e-09d9da719ec3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generating vector field with critical points
September 27th 2009, 05:12 PM
Generating vector field with critical points
For a school project I am asked to analyze a vector field. To analyze it, we need to generate a vector field with several critical points(focus, spiral, saddle..)
I have two questions about it:
1. I can generate most of them by manipulating functions (such as the function of a circle at (x,-y) has a vector field that is classified as focus) but how can I create one with all of them at
different points? Such as a focus at around -2,2 and a spiral at 1,1 and so on?
2. If I was given vector field, then I could analyze the critical points by the Jacobian method but the other way around is quite difficult for some classes of critical points. For instance a
spiral is formed if the eigenvalues of the Jacobian is complex conjugates (with nonzero imaginary values), so how do I go about coming up with a function of sort?
September 28th 2009, 04:14 AM
Would be quite a challenge to come up with a system that has all eight styles. But as far as generating each one separately, do you know about the Trace-Determinant? If not, look it up. Now draw
a circle around the origin in the T-D coordinate system. As the point (t,d) moves over this circle, it will pass through regions corresponding to each of the dynamic types. Now, calculate the
coefficients corresponding to (t,d) for each region for the system:
and the phase-space diagram for that system will pass through all eight of the dynamic styles as the point (t,d) travels around the circle. | {"url":"http://mathhelpforum.com/differential-geometry/104698-generating-vector-field-critical-points-print.html","timestamp":"2014-04-18T08:44:39Z","content_type":null,"content_length":"4824","record_id":"<urn:uuid:bded84c9-d185-472b-ab31-5e468609b223>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
change of variables formula for integrals
April 7th 2008, 11:32 PM #1
Junior Member
Jan 2008
Waipahu, HI
change of variables formula for integrals
Does anyone have any hints on how to figure out a change of variables formula for integrals? There aren't many tips in the book, and I just can't figure this one out. I think the fact that $f
(x,y)$ isn't linear is what's messing me up.
Here is the problem:
$\int\int_R\sin(9x^2+4y^2)dA$ where $R$ is the region in the first quadrant bounded by the ellipse $9x^2+4y^2=1$.
I guess I could do $u=9x^2+4y^2$, but what about $v$? What am I missing? Thank you!
Does anyone have any hints on how to figure out a change of variables formula for integrals? There aren't many tips in the book, and I just can't figure this one out. I think the fact that $f
(x,y)$ isn't linear is what's messing me up.
Here is the problem:
$\int\int_R\sin(9x^2+4y^2)dA$ where $R$ is the region in the first quadrant bounded by the ellipse $9x^2+4y^2=1$.
I guess I could do $u=9x^2+4y^2$, but what about $v$? What am I missing? Thank you!
Mind you, I don't actually know how to do this integral directly, but if we use your substitution your integral is
$\int_0^{1/3}\int_0^{(1/2)\sqrt{1 - 9x^2}} \sin(9x^2+4y^2)~dydx$
For the y integral, take $u = 9x^2 + 4y^2$ and recall that for this integration, x is simply a constant. So $du = 8y~dy$ which means your integral is
$= \int_0^{1/3}\int_{9x^2}^1 \sin(u)~\left [ 8 \cdot \left ( \frac{1}{2} \cdot \sqrt{u - 9x^2} \right ) \right ] ^{-1}~dudx$
Given the result I have a feeling this is not the best way to attack this problem.
I would suggest, perhaps adjusting your variables so that you are integrating over the unit circle rather than an ellipse by using
$u = 3x$
$v = 2y$
Then your integral becomes:
$\int_0^{1/3}\int_0^{(1/2)\sqrt{1 - 9x^2}} \sin(9x^2+4y^2)~dydx = \int_0^1\int_0^{\sqrt{1 - u^2}} sin(u + v) \cdot \left ( \frac{\partial (u, v)}{\partial (x, y)} \right ) ~dvdu$
where the quantity in ( ) is the Jacobian determinant. This is probably more along the lines of what you are looking for.
OK, that made the integral very easy. So in this case, we're not trying to simplify the integrand, we're trying to simplify the region. Thank you!
April 8th 2008, 02:26 AM #2
April 8th 2008, 01:02 PM #3
Junior Member
Jan 2008
Waipahu, HI | {"url":"http://mathhelpforum.com/calculus/33617-change-variables-formula-integrals.html","timestamp":"2014-04-21T15:05:02Z","content_type":null,"content_length":"40853","record_id":"<urn:uuid:f3cb5734-676c-4531-81b9-b840919be1d2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
What if everyone jumped? | Science | WIRED
• By Rhett Allain
• 08.26.10 |
• 3:48 pm |
Might as well jump. Jump. Go ahead, jump. – Van Halen
Suppose everyone in the world got together and jumped. Would the Earth move? Yes. Would it be noticeable? Time for a calculation. Note: I am almost certain that I have done this before, but I can’t
find where.
Starting assumptions.
• 7 billion people.
• Average weight: 50 kg (you know, kids and stuff)
• Average vertical jump (center of mass): 0.3 meters – and I think that is generous.
• Mass of the Earth: 6 x 10^24 kg
• Gravitational field near the surface of the Earth is constant with a magnitude of 9.8 N/kg
• Ignore the interaction with the Sun and Moon
Basic physics
Suppose I take the Earth and the people as my system. In this case, there are essentially no external forces on the system (see assumptions above). There will be two conserved quantities – momentum
and energy. Here, the term conserved means that that quantity does not change. I can write:
What does the “1″ and “2″ mean? These could be any two times. For this situation, let me say that time 1 is right after the people jump (and still moving up) and time 2 is when the people are at
their highest point.
Energy is also conserved. If I take the people plus the Earth as the system, then I can have both kinetic energy (K) and gravitational potential energy (U[g]). Using the 1 to represent the people
just jumping and 2 to represent them at their highest point, then:
About gravitational potential. First, it is the potential energy of the system, not of each object. Second, in this approximate linear form (mgh), the change is what really matters. This means that I
can set the potential at point 1 as 0 Joules. Also, the mass of the Earth does matter in this potential – that is where the 9.8 N/kg comes from.
The calculation
A couple of important things to start with. At position (and time) number 1, the Earth and the people are moving but there is zero gravitational potential energy. At position 2, the Earth and the
people are 0.3 meters apart and not moving (at the highest point). Finally, momentum is a vector – but this is a one-dimensional problem. I am going to let the y-direction be in the direction the
people jump.
This gives a momentum conservation equation of:
Now, I can use the energy equation to get an expression for the initial velocity of the people:
Just a quick check with reality. If you want to jump a height h, you would need a speed of:
This is what you get if you assume the velocity of the Earth is super small from above. Ok, I am going to put these two equations (momentum and energy) together. This looks bad, but it really isn’t
too bad. The problem is the velocity of the people from the work-energy method still has the velocity of the Earth. Avert your eyes if you are algebra-allergic.
Not finished quite yet – I need to now solve for the velocity of the Earth.
See, that wasn’t too bad. You can open your eyes now. Now for the numbers. If I use the values form above, I get a recoil speed of the Earth as:
Maybe you don’t like my starting values. But you know what? It doesn’t really matter – the mass of the Earth is so huge that it is going to be pretty darn difficult to get a detectable speed. Also,
there is the whole issue of getting everyone at the same place at the same time and getting them to jump at the same time.
I seem to recall the last time I did this calculation (that I can’t find) that I also estimated how many people you could get in one spot of the Earth. | {"url":"http://www.wired.com/2010/08/what-if-everyone-jumped/","timestamp":"2014-04-18T06:50:47Z","content_type":null,"content_length":"105042","record_id":"<urn:uuid:7be25e3c-868c-4037-ad0a-e941301fe3be>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: 1
A Simple and Accurate Approximation to the SEP
of Rectangular QAM in Arbitrary Nakagami-m
Fading Channels
Himal A. Suraweera, Student Member, IEEE, and Jean Armstrong, Senior Member, IEEE
Abstract-- Recently, Karagiannidis presented a closed-form
solution for an integral which can be used to compute the average
symbol error probability of general order rectangular quadrature
amplitude modulation (QAM) in Nakagami-m fading channels
with integer fading parameters. In this letter, using an accurate
exponential bound for the Gaussian Q-function, we derive a
simple approximate solution for that integral. In particular, the
solution can be used to compute the average SEP of general order
rectangular QAM over arbitrary Nakagami-m fading. Numerical
results are presented to verify the accuracy of the solution.
Index Terms-- Quadrature amplitude modulation (QAM),
symbol error probability, Nakagami-m fading channels, bounds,
Gaussian Q-function.
MICROWAVE and mobile high speed communication | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/098/1551273.html","timestamp":"2014-04-17T16:45:29Z","content_type":null,"content_length":"8189","record_id":"<urn:uuid:50186e97-c1e2-4eb7-b705-d614481aa68c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
- User Profile for: epat_@_ochester.rr.com
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
User Profile: epat_@_ochester.rr.com
User Profile for: epat_@_ochester.rr.com
UserID: 1740
Name: Ed Patt
Registered: 12/3/04
Total Posts: 38
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=1740","timestamp":"2014-04-16T08:10:52Z","content_type":null,"content_length":"12548","record_id":"<urn:uuid:78b702f6-335e-4b9b-b93e-ef716c65b977>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
dual of locally free sheaf
up vote 1 down vote favorite
For simplicity, assume everythings occur on a smooth projective variaty $X$.
Dual bundle of the given line bundle $\mathcal L$ is determined by $\mathcal L$ and $c_1(\mathcal L)$.
$\mathcal L^*= \mathcal L (-2c_1(\mathcal L)) $
My question is, is their any similar relation between a vector bundle of rank >1 and its dual? Can dual vector bundle be described as a combination of original bundle and its data(e.g chern classes)?
Even for a rank 2 case, I have no idea about this problem.
If you have one, please give me some short proof or sketch. Good reference is also very preferable. I appreciate any help.
ag.algebraic-geometry vector-bundles
1 What exactly do you mean by that formula for $\mathcal{L}^∗$? The chern class $c_1(\mathcal{L}^∗)\in H^2(X,\mathbb{Z})$ does not canonically determine a line bundle.. – J.C. Ottem Apr 26 '12 at
Algebro-geometric definition and differential geometric definition is slightly different. In algebraic geometry, Chern class lives in the Chow ring, which is the space of cycles modulo rational
equivalance. For the line bundle, the divisor of a secrion is the first chern class. Lelong formula says its cohomology class recovers the cohomologically defined chern class, so they are closely
related. – Choa Apr 26 '12 at 2:22
2 @Choa: Essentially the statement you are making is that $\mathcal{O}(D)^*=\mathcal{O}(-D)=\mathcal{O}(D)\otimes \mathcal{O}(-2D)$, which while true, is hardly a numerical characterization of the
dual. Expecting anything less tautological in the case of a more general vector bundle is unrealistic. – Daniel Litt Apr 26 '12 at 2:45
@Daniel: Your answer somehow clearifies my unrealistic thought...... Your right. – Choa Apr 26 '12 at 3:04
add comment
1 Answer
active oldest votes
If $E$ is locally free of rank 2 then $E^* \cong E(-c_1(E))$. The isomorphism is induced by the nondegenerate pairing $E\otimes E \to \Lambda^2E \cong O(c_1(E))$. For higher
up vote 2 down vote rank one can check that in general $E^*$ is not a twist of $E$.
Thank you! At least for rank 2 case, I can go on. – Choa Apr 26 '12 at 3:05
1 While your negative statement is true, it is worth generalizing to $E^*\cong (\bigwedge^{n-1} E)(-c_1E)$. – Ben Wieland Apr 26 '12 at 3:44
@Ben Wieland: Of course, and the reason is the same. – Sasha Apr 26 '12 at 5:57
1 Oh my! It is just a special case of koszul complex and its self duallity! I have to study this topic more deeply. – Choa Apr 26 '12 at 6:05
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry vector-bundles or ask your own question. | {"url":"http://mathoverflow.net/questions/95217/dual-of-locally-free-sheaf?sort=votes","timestamp":"2014-04-17T10:12:47Z","content_type":null,"content_length":"59224","record_id":"<urn:uuid:67808471-f6cd-4495-bf12-bb4cd0b718b6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Theory of Stochastic Processes
Results 1 - 10 of 276
- Wireless Personal Communications , 1998
"... Abstract. This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also
begin to look into how these limits might be approached. We examine exploitation of multi-element array (M ..."
Cited by 1530 (7 self)
Add to MetaCart
Abstract. This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin
to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve
wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building
wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to
Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of
antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is
available. Compared to the baseline n = 1 case, which by Shannon’s classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the
scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB.
For over 99%
- Psychol. Rev , 1978
"... A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes
the search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympath ..."
Cited by 380 (73 self)
Add to MetaCart
A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes the
search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympathetic vibrations in other tuning forks. Evidence is accumulated in parallel from each probe-memory
item comparison, and each comparison is modeled by a continuous random walk process. In item recognition, the decision process is self-terminating on matching comparisons and exhaustive on
nonmatching comparisons. The mathematical model produces predictions about accuracy, mean reaction time, error latency, and reaction time distributions that are in good accord with experimental data.
The theory is applied to four item recognition paradigms (Sternberg, prememorized list, study-test, and continuous) and to speed-accuracy paradigms; results are found to provide a basis for
comparison of these paradigms. It is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a
retrieval scheme. At the present time, one of the major deficiencies in cognitive psychology is the lack of explicit theories that encompass more than a single experimental paradigm. The lack of such
theories and some of the unfortunate consequences have been discussed recently by
- Review of Financial Studies , 1997
"... This article provides a Markov model for the term structure of credit risk spreads. The model is based on Jarrow and Turnbull (1995), with the bankruptcy process following a discrete state space
Markov chain in credit ratings. The parameters of this process are easily estimated using observable data ..."
Cited by 237 (12 self)
Add to MetaCart
This article provides a Markov model for the term structure of credit risk spreads. The model is based on Jarrow and Turnbull (1995), with the bankruptcy process following a discrete state space
Markov chain in credit ratings. The parameters of this process are easily estimated using observable data. This model is useful for pricing and hedging corporate debt with imbedded options, for
pricing and hedging OTC derivatives with counterparty risk, for pricing and hedging (foreign) government bonds subject to default risk (e.g., municipal bonds), for pricing and hedging credit
derivatives, and for risk management. This article presents a simple model for valuing risky debt that explicitly incorporates a firm's credit rating as an indicator of the likelihood of default. As
such, this article presents an arbitrage-free model for the term structure of credit risk spreads and their evolution through time. This model will prove useful for the pricing and hedging of
corporate debt with We would like to thank John Tierney of Lehman Brothers for providing the bond index price data, and Tal Schwartz for computational assistance. We would also like to acknowledge
helpful comments received from an anonymous referee. Send all correspondence to Robert A. Jarrow, Johnson Graduate School of Management, Cornell University, Ithaca, NY 14853. The Review of Financial
Studies Summer 1997 Vol. 10, No. 2, pp. 481--523 1997 The Review of Financial Studies 0893-9454/97/$1.50 imbedded options, for the pricing and hedging of OTC derivatives with counterparty risk, for
the pricing and hedging of (foreign) government bonds subject to default risk (e.g., municipal bonds), and for the pricing and hedging of credit derivatives (e.g. credit sensitive notes and spread
adjusted notes). This model can also...
- Review of Financial Studies , 1996
"... Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated
nonparametrically. We do not replace the continuous-time model by discrete approximations, even though the data are rec ..."
Cited by 194 (7 self)
Add to MetaCart
Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically.
We do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals. The principal source of rejection of existing models is the strong
nonlinearity of the drift. Around its mean, where the drift is essentially zero, the spot rate behaves like a random walk. The drift then mean-reverts strongly when far away from the mean. The
volatility is higher when away from the mean. The continuous-time financial theory has developed extensive tools to price derivative securities when the underlying traded asset(s) or nontraded factor
(s) follow stochastic differential equations [see Merton (1990) for examples]. However, as a practical matter, how to specify an appropriate stochastic differential equation is for the most part an
unanswered question. For example, many different continuous-time The comments and suggestions of Kerry Back (the editor) and an anonymous referee were very helpful. I am also grateful to George
- IEEE Transactions on Computer-Aided Design , 1992
"... Reliability assessment is an important part of the design process of digital integrated circuits. We observe that a common thread that runs through most causes of run-time failure is the extent
of circuit activity, i.e., the rate at which its nodes are switching. We propose a new measure of activity ..."
Cited by 148 (24 self)
Add to MetaCart
Reliability assessment is an important part of the design process of digital integrated circuits. We observe that a common thread that runs through most causes of run-time failure is the extent of
circuit activity, i.e., the rate at which its nodes are switching. We propose a new measure of activity, called the transition density, which may be defined as the "average switching rate" at a
circuit node. Based on a stochastic model of logic signals, we also present an algorithm to propagate density values from the primary inputs to internal and output nodes. To illustrate the practical
significance of this work, we demonstrate how the density values at internal nodes can be used to study circuit reliability by estimating (1) the average power & ground currents, (2) the average
power dissipation, (3) the susceptibility to electromigration failures, and (4) the extent of hot-electron degradation. The density propagation algorithm has been implemented in a prototype density
simulator. Using ...
- Psychological Review , 2004
"... The authors evaluated 4 sequential sampling models for 2-choice decisions—the Wiener diffusion, Ornstein–Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models—by fitting them to the
response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented wi ..."
Cited by 113 (30 self)
Add to MetaCart
The authors evaluated 4 sequential sampling models for 2-choice decisions—the Wiener diffusion, Ornstein–Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models—by fitting them to the
response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented with assumptions of variability across trials in the rate of accumulation of evidence from
stimuli, the values of response criteria, and the value of base RT across trials. Although there was substantial model mimicry, empirical conditions were identified under which the models make
discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed
(exponential) distributions of criteria, although the last was unable to produce error RTs shorter than correct RTs. The relationship between these models and 3 recent, neurally inspired models was
also examined. A common feature of many tasks studied by experimental psychologists is that they involve a simple decision about some feature of a stimulus that is expressed as a choice between two
alternative responses. Because decisions of this type are so fundamental to theory development and evaluation, their study has been
- In Bayesian Statistics 4 , 1992
"... When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest
focuses on quantiles of functionals of the posterior distribution, we describe an easily-implemented metho ..."
Cited by 97 (5 self)
Add to MetaCart
When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest focuses
on quantiles of functionals of the posterior distribution, we describe an easily-implemented method for determining the total number of iterations required, and also the number of initial iterations
that should be discarded to allow for "burn-in". The method uses only the Gibbs iterates themselves, and does not, for example, require external specification of characteristics of the posterior
density. Here the method is described for the situation where one long run is generated, but it can also be easily applied if there are several runs from different starting points. It also applies
more generally to Markov chain Monte Carlo schemes other than the Gibbs sampler. It can also be used when several quantiles are to be estimated, when the quantities of interest are probabilities
- Psychological Review , 1988
"... David Meyer and colleagues have recently developed a new technique for examining the time course of information processing. The technique is a variant of the response signal procedure: On some
trials subjects are presented with a signal that requires them to respond, whereas on other trials they res ..."
Cited by 73 (37 self)
Add to MetaCart
David Meyer and colleagues have recently developed a new technique for examining the time course of information processing. The technique is a variant of the response signal procedure: On some trials
subjects are presented with a signal that requires them to respond, whereas on other trials they respond normally. These two types of trials are randomly intermixed so subjects are unable to
anticipate which kind of trial is to be presented next. For data analysis, it is assumed that on the signal trials observed reaction times are a probability mixture of regular responses and guesses
based on partial information. The accuracy of guesses based on partial information can be determined by using the data from the regular trials and a simple race model to remove the contribution of
fastfinishing regular trials from signal trial data. This analysis shows that the accuracy of guesses is relatively low and is either approximately constant or grows slowly over the time course of
retrieval. Meyer and colleagues have argued that this pattern of results rules out most continuous models of information processing. But the analyses presented in this article show that this pattern
is consistent with several stochastic reaction time models': the simple random walk, the runs, and the continuous diffusion models. The diffusion model is assessed with data from a new experiment
using the studytest recognition memory procedure. Fitting the diffusion model to the data from regular trials fixes
- IEEE Journal on Selected Areas in Communications , 1990
"... We propose to characterize the burstiness of packet arrival processes with indices of dispersion for intervals and for counts. These indices, which are functions of the variance of intervals and
counts, are relatively straightforward to estimate and convey much more information than simpler indic ..."
Cited by 62 (0 self)
Add to MetaCart
We propose to characterize the burstiness of packet arrival processes with indices of dispersion for intervals and for counts. These indices, which are functions of the variance of intervals and
counts, are relatively straightforward to estimate and convey much more information than simpler indices, such as the coefficient of variation, that are often used to describe burstiness
, 1999
"... Introduction The integrate-and-fire (IF) neuron has become popular as a simplified neural element in modeling the dynamics of large-scale networks of spiking neurons. A simple version of an IF
neuron integrates the input current as an RC circuit (with a leakage current proportional to the depolariz ..."
Cited by 60 (19 self)
Add to MetaCart
Introduction The integrate-and-fire (IF) neuron has become popular as a simplified neural element in modeling the dynamics of large-scale networks of spiking neurons. A simple version of an IF neuron
integrates the input current as an RC circuit (with a leakage current proportional to the depolarization) and emits a spike when the depolarization crosses a threshold. We will refer to it as the RC
neuron. Networks of neurons schematized in this way exhibit a wide variety of characteristics observed in single and multiple neuron recordings in cortex in vivo. With biologically plausible time
constants and synaptic efficacies, they can maintain spontaneous activity, and when the network is subjected to Hebbian learning (subsets of cells are repeatedly activated by the external stimuli),
it shows many stable states of activation, each corresponding to a different attractor of the network dynamics, in coexistence with spontaneous activity (Amit & Brunel, 1997a). These s | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=110995","timestamp":"2014-04-20T07:09:12Z","content_type":null,"content_length":"43076","record_id":"<urn:uuid:3783dc92-1482-47ad-abf1-5ff20386daf0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use a specific example to show how supply and demand affects prices of a specific product or service. - Homework Help - eNotes.com
Use a specific example to show how supply and demand affects prices of a specific product or service.
Supply and demand work together to set the prices of goods and services. Let us look at the supply of and demand for teachers as a way of looking at how this works.
The demand for teachers comes from the number of schools that exist in a given area. As more schools come to exist, or as schools get bigger, more teachers are needed. The supply of teachers comes
from the number of people who are willing and able to get themselves the necessary qualifications to become teachers. There is a complicated relationship between the supply and demand of teachers
and their price. Both the supply and the demand of teachers will be affected by the price of teachers. At the same time, the price of teachers will affect the supply and the demand.
Let us say that many new schools go up and there is a high demand for teachers. If this happens and there are not enough teachers in the area (low supply) the price of teachers will go up. This
will happen as schools compete to hire the few teachers who are around. In this way, high demand and low supply cause prices to rise.
But as prices rise, the supply and demand will change as well. When prices rise, more people will want to be teachers. After all, if you are going to be paid more for a certain job, that job will
look more attractive. At the same time, the high prices will cause demand to drop. School districts will not be able to afford as many teachers. They will start to try to increase class sizes so
they won’t have to hire as many teachers.
From this, we can see that supply and demand work together to set prices. At the same time, prices affect supply and demand.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/communication-economics-377442","timestamp":"2014-04-18T18:13:51Z","content_type":null,"content_length":"27335","record_id":"<urn:uuid:d4b9ca39-9836-4eab-9f72-547191dff942>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for:
Mathematical Surveys and Monographs
2008; 759 pp; hardcover
Volume: 150
ISBN-10: 0-8218-4186-6
ISBN-13: 978-0-8218-4186-0
List Price: US$123
Member Price: US$98.40
Order Code: SURV/150
The interplay between finite dimensional algebras and Lie theory dates back many years. In more recent times, these interrelations have become even more strikingly apparent. This text combines, for
the first time in book form, the theories of finite dimensional algebras and quantum groups. More precisely, it investigates the Ringel-Hall algebra realization for the positive part of a quantum
enveloping algebra associated with a symmetrizable Cartan matrix and it looks closely at the Beilinson-Lusztig-MacPherson realization for the entire quantum \(\mathfrak {gl}_n\).
The book begins with the two realizations of generalized Cartan matrices, namely, the graph realization and the root datum realization. From there, it develops the representation theory of quivers
with automorphisms and the theory of quantum enveloping algebras associated with Kac-Moody Lie algebras. These two independent theories eventually meet in Part 4, under the umbrella of Ringel-Hall
algebras. Cartan matrices can also be used to define an important class of groups--Coxeter groups--and their associated Hecke algebras. Hecke algebras associated with symmetric groups give rise to an
interesting class of quasi-hereditary algebras, the quantum Schur algebras. The structure of these finite dimensional algebras is used in Part 5 to build the entire quantum \(\mathfrak{gl}_n\)
through a completion process of a limit algebra (the Beilinson-Lusztig-MacPherson algebra). The book is suitable for advanced graduate students. Each chapter concludes with a series of exercises,
ranging from the routine to sketches of proofs of recent results from the current literature.
Graduate students and research mathematicians interested in quantum groups and finite-dimensional algebras.
"...prove[s] to be a valuable reference to researchers working in the field. It contains and collects many results which have not appeared before in book form."
-- Mathematical Reviews | {"url":"http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-150","timestamp":"2014-04-17T05:26:59Z","content_type":null,"content_length":"17516","record_id":"<urn:uuid:4a1b71c4-5d6b-4922-82f2-118e8547b739>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summation: math and shakespeare
I was at a dinner party not too long ago and one of the other attendees did something very interesting ... he chastised one of the guests for not really knowing Shakespeare. Then, a few minutes
later, this same chastiser was bragging about how little he cared for math and science -- he said other people could focus on that.
Is Shakespeare really more important than math and science?
Too often people think what they know is REALLY important and what their ignorant of is something easily done by others.
As the dinner progressed, I asked a question to the other nine guests: you roll 5 dice, what is the probability of getting at least one four. Turns out, no one knew. Now granted, it is actually a
hard question for someone that hasn't studied probability ... the smattering of Stanford B-school and Harvard Law School grads hadn't studied math and statistics in college like I had. And one can
get through life fine without knowing simple probability … just like many get through life without knowing Shakespeare.
Whether it is the Monty Hall problem or the Birthday problem, people have a real lack of understand of their chance of something. Maybe that explains gambling. Or playing credit card roulette. it
seems math and science is quite important for any learned person to master.
Now I don’t know much about Shakespeare … but it isn’t something I brag about. In fact, I see that as one of my deficiencies that I'm not proud of. So I get taken aback when people feel that what
they know is so much more important than what they don't know.
Recent Comments | {"url":"http://blog.summation.net/2007/04/math_and_shakes.html","timestamp":"2014-04-24T18:33:10Z","content_type":null,"content_length":"33977","record_id":"<urn:uuid:1ed13609-a168-409f-a770-b0600d10f16d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mileski Law (or Mileski Theorem) is first introduced in 2006 by Robert Mileski, born in Struga, Republic of Macedonia. The law is defined for the complete set of integers.
This rule applies to a monotonically increasing sequence of positive integers or monotonically decreasing sequence of negative integers, and states that the sum of two consecutive elements of the
sequence is smaller or equal to the difference of their squares.
For a given sequence:
the theorem states that:
For a monotonically increasing integer sequence defined as:
the equality in the equation is achieved:
1. For the set of positive integers
2. For monotonically increasing integer sequence defined as
3. For the set of negative integers
External Links:
If you want to test the theorem how it works, I've developed a small web page, where you can run your sequences and test: http://mileskitheorem.brinkster.net/default.aspx
I would like to thank Aneta Koseska for supporting me in my work, as well as guiding me how to make this better. | {"url":"http://mileski-theorem.tripod.com/","timestamp":"2014-04-17T03:49:08Z","content_type":null,"content_length":"23651","record_id":"<urn:uuid:1ebe8c00-3a10-4805-80bb-99a98b17c372>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a percolation threshold in the hard discs model?
up vote 12 down vote favorite
Take a random configuration of $n$ non-overlapping discs of radius $r$ in the unit square $[0,1]^2$. (You could think of this as taking $n$ points uniform randomly in $[r,1-r]^2$ and then restricting
attention to the case that every pair of points $\{ x,y \}$ satisfies $d(x,y) \ge 2r$.)
From the point of view of statistical mechanics, the interesting case is when $r^2 n \to C$ for some constant $C > 0 $ as $n \to \infty$, and various kinds of phase transitions have been studied
experimentally, but little seems known mathematically.
What I am wondering about is whether anyone has studied the following kind of percolation. Set $\lambda >1$ to be a fixed parameter to measure proximity. Define a graph by considering the centers of
the $n$ discs to be the vertices, and connecting a pair $\{ x,y \}$ by an edge whenever $d(x,y) < 2 \lambda$.
My question is: given some choice of $\lambda$ does there exist a critical threshold $C_t$ (depending on $\lambda$) such that whenever $C > < C_t$ all the connected components of this graph are
likely to be small, of order $O(\log n)$ or even $o(n)$, and whenever $C > C_t$ there is a giant component, of order $\Omega(n)$?
What I know about is that for geometric random graphs on i.i.d. random points, percolation is known to occur for fairly general distibutions, and that this is closely related to bond percolation on a
lattice. But in the hard spheres distibution points are far from being independent.
I would also be interested to hear about percolation on other kinds of repulsive point processes -- Matern, Strauss, etc.
pr.probability statistical-physics stochastic-processes
What's typically known as "continuum percolation" is an overlapping disc model. This isn't quite what you're asking about, but it might be a good place to start. See the book Continuum Percolation
by Meester and Roy, or the continuum percolation chapter in Grimmett's Percolation. – Tom LaGatta Oct 1 '10 at 15:56
Thanks Tom, I'll look into it. – Matthew Kahle Oct 1 '10 at 17:19
This paper arxiv.org/abs/1110.0527 reports on simulations on a related but different model. Instead of hard disks, they consider 2D disks with a short range harmonic repulsion, and this allows them
to consider only the case $\lambda=r$ in your notation. Interestingly, they find a percolation threshold at $\phi_P\approx0.558$, well below the onset of rigidity (jamming) at $\phi_J\approx0.84$,
and they seem to find some exponents which agree with continuum percolation. – j.c. Oct 5 '11 at 9:32
add comment
2 Answers
active oldest votes
This kind of question is an active area of research. I don't think the answer to your question is known, but here are the two most closely-related bits of research I'm aware of.
(1) A Poisson hard-sphere process in $\mathbb{R}^d$ is a set $S$ of spheres with non-overlapping interiors whose centres form a homogeneous Poisson process in $\mathbb{R}^d$. (The spheres
are allowed to have differing diameters.) It is invariant if $S+z$ and $S$ have the same distribution for all $z \in \mathbb{R}^d$. Cotar, Holroyd and Revelle show that for all $d \geq 45$,
there exists a translation-invariant Poisson hard-sphere process $\Lambda$ which percolates (contains an infinite connected component) almost surely. If you look at their proof, it seems
up vote that in fact they show the existence of such a process with the additional property that no sphere has radius larger than $K$ for some non-random $K=K(d)$.
11 down
vote (2) The lilypond model consists in simultaneously growing balls at unit speed around each point, with each ball ceasing to expand when it touches another ball. Häggström and Meester showed
that the lilypond model on a homogeneous Poisson point process doesn't percolate. More recently, Last and Penrose showed that in $d \geq 2$, there exists a critical constant $\lambda_c(d)$
such that if you enlarge all balls by a proportion at least $\lambda > \lambda_c(d)$ then there is percolation with probability one, and below $\lambda_c(d)$ there is no percolation with
probability one.
@Louigi: your links are mismatched and the Last and Penrose one points to the https and not the http. – Ori Gurel-Gurevich Oct 2 '10 at 5:33
Thanks Ori! Fixed. – Louigi Addario-Berry Oct 2 '10 at 10:29
add comment
Perhaps you are already aware of asymptotics for component sizes in the continuum percolation model. It is there in the book of M. Penrose.
Since you asked about percolation on repulsive point processes, here is a compressed version of our results of non-trivial phase transition on sub-Poisson point processes , point processes
up vote 4 that are less clustered than a Poisson in a certain sense. Our main example is a perturbed lattice. We also show that stationary determinantal point processes have a non-zero critical
down vote intensity as regards percolation. I think these point processes can be considered as repulsive point processes.
add comment
Not the answer you're looking for? Browse other questions tagged pr.probability statistical-physics stochastic-processes or ask your own question. | {"url":"https://mathoverflow.net/questions/40679/is-there-a-percolation-threshold-in-the-hard-discs-model","timestamp":"2014-04-18T05:53:43Z","content_type":null,"content_length":"66779","record_id":"<urn:uuid:7ab5e48f-559e-4068-86ab-523a4e955f28>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: "Programming With Mathematica" Exercise help
Replies: 2 Last Post: Apr 12, 2013 2:16 AM
Messages: [ Previous | Next ]
Re: "Programming With Mathematica" Exercise help
Posted: Apr 12, 2013 2:15 AM
I put most expressions in FullForm or close to FullForm
The following with no Hold
In[1]:= Clear[z,a];
Out[4]= 14
Now with Hold
In[5]:= Clear[z,a];
Out[8]= 12
As you pointed out in the clipping of the exercise question, "use the Hold
.. in the compound expression .. value of 12". The solution is in the
So I put a Hold on z where z is being initially Set in the compound
-----Original Message-----
From: plank.in.sequim@gmail.com [mailto:plank.in.sequim@gmail.com]
Sent: Thursday, April 11, 2013 3:14 AM
Subject: "Programming With Mathematica" Exercise help
I'm loving Paul Wellin's book "Programming with Mathematica: An
Introduction" and am trying to diligently do all the exercises. Most of
them have answers in the back but I'm stuck on Section 4.2, Exercise 2 and
there's no answer given. It gives the following Mathematica code:
z = 11;
a = 9;
z + 3 /. z -> a
So "z+3" is being evaluated to 14 and then the substitution has no effect.
He asks how to "use the Hold function in the compound expression to obtain a
value of 12". I don't seem to be able to get this to work. My original
thought was to hold z+3, but then the z in the replacement part gets
evaluated so the replacement is actually 11->3 which doesn't match in the
held z+3 expression. In fact, if you replace "z+3" with Hold[11+3] then
you'll end up with Hold[9+3]. Curiously, this works differently if you use
In: Replace[Hold[11+3],11->9]
Out: Replace[Hold[11+3],11->9]
In: Hold[11+3]/.11->9
out: Hold[9+3]
I thought these two were supposed to be equivalent so I'm a bit confused
In any event, I've tried all the commonsense ideas I've had and then spent
some time flailing about randomly with Hold's but nothing seems to work
correctly. Can anybody help me understand this? Thanks!
Date Subject Author
4/12/13 Re: "Programming With Mathematica" Exercise help hmichel@cox.net
4/12/13 Re: "Programming With Mathematica" Exercise help Murray Eisenberg | {"url":"http://mathforum.org/kb/message.jspa?messageID=8892785","timestamp":"2014-04-16T14:15:17Z","content_type":null,"content_length":"19154","record_id":"<urn:uuid:47095771-3c36-4ec5-b32a-f68abc42a95f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Crash Course Chapter 11: How Much Is A Trillion?
During the Crash Course you will often encounter numbers that are expressed in trillions. How much is a trillion?
You know what? I’m not really sure myself.
A trillion is a very, very big number, and I think it would be worth spending a couple of minutes trying to get our arms around the concept.
First, a numerical review.
A thousand is a one with three zeros after it.
A million is a thousand times bigger than that and it’s a one with six zeros after it. At this level I can really get my mind around the difference between these two numbers. A million dollars in the
bank is a very different concept from a thousand dollars in the bank. I get that.
A billion then is a thousand times bigger than a million, and it’s a one followed by 9 zeros.
And a trillion is a thousand times bigger than that, and it’s a one followed by 12 zeros.
So a trillion is a thousand billions, which means it is a million millions. You know what? I don’t know what that means! I can’t visualize that, so let’s take a different tack on this.
Suppose I gave you a thousand dollar bill and said you and a friend had to spend it all in a single evening out on the town. You’d have a pretty good time.
Now suppose you had a stack of thousand dollar bills that was four inches in height. If you did, you know what? Congratulations, you’d be a millionaire.
Now suppose you wanted to enter the super-elite of the wealthy and have a billion dollars. How tall of a stack of thousand dollar bills would that be?
The answer is a stack only 358 feet high, seen here barely reaching 1/3rd of the way up the Petronas towers.
Now how about a stack of thousand dollar bills to equal a trillion dollars? How tall would that stack be? Think of an answer.
Well, that stack would be 67.9 miles high.
And I meant stack, not laid end to end or anything cheesy like that. A solid stack of thousand dollar bills, 67.9 miles high. Now that’s a trillion dollars.
That still doesn’t do it for you?
Okay, I want you to imagine that you’re in a car on a roadway that is lined at the side with a sideways stack of thousand dollar bills. A nice, compact, rectangular column of thousand dollar bills is
snaking along the roadside next to you as you drive.
You drive along brrrrrrrrrrrrr without stopping for a little more than an hour, and the entire way there’s that stack of thousand dollar bills right next you, on the side of the road, the whole way.
Said another way, the amount of money created in the past 4.5 months in our economic system, if it had been printed up as thousand dollar bills and stacked along the side of the road, would stretch
from Springfield, Massachusetts to Albany, New York.
So there it is. Either you can visualize the stack better by driving along next to it, or by standing on top if it, or any other way you wish to express this statement.
But make no mistake, a trillion is a very, very big number and we should not be lulled into complacency simply because it is too big to really get our minds around. That should drive us to action
Keep this lesson in mind as we discuss the total accumulated debts and liabilities of the US, which are many tens of trillions of dollars. | {"url":"http://www.peakprosperity.com/print/228","timestamp":"2014-04-20T06:22:24Z","content_type":null,"content_length":"17455","record_id":"<urn:uuid:e11b2954-52b7-47d4-a6b1-fc1fd5af033b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
generalized scheme
A general idea of Alexander Grothendieck was that to study a geometry more general than schemes instead of the gluing of affine schemes as ringed spaces, one glues the functors of points; hence a
space is simply a sheaf of sets on some site $Loc$ of local models with a Grothendieck topology $\tau$ on it.
An algebraic scheme $X$ is a ringed space that is locally isomorphic to an affine scheme. Alternatively (see Gabriel-Demazure), it is a presheaf of sets on $Aff=CommRing^{op}$ locally representable
in Zariski topology on $Aff$. The second approach Alexander Grothendieck calls functor of points approach.
To recall the equivalence between the two points of view, every scheme $X$ gives rise to a representable presheaf on the formal dual of commutative rings
$X(-) : CRing \to Set$
$A \mapsto Hom_{CRing}(Spec A, X)$
and this is a sheaf with respect to Zariski Grothendieck topology on $Aff$. Sheaves in any other fixed subcanonical topology $\tau$ on $Aff$ are called $\tau$-locally affine spaces. The usual schemes
are obtained for $\tau=Zariski$ and $Loc=Aff$. Algebraic spaces are another example. In other fields like analytic spaces, sheaves on other categories of local models $Loc$ instead of $Aff$ are
considered in classical works.
In general various generalizations which do not have exactness properties of Zariski coverings or étale coverings, are usually among algebraic geometers called generalized spaces rather than
(generalized) schemes; thus the terminology almost scheme is OK because though the local objects are more general the exactness properties are basically the same (similarly for derived schemes of
Toen et al. noncommutative schemes of Rosenberg etc.).
There are various way to generalize the scope of the functor of points approach.
There are many generalizations of schemes, some are even called by their respective authors generalized schemes (e.g. Lurie, Durov). Deligne in Catégories Tannakiennes suggested algebraic geometry in
arbitrary symmetric monoidal category. Aspects of toric geometry and the foundations of the geometry over a field of one element (Smirnov-Kapranov, Dietmar, Connes…) can be founded using structure
sheaves of monoids, not rings. Another example is tropical geometry. Rings are sometimes noncommutative (e.g. D-schemes of Beilinson); the underlying topological space can be replaced by a site,
locale, topos or a non-distributive lattice by localizations. Usual commutative unital rings suffice for manifolds, rigid analytic spaces, schemes, formal schemes and so on. The emphasis in Lurie is
to categorify the space and to take the homotopy version of a ring, restating a formalism fitting the derived algebraic geometry, mainly of Simpson’s school.
Several different definitions by several authors exist.
Locally affine structured (∞,1)-toposes
Generalized schemes of Durov
N. Durov replaces the commutative rings by commutative algebraic monads (aka generalized rings) in sets and defines spectra in that context, and glues them together. This way he defines what he calls
generalized schemes: in a nutshell generalized schemes are schemes glued from affine spectra of generalized rings. The corresponding category of quasicoherent $\mathcal{O}$-modules is not abelian in
general. See also the separate entry generalized scheme after Durov.
Other generalized schemes
O. Gabber considers replacing rings by almost rings, this results in the theory of almost schemes.
One should note that Grothendieck school has occasionally studied ringed sites where ring is not required to be commutative and considered quasicoherent sheaves and cohomology in that context.
D-schemes of Beilinson are an example where this formalism is useful.
Rosenberg considers generalized relative schemes as categories over an arbitrary base category with a relatively affine cover satisfying some exactness conditions. The scheme as a category is in fact
abstracting the category of quasicoherent sheaves over some generalized scheme. Rosenberg calls the Zariski version of that formalism noncommutative scheme; some other versions of locally affine
spaces can be also relativized.
Rigid analytic geometry is featuring locally affinoid spaces (affinoid spaces are spectra of Banach algebras over complete ultrametric fields which belong to a special class called affinoid algebras;
Berkovich spectra are most often used) in so-called G-topology. | {"url":"http://www.ncatlab.org/nlab/show/generalized+scheme","timestamp":"2014-04-17T03:53:13Z","content_type":null,"content_length":"40327","record_id":"<urn:uuid:e8f94009-a040-4819-813d-7ac5eb0a06fe>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proposition 99
The square on a second apotome of a medial straight line applied to a rational straight line produces as breadth a third apotome.
Let AB be a second apotome of a medial straight line, and CD rational, and to CD let there be applied CE equal to the square on AB producing CF as breadth.
I say that CF is a third apotome.
Let BG be the annex to AB, therefore AG and GB are medial straight lines commensurable in square only which contains a medial rectangle.
Apply CH, equal to the square on AG, to CD producing CK as breadth, and apply KL, equal to the square on BG, to KH producing KM as breadth. Then the whole CL equals the sum of the squares on AG and
GB. Therefore CL is also medial.
And it is applied to the rational straight line CD producing CM as breadth, therefore CM is rational and incommensurable in length with CD.
Now, since the whole CL equals the sum of the squares on AG and GB, and, in these, CE equals the square on AB, therefore the remainder LF equals twice the rectangle AG by GB.
Bisect FM at the point N, and draw NO parallel to CD. Then each of the rectangles FO and NL equals the rectangle AG by GB.
But the rectangle AG by GB is medial, therefore FL is also medial.
And it is applied to the rational straight line EF producing FM as breadth, therefore FM is also rational and incommensurable in length with CD.
Since AG and GB are commensurable in square only, therefore AG is incommensurable in length with GB. Therefore the square on AG is also incommensurable with the rectangle AG by GB.
But the sum of the squares on AG and GB is commensurable with the square on AG, and twice the rectangle AG by GB with the rectangle AG by GB, therefore the sum of the squares on AG and GB is
incommensurable with twice the rectangle AG by GB.
But CL equals the sum of the squares on AG and GB, and FL equals twice the rectangle AG by GB, therefore CL is also incommensurable with FL.
But CL is to FL as CM is to FM, therefore CM is incommensurable in length with FM.
And both are rational, therefore CM and MF are rational straight lines commensurable in square only, therefore CF is an apotome.
I say next that it is also a third apotome.
Since the square on AG is commensurable with the square on GB, therefore CH is also commensurable with KL, so that CK is also commensurable with KM.
Since the rectangle AG by GB is a mean proportional between the squares on AG and GB, CH equals the square on AG, KL equals the square on GB, and NL equals the rectangle AG by GB, therefore NL is
also a mean proportional between CH and KL. Therefore CH is to NL as NL is to KL.
But CH is to NL as CK is to NM, and NL is to KL as NM is to KM, therefore CK is to MN as MN is to KM. Therefore the rectangle CK by KM equals the square on MN, that is, to the fourth part of the
square on FM.
Since, then, CM and MF are two unequal straight lines, and a parallelogram equal to the fourth part of the square on FM and deficient by a square figure has been applied to CM, and divides it into
commensurable parts, therefore the square on CM is greater than the square on MF by the square on a straight line commensurable with CM.
And neither of the straight lines CM nor MF is commensurable in length with the rational straight line CD set out, therefore CF is a third apotome.
Therefore, the square on a second apotome of a medial straight line applied to a rational straight line produces as breadth a third apotome. | {"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX99.html","timestamp":"2014-04-21T12:16:53Z","content_type":null,"content_length":"7865","record_id":"<urn:uuid:0ee1d485-3299-482c-9f04-70dcc2c6046f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help!! Calculus Problems!
March 25th 2006, 08:01 AM
Help!! Calculus Problems!
Anyone know how to solve the following problems?
1. Integrate
∫(x-1) / (lnx) dx Limit 0≦x≦1
∫(x^Alpha)*lnx dx Limit 0≦x≦1
∫(sin x)^3 / (x^2) dx Limit 0≦x≦∞
∫((sin x )/ x) dx Limit 0≦x≦∞
2. (a) Evaluate ∫((sin x)^3) /((sin x)^3 + (cos x)^3) dx Limit 0≦x≦Pi/2
(b) What is the rate of change of f(x) = arctan (tan x) at x =Pi/2
3. A point mass m moves along the curve y = 2(X^2) due to gravity and no other external force is present. Determine the power of gravity as a function of Xo, g, t. (Xo means x knot)
March 25th 2006, 12:40 PM
Do you have knowledge of complex analysis or only the basic (real) calculus?
March 25th 2006, 04:14 PM
Originally Posted by thebigshow500
Anyone know how to solve the following problems?
(b) What is the rate of change of f(x) = arctan (tan x) at x =Pi/2
Since $\arctan$ is inverse function for $\tan$ we have,
$\arctan (\tan x)=x$ thus, the derivative (rate of change) of this is $(x)'=1$. The problem, is of course that $\pi/2$ is not in the domain of $\tan x$ thus, it is not differenciable at $x=\pi/2$
thus, I would say that the rate of change does not exist there.
March 25th 2006, 04:21 PM
Originally Posted by thebigshow500
Anyone know how to solve the following problems?
2. (a) Evaluate ∫((sin x)^3) /((sin x)^3 + (cos x)^3) dx Limit 0≦x≦Pi/2
Use "Weierstrauss Substitution"
Thus, $u=\tan \frac{x}{2}$ for $-\pi<x<\pi$
$\sin x=\frac{2u}{1+u^2}$
$\cos x=\frac{1-u^2}{1+u^2}$
This, converts any rational function of sine and cosine into an ordinary rational function.
Let me continue,
first this function is countinous over $[0,\pi/2]$ thus, the integral is not improper of the second type. Thus, you may simply use the fundamental theorem of calculus.
Thus, the problem,
transfroms after Weierstrauss substitution into,
$\int^1_0\frac{ \left(\frac{2u}{1+u^2} \right)^3 }{ \left( \frac{2u}{1+u^2} \right)^3+\left( \frac{1-u^2}{1+u^2} \right)^3} \cdot \frac{2}{1+u^2}du$
Multiply the top and bottom of the left fraction by $(1+u^2)^3$ to get,
$\int^1_0\frac{8u^3}{8u^3+(1-u^2)^3}\cdot \frac{2}{1+u^2}du$
Hope, this helps, it is still a mess.
March 25th 2006, 04:46 PM
Thank you, buddy!
There is initially a mistake on the first problem, and I have fixed it already.
I hope the other problems can be solved as well. :)
March 25th 2006, 04:54 PM
Originally Posted by thebigshow500
Anyone know how to solve the following problems?
∫((sin x )/ x) dx Limit 0≦x≦∞
You need,
$\int^{\infty}_{0^+}\frac{\sin x}{x}dx$
Express, the sine as an infinite series and divide by $x$ to get,
Upon, integration,
$\left x-\frac{x^3}{3\cdot 3!}+\frac{x^5}{5\cdot 5!}-\frac{x^7}{7\cdot 7!} \right|^{\infty}_{0^+}$
I just do not know what this infinite sum is equal to. It seems to be to diverge. Thus,
$\int^{\infty}_{0^+}\frac{\sin x}{x}dx$
March 25th 2006, 10:46 PM
Originally Posted by TD!
Do you have knowledge of complex analysis or only the basic (real) calculus?
I just know the basic concepts on calculus.
For example, I know some techniques such as integration by parts, substitution, and chain rules, etc.
However, I still can't figure out how to deal with the above integral problems.
March 25th 2006, 11:15 PM
Strange, since for example the integral
$\int^{\infty}_{0^+}\frac{\sin x}{x}dx$
is typically one which can be computed using complex analysis.
For the record, it does converge, to $\pi/2$
March 26th 2006, 09:35 AM
Originally Posted by TD!
Strange, since for example the integral
$\int^{\infty}_{0^+}\frac{\sin x}{x}dx$
is typically one which can be computed using complex analysis.
For the record, it does converge, to $\pi/2$
Why does it diverge according to my computer program?
March 26th 2006, 09:38 AM
What program are you using?
March 26th 2006, 09:42 AM
Originally Posted by TD!
What program are you using?
This is best program out there (freeware :))
March 26th 2006, 09:44 AM
Apparently not 'the best' since it's saying that a convergent integral diverges ;)
I can show you the math if you like, but it'll involve complex analysis (residue calculation and complex contour integration).
March 26th 2006, 09:50 AM
Originally Posted by TD!
Apparently not 'the best' since it's saying that a convergent integral diverges ;)
I can show you the math if you like, but it'll involve complex analysis (residue calculation and complex contour integration).
I never studied complex analysis :o
Is complex analysis really so "powerful"?
You seem to be obsessed with it, whenever anyone asks a question on these forums you always throw a complex analytic solution to his problem.
March 26th 2006, 09:55 AM
You're probably referring to some integrals which happened to be doable using complex analysis. Since calculus/analysis is one of my favourite fields in math, I replied there :p
It is indeed very powerful, many 'real integrals' which are very hard or even impossible using normal calculus can be computed with complex analysis. Like this one, since sin(x)/x doesn't have a
primitive function (at least not in terms of the elementary functions).
March 26th 2006, 09:58 AM
Originally Posted by TD!
You're probably referring to some integrals which happened to be doable using complex analysis. Since calculus/analysis is one of my favourite fields in math, I replied there :p
It is indeed very powerful, many 'real integrals' which are very hard or even impossible using normal calculus can be computed with complex analysis. Like this one, since sin(x)/x doesn't have a
primitive function (at least not in terms of the elementary functions).
What did you think of expressing sin(x)/x as an infinite series and then very easy to integrate? | {"url":"http://mathhelpforum.com/calculus/2328-help-calculus-problems-print.html","timestamp":"2014-04-21T01:04:25Z","content_type":null,"content_length":"22395","record_id":"<urn:uuid:8e8d50a6-aba6-4e03-884a-4e15dc3ba4f9>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
All primes ending in 1
Polya's Induction and Analogy in Mathematics Chapter 1; q1: All primes ending in 1
Nov 14, 2012 ~ 5 Comments ~ Written by obitus
Today I finally received my books from Amazon.com. I had purchased George Polya's Induction and Analogy in Mathematics Volume I and Volume II; along with Knuth's book, Art of Computer Programming,
Volume 1: Fundamental Algorithms (3rd Edition). I had been meaning to buy these books for some time now.
I first heard about Polya from my math teacher in advanced linear algebra class. I had asked him about developing mathematical concepts and he recommended me Polya's books for reference. I must say
for a man who read these books at his own leisure, he had excellent taste in mathematical literature.
Polya talks about developing the notion of "guess work" in his book, he encourages you to try out your ideas to see if you have the right notion. In one of his examples he states this problem:
1. Guess the rule according to which the successive terms of the following sequence are chosen: 11, 31, 41, 61, 71, 101, 131, ...
At first glance I immediately started to take the difference between each number, first there is a difference of 20, then 10, then 20, then 10, then 30, and finally 30 again. I could not make out any
discernible pattern, at least not right away. I had to take a step back and think about the first element in the sequence, 11. So I got my white boards together and started writing.
I thought maybe there was a formula for the way each element in the sequence came about, so I started to play with the indices of each element. Like everything,
increases by 1 for the use of this algorithm. I could not reproduce any of the numbers with out writing a formula for each element individually. Nothing came to mind with this technique. So then I
took the first element, 11, and started to remember its properties.
11 is a prime number, so then I looked at the second element. 31 is also prime, so is 41, 61, 71, 101, and 131. All numbers are prime in this list and they all have one thing in common, besides being
prime. They all end in 1, so the next element would be 151 and so forth.
I checked my work against the author's answer and online, it turns out that I am right. If you have experience with prime numbers this problem is a little less complicated and easier to figure out.
The major clue is the first element, if don't know primes you may spend a lot of time on the differences of each element. Its a good first practice problem.
If this sort of problem interest you then I recommend you pick a copy of George Polya's Induction and Analogy in Mathematics volume 1, just as my teacher had recommended to me.
Now my friend recommend we add on to what we can do with this current knowledge, so I looked up how we can code a program that will check if a number is prime and also check if its last digit is 1.
The second component is easy for me to program, so I don't need to find anything online for that. I am no expert programmer, so any solution I come with will most likely not be the most efficient. It
rarely occurs that I write the shortest most efficient code, I usually write the code to be "readable". I like to make my code self-documented as possible, so when you read the code it does not look
like hieroglyphics.
The language I love to use the most is python because that is what my computer science teacher used to teach me the fundamentals of programming. I also like python because it is easy to use for a
mathematician like me. There are plenty knowledgable programmers online that can answer questions should you need help, or just borrow the code like we are about to do.
So, as I searched online I came across this one person's website that had exactly what I am looking for. If you look here there is a nice program that tells us if a number is prime, which is the
first key in our quest for prime domination. For you "must be a better way" type, there is also this post on using numpy which can calculate a prime faster. How much faster is debatable, and not for
what we are looking for. The author posts a two different methods, which we will go over.
The first task we have to conquer is determining if a number is prime. The author of the post explores more than one way to check if a number is prime, so will discuss these methods. Although this
gets into computer science, it still deals with a lot mathematics.
Method 1: The Divisor Test
So the author starts off by testing a number for being a potential prime by doing the most basic test. Take any given n, we than proceed to divide every number less than n into n itself. So if we
take small prime number, say
We would than take our 5 and divide by 4, 3, and 2. Can't divide by 1 because 1 is part of the definition of prime. By modular division, none of these numbers produce a remainder of 0. So we can
conclude 5 is indeed prime:
The basic code looks like this:
The author goes through some techniques of how the code could be written more efficiently. Because this is not our concern, I will just show you the final product. Just incase you did not read the
entire post, he turns the algorithm into a function. This way he can call upon the function in another piece of code as needed.
The next method the blog shows us is prime sieve. This is one of the earliest methods of finding primes that is thought in school. If we take a range of numbers say 1-100 and than we start taking out
multiples of each prime we come across. By the end we should have crossed off all composite numbers.
Method 2: Prime Sieve
The final code that the author gives looks like this:
He even gives a funny picture that plots the first 1000 primes, each prime is represented as a vertical black line. Looking at the picture you might have guessed it to be a bar code of some sort.
john gerber
There is a formula for this. It is 10n + 1... http://oeis.org/A030430
• http://twitter.com/CptDeadBones Captain Dead Bones
I agree....
• Obitus
Had no idea there was a specific formula for this, thanks John.
I just bought Polya's books based on reading this post.
• Obitus
Thanks, I am glad to see one more person enthused about mathematics. If I had all the time in the world, I would keep writing more and more of his work. Polya had a way of fueling the desire to
tackle any mathematical problem, regardless of the difficulty.
Posted in Ask The Troll, Mathematical Books, Personal, Python code, Technique - Tagged divisor test, george polya, patterns, prime pattern, Prime Sieve, primes, python | {"url":"http://deadendmath.com/all-primes-ending-in-1/","timestamp":"2014-04-21T14:40:45Z","content_type":null,"content_length":"54023","record_id":"<urn:uuid:9a53d0a5-0403-45e3-aa4f-b89f3995ca93>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sunset Island, FL SAT Math Tutor
Find a Sunset Island, FL SAT Math Tutor
...I am very patient and understand that some students have a fear of mathematics, often due to a lack of understanding the basics. Students often need to be retaught what was not learned
previously with an experienced and patient teacher. I have had great results with my previous students on a one on one basis, I even have had students improving gradually their grade from a F to
an A.
48 Subjects: including SAT math, chemistry, reading, calculus
...I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students for the SAT Math for the last two years. I can accommodate to the needs of the students and
give options on how to prepare better.
16 Subjects: including SAT math, Spanish, chemistry, biology
...I enjoy tutoring as it allows me to help students to understand and enjoy subjects that are really fascinating and useful, no matter what future career path they choose. My goal is to make the
student really understand the subject, not memorize it, so he/she can build a strong foundation that ca...
13 Subjects: including SAT math, chemistry, physics, calculus
Last May, I graduated from the University of Miami as a chemistry major and a mathematics minor, and I am currently a first year student at the Miller School of Medicine. During my four years at
UM, I spent a considerable amount of time tutoring, both one-on-one and in front of larger groups. I wo...
14 Subjects: including SAT math, chemistry, calculus, geometry
...I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example. My general teaching strategy is the following: I generally cover
the topic, then explain in detail, make the student do some problems or write depending on the subject, and finally I make them explain and teach the topic back to me.
30 Subjects: including SAT math, chemistry, English, geometry
Related Sunset Island, FL Tutors
Sunset Island, FL Accounting Tutors
Sunset Island, FL ACT Tutors
Sunset Island, FL Algebra Tutors
Sunset Island, FL Algebra 2 Tutors
Sunset Island, FL Calculus Tutors
Sunset Island, FL Geometry Tutors
Sunset Island, FL Math Tutors
Sunset Island, FL Prealgebra Tutors
Sunset Island, FL Precalculus Tutors
Sunset Island, FL SAT Tutors
Sunset Island, FL SAT Math Tutors
Sunset Island, FL Science Tutors
Sunset Island, FL Statistics Tutors
Sunset Island, FL Trigonometry Tutors
Nearby Cities With SAT math Tutor
Carl Fisher, FL SAT math Tutors
Fisher Island, FL SAT math Tutors
Flamingo Lodge, FL SAT math Tutors
Hallandale Beach, FL SAT math Tutors
Indian Creek, FL SAT math Tutors
Keystone Islands, FL SAT math Tutors
Ludlam, FL SAT math Tutors
Miami Beach SAT math Tutors
Miami Beach, WA SAT math Tutors
Modello, FL SAT math Tutors
Seybold, FL SAT math Tutors
South Florida, FL SAT math Tutors
South Miami Heights, FL SAT math Tutors
Venetian Islands, FL SAT math Tutors
West Dade, FL SAT math Tutors | {"url":"http://www.purplemath.com/Sunset_Island_FL_SAT_Math_tutors.php","timestamp":"2014-04-19T07:12:33Z","content_type":null,"content_length":"24548","record_id":"<urn:uuid:52c0dfd3-4841-4155-abab-0ca197dada45>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
The PolyArea Project: Part 1
The Algorithm
The algorithm works by slicing triangles off the top of the polygon repeatedly until only a single triangle is left. During this process the polygon may split into two separate polygons. In this case
the same process continues on both polygons (which may split too). Overall, the algorithm manages a list of polygons and in each step it slices off a triangle from one of them and potentially splits
one of the polygons into two polygons.
In high-level pseudo code the algorithm can be described like this:
- Find top point
- Find second top point
- Slice off the top triangle
- If the second top point resides between the two lower vertices of the top
triangle (happens only when there is no top plateau) split the polygon into
two polygons along the segment from the top point to the second top point (see Figure 5)
- Repeat for each polygon until it is reduced to a single triangle.
The Code
The algorithm is implemented in the calc_polygon_area() function of the polygon.py module. The function takes a list of polygons and a callback function and it starts to decimate the polygons
according to the logic described above. Initially, the list of polygons will contain just the original polygon. As splits occur the list may grow up. As sub-polygons are exhausted the list of
polygons may shrink until there are no more polygons. If a polygon is reduced to a single triangle it is removed from the list.
The callback interface is intended for interactive programs where the main program may do something each time a new triangle is removed from a polygon. The callback function receives the removed
triangle (three vertices) and its area.
Here is the code:
def calc_polygon_area(polygons, callback):
while polygons:
poly = polygons[0]
if poly.is_triangle():
polygons = polygons[1:]
triangle = poly.points
second_top_point, kind = poly.find_second_top_point()
if kind == 'inside':
# need to split the polygon along the diagonal from top to second top
new_polygons = split(poly, (poly.sorted_points[0], second_top_point))
polygons = new_polygons + polygons[1:]
# No callback in this iteration because the first polygon was split
# but no triangle was removed.
# If got here then the target polygon (poly) has a top triangle that can be
# removed
triangle = remove_top_triangle(poly, second_top_point, kind)
# Call the callback with the current triangle
triangle = round_points(triangle)
callback(triangle, calc_triangle_area(*triangle))
# No more polygons to process. Call the callback with None, None
callback(None, None)
The calc_polygon_area() function operates on Polygon objects. These objects are instances of the custom Polygon class that has some special features for the purpose of this very algorithm. For
example, it manages the polygon points in a sorted list from top to bottom and points with the same Y coordinate are ordered from left to right. This helps finding the top point (it's simply the
first point) and the second top point. Let's go over the Polygon class.
The __init__() method accepts a list of points (the polygon vertices), stores them (after rounding) and then sorts them and verifies that everything is valid (the invariant() method). The
round_points() function was imported from the helpers module and just rounds each co-ordinate to three decimal digits, so it is more readable. It's not really necessary for the operation of the
class Polygon(object):
def __init__(self, points):
self.points = round_points(points)
The sort_points() method sorts the original points according to the "topness" order as needed by the main algorithm. Points with higher Y coordinate come before points with lower Y coordinate and
within the points with the same Y coordinate, points with lower X coordinate come first. Python objects can be sorted using Python's sorted() function that take a sequence and sorts it. By default,
the sequence elements are compared directly, but you can provide a custom compare function (or a function that creates a comparison key from each element). The sort_points() method defines an
internal function called compare_points() that implements the "topness" order, uses it as the cmp argument to the built-in sorted() function and assigns the result to self.sorted_points.
def sort_points(self):
def compare_points(p1, p2):
if p1 == p2:
return 0
if p1[1] > p2[1]:
return -1
if p1[1] < p2[1]:
return 1
# Same y-coordinate
assert p1[1] == p2[1]
if p1[0] < p2[0]:
return -1
assert p1[0] != p2[0]
return 1
self.sorted_points = sorted(self.points, cmp=compare_points)
The invariant() method verifies that there are more than two points, that there are no duplicate points, that the sorted points are sorted properly, that no three consecutive points are on the same
line (has the same X or Y coordinates) and finally that there are no vertices that reside on a polygon side:
def invariant(self):
assert len(self.points) > 2
for p in self.points:
assert len(p) == 2
for i, p in enumerate(self.sorted_points[1:]):
p2 = self.sorted_points[i]
assert p2[1] >= p[1]
if p[1] == p2[1]:
assert p2[0] < p[0]
# Make sure there are no duplicates
assert len(self.points) == len(set(self.points))
# Make sure there no 3 consecutive points with the same X or Y coordinate
point_count = len(self.points)
for i in xrange(point_count):
p = self.points[i]
p1 = self.points[(i + 1) % point_count]
p2 = self.points[(i + 2) % point_count]
assert not (p[0] == p1[0] == p2[0])
assert not (p[1] == p1[1] == p2[1])
# Make sure no vertex resides on a side
sides = []
for i in xrange(len(self.points)):
sides.append((self.points[i], self.points[(i+1) % len(self.points)]))
for p in self.points:
for side in sides:
if p != side[0] and p!= side[1]:
assert not helpers.point_on_segment(p, side)
The most interesting and complicated method of the Polygon class is find_second_top_point(). It returns a pair (2-tuple) that consists of the second top point itself and its kind. There are three
kinds of second top points: 'vertex', 'inside' and 'outside'. I will explain the code bit by bit because there is a lot to take in. The first stage is preparation only. The top point and the
candidates for second top point are found by iterating over the self.sorted_points list.
def find_second_top_point(self):
top = self.sorted_points[0]
top_y = top[1]
second_top_y = None
second_top = None
candidates = []
# Find the Y co-ordinate of the second top point and all the candidates
for p in self.sorted_points[1:]:
if p[1] < top_y:
if second_top_y is None:
second_top_y = p[1]
if p[1] < second_top_y:
break # finished with second top candidates
The next stage is finding the vertices that are adjacent to the top point. This is needed because if the second top point is one of them then it means its kind is 'vertex'.
index = self.points.index(top)
pred = self.points[index-1]
post = self.points[(index+1) % len(self.points)]
assert None not in (pred, post)
Once you have the candidates for second top point and the pred and post points. You can start looking for 'inside' second top point. If there is a candidate second top point horizontally between pred
and post then return it. There are three cases: both pred and post are candidates, only pred is a candidate or only post is a candidate. Note, that technically this point may be a vertex, but it is
still classified as 'inside' in order to split the polygon (otherwise the situation gets complicated).
# If both pred and post are candidates and there is another candidate
# between them then pick the candidate in between as an 'inside' point
if pred in candidates and post in candidates:
pred_index = self.sorted_points.index(pred)
post_index = self.sorted_points.index(post)
if abs(post_index - pred_index) > 1:
# there is a candidate between pred and post
index = min(pred_index, post_index) + 1
assert index < max(pred_index, post_index)
p = self.sorted_points[index]
assert p in candidates
return (p, 'inside')
# If either pred or post are candidates and there is another candidate
# between them then pick the candidate in between as an 'inside' point
if pred in candidates:
# Find the point p on (top, post) where y = second_top. If there is a
# candidate whose X coordinate is between pred.x and p.x then it is the
# second top point and it's an 'inside' point'
p = helpers.intersect_sweepline((top, post), second_top_y)
if p is not None:
left_x = min(pred[0], p[0])
right_x = max(pred[0], p[0])
for c in candidates:
if left_x < c[0] < right_x:
return (c, 'inside')
if post in candidates:
# Find the point p on (top, pred) where y = second_top. If there is a
# candidate whose X coordinate is between post.x and p.x then it is the
# second top point and it's an 'inside' point'
p = helpers.intersect_sweepline((top, pred), second_top_y)
if p is not None:
left_x = min(post[0], p[0])
right_x = max(post[0], p[0])
for c in candidates:
if left_x < c[0] < < right_x:
return (c, 'inside')
At this point, the option of an 'inside' point when pred or post are candidates has been ruled out and the second top point is the first candidate. If it is also either pred or post then it is a
second_top = candidates[0]
assert second_top[1] < top_y
# If the second top point is either pred or post then it is a 'vertex'
if second_top in (pred, post):
return (second_top, 'vertex')
If pred or post are at the same height as the top point (plateau, remember?) but the second top point is not pred or post (otherwise you wouldn't get here because it will return at one of the
previous checks) then it is outside.If it is higher then lower between pred or post (the one that is not in the plateau). Otherwise, it is inside.
# If pred or post are at the same height as top then second top is 'outside'
# if the second top is vertically between pred and post, otherwise it's
# inside
if max(pred[1], post[1]) == top_y:
if min(pred[0], post[0]) < second_top[0] < max(pred[0], post[0]):
return (second_top, 'inside')
return (second_top, 'outside')
At this stage, there is no plateau. Both pred and post are below the top point. The second top point is not pred or post. If the second top point has the same Y coordinate as pred or post then it
must be outside. How come? If it was between pred and post then it can't be the leftmost point between the three.
# Check if pred or post are at the same height as the second top point.
# If this is the case then the second top point must be outside.
if second_top[1] in (pred[1], post[1]):
return (second_top, 'outside')
Now, you know that the second top point has a higher Y coordinate, than both pred and post, but you are not sure if it is inside or outside (vertex was ruled out earlier). To figure it out you have
to check the intersection of the horizontal sweep-line that with Y=second_top with the line segments (top, pred) and (top, post). To do that pred and post are replaced with the corresponding
intersection points:
pred = helpers.intersect_sweepline((pred, top), second_top[1])
assert pred is not None
post = helpers.intersect_sweepline((top, post), second_top[1])
assert post is not None
The idea is that now you have again three points: pred, post, and second_top that all have the same Y coordinate and you can determine by their X coordinate if second_top is between pred and post
(inside) or outside:
# Pred and post are both fixed to be on the sweepline at this point
# if they weren't already. Find their left and right X-coordinate
left_x = min(pred[0], post[0])
right_x = max(pred[0], post[0])
# If the second_top_point is between post and pred it's internal
# otherwise it's outside
if left_x < second_top[0] < right_x:
kind = 'inside'
kind = 'outside'
return (second_top, kind)
Okay, so you found the second top point and classified it as 'vertex', 'inside' or 'outside'. If it's 'inside' then a triangle can't be removed at the moment and you need to split the polygon into
two separate polygons along the diagonal (top, second_top), which is guaranteed not to cross any other polygon line. This is exactly the job of the split() function. Here is how it works:
Let's say the polygon has 8 vertices numbered 0 through 7 and the diagonal runs from 3 to 6. Then the vertices from 3 to 6 (3, 4, 5, 6) will be one polygon and the vertices from 6 to 3 (6, 7, 0, 1,
2, 3) will be the second polygon. These two new polygons are kind of Siamese twins connected along the diagonal and share the diagonal vertices, but no other vertex is shared. Each twin is again a
simple polygon and they don't overlap:
def split(poly, diagonal):
"""Split a polygon into two polygons along a diagonal
poly: the target simple polygon
diagonal: a line segment that connects two vertices of the polygon
The polygon will be split along the diagonal. The diagonal vertices will
be part of both new polygons
Return the two new polygons.
assert type(diagonal) in (list, tuple)
assert len(diagonal) == 2
assert diagonal[0] in poly.points
assert diagonal[1] in poly.points
assert diagonal[0] != diagonal[1]
index = poly.points.index(diagonal[0])
poly1 = [diagonal[0]]
for i in range(index + 1, len(poly.points) + index):
p = poly.points[i % len(poly.points)]
if p == diagonal[1]:
index = poly.points.index(diagonal[1])
poly2 = [diagonal[1]]
for i in range(index + 1, len(poly.points) + index):
p = poly.points[i % len(poly.points)]
if p == diagonal[0]:
return [Polygon(poly1), Polygon(poly2)]
If the second top point is not 'inside' then you can remove a triangle from the current polygon. This is the job of the remove_top_triangle() function. It has to handle two cases: the single top
point and the plateau case. The function's doc comment in the code explains everything quite clearly, but I'll describe it here in a little more detail. The input to the function is the target
polygon, the second top point and its kind. The first few lines just verify the input:
def remove_top_triangle(poly, second_top_point, kind):
""" """
assert not poly.is_triangle(), 'Polygon cannot be a triangle'
assert second_top_point in poly.points
assert kind in ('vertex', 'outside'), 'Inside second top point is not allowed'
Next, it gets the top point and its index and verifies the top point is indeed above the second top point:
# Get the top point and its index
top_point = poly.sorted_points[0]
index = poly.points.index(top_point)
# Make sure the top point is really above the second top point
assert top_point[1] > second_top_point[1]
Then, the sweep-line is created. This is a horizontal line whose Y co-ordinate is the same as the second top point:
# Create the sweepline
x1 = -sys.maxint
x2 = sys.maxint
second_top = second_top_point[1]
sweepline = ((x1, second_top), (x2, second_top))
Once, all the preliminary checks are done and the sweep-line is defined it's time to check if the polygon has a top plateau or a single top point. The first step is to find the vertices that precede
and follow (index-wise) the top point. Care must be taken not to run out of bounds.
next_point = poly.points[(index + 1) % len(poly.points)]
prev_point = poly.points[index - 1]
It's time to check if we are in dealing with case 1 (two consecutive top points) or 2 and handle it. An empty list called new_points will store the new vertices that may need to be added to the
polygon and later redundant vertices will be removed.
# check if we are in case 1 (two consecutive top points) or 2
new_points = []
In case 1 there is a single new point, which is the intersection of the segment from the non-plateau point with the sweep-line (if the other point is the second top point it stays as is). The second
plateau point (not the top point) is added too.
if max(next_point[1], prev_point[1]) == top_point[1]:
# Case 1 - platuae
p = next_point if next_point[1] < prev_point[1] else prev_point
other_point = prev_point if next_point[1] < prev_point[1] else next_point
segment = (p, top_point)
new_point = helpers.intersect(segment, sweepline)
assert new_point is not None
new_points = [new_point, other_point]
In case 2 the new points are the intersection of the prev and next segments with the sweep-line (if one of the points is a second top point it stays as is).
# Case 2 - single top point
for p in prev_point, next_point:
if p[1] == second_top_point[1]:
new_point = p
segment = (p, top_point)
new_point = helpers.intersect(segment, sweepline)
assert new_point is not None
At this stage, the top triangle to be removed is constructed from the top point and the two new points.
assert len(new_points) == 2
triangle = new_points + [top_point]
The new points are rounded and the ones that are not in the current polygon are added to the polygon (injected where the top point used to be).
new_points = round_points(new_points)
to_add = [p for p in new_points if p not in poly.points]
poly.points = poly.points[:index] + to_add + poly.points[index+1:]
The last and most delicate part of the function is to remove invalid points that violate the polygon invariant. In order to figure out what points are invalid you need to create a list of the polygon
# Find the polygon sides
sides = []
for i in range(0, len(poly.points) - 1):
sides.append((poly.points[i], poly.points[i+1]))
sides.append((poly.points[-1], poly.points[0]))
The first type of invalid point is a left vertex of the top triangle if the top triangle has a horizontal bottom and it is sticking to the left (see Figure 6). This previously valid point became
invalid as a result of removing the top triangle.
# Remove left vertex of triangle if it has a horizontal bottom and it is
# on a polygon side
if new_points[0][1] == new_points[1][1]:
left_vertex = (min(new_points[0][0], new_points[1][0]), new_points[0][1])
right_vertex = (max(new_points[0][0], new_points[1][0]), new_points[0][1])
for side in sides:
if helpers.point_on_segment(right_vertex, side):
if right_vertex not in side:
Another case involes three consecutive points along the same horizontal or vertical line.
poly.points = helpers.filter_consecutive_points(poly.points)
The last case are points that end up in the middle of an existing polygon side. You need to find the polygon sides again because the polygon has been potentially modified.
# If there are points that are in the middle of a side remove them
to_remove = []
# Find the polygon sides again
sides = []
for i in range(0, len(poly.points) - 1):
sides.append((poly.points[i], poly.points[i+1]))
sides.append((poly.points[-1], poly.points[0]))
# Iterate over all the polygon points and find the points
# that intersect with polygon sides (but not end points of course)
for i in range(0, len(poly.points)):
p = poly.points[i]
for side in sides:
# If p is on segment s then should be removed
if p != side[0] and p != side[1]:
if helpers.point_on_segment(p, side):
for p in to_remove:
Finally, sort the polygon points, verify that the new polygon still maintains the invariant and return the removed triangle.
return triangle
What started off as a "little" project turned up to be more complicated than expected. I hoped for a little elegant algorithm, but it ended up as a fairly complex beast fractured into multiple cases
that need to be handled separately. It was a lot of fun working together with Saar and I think that now he at least understand the complexity of software and how much work it takes to make seemingly
simple things work reliably. In the next installment I will describe the poly area user interface that also turned out out to be surprisingly difficult to get right. | {"url":"http://www.drdobbs.com/open-source/the-polyarea-project-part-1/226700093?pgno=4","timestamp":"2014-04-17T10:25:04Z","content_type":null,"content_length":"120630","record_id":"<urn:uuid:8c3db4b0-c3a1-41a7-a41b-bed68530b2c9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |