Let S be the set of all functions from {1, 2} to {1, 2, 3}. Let F be an element of S chosen at random.
Find |S| and find the probability that F is an injective function.
For this question I know that |S| = 9, but I am finding it difficult to find the probability that F is an injective function.
Let S be the set of all functions from {1, 2, 3} to {1, 2}. Let G be an element of S chosen at random.
Find |S| and find the probability that G is an injective function.
For this question I know that the probability that G is an injective function is 0, since no function from {1, 2, 3} to {1, 2} can be injective (the domain is larger than the codomain). But I am finding it difficult to find |S|.
Let S be the set of all functions from {1, 2,…, M} to {1, 2,…, N}. Let Q be an element of S chosen at random.
Find |S| and find the probability that Q is an injective function.
I do not understand this question at all.
In order for there to be an injection from $A \to B$ it must be the case that $|A| \leqslant |B|$.
Then we count the number of permutations of $|B|$ taken $|A|$ at a time: $\frac{|B|!}{(|B| - |A|)!}$.
In terms of probability: $\dfrac{|B|!/(|B| - |A|)!}{|B|^{|A|}}$.
Thanks for the help. I do understand the first part and what is meant by injective, but I do not understand the following:
Then we count the number of permutations of $|B|$ taken $|A|$ at a time: $\frac{|B|!}{(|B| - |A|)!}$.
In terms of probability: $\dfrac{|B|!/(|B| - |A|)!}{|B|^{|A|}}$ and the question.
Can you please explain it to me in more detail.
Let's say that $|A| = 5$ and $|B| = 9$, so there are $9^5 = 59049$ possible mappings $A \to B$.
The set $B^A$, the set of all mappings $A \to B$, has cardinality $\left| B^A \right| = |B|^{|A|}$. (The notation makes it easy to remember.)
Now the number of injections $A \to B$ is $P(9,5) = \frac{9!}{(9-5)!} = 9 \cdot 8 \cdot 7 \cdot 6 \cdot 5 = 15120$.
So the probability of randomly selecting an injection is $\frac{15120}{59049} \approx 0.256$.
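To see those two numbers concretely, here is a short brute-force check in Python (added purely as an illustration; it enumerates all $9^5$ mappings):

```python
from itertools import product
from math import perm

A = range(5)          # |A| = 5
B = range(1, 10)      # |B| = 9

total = injective = 0
for f in product(B, repeat=len(A)):   # each tuple is one function A -> B
    total += 1
    if len(set(f)) == len(A):         # injective: no output value repeats
        injective += 1

print(total, injective, injective / total)   # 59049 15120 0.2560...
print(perm(9, 5) / 9**5)                     # closed form P(9,5) / 9^5
```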
Now I get it.
Once again, thanks for helping me, I really appreciate it.
quick question on the conservation of angular momentum
The classic example is the figure skater doing twirls. When one pulls one's limbs in, it decreases the moment of inertia, so the angular velocity increases. The angular momentum stays the same.
So far, so good.
At a microscopic level, I suppose you could picture the molecules moving on average about the center of mass with some linear velocity perpendicular to the direction to the center of mass. If these molecules are pulled in toward the center of mass, the linear velocity doesn't change (conservation of linear momentum), but the same linear velocity becomes a larger angular velocity because it's at a smaller radius.
As has been pointed out already in this thread, the linear momentum does change. There are a number of ways to think about this.
1. The skater has to exert significant force to draw his or her arms in. That force does work on the arms, increasing their kinetic energy. If their kinetic energy increases, it follows that their
linear velocity has increased.
2. Moment of inertia scales as the square of radius. If you scale down the radius by a factor of two and conserve angular momentum, it follows that angular velocity increases by a factor of four. It
is then clear that linear velocity has increased by a factor of two.
3. As you reel in an object on a string the only force is radial -- there is no torque. But the radial direction and the tangential direction are not at right angles as an object is pulled in on a
spiral trajectory. There is a net tangential acceleration. So the object ends up moving faster than it started.
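A quick numeric check of points 1 and 2 above (a small Python sketch; the mass, radius and speed are made-up values):

```python
# Point mass on a string: with a purely radial force there is no torque,
# so angular momentum L = m * v * r is conserved.
m, r0, v0 = 1.0, 2.0, 3.0      # hypothetical mass, radius, tangential speed

L = m * v0 * r0
r1 = r0 / 2                    # reel the mass in to half the radius
v1 = L / (m * r1)              # new tangential speed from conservation of L

print(v1 / v0)                 # 2.0 -> linear speed doubles (point 2)
print((v1 / r1) / (v0 / r0))   # 4.0 -> angular velocity quadruples
print((0.5 * m * v1**2) / (0.5 * m * v0**2))   # 4.0 -> KE rises (point 1)
```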
Matrix Algebra
Date: 08/28/97 at 21:34:35
From: Corinne Offerman
Subject: Matrices, algebra
B = ( 2  4  3 )      C = (  1   2 )
    ( 1 -1 -2 )          ( -1  -3 )
                         (  5  -6 )

Find the product BC of these matrices.
I am not sure which formula of matrices to use in this situation.
Can you please help me? Thank you.
Date: 08/29/97 at 12:38:42
From: Doctor Anthony
Subject: Re: Matrices, algebra
You should look at a textbook covering the rules of matrix algebra.
However, the rule for multiplying requires that you combine elements
of rows from the lefthand matrix with elements of columns of the
righthand matrix:
   |2  4  3|  | 1  2|     |13 -26|
   |1 -1 -2|  |-1 -3|  =  |-8  17|
              | 5 -6|

   (2 x 3)    (3 x 2)  =  (2 x 2)
    |    |     |   |
    |    |inner|   |
    |    |_____|   |
    |              |
    |_____outer____|
The 'shape' of the lefthand matrix is (2 x 3), that is, it has 2 rows
and 3 columns. Always quote the number of rows first, then the number
of columns when specifying the 'shape'. The righthand matrix has shape
(3 x 2).
Now the rule of multiplication is that number of columns of the
lefthand matrix must equal the number of rows of the righthand matrix.
In this example they are both equal to 3, so the product exists.
If we write down the shapes as I have done below the matrices, we
see that the 3's are the 'inner' numbers in the two brackets,
and these must be equal. The 'shape' of the resulting matrix after
multiplication is given by the two 'outer' numbers, in this case
(2 x 2). So the result will have 2 rows and 2 columns.
Finally, to fill in the numbers in the product matrix, we first look
at the first row, first column position. We get its value by combining
the elements of the first row of the lefthand matrix with the elements
of the first column of the righthand matrix. Starting at the first
element of each we get 2 x 1.
Now move to the second element of each. We get 4 x -1. Then go to the third
element of each. We get 3 x 5. So we must combine 2 - 4 + 15 to get
13. To get the value of the first row, second column of the product matrix
we combine elements of the first row of the lefthand matrix with
elements of the second column of the righthand matrix. This gives
2 x 2 + 4 x -3 + 3 x -6 = 4 - 12 - 18 = -26.
The second row of the product matrix is filled in the same manner, but
now you combine elements of the second row of the lefthand matrix with
elements of the first and second columns of the righthand matrix.
To fill in the value of any position in the resulting matrix, first
decide in which row and which column it lies. Then select that row
from the lefthand matrix, and that column from the righthand matrix,
and combine them element-by-element in the manner described above.
Having written down the 'shapes', if the two 'inner' numbers are
not the same, then the product does not exist. Don't waste time
trying to produce it. Note also that if you swap the matrices round
from AB to BA you will get a different result, or no result, as the
product may exist in the AB form but not in the BA form.
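As a quick check of the worked example, here is a small Python sketch (for illustration only; plain nested lists keep the row-by-column rule explicit):

```python
B = [[2, 4, 3],
     [1, -1, -2]]        # shape (2 x 3)
C = [[1, 2],
     [-1, -3],
     [5, -6]]            # shape (3 x 2)

inner = len(C)                          # rows of the righthand matrix
assert len(B[0]) == inner               # the 'inner' numbers must match

BC = [[sum(B[i][k] * C[k][j] for k in range(inner))
       for j in range(len(C[0]))]
      for i in range(len(B))]

print(BC)    # [[13, -26], [-8, 17]]
```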
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
The subset function is available in base R and can be used to return subsets of a vector, matrix, or data frame which meet a particular condition. In my three years of using R, I have repeatedly used the subset() function and believe that it is the most useful tool for selecting elements of a …
More fun with boxplots
Here are a few more plotting options for boxplots. Let's start by plotting the full set: plot(b$mod, b$x). Plot labels for a subset of the full set (label all points x < -1): text(subset(b$mod, b$x < -1), subset(b$x, b$x < -1), …
Example 8.4: Including subsetting conditions in output
A number of analyses perform operations on subsets. Making it clear what observations have been excluded or included is helpful to include in the output.
SAS
The where statement (section A.6.3) is a powerful and useful tool for subsetting on the fly. (...
adding vectors
Calculate the resultant vectors for the following. Solve using scale when appropriate.
5a + (-2a) if a is 22 kilometres north?
5 x 22 + (-2 x 22) = 110 - 44 ??
how do I represent them?
a is 22 km north means it can be represented by a = <0,22> or does your text have a weird way of saying that?
I was talking about the direction.
when you represent the vectors, +(-2a) is going to point towards south?
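A small Python sketch of that component representation (for illustration; <east, north> components, with the numbers from the thread):

```python
a = (0, 22)                       # 22 km north written as <east, north>

def scale(k, v):
    return (k * v[0], k * v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

print(scale(-2, a))                      # (0, -44): yes, -2a points south
print(add(scale(5, a), scale(-2, a)))    # (0, 66): 66 km due north
```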
Google Answers: Relative strength of square steel tubing in different orientations
Hello kann, you are correct in your thinking. The section modulus,
which is used to calculate bending strength, does depend on
orientation in the case of a square shape. Whether the section is a
piece of square tubing or a solid square bar the same ratio holds
true. In the first case where your boom is mounted "square", the
technical term is "axis of moments through center". In the second
"diamond" mounting, it would be said to have "axis of moments on
diagonal". The formulas for the two cases are as follows:
Axis of moments through center:
Section modulus = d^3 / 6 = d^3 x 0.167
Axis of moments on diagonal:
Section modulus = d^3 / (6 x sqrt 2) = d^3 x 0.118
In both formulas d is the length of a side or 3 inches in your case.
You can see by this that the section modulus is larger when the boom
is mounted "square" by the ratio of 0.167/0.118 or 1.42. This means
that your boom is 1.42 times stronger when it is mounted as you have
it now in the "square" orientation. However, this is only true if you
are using the boom to pick up a load in a straight vertical pull. If
you were to pull at 45 degrees, the "diamond" mounting would be stronger.
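A quick numeric check of the two section-modulus formulas (a Python sketch for illustration, with d = 3 inches as in your boom):

```python
from math import sqrt

d = 3.0                               # side length, inches
Z_square = d**3 / 6                   # axis of moments through center
Z_diamond = d**3 / (6 * sqrt(2))      # axis of moments on diagonal

print(Z_square, Z_diamond)            # 4.5 and about 3.18 (in^3)
print(Z_square / Z_diamond)           # 1.414... -> the ~1.42 ratio above
```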
Now to the problem of fence post impacts. If the boom impacts the post
and the load applied is in a purely horizontal plane, then the boom is
still stronger by exactly the same ratio. The only thing that would
change this is if the post were to hit the boom at an odd angle.
I think I have answered all the points of your question. However, if I
have left something out or there is anything you don't understand,
please ask for a clarification and I will do my best to answer.
Hope this wins your bet, Redhoss
opengl z-buffer alpha sorting
Actually, this isn't necessarily true. Imagine two triangles, one red and one green, in front of the camera with their vertices at the exact same positions. Now imagine one of them is rotated
slightly along the vertical axis, and the other is rotated by the same amount in the opposite direction. Now in some pixels the red triangle should be in front of the green triangle, but in others
the green one is in front. If you simply draw one and then the other, you'll end up with a mostly red or mostly green shape, rather than something half green and half red.
Of course, but these triangles are then intersecting one another, ("they are not solid"). I presume OP wants to draw triangles that cannot intersect (think of them made of some solid material).
What I was pointing out is that even if the triangles are solid and do not intersect, you still cannot order them.
If your scenery has only two triangles that do not intersect pairwise, we could cook up formulas for determining which of two triangles must be drawn first quite easily.
Would approximating centers work then?
Nope, you can cook up very simple examples (two rectangles perpendicular to the ground) in which this will fail.
How I would go about determining which of two solid shapes comes first? I would take all of the sides of the solid shapes and compare those. If all sides of a solid convex body are in front of all
sides of another convex body then the first convex body is in front of the second. A parallelepiped has 6 walls, so that is an easy function.
How I would go about determining which of two solid pairwise non-intersecting polygons comes first? I triangulate each polygon, and boil down the problem to one for triangles. A rectangle can be
triangulated in two triangles.
Now, to solve the problem for two triangles. I would go about like this. 1) Project each edge of each triangle on the viewing surface ("the eye of the beholder").
[Edit:] Thanks to helios for the correction. (Struck out: "If the edges do not intersect pairwise you are done, i.e. neither of the objects lies in front of the other.")
1.5) Find whether a vertex of one triangle is contained in the other triangle. There is a ready formula for that (see the sketch after this list).
2) Find the first intersection (in a non-corner point!) You need to check 9 cases here (each of the 3 edges of the first triangle with each of the 3 edges of the second triangle).
3) Find the two corresponding points on the original (non-projected) triangles. Those will lie on the same ray that projects them onto the same point on the surface of the "eye of the beholder".
4) Determine which of the two points is closer to the point of projection.
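Here is a sketch of the "ready formula" from step 1.5 (for illustration; it is the usual same-side-of-all-edges form of the barycentric test, applied to the projected 2D points):

```python
def point_in_triangle(p, a, b, c):
    # Sign of the cross product tells which side of edge o->u the point v is on.
    def side(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    d1, d2, d3 = side(a, b, p), side(b, c, p), side(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # inside if all signs agree

print(point_in_triangle((0.2, 0.2), (0, 0), (1, 0), (0, 1)))  # True
print(point_in_triangle((1.0, 1.0), (0, 0), (1, 0), (0, 1)))  # False
```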
Note that the surface projection onto the "eye of the beholder" is usually done like this. You select a point "the center of your eye". Then you put a canvas ("the surface of your eye") in front of
the "center of your eye". This canvas is usually rectangular and represented by the monitor of your computer.
Then a point in the scenery is projected by drawing a segment between the point and the center of your eye, and determining at which point does that segment intersect "the surface of your eye".
Reference class problem
In statistics, the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case. For example, to estimate the probability of
an aircraft crashing, one might use the frequency of crashes of all aircraft, of this make of aircraft, of aircraft flown by this company in the last ten years, etc. Any case is a member of very many
classes, in which the frequency of the attribute of interest (such as crashing) differs, and the reference class problem discusses which is the most appropriate to use.
More formally, many arguments in statistics take the form of a statistical syllogism:
1. $X$ proportion of $F$ are $G$
2. $I$ is an $F$
3. Therefore, the chance that $I$ is a $G$ is $X$
$F$ is called the "reference class" and $G$ is the "attribute class" and $I$ is the individual object. How is one to choose an appropriate class $F$?
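As a toy illustration (the crash frequencies below are invented, not real statistics), the same individual event receives a different probability from each reference class:

```python
# Hypothetical crash frequencies -- purely illustrative numbers.
classes = {
    "all aircraft":                  0.00030,
    "this make of aircraft":         0.00010,
    "this company, last ten years":  0.00055,
}

# One flight belongs to every class, yet the syllogism's X differs per F.
for F, X in classes.items():
    print(f"P(crash | {F}) = {X}")
```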
In Bayesian statistics, the problem arises as that of deciding on a prior probability for the outcome in question (or, when considering multiple outcomes, a prior probability distribution).
John Venn stated in 1876 that "every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite
number of different classes of things", leading to problems with how to assign probabilities to a single case. He used as an example the probability that John Smith, a consumptive Englishman aged
fifty, will live to sixty-one.^[1]
The name "problem of the reference class" was given by Hans Reichenbach, who wrote, "If we are asked to find the probability holding for an individual future event, we must first incorporate the
event into a suitable reference class. An individual thing or event may be incorporated in many reference classes, from which different probabilities will result."^[2]
There has also been discussion of the reference class problem in philosophy.^[3]
1. J. Venn, The Logic of Chance, 2nd ed. (1876), p. 194.
2. H. Reichenbach, The Theory of Probability (1949), p. 374.
3. A. Hájek, "The Reference Class Problem is Your Problem Too", Synthese 156 (2007): 185-215.
Analyze IMD-Causing Frequency Components
Amplifiers based on MOSFET devices can suffer nonlinearities traceable to harmonic signal components. To better understand this phenomenon, an analysis was performed, based on polynomial substitutions, to identify the signal components that can affect the nonlinear performance of an amplifier operated at relatively high input signal levels. The polynomial substitutions were performed with the Mathematica software from Wolfram Research.
The concept of generating intermodulation products by feeding back frequency components produced from different orders of nonlinearity has been treated previously in the literature. ^1,2 In one
study,^3 Volterra series analysis was used to find expressions for the third-order intermodulation (IMD3) of a common-emitter bipolar junction transistor (BJT).
Replacing β with τ and ∞ with 0 in the IMD3 equations for the BJT in ref. 3 yields Eqs. 1 through 3 for the MOSFET. These equations have emerged as the basis of a significant portion of linearization concepts proposed by researchers in recent times:

$$IM_3(2\omega_a - \omega_b) = \tfrac{3}{4}\,|H(\omega)|\,|A_1(\omega)|^3\,|\varepsilon(\Delta\omega, 2\omega)|\,V^2 \qquad (1)$$

$$\varepsilon(\Delta\omega, 2\omega) = g_3 - g_{OB} \qquad (2)$$

$$g_{OB} = \frac{2g_2^2}{3}\left(\frac{2}{g_1 + g(\Delta\omega)} + \frac{1}{g_1 + g(2\omega)}\right) \qquad (3)$$
Parameter H(ω) relates the equivalent IMD voltage to the third-order intermodulation response of the drain-current nonlinear term; A1(ω) is the linear transfer function for the input voltage vgs; parameter ε(Δω, 2ω) shows how the drain-current nonlinearities contribute to the IMD3 response; and g(Δω) and g(2ω) are the conductance functions defined at the subharmonic (ω1 - ω2) and second-harmonic (2ω) frequencies.
As can be seen from Eq. 2, if g3 is made negligibly small (by linearization), ε(Δω, 2ω) becomes dominated by g_OB, which is proportional to the square of g2, the coefficient that represents the level of the second-order nonlinearity.^1
The effect on IMD3 from the second-order nonlinearity comes from the feedback of second-order nonlinear components to the input. While most work in the literature focuses on two components, the second harmonic and the difference frequency, as contributing to third-order intermodulation distortion, the present analysis includes the potential effects of all components produced from all single orders of nonlinearity, from the first order to the seventh order, in understanding their contributions to third-, fifth-, and seventh-order intermodulation distortion as a result of feedback of nonlinearity terms through seventh-order terms.
One of the main feedback paths in MOSFETs is the gate-drain capacitance; the other is the degeneration inductance at the source, which is frequently used in low-noise-amplifier (LNA) designs to
generate a positive real part of the impedance at the input for matching purposes. Taking the second-harmonic products as an example, as the inductance represents a nonzero impedance at the
second-harmonic frequency, the second-harmonic currents generated in the source of the MOSFET establish nonzero second-harmonic responses in the gate-source voltage, effectively feeding back the
second-harmonic components to the input.
Again taking second-order components as an example, Fig. 1 shows graphically how second-order nonlinearities can contribute to third-order intermodulation distortion as an example of the general concept of frequency-component feedback. The original two-tone input signal is A·sin(ω1t) + A·sin(ω2t) (shown in short form in Fig. 1 as ω1 + ω2 to represent frequency-only content for simplification). The system has two third-order nonlinear terms. The two-tone input is exposed to the second- and third-order nonlinearities separately. Frequency components produced from the second-order nonlinearity are as shown in Fig. 1. The Mathematica snapshot of Fig. 2(a) shows the exact magnitudes and coefficients of these components.
All frequency components generated from the third-order system can be fed back to the input. Considering only components produced from the second-order nonlinearity, which can contribute to third-order intermodulation distortion when fed back, the components 2ω1 and (ω1 - ω2) are indicated on the arrow of the feedback path in Fig. 1. These components mix with the fundamental and re-enter the third-order nonlinear system. The new total input is therefore exposed to all orders of nonlinearity in a similar fashion as the original two-tone input. The new input to the system is as shown at the top of Fig. 2(b), which is a snapshot of the substitution in Mathematica for the example where the second-order component 2ω is fed back to the input. The second-order nonlinear term produces the sum and difference of all the frequency components to which it is exposed. Since the components 2ω1 (feedback) and ω2 (input tone) are present at the input, when the second-order nonlinearity calculates the difference, the third-order intermodulation component 2ω1 - ω2 results, as shown in Fig. 2(a). In another example, with the difference frequency ω1 - ω2 (feedback) and the input tone ω1 present at the input, if the second-order nonlinearity calculates the sum, the third-order intermodulation component 2ω1 - ω2 results.
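The same mixing can be demonstrated numerically. The short Python sketch below (added for illustration; it is not from the article, and the tone frequencies are arbitrary choices) passes a two-tone signal through a simple polynomial nonlinearity and reads the spectrum at the expected product frequencies:

```python
import numpy as np

fs, T = 4096, 1.0                      # 1 s window -> 1 Hz FFT bins
t = np.arange(0, T, 1 / fs)
f1, f2 = 100, 110                      # two-tone input, Hz
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

y = x + 0.5 * x**2 + 0.2 * x**3        # 2nd- and 3rd-order nonlinearity
Y = 2 * np.abs(np.fft.rfft(y)) / len(t)

for name, f in [("f2 - f1", 10), ("2f1 - f2", 90), ("f1", 100),
                ("2f1", 200), ("f1 + f2", 210), ("2f1 + f2", 310)]:
    print(f"{name:9s} {f:4d} Hz   amplitude {Y[f]:.3f}")
```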
A mathematical analysis was performed using a seventh-order polynomial for representing the nonlinearity of an assumed amplifier where nonlinearities as high as the seventh order were considered
significant. The analysis, which was used to study all the possibilities of intermodulation products, was based on the deleterious feedback effects discussed in refs. 2 and 4 in terms of
contributions to the overall intermodulation products. The procedure works as follows: A two-tone signal of the form A·sin(ω1t) + A·sin(ω2t) is assumed as an input signal to the theoretical nonlinear
amplifier under analysis; this is thereafter referred to as the initial fundamental two-tone input. The initial fundamental input is processed by each nonlinearity order of the amplifier
individually. Several harmonic and intermodulation components are generated from this process; these are thereafter referred to as the initially generated frequency components. Since filtering takes
place only at the output of the amplifier, all the initially generated frequency components will return to the input through feedback routes that resemble a negative feedback configuration.
Therefore, the initially generated frequency components are also thereafter referred to as feedback components. Each feedback component will be added to the fundamental at the input and the new input
processed through all orders of nonlinearity to the seventh order. The process will result in the generation of new frequency components from each order of nonlinearity not generated when only the
fundamental is processed through that respective order.
The focus of the analysis is on the third-, fifth-, and seventh-order intermodulation components generated from this process; these are thereafter referred to as the feedback-generated
intermodulation components. Any feedback component that results in the generation of intermodulation distortion products is then noted. To generalize the analysis and make it valid for as many
amplifiers of various standards (which may have different operating powers) as possible, the following assumptions are made:
All components will return with equal amplitudes and zero phase; it is difficult to make up for the limited bandwidth of the return path.
All the resulting intermodulation components will add up algebraically; it is difficult to make up for the phasefrequency characteristics. This will result in a worst-case analysis.
In fact, the feedback loop is usually strongly frequency dependent. The above-mentioned assumptions are, therefore, simplifying assumptions and may result in worst-case estimations. Nevertheless,
this analysis will provide a better insight into the likely significance of generated intermodulation components from mixing all the fed-back components and the original fundamental input tones.
Figure 2 explains how this was represented in Mathematica.^5 Frequency components produced from the second-order nonlinearity when the input is just the original two-tone input are shown in Fig. 2
(a). In Fig. 2(b), the sections associated with the 2ω components are fed back by adding them (after reversing their sign for the negative feedback) to the initial two-tone input and exposing the
total new input to only the second order nonlinearity. As seen in the circled components, the third-order intermodulation products are produced.
It should be noted that in deriving the equations in Fig. 2(b), only the contributions of the 2ω components were considered. Obviously, additional third-order components will be produced if the
second-order sum and difference components are considered. The substitution in Fig. 3 considers the produced third-order intermodulation components when the feedback 2ω frequency resulting from the
second-order nonlinearity is inserted into a third-order system.
Figure 3 shows only the resultant IMD3 components from this substitution. The first component has coefficient "b" only, which indicates that this component is produced from mixing the two initial
input tones and the fed-back components by the secondorder nonlinearity only. The second component has coefficient "c" only, which indicates that this component is produced from mixing the two input
tones by the third-order nonlinearity only. The third component has coefficients "b" and "c," indicating this component is produced by mixing components from the second-order nonlinearity (fed back)
and the original input tones in the third-order nonlinearity.
The table provides a summary (not full results due to space limitations) of the analysis results showing which fed-back components resulted in feedback-generated intermodulation components. Table 1
should be read as follows: The frequency components on the left are those that result from the mixing of the original two-tone input in their respective order of nonlinearity alone. These components
are mixed with the original fundamental two-tone input into each order of nonlinearity from the second order to the seventh order, individually. The symbol comprised of an "x" with a check mark means
that an intermodulation distortion component of order "x" was produced while the symbol comprised of an "x" with a cross means no intermodulation component of order "x" has been produced.
Regarding the table:
a) The feeding back of frequency components not only introduces intermodulation distortion components from orders of nonlinearity that would normally not produce them, but adds to the intermodulation components normally produced by their respective order of nonlinearity. For example, feedback of the ω1 - ω2 component from the second-order nonlinearity to the input produces the third-order intermodulation component 3b^2·sin(2ω1 - ω2) when mixed with the fundamental signal components in the second-order nonlinearity; but also, when mixed with the fundamental signal components in the third-order nonlinearity, it produces the third-order component (3/4)A^5·b^2·c·sin(2ω1 - ω2), which adds to the third-order intermodulation component (3/4)A^3·c·sin(2ω1 - ω2) normally produced by the third-order nonlinearity from the original two-tone input.
b) All fed-back components mixed with the fundamental in any odd-order nonlinearity produce at least two respective intermodulation components of that order; one is due to the exposure of the fundamental components to that nonlinearity order and the other is due to mixing with the feedback components. Usually, the magnitude of the non-mixed term is higher. For example, 2ω1 + ω2 from the third order into the fifth order produces 5e·sin(3ω1 - 2ω2) and 7ce·sin(3ω1 - 2ω2), where the coefficient "e" of the first term indicates it is from the fundamental and the coefficient "ce" of the second term indicates it is from mixing with the fed-back third-order term.
c) Generally, if a component of an order of intermodulation distortion is produced from any even-order nonlinearity, all lower-order intermodulation products are also produced. For example, 2ω1 - 4ω2 from the sixth order into the sixth order produces seventh-order, fifth-order, and third-order components, and 2ω from the second order into the third order produces fifth-order and third-order components. Also, 4ω from the fourth order into the third order produces third-order and seventh-order components but not fifth-order components.
d) Several previously unnoticed frequency components produce third-order intermodulation distortion, but with lower magnitude than those resulting from the feedback of the second-harmonic component. For example, 2ω1 + ω2 from the third order into the third order produces 5c^2·sin(2ω1 - ω2), and 2ω1 - 2ω2 from the fourth order into the second order produces 5bd·sin(2ω1 - ω2). However, these components can be of concern if the input power is high enough to cause their magnitudes to increase to levels where they can cause distortion. This may be specific to a particular circuit, and there is no general rule to define its boundaries.
e) In some cases, frequency components produced from some low orders of nonlinearity mix with the fundamental in other low orders of nonlinearity to produce higher-order intermodulation distortion. For example, 2ω from the second order into the third order produces fifth-order components, 2ω1 - 2ω2 from the fourth order into the third order produces seventh-order components, and ω1 - ω2 from the second order into the fifth order produces seventh-order components.
f) Perhaps a key observation is that any feedback frequency component from any order of nonlinearity, mixed with the fundamental in any odd order of nonlinearity, produces feedback-related intermodulation components of that respective order and all lower odd orders.
1. B. Kim, J.-S. Ko, and K. Lee, "Highly Linear CMOS RF MMIC Amplifier Using Multiple Gated Transistors and its Volterra Series Analysis," in Proceedings of the IEEE MTT-S Int. Microwave Symposium
Digest, Vol. 3, 2001, pp. 515-518.
2. R. A. Baki, T. K. K. Tsang, and M. N. El-Gamal, "Distortion in RF CMOS Short-channel Low-noise Amplifiers," IEEE Transactions on Microwave Theory and Techniques, Vol. 54, No. 1, 2006, pp. 46-56.
3. V. Aparin and C. Persico, "Effect of Out-of-band Terminations on Intermodulation Distortion in Common-Emitter Circuits," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest,
Vol. 3, 1999, pp. 977-980.
4. V. Aparin and L. E. Larson, "Modified Derivative Superposition Method for Linearizing FET Low-Noise Amplifiers," IEEE Transactions on Microwave Theory and Techniques, Vol. 53, No. 2, 2005, pp. 571-581.
5. Wolfram Research, Inc., Mathematica, www.wolfram.com (accessed 2007).
Journal of the South African Institution of Civil Engineering
Print version ISSN 1021-2019
J. S. Afr. Inst. Civ. Eng. vol.52 no.1 Midrand Apr. 2010
Infilling annual rainfall data using feedforward back-propagation Artificial Neural Networks (ANN): application of the standard and generalised back-propagation techniques
M Ilunga
Water resource planning and management require long time series of hydrological data (e.g. rainfall, river flow). However, sometimes hydrological time series have missing values or are incomplete.
This paper describes feedforward artificial neural network (ANN) techniques used to infill rainfall data, specifically annual total rainfall data. The standard back-propagation (BP) technique and the
generalised BP technique were both used and evaluated. The root mean square error of predictions (RMSEp) was used to evaluate the performance of these techniques. A preliminary case study in South
Africa was done using the Bleskop rainfall station as the control and the Luckhoff-Pol rainfall station as the target. It was shown that the generalised BP technique generally performed slightly
better than the standard BP technique when applied to annual total rainfall data. It was also observed that the RMSEp increased with the proportion of missing values in both techniques. The results
were similar when other rainfall stations were used. It is recommended for further study that these techniques be applied to other rainfall data (e.g. annual maximum series, etc) and to rainfall data
from other climatic regions.
Keywords: rainfall data infilling, artificial neural network, back-propagation
A considerable amount of data on hydrological variables such as rainfall, streamflow, etc are required for the planning, management and effective control of water resource systems. Annual rainfall is
used for agricultural planning since the total amount of rainfall is among the most important factors that affect agricultural systems. Crop production in semi-arid regions like South Africa is
largely determined by the annual total rainfall; however, rainfall is the limiting factor in these areas. Sometimes hydrological data series have missing values or are incomplete. In such cases, the
reliability of the design of, for example, a hydropower plant and the construction of dams, can be severely affected. Limited financial resources, poor management of data related to water resources,
temporary absence of observers, cessation of measurement or no reliable hydrological networks can lead to incomplete or missing data in hydrological time-series. This situation is common in
developing countries.
In South Africa, for example, the overwhelming majority of gaps are caused by the temporary absence of observers, the cessation of measurement or absence of observations prior to the commencement of
measurement (Makhuvha et al 1997). In Bolivia, due to the limited financial resources, even a minimum national network could not be achieved according to the meteorological network density ratio
(Balek 1972).
Developing countries generally lag behind in the use of new technologies to process their statistical data (Sadowsky 1989). Yet their needs are just as great; they need to achieve a viable
statistical data processing capability if they are to provide, on a continuous and sustained basis, the essential statistical information needed for their development planning and administration
(Sadowsky 1989). Most of the old data for developing countries have been lost due to non-existent database storage (Medeiros et al 2002).
Several hydrological data infilling techniques have been developed. These techniques include artificial neural networks (ANNs), regression methods, deterministic models, stochastic models for
rainfall-runoff modelling, flood forecasting/prediction and water quality modelling (Lawrence et al 1996; Minns & Hall 1996; Raman & Sunilkumar 1995). Although several studies indicate that ANNs have
proven to be potentially useful tools in hydrology, their disadvantages should not be disregarded (ASCE Task Committee 2000b). The success of an ANN application depends both on the quality and the
quantity of data available (ASCE Task Committee 2000b). This requirement cannot go back far enough. Quite often the requisite data are not available and have to be generated by other means, such as
another well-tested model. Even when long historical records are available, it is not certain that conditions have remained homogeneous over the time span. Therefore data sets recorded over a period
that was relatively stable and unaffected by human activities are desirable. Yet another limitation of ANNs is the lack of physical concepts and relations. The lack of a standardised way of selecting
a network architecture has also been criticised. The choices of network architecture, training algorithm and definition are usually determined by the user's experience and preference, rather than by
the physical aspects of the problem (ASCE Task Committee 2000a,b)
Despite the criticisms levelled against ANN techniques (ASCE Task Committee 2000ab), they were found to be powerful tools when compared to multivariate regression-based models for infilling
streamflow data (Panu et al 2000). Kuligowski and Barros (1998) showed that ANNs gave promising results in the estimation of missing rainfall data when compared to other methods such as regression
techniques. ANN techniques can be used to express a non-linear mapping between variables with no prior assumptions as to the variables (linear or non-linear as in regression methods), and these
techniques can cope with missing data (French et al 1992). Over the past decade, ANNs have been used intensively in hydrology and water-related fields (Lawrence et al 1996; Minns & Hall 1996; Raman &
Sunilkumar 1995; French et al 1992; Wilby & Dawson 1998). However, the application of ANNs for infilling rainfall data remains limited. In addition, there is nothing in the literature on the use of
the generalised BP (back-propagation) ANN technique for infilling hydrological data, specifically for rainfall data, which generally show a relatively high variability both in time and space.
This paper discusses feedforward ANN techniques used for rainfall data infilling. The standard back-propagation (BP) technique (Freeman and Skapura 1991) is compared to the generalised BP technique
which has been introduced for the first time in hydrology, specifically for rainfall data infilling problems. Note that the generalised BP was initially used for different problems which included the
"Exclusive-Or" problem (XOR) and the 3-bit parity and 5-bit counting problems (Ng et al 1996). The root mean square error of predictions (RMSEp) is then used as a criterion to evaluate the
performance of these two techniques. A case study is presented to demonstrate the performance of the two techniques. The terms algorithm and technique are used interchangeably in this paper.
Overview of Artificial Neural Networks (ANNs)
ANNs are networks of interconnected simple units (nodes) based on a greatly simplified model of the human biological system, which are capable of representing non-linear and complex interactions
between variables without prior specification. There are two main types of ANNs: feedforward networks (where the signal is propagated only from the input nodes to the output nodes) and recurrent
networks (where the signal is propagated in both directions). The advantage of ANNs, even if the "exact" relationship between sets of input and output data is unknown but is acknowledged to exist, is
that they can be trained to learn that relationship, and require no prior underlying assumptions (non-linear vs linear) as in conventional methods. ANNs are regarded as ultimate black box models
(Minns & Hall 1996). ANNs were shown to be generally superior in sediment yield models when compared to linear transfer function models (Argawal et al 2005). ANNs seek to learn patterns, but not to
replicate the physical processes of transforming input to output (Minns & Hall, 1996). As opposed to conventional methods, ANNs are thought to have the ability to cope with the missing data and,
perhaps most importantly, are able to generalise a relationship from small subsets of data while remaining relatively robust in the presence of noisy or missing inputs. Thus ANNs can learn in
response to a changing environment (Wilby & Dawson 1998). Since the early 1990s, ANNs have been successfully used in the area of water resource engineering related to rainfall/runoff forecasting
(Minns & Hall 1996; Agarwal & Singh 2001); streamflow data infilling (e.g. Panu et al 2000; Khalil et al 2001; Elshorbagy et al 2000; Ilunga & Stephenson 2005); validation and correction of
high-frequency water quality data (Quilty et al 2004) and rainfall data infilling (Kuligowski & Barros 1998). The latter authors used ANNs to estimate the missing rainfall data at the target rainfall
station from nearby rainfall stations. The issues of data quality for computational intelligence in earth sciences were also discussed by Cherkassy et al (2006). However, the application of ANNs
hydrological data infilling is still very limited, specifically for rainfall data infilling. Some authors (e.g. Panu et al 2000; Khalil et al 2001; Elshorbagy et al 2000, Ilunga & Stephenson 2005)
developed ANN techniques for cases where data were available before and after missing periods of data (e.g. consecutive missing values). Three-layered ANNs have been used intensively for that
purpose. The hidden-layer feedforward neural network is one of the most common architectures used by neurohydrologists (Panu et al 2000; Khalil et al 2001; Elshorbagy et al 2000; French et al 1992;
Minns & Hall 1996; Agarwal & Singh 2001). These hydrologists believe that certain problems in hydrology and water resources can be solved using ANNs.
Standard back-propagation (BP) technique
The standard BP technique is only outlined in this section and for more details the reader is referred to, for example, Freeman and Skapura (1991). Given a three-layered ANN as depicted in Figure 1,
in standard BP the adjustment of the interconnecting weights during training employs a method known as error back-propagation in which the weight associated with each connection is adjusted by an
amount proportional to the strength of the signal in the connection and the total measure of the error. The total error at the output layer is then reduced by redistributing this error value
backwards through the hidden layers until the input layer is reached. This process is repeated until the total error for all data sets is sufficiently small. The weight changes to the output layer
and hidden layer are given by Equations (1) and (2) respectively:

$$w_{kj}^{o}(t+1) = w_{kj}^{o}(t) + \eta\,\delta_{pk}^{o}\,i_{pj} \qquad (1)$$

$$w_{ji}^{h}(t+1) = w_{ji}^{h}(t) + \eta\,\delta_{pj}^{h}\,x_{i} \qquad (2)$$

where i is the unit node in the input layer, j is the unit node in the hidden layer, p is the pattern, k is the neuron in the output layer, η is the learning rate, δ_pk^o and δ_pj^h are the error terms (which encompass a derivative part) for the output units and hidden units respectively, t is the t-th iteration, w_kj^o(t) and w_ji^h(t) are the weights in the output layer and the hidden layer respectively at the t-th iteration, and x_i and i_pj are the inputs to unit nodes i and j respectively.
For practical considerations, it is sometimes suggested that the bias terms be removed altogether, i.e. their use is optional (Freeman & Skapura 1991).
In the standard BP, the learning process is done through both sequential and batch modes. In the former mode the process of learning is governed by the error of each data set and the weight update is
made for each sample of the training, and in the latter mode the weights at each iteration are adjusted only after all the data sets have been processed.
An activation function is used to express the non-linear relationship process between the input and output data. This function can be any threshold function or any continuous function. It is normally
a monotonic non-decreasing function, differentiable everywhere. The activation function most commonly used is the sigmoid, a non-linear continuous function between 0 and 1, represented as follows:

$$f(x) = \frac{1}{1 + e^{-x}} \qquad (3)$$
Freeman and Skapura (1991) proposed that a range of x values from 0,1 to 0,9 should be used for practical purposes. This range is adopted in this paper. Thus the input data and the output data will
be scaled (during training of ANNs) to adhere to the above range. A linear scaling was used in this paper. For ANNs, input data and output data scaling can speed up the convergence of the neural
system. It also gives each input equal importance, prevents premature saturation of the activation function and aids the generalisation capability (i.e. neural networks can approximate values that
they did not see during training). Therefore the equations used in this paper should not contain any unit as they apply to scaled numbers used during the training of ANNs.
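As a concrete illustration of the training loop described above, the following minimal Python sketch (not from the paper; the two rainfall series are hypothetical stand-ins for the control and target stations) trains a single-input, single-output network with three hidden nodes in sequential mode, with zero bias terms, a learning rate of 0,15 and linear scaling of all data to the range 0,1 to 0,9:

```python
import numpy as np

rng = np.random.default_rng(0)

def scale(v, lo, hi):
    # Linear scaling into the range 0.1 to 0.9 (Freeman & Skapura 1991)
    return 0.1 + 0.8 * (v - lo) / (hi - lo)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical concurrent annual totals (mm): control station x, target y
x_raw = rng.uniform(200, 600, 40)
y_raw = 0.8 * x_raw + rng.normal(0, 30, 40)
x = scale(x_raw, x_raw.min(), x_raw.max())
y = scale(y_raw, y_raw.min(), y_raw.max())

w_h = rng.normal(0, 0.5, 3)    # input -> 3 hidden nodes (bias terms omitted)
w_o = rng.normal(0, 0.5, 3)    # 3 hidden nodes -> output
eta = 0.15                     # learning rate

for epoch in range(2000):      # sequential mode: update after every pattern
    for xp, yp in zip(x, y):
        h = sigmoid(w_h * xp)                  # hidden activations
        o = sigmoid(np.dot(w_o, h))            # network output
        delta_o = (yp - o) * o * (1 - o)       # output error term
        delta_h = h * (1 - h) * w_o * delta_o  # hidden error terms
        w_o += eta * delta_o * h               # weight updates, Eqs (1)-(2)
        w_h += eta * delta_h * xp

pred = sigmoid(w_o @ sigmoid(np.outer(x, w_h)).T)
print("RMSE on training data (scaled units):",
      np.sqrt(np.mean((pred - y) ** 2)))
```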
The majority of ANNs applied in water resources involve the use of feedforward propagation. The standard BP (which is a gradient descent method) has been criticised because convergence to an optimal
solution is not always guaranteed (Agarwal & Singh 2001). In other words, the method guarantees that the algorithm will find the nearest local minimum. Consequently, the solution often follows a
zig-zag path while trying to reach the minimum error position, which may slow down the training process (ASCE Task Committee 2000a). Thus several variants of BP such as Bayesian regulation, the
conjugate gradients method, adaptive stepsize, the Levenberg-Marquardt algorithm, causal recursive BP, Maclaurin pseudo-power series, and the generalised BP introduced recently by Ilunga and
Stephenson (2005), were proposed. Despite these criticisms, it appears that in practice BP leads to solutions in almost every case and that standard multilayer feedforward networks are capable of
approximating any measurable function to any desired degree of accuracy, as stated by Minns and Hall (1996). In the following section the generalised BP algorithm is briefly described.
Generalised BP algorithm
The main criticism of the use of standard back-propagation is due to the derivative of the sigmoid activation function (Ng et al 1996). When the actual output of the k-th output neuron for the p-th pattern (i.e. o_pk) approaches extreme values such as 0 or 1, the derivative of the activation function, which has the factor o_pk(1 - o_pk), will not be significant, and the BP error signal will become very small (Ng et al 1996). Thus the output can be maximally wrong without producing a large error signal. The algorithm can be trapped in local minima. Consequently the weight
adjustment of the algorithm can be very slow or even suppressed. Therefore a generalisation of the derivative of the activation function (i.e. logistic) is proposed so as to improve the convergence
of the learning process by preventing the error signal dropping to a very small value.
In generalised BP, the error signals for the output layer and hidden layer now become:

$$\delta_{pk}^{o} = (y_{pk} - o_{pk})\left[f_k^{o\,\prime}(net_{pk})\right]^{1/b} \qquad (4)$$

$$\delta_{pj}^{h} = \left(\sum_k \delta_{pk}^{o}\, w_{kj}^{o}\right)\left[f_j^{h\,\prime}(net_{pj})\right]^{1/b} \qquad (5)$$
where y_pk is the target output, o_pk is the actual output, net_pk is the net input to the output layer, net_pj is the net input to the hidden layer, f_k^o' is the first derivative of the sigmoid function for the k-th neuron in the output layer, and b is the generalisation parameter. In this case b > 1. For b = 1, this reduces to the standard BP algorithm.
The effect of generalised BP is to change the slope of the sigmoid function in the two "tail" regions. For b > 1 the slope in these regions is increased, which scales the error signal (y_pk - o_pk) more appropriately. The generalised BP technique was applied to different problems, including the "Exclusive-Or" (XOR) problem and the 3-bit parity and 5-bit counting problems (Ng et al 1996). The results were not good for b < 1.
Tests were performed to determine whether the statistics (e.g. mean and standard deviation) of the infilled annual rainfall totals are significantly different from the observed annual rainfall totals
at the target stations.
In practice a level of significance of 0,05 or 0,01 is customary, although other values are used. In other words, there is about a 5-in-100 chance that the hypothesis will be rejected when it should
be accepted. There is a 95% confidence level that the right decision is made.
The tests on means and standard deviations are performed as explained by Spiegel and Boxer (1972). The tests are explained in the following sections.
Test on means
For large sample sizes N (N > 30), the sampling distribution of the sample mean can be assumed to be (nearly) normal, with mean μ and standard deviation σ_s = σ/√N. The test is performed based on the following decision rule (test of hypothesis or significance):
(a) Reject the hypothesis at a 0,05 level of significance if the Z score of the statistic (e.g. the mean) lies outside the range -1,96 to 1,96. In the case of means, the null hypothesis H_0: μ = μ_0 is tested against the alternative hypothesis H_a: μ ≠ μ_0 (where μ_0 is the population mean).
(b) Otherwise accept the hypothesis (or, if desired, make no decision at all).
The Z score is computed using the following equation:

$$Z = \frac{\bar{X} - \mu}{\sigma_s} \qquad (6)$$

where μ and σ are the mean and standard deviation of the population and σ_s = σ/√N.
Test on standard deviations
For large values of the degrees of freedom γ (γ > 30, with γ = N - 1) and using the chi-square (χ²) test, the 95% confidence limits are given by:

$$\frac{s\sqrt{2\gamma}}{\sqrt{2\gamma - 1} + 1{,}96} < \sigma < \frac{s\sqrt{2\gamma}}{\sqrt{2\gamma - 1} - 1{,}96} \qquad (7)$$
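A worked check of the two tests (a Python sketch for illustration; the sample numbers are hypothetical, and the confidence limits use the large-sample formula above):

```python
import math

N = 40                      # sample size (> 30, so the normal approx holds)
xbar, s = 410.0, 95.0       # sample mean and standard deviation (mm)
mu0, sigma = 380.0, 90.0    # assumed population values

Z = (xbar - mu0) / (sigma / math.sqrt(N))
print("Z =", round(Z, 2),
      "-> reject H0" if abs(Z) > 1.96 else "-> accept H0")

gamma = N - 1               # degrees of freedom for the chi-square limits
lo = s * math.sqrt(2 * gamma) / (math.sqrt(2 * gamma - 1) + 1.96)
hi = s * math.sqrt(2 * gamma) / (math.sqrt(2 * gamma - 1) - 1.96)
print("95%% limits for sigma: %.1f to %.1f" % (lo, hi))
```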
The annual rainfall totals for the Bleskop station (SAWS gauge no 02284170) and the Luckhoff-Pol station (SAWS gauge no 0228495) were considered for this preliminary study to test the performance of
the two techniques, i.e. standard BP and generalised BP. (These two rainfall stations were selected randomly.) These rainfall stations are about 20 km apart and belong to the secondary drainage
region named D33 of the Orange River Drainage System (D) of South Africa. The monthly rainfall data were obtained from the report by Midgley et al (1994). The geographical location and other
characteristics of the selected rainfall stations located in the summer rainfall zone are listed in Tables 1 and 2.
Gauge 0228495 (Luckhoff-Pol) was taken as the target gauge and gauge 0228170 (Bleskop) as the control gauge. Gauge 0228170 was chosen as the control since it gave better results during trials of the
estimation of missing values at gauge 0228495 than when gauge 0228495 was considered to fill in the missing values at gauge 0228170. The mean monthly rainfall information listed in Table 2 was
obtained by multiplying the MAP (in mm) by the monthly rainfall as a percentage of MAP since this was drawn from the Water Research Report WRC 298/3.1/94. (The computer programme HDY08 output is a
monthly time-series, expressed as a percentage of MAP, and is representative of the rainfall zone.)
Other rainfall stations were added to this preliminary study: the Touws River station (SAWS gauge no 0044050) and the Jan Deboers station (SAWS gauge no 0044286). The 0044050 and 0044286 rainfall
stations are about 15 km apart and belong specifically to the J1C rainfall zone of the primary river drainage system (J) in the Western Cape. The geographical locations of the selected rainfall
stations 0044050 and 0044286 is given in Table 3. Gauge 0044050 was taken as the target gauge and gauge 0044286 as the control gauge. This was done in a similar way as explained above. The mean
monthly rainfall information (Table 4) was calculated directly from the data since data files were obtained from the SAWS. The annual rainfall totals for the Isidenge station (SAWS gauge no 0079490)
and the Izeleni station (SAWS gauge no 0079730) were considered as well. The 0079490 and 0079730 rainfall stations are about 14 km apart and belong to the S6A rainfall zone of the primary river
drainage system (S) in the Eastern Cape. The geographical locations of the selected rainfall stations, 0079490 and 0079730, are listed in Table 5. The mean monthly rainfall information as listed in
Table 6 was calculated directly from the data since data files were obtained from the SAWS. Gauge 0079490 was taken as the target gauge and gauge 0079730 as the control gauge as in the previous
The hydrological year starts in October and ends in September for the data used.
The SAWS rainfall data used in this study were checked for general reliability and consistency using a mass plot, i.e. a plot of cumulative rainfall against time, as outlined by Midgley et al (1994).
The two techniques, i.e. standard BP and generalised BP, were applied to the different rainfall data sets. In the following, the results of the application of these techniques are presented and
The selected rainfall data sets (stations 00228170 and 00228295) of the Orange River Drainage System were complete and had no periods of missing data. However, for testing both infilling techniques,
some consecutive gaps (e.g. 7, 13, 20, 25, 30, 35, 40 and 45% of missing data, starting from 1935) were created randomly in the target rainfall station data set (station 0228495).
The selected rainfall data sets in the Western Cape (stations 0044050 and 0044286) and in the Eastern Cape (stations 0079490 and 0079730) were complete and had no periods of missing data. However,
for testing the different infilling techniques (i.e. the standard BP and the generalised BP), some consecutive gaps (e.g. 5, 10, 15, 20, 25, 30, 35, 40 and 45% of missing data, starting from 1965)
were created randomly in the target rainfall station data set, i.e. 0044050. Similarly, some consecutive gaps (e.g. 5, 10, 15, 20, 25, 30, 35, 40 and 45% of missing data, starting from 1930) were
created in the target rainfall station data set, i.e. 0079790.
The two techniques were then applied to annual total rainfall series. The ANNs were trained on the concurrent parts of the observed data using a sequential mode and the weights obtained were then
used to estimate the missing values. The approach was similar to that used by Kuligowski and Barros (1998). A single input-output, three-layered ANN with three nodes in the hidden layer was used and
the bias terms were assumed to be zero as their use is optional. Learning rates set to 0,15 and 0,45 yielded reasonable results, although a wide range of values (i.e. between 0,01 and 0,9) for the
learning rate was tried. Input and output values were scaled linearly to fall within the range 0,1 to 0,9 as mentioned earlier. Tables 7, 8 and 9 contain a summary of the results obtained from the
two techniques. It was found that a value of 5 for the generalisation parameter gave good results for the generalised BP technique at rainfall station 0228495, while a value of 3 for the
generalisation parameter yielded good results for the same technique at rainfall stations 0044050 and 0079490.
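For concreteness, here is a minimal Python sketch of the standard-BP configuration just described: a single input, three logistic hidden nodes, one output, zero bias terms, a learning rate of 0.15, and sequential (pattern-by-pattern) updates. This is our own illustration; the generalised-BP variant modifies the derivative terms via the update Equations (4) and (5) of the paper, which are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def scale(x, lo, hi):
    # Linear scaling of raw values into the range 0.1 to 0.9
    return 0.1 + 0.8 * (x - lo) / (hi - lo)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w1 = rng.uniform(-0.5, 0.5, size=3)   # input-to-hidden weights (1 -> 3)
w2 = rng.uniform(-0.5, 0.5, size=3)   # hidden-to-output weights (3 -> 1)
eta = 0.15                            # learning rate

def train_step(x, t):
    # One sequential standard-BP update for a single (input, target) pair
    global w1, w2
    h = sigmoid(w1 * x)               # hidden activations
    y = sigmoid(np.dot(w2, h))        # network output
    delta_out = (t - y) * y * (1.0 - y)          # output error signal
    delta_hid = delta_out * w2 * h * (1.0 - h)   # back-propagated signal
    w2 += eta * delta_out * h
    w1 += eta * delta_hid * x
    return y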
From Tables 7, 8 and 9 it is evident that the RMSEp at the target station increases with the proportion of missing values (gap size) for both techniques. Thus the accuracy decreases as the proportion
of missing annual total rainfall values increases. A similar observation was made for a streamflow data infilling problem (Ilunga & Stephenson 2005). This situation (in this study) could be due to
the fact that the generalisation capability of the two techniques of neural networks reduces as the proportion of missing values to be infilled becomes larger. In other words, as the periods of
missing data increase, so the neural network is trained on smaller data sets and thus verified on a larger proportion of data. Hence the generalisation capability of ANNs decreases. It was noted that
earlier missing record periods (e.g. 1928) in the records of the target station 0228495 did not have a significant impact on the accuracy of the estimated values for the different techniques.
A similar observation was made for earlier missing periods (1965) in the record of target rainfall station 0044050: they did not appear to have any impact on the accuracy of the estimated values for the different techniques. Likewise, earlier gaps (1925) in the record of target rainfall station 0079490 did not appear to have any impact on the accuracy of the estimated values for the different techniques.
In Figures 2 to 10, StandardBP and Generalised BP refer to standard back-propagation and generalised back-propagation techniques.
By and large, generalised BP performed slightly better than standard BP. The plots shown in Figures 2 and 3, 5 and 6, and 8 and 9 confirm these results: the differences in the estimated missing
values at rainfall station 0228495 are generally small, whereas the differences in the estimated missing values at rainfall stations 0044050 and 0079490 are more prominent. This could be due to the
generalisation parameter introduced in the update Equations (4) and (5). In the cases under discussion, the generalisation parameter is believed to slightly improve the approximation of the output
signal without producing a large error signal in the neural network when the actual input for the given neuron and pattern approaches the limits, i.e. 0,1 and 0,9. This could therefore support the
premise behind the generalised BP algorithm (Ng et al 1996) that a generalisation of the derivative of the activation function (i.e. logistic) enables improvement of the convergence of the learning
process by limiting the error signal drop to a very small value. From Figures 4, 7 and 10 it can also be seen that for all algorithms the bigger the proportion of missing values (gap size), the
bigger the RMSEp, hence the accuracy decreases. The two lines (obtained from the scatter data points) in Figure 4 are very close, while in Figures 7 and 10 the two lines are not very close. This
could correlate with the observation that the differences in estimated values at station 0228495 were very small for both techniques. The two techniques were generally shown to give a good estimation
of the annual total rainfall values.
From the above it can be said that both the standard BP and the generalised BP algorithms are acceptable for filling in the annual rainfall values for rainfall stations 0228495, 0044050 and 0079490. It
was shown that there was no significant breach of statistical properties (i.e. the mean and the variance of the incomplete and infilled rainfall series specifically at the target rainfall station).
The hypothesis test was conducted on the basis of the statistical method explained above. The mean and variance of the observed annual rainfall totals at the target stations were considered (assumed)
as an estimation of the population mean and variance respectively. The mean of the infilled data series for each proportion of missing values was tested (at 95% confidence interval) for acceptance or
rejection of the mean of the annual rainfall totals remaining unchanged. The different tests revealed that the results could generally be accepted at 95% confidence interval, except for a 45% missing
proportion at rainfall station 0079490, as shown in Tables 10, 11, 12 and 13.
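As a rough sketch of such a test (assuming a two-sided z-test with the population mean and variance taken from the observed series, which is one reading of the description above):

import math

def z_test_mean(infilled, pop_mean, pop_var):
    # Two-sided z-test of H0: the mean of the infilled series equals
    # pop_mean, with pop_var treated as a known population variance.
    n = len(infilled)
    sample_mean = sum(infilled) / n
    z = (sample_mean - pop_mean) / math.sqrt(pop_var / n)
    return abs(z) <= 1.96   # 1.96 = two-sided critical value at 95%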
For the generalised BP technique as applied in this paper, the generalisation parameter was purposely not strictly restricted to the conditions that apply to binary problems.
The generalised BP technique has been introduced for the first time in hydrology, specifically for a rainfall data infilling problem. This technique was compared to the standard BP technique. The
performance of the two techniques was evaluated through RMSEp for annual rainfall data. The preliminary results using the rainfall station pair 0228170 (control) and 0228495 (target) showed that the
generalised BP technique performed slightly better than the standard BP technique. However, the standard BP had no negative impact on the estimation of missing values at the target station. Both
techniques were acceptable for infilling the missing annual total rainfall data at station 0228495. Hence either of these techniques could be used for infilling annual rainfall totals. The results
were similar when the other station pairs were used: 0044286 (control) and 0044050 (target), and 0079730 (control) and 0079490 (target).
It was also observed that the RMSEp at the different target stations generally increased with an increase in the proportion of missing values (gap size) for both techniques. It is suggested that the
impact of other activation functions (e.g. hyperbolic) as well as the batch-training mode for neural networks should be investigated. The techniques used in this study should also be tested on other
rainfall data sets. A sensitivity analysis of the generalisation parameter on the accuracy of estimated rainfall values should also be investigated. These results were based on the techniques applied
to annual total rainfall data. It is recommended that other rainfall data should also be tried (e.g. maximum series, mean annual, etc) and data from other climatic regions should also be used to
evaluate the techniques.
The author thanks the South African Weather Service for providing the rainfall data used in this research paper.
Agarwal, A & Singh, J K 2001. Pattern and batch learning ANN process in rainfall-runoff modeling. Indian Association of Hydrologists, Journal of Hydrology, 24(1): 1-14.
Agarwal, A, Singh, R D, Mishra, S K & Bhunya, P K 2005. ANN-based sediment yield models for Vamsadhara river basin (India). Water SA, 31(1): 95-100.
ASCE Task Committee 2000a. Artificial neural networks in hydrology. I: Preliminary concepts. Journal of Hydrologic Engineering, 5(2): 115-123.
ASCE Task Committee 2000b. Artificial neural networks in hydrology. II: Hydrologic applications. Journal of Hydrologic Engineering, 5(2): 124-137.
Balek, J 1972. An application of the inadequate hydrological data of the African tropical regions in engineering design. Proceedings, 2nd International Hydrological Symposium, Fort Collins, Colorado, USA, pp 95-96.
Cherkassky, V, Krasnopolsky, V, Solomatine, D P & Valdes, J 2006. Computational intelligence in earth sciences and environmental applications. Neural Networks, 19: 113-121.
Elshorbagy, A A, Panu, U S & Simonovic, S P 2000. Group-based estimation of missing hydrological data: I. Approach and general methodology. Hydrological Sciences Journal, 45(6): 849-866.
Freeman, J A & Skapura, D M 1991. Neural Networks: Algorithms, Applications and Programming Techniques. Redwood City, CA: Addison-Wesley.
French, M N, Krajewski, W F & Cuykendall, R R 1992. Rainfall forecasting in space and time using a neural network. Journal of Hydrology, 137: 1-31.
Ilunga, M & Stephenson, D 2005. Infilling streamflow data using feedforward back-propagation (BP) artificial neural networks: Application of the standard BP and pseudo MacLaurin power series BP techniques. Water SA, 31(2): 171-176.
Khalil, M, Panu, U S & Lennox, W C 2001. Group and neural networks based streamflow data infilling procedures. Journal of Hydrology, 241: 153-176.
Kuligowski, R J & Barros, A P 1998. Using artificial neural networks to estimate missing rainfall data. Journal of the American Water Resources Association, 34: 1437-1447.
Lawrence, S, Tsoi, A C & Giles, C L 1996. Local minima and generalization. IEEE Proceedings, International Conference on Neural Networks, 1: 371-376.
Makhuva, T, Pegram, G, Sparks, R & Zucchini, W 1997. Patching rainfall data using regression methods. 1. Best subset selection, EM and pseudo-EM methods: Theory. Journal of Hydrology, 198: 289-307.
Medeiros, Y D P, Fiuza, J M S, Figueira, C C & Sema, C S 2002. Information in Salitre River Basin, Bahia, Brazil. In: Sherif, M M, Singh, V & Al-Rashed, M (Eds), Groundwater Hydrology. Netherlands: Balkema, pp 21-35.
Midgley, D C, Pitman, W V & Middleton, B J 1994. Surface Water Resources of South Africa, 1990, 1st edition. Water Research Commission Report No 298/3.1/94.
Minns, A W & Hall, M J 1996. Artificial neural networks as rainfall-runoff models. Hydrological Sciences Journal, 41(3): 399-417.
Ng, S C, Leung, S H & Luk, A 1996. A generalized backpropagation algorithm for faster convergence. IEEE Proceedings, International Conference on Neural Networks, 1: 409-413.
Panu, U S, Khalil, M & Elshorbagy, A 2000. Streamflow data infilling techniques based on concepts of groups and neural networks. In: Artificial Neural Networks in Hydrology. Netherlands: Kluwer Academic Publishers, pp 235-258.
Quilty, E, Hudson, P & Farahmand, T 2004. Artificial neural networks for validation and correction of higher frequency water quality data. Proceedings, 11th CWWA National Conference, Calgary, Canada, pp 1-14.
Raman, H & Sunilkumar, N 1995. Multivariate modelling of water resources time-series using artificial neural networks. Hydrological Sciences Journal, 40(2): 145-163.
Sadowsky, G 1989. Statistical data processing in developing countries: Applications of emerging technology. CEUS: Emerging Technology, pp 1-26.
Spiegel, M R 1972. Schaum's Outline of Theory and Problems of Statistics in SI Units. Schaum's Outline Series. New York: McGraw-Hill.
Wilby, R & Dawson, C W 1998. An artificial neural network approach to rainfall-runoff modeling. Hydrological Sciences Journal, 43(1): 47-66.
DR MASENGO ILUNGA is a senior lecturer and the current head of the Department of Civil and Chemical Engineering at UNISA. He is also the leader of the UNISA Water Research Group and teaches subjects related to water engineering. His current research interests are artificial intelligence, entropy, statistical techniques and modelling in hydrology and water resources.
Contact details:
College of Science, Engineering and Technology
Civil and Chemical Engineering
University of South Africa
T: 011 471 2791
F: 011 471 2090
E: ilungm@unisa.ac.za
[SciPy-User] synchronizing timestamps from different systems; unpaired linear regression
josef.pktd@gmai...
Tue Apr 10 09:13:52 CDT 2012
On Tue, Apr 10, 2012 at 5:27 AM, Chris Rodgers <xrodgers@gmail.com> wrote:
> I have what seems like a straightforward problem but it is becoming
> more difficult than I thought. I have two different computers
> recording timestamps from the same stream of events. I get lists X and
> Y from each computer and the question is how to figure out which entry
> in X corresponds to which entry in Y.
> Complications:
> 1) There are an unknown number of missing or spurious events in each
> list. I do not know which events in X match up to which in Y.
> 2) The temporal offset between the two lists is unknown, because each
> timer begins at a different time.
> 3) The clocks seem to run at slightly different speeds (~0.3%
> difference adds up to about 10 seconds over my 1hr recording time).
> I know this problem is solvable because once you find the temporal
> offset and clock-speed ratio, the matching timestamps agree to within
> 10ms. That is, there is a strong linear relationship between some
> unknown X->Y mapping.
> Basically, the problem is: given list X and list Y, and specifying a
> certain minimum R**2 value, what is the largest set of matched points
> from X and Y that satisfy this R**2 value? I have tried googling
> "unmatched linear regression" but this must not be the right search
> term.
> One approach that I've tried is to create an analog trace for X and Y
> with a Gaussian centered at each timestamp, then finding the lag that
> optimizes the cross-correlation between the two. This is good for
> finding the temporal offset but can't handle the clock-speed
> difference. (Also it takes a really long time because the series are
> 1hr of data sampled at 10Hz.) Then I can choose the closest matches
> between X and Y and fit them with a line, which gives me the
> clock-difference parameter. The problem is that there are a ton of
> local minima created by how I choose to match up the points in X and
> Y, so it gets stuck on the wrong answer.
> Any tips?
I'm pretty sure someone has a more experienced answer; this looks
similar to image registration problems.

What I would try is to do your correlation or regression matching on
two subsamples, for example a segment at the beginning and a segment
at the end; over a short segment the different clock speeds will have
only a small effect. Then recover the clock-speed difference by
comparing the match between the two subsamples/segments.
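For example, something along these lines (an untested sketch; the helper
names are made up, and x, y are NumPy arrays of timestamps in seconds):

import numpy as np

def estimate_lag(x, y, dt=0.1, max_lag=120.0):
    # Estimate the shift between two event lists by cross-correlating
    # their binned event trains on a common set of bin edges.
    edges = np.arange(min(x.min(), y.min()), max(x.max(), y.max()) + dt, dt)
    hx, _ = np.histogram(x, bins=edges)
    hy, _ = np.histogram(y, bins=edges)
    corr = np.correlate(hx - hx.mean(), hy - hy.mean(), mode='full')
    lags = (np.arange(len(corr)) - (len(hx) - 1)) * dt
    keep = np.abs(lags) <= max_lag
    return lags[keep][np.argmax(corr[keep])]

def offset_and_rate(x, y, seg=300.0):
    # Lag an early segment and a late segment separately; the drift
    # between the two lags gives the clock-speed ratio.
    early = estimate_lag(x[x < x.min() + seg], y[y < y.min() + seg])
    late = estimate_lag(x[x > x.max() - seg], y[y > y.max() - seg])
    rate = 1.0 + (late - early) / (x.max() - x.min())
    return early, rate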
> Thanks!
> Chris
> PS: my current code and test data is here:
> https://github.com/cxrodgers/DiscreteAnalyze
> --
> Chris Rodgers
> Graduate Student
> Helen Wills Neuroscience Institute
> University of California - Berkeley
Re: st: statsby slowness
From Roger Harbord <rogerharbord@bigfoot.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: statsby slowness
Date Tue, 14 Aug 2007 10:14:44 +0100
I've had similar experiences - based on a bit of experimentation, the
time taken by -statsby- appears to be quadratic in the number of groups
of the by() variable, at least once you get beyond a few hundred or so
groups. For an example, see the code below.
However, I'm not sure that giving up on -statsby- helps -- I've tried
programming a simulation from scratch and the problem appeared to be
inherent to -in- when you're selecting one of a large number of groups.
I ended up splitting the data into chunks of 500 groups each and running
each chunk separately (by the way I had good reasons for generating the
whole 10000 simulations in one go rather than the approach taken by
-simulate- of generating and estimating each simulation in turn, which
is clearly often a better approach).
I believe the speed improvements in Michael Blasnik's -statsbyfast- were
incorporated into the official -statsby- some time ago (see -help
whatsnew8_1- ), so -statsbyfast- is of historical interest only.
An illustration :
* 74 cars, replicated into 74,000 observations forming 1,000 by-groups
sysuse auto, clear
expand 1000
bysort make : gen int i = _n    // i indexes the replicates: by(i) has 1,000 groups
sort i make
set rmsg on                     // report timing after each command
statsby t=(_b[weight] / _se[weight]), by(i) clear nodots : regress mpg weight
set rmsg off
Change -expand 1000- to -expand 2000- and -statsby- takes not twice as
long but four times as long.
Changing -regress mpg weight- to -logit foreign weight- indicates the
issue isn't unique to -regress- (or commands ultimately based on
-regress-, such as -spearman- in David Airey's example below).
David Airey wrote:
> .
> I have found Blasnik's statsbyfast improvement, but for some reason it
> is broken in Stata 10.
>> At what point does one give up using statsby? With just three
>> variables in my data set,
>> ssrownum, iso_VSV, expression
>> the following command does OK with 1000 by groups (< 20 cases in a
>> group), but is not useable with 20,000 by groups.
>> statsby n=r(N) spearman=r(rho) p=r(p), by(ssrownum): spearman iso_VSV
>> expression
>> Why?
>> I posted something similar a long time ago compared speeds of ttest
>> with if versus in and versus regress, but I'm not happy at the moment.
> --
> David C. Airey, Ph.D.
> Research Assistant Professor
Problem 160. So many choices
For inputs n and k (in that order), output the number of ways that k objects can be chosen from amongst n distinct objects. [The order of the selected objects is irrelevant.]
For example, suppose you have 4 blocks: red, green, blue, yellow. How many ways are there to select 2 of them? There are 6 ways: RG, RB, RY, GB, GY, BY. [Remember the ordering is irrelevant, so GR is
equivalent to RG, and therefore does not count as a distinct choice.]
So, function(4,2) ---> 6.
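In closed form (standard combinatorics, not stated on the page itself), the count is the binomial coefficient:

C(n, k) = n! / (k! (n-k)!), so C(4, 2) = 4! / (2! 2!) = 24 / 4 = 6.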
Problem Comments
2 Comments
Sumit Agrawal
on 10 Sep 2013
Why is this showing as correct on my PC but not here?
the cyclist
on 11 Sep 2013
@Sumit, you are probably using a function from a toolbox (e.g. statistics toolbox) that is not available to Cody.
Solution Comments
1 Comment
Franck Dernoncourt
on 30 Jan 2012
Let me import my Statistics Toolbox! ;)
Calculus AB Lesson 05: Exponential and Logarithmic Functions
Monterey Institute for Technology and Education
This educational material is part of the collection: Advanced Placement Calculus AB
About this Item
Audience: Learning: High School and Pre-College
Creator: Monterey Institute for Technology and Education
Language: English
Creative Commons license: Attribution-NonCommercial-NoDerivs
Two people push on a desk they are trying to move. One person pushes with 35 N at a 50° angle, while the other person pushes with 27 N at a 340° angle. What is the resultant push on the desk?
Given: push one, F1 = 35 N at 50°; push two, F2 = 27 N at 340°
Unknown: resultant push, F (magnitude and direction)
Physical (Mathematical) Principles and Ideas: rules for adding vectors, trigonometric
functions and Pythagorean theorem.
Solution: There are several ways to add vectors, which are neither co-linear (i.e., do not lie along a line) nor perpendicular. The one that can be used most easily in the greatest variety of
situations is the method of components. A vector, any vector, can always be "broken" down into two (or, in three dimensions, three) other vectors at right angles to each other which, when added
vectorially using the Pythagorean theorem, “sum” to give the original vector. These two (or three) vectors are called components. Using this idea of components, the general outline for how we solve
vector addition problems is the following:
(a) “Break” each vector down into its components along identified axes using trigonometric functions. That is, find the x and y (and when necessary z) components of each vector in the problem.
Notice that this process requires orienting the vectors properly on a set of coordinate axes.
(b) Add the components for each of the axes independently, i.e., add the x components to get one “x vector”, add the y components to get one “y vector”, and add the z components to get one “z
vector”. Keep in mind that these processes of “adding” components may actually involve subtracting, since components pointing in opposite directions would subtract.
(c) Finally, we add the “x vector”, “y vector” and “z vector”, using the Pythagorean theorem, to find the resultant (sometimes called the net) vector.
We will now carry out this program for the two vectors in the problem
(a) If we sketch F1 with its components, we get a right triangle: F1 itself is the hypotenuse, the x component is the side of the triangle adjacent to the 50° angle, and the y component is the side of the triangle opposite the 50° angle. That means:

cos 50° = (x component)/(35 N)

That is, x component = (35 N)(0.64) = 22.5 N, and since it points to the right on our axes, it is positive. (On our axes, right is positive and left is negative for x; up is positive and down is negative for y.) The y component is obtained from:

sin 50° = (y component)/(35 N)

giving y component = (35 N)(0.77) = 26.8 N

This component points upward, so it is positive also. We now have the x and y components of F1. They are:

F1x = +22.5 N and F1y = +26.8 N
In a similar way we find the components of F2. Here we have a preliminary step we have to complete: we have to find the angle θ. The angle we were given was 340°, which means θ = 360° − 340° = 20°. Now we proceed in the same way we did with F1:

cos 20° = (x component)/(27 N), so x component = (27 N) cos 20° = 25.4 N

and this will be positive since it points to the right.

y component = (27 N) sin 20° = 9.2 N

but this will be negative since it points down. The components of F2 are: F2x = +25.4 N and F2y = −9.2 N
(b) In this step we add the components independently. (Why can we do this?) In other words, we find the x component of the resultant from

Fx = F1x + F2x = +22.5 N + 25.4 N = +47.9 N

and the y component from

Fy = F1y + F2y = +26.8 N − 9.2 N = +17.6 N
(c) Finally, we "sum" the two components of the resultant using the Pythagorean theorem, and then determine the angle for the resultant. First, we get the magnitude of the resultant:

F = √(Fx² + Fy²) = √((47.9 N)² + (17.6 N)²) = 51.0 N

Now the angle can be found using either the sine, cosine, or tangent; using the tangent:

θ = tan⁻¹(Fy/Fx) = tan⁻¹(17.6/47.9) = 20.17°

Note that this means 20.17° above the x axis (i.e., in the first quadrant) since Fx and Fy are both positive. (What quadrant would the angle be in if Fx was negative and Fy positive? What about if Fx and Fy were both negative?)
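The same computation in a few lines of Python, as a check on the arithmetic (not part of the original worked solution):

import math

def resultant(forces):
    # Sum 2-D force vectors given as (magnitude in N, angle in degrees)
    fx = sum(m * math.cos(math.radians(a)) for m, a in forces)
    fy = sum(m * math.sin(math.radians(a)) for m, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

mag, ang = resultant([(35, 50), (27, 340)])
print(round(mag, 1), round(ang, 2))   # about 51.0 N at 20.16 degrees
                                      # (20.17 above, from rounded components)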
Traditional recurrent nets fail in case of long minimal time lags between input signals and corresponding error signals [7,3]. Many recent papers propose alternative methods, e.g., [16,12,1,6,9]. For
instance, Bengio et al. investigate methods such as simulated annealing, multi-grid random search, time-weighted pseudo-Newton optimization, and discrete error propagation [3]. They also propose an
EM approach [1]. Quite a few papers use variants of the ``latch problem'' (and ``2-sequence problem'') to show the proposed algorithms' superiority, e.g. [3,1,6,9]. Some papers also use the ``parity
problem'', e.g., [3,1]. Some of Tomita's [18] grammars are also often used as benchmark problems for recurrent nets [2,19,14,11].
Trivial versus non-trivial tasks. By our definition, a ``trivial'' task is one that can be solved quickly by random search (RS) in weight space. RS works as follows: REPEAT randomly initialize the
weights and test the resulting net on a training set UNTIL solution found.
Random search (RS) details. In all our RS experiments, we randomly initialize weights in [-100.0, 100.0]. Binary inputs are -1.0 (for 0) and 1.0 (for 1). Targets are either 1.0 or 0.0. All activation functions are logistic sigmoids in [0.0, 1.0]. We use two architectures (A1, A2) suitable for many widely used ``benchmark'' problems: A1 is a fully connected net with 1 input, 1 output, and a number of hidden units. In all our simulations below, RS finally classified all test set sequences correctly; average final absolute test set errors were always below 0.001 -- in most cases below 0.0001.
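As a concrete illustration, here is a minimal Python sketch of the RS loop just described. It is our own stand-in: a tiny feedforward scorer takes the place of the recurrent architectures A1/A2, whose exact topology is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def run_net(w_in, w_out, seq):
    # Feedforward stand-in: score one input sequence with fixed weights
    h = sigmoid(np.outer(np.asarray(seq), w_in).sum(axis=0))
    return sigmoid(np.dot(h, w_out))

def random_search(train_seqs, train_labels, n_hidden=4, max_trials=100000):
    # REPEAT: draw all weights uniformly from [-100, 100] and test on the
    # training set, UNTIL every training sequence is classified correctly
    for trial in range(1, max_trials + 1):
        w_in = rng.uniform(-100.0, 100.0, n_hidden)
        w_out = rng.uniform(-100.0, 100.0, n_hidden)
        outs = [run_net(w_in, w_out, s) for s in train_seqs]
        if all((o > 0.5) == bool(t) for o, t in zip(outs, train_labels)):
            return trial
    return None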
``2-sequence problem'' (and ``latch problem'') [3,1,9]. The task is to observe and classify input sequences. There are two classes. There is only one input unit or input line. Only the first N real-valued sequence elements convey relevant information about the class; sequence elements at later positions carry no class information.
Bengio et al.'s results. For the 2-sequence problem, the best method among the six tested by Bengio et al. [3] was multigrid random search (sequence lengths 50 -- 100); the EM approach [1] was reported to solve the problem within 2,900 trials.
RS results. RS with architecture A2 (or A1) solved the problem quickly; with the minimal architecture of [3] (only 3 parameters), the problem was solved within only 22 trials on average, due to the tiny parameter space. According to our definition above, the problem is trivial. RS outperforms Bengio et al.'s methods in every respect: (1) many fewer trials required, (2) much less computation time per trial. Also, in most cases (3) the solution quality is better (less error).
It should be mentioned, however, that different input representations and different types of noise may lead to worse RS performance (Yoshua Bengio, personal communication, 1996).
``Parity problem''. The parity task [3,1] requires classifying sequences with several hundred elements (only 1's or -1's) according to whether the number of 1's is even or odd. The target at sequence end
is 1.0 for odd and 0.0 for even.
Bengio et al.'s results. For sequences with only 25-50 steps, among the six methods tested in [3] only simulated annealing was reported to achieve final classification error of 0.000 (within about
810,000 trials -- the authors did not mention the precise stopping criterion). A method called ``discrete error BP'' took about 54,000 trials to achieve final classification error 0.05. In more
recent work [1], for sequences with 250-500 steps, their EM-approach took about 3,400 trials to achieve final classification error 0.12.
RS results. RS with A1 solved the parity problem as well.
Again it should be mentioned that different input representations and noise types may lead to worse RS performance (Yoshua Bengio, personal communication, 1996).
Tomita grammars. Many authors also use Tomita's grammars [18] to test their algorithms. See, e.g., [2,19,14,11,10]. Since we already tested parity problems above, we now focus on a few
``parity-free'' Tomita grammars (nos. 1, 2 and 4). Previous work facilitated the problems by restricting sequence length. E.g., in [11], the maximal test (training) sequence length is 15 (10). Reference
[11] reports the number of sequences required for convergence (for various first and second order nets with 3 to 9 units): Tomita #1: 23,000 - 46,000; Tomita #2: 77,000 - 200,000; Tomita #4: 46,000 -
210,000. RS, however, clearly outperforms the methods in [11]. The average results are: Tomita #1: 182 trials (architecture A1).
Non-trivial tasks / Outline of remainder. The experiments in the remainder of this paper deal with non-trivial tasks whose solutions are sparse in weight space. They require either many free
parameters (e.g., input weights) or high weight precision, such that RS becomes completely infeasible. All experiments involve long minimal time lags -- there are no short time lag training exemplars
facilitating learning. To solve these tasks, however, we need a novel method called ``Long Short-Term Memory'', or LSTM for short [8]. Section 3 will briefly review LSTM. Section 4 will present new
results on tasks that cannot be solved at all by any other recurrent net learning algorithm we are aware of. LSTM can solve rather complex long time lag problems involving distributed,
high-precision, continuous-valued representations, and is able to extract information conveyed by the temporal order of widely separated inputs.
Juergen Schmidhuber 2003-02-25
Congruent Triangles
Product Information
Congruent Triangles
Shape and Space PowerPoint Presentation
This is a very good presentation on congruent triangles. It starts off by reminding students about congruent shapes generally. A short historical note discusses the four congruence theorems proved by Euclid in Book I of his Elements and stresses the important part they played in proving other theorems such as the Theorem of Pythagoras and the Chord Bisector Theorem. The four conditions that guarantee congruency for triangles are clearly explained, and sets of questions follow to reinforce and test understanding. We then go on to look at six proofs that enlist the use of congruent triangles. They have been carefully chosen so that each of SSS, SAS, AAS and RHS plays a part.
RIEMANNIAN GEOMETRY, FIBER BUNDLES, KALUZA-KLEIN THEORIES AND ALL THAT.....
A. Jadczyk,
Print-82-0210 (GOTTINGEN), (Received Mar 1982). 26pp.
Published in Ann.Poincare 28:99-111,1983.
R. Coquereaux, A. Jadczyk,
CERN-TH-3483, Dec 1982. 31pp.
Published in Commun.Math.Phys. 90:79-100,1983.
A. Jadczyk
Published in J. Geom. Phys. 1 (1984) No. 2, 97-126
COLOR AND HIGGS CHARGES IN G/H KALUZA-KLEIN THEORY.
A.Z. Jadczyk
IC/84/42, Apr 1984. 28pp.
Published in Class.Quant.Grav.1:517-530,1984.
FIBER BUNDLES AND KALUZA-KLEIN THEORY.
A. Jadczyk
UWR-85/642, May 1985. 14pp. Lecture given at 21st Karpacz Winter School of Theoretical Physics, Karpacz, Poland, Feb 18 - Mar 2, 1985.
Published in Karpacz Winter School 1985:333 (QC178:W45:1985)
Coquereaux, A. Jadczyk
CERN-TH-4023/84, Sep 1984. 30pp.
Published in Class.Quant.Grav.3:29-42,1986.
CONSISTENCY OF THE G INVARIANT KALUZA-KLEIN SCHEME.
R. Coquereaux, A. Jadczyk
Nucl. Phys. B276 (1986) 617-628 and CERN Geneva TH-4305 (85, rec. Dec.) 15p.
ON BERRY'S PHASE.
Arkadiusz Jadczyk
Preprint, Based on talk presented at the 7th Summer Workshop on Mathematical Physics - Group Theoretical Methods and Physical Applications. Clausthal, Germany,
R. Coquereaux, A. Jadczyk
CPT-89/P-2302, Sep 1989. 43pp.
Published in Rev.Math.Phys. 2:1-44,1990.
See also LIE BALLS AND RELATIVISTIC QUANTUM FIELDS
R. Coquereaux, Published in Nuclear Physics 18B (1990) 48-52
TOPICS IN QUANTUM DYNAMICS.
A. Jadczyk
Published in Infinite Dimensional Geometry, Non Commutative Geometry, Operator Algebras, Fundamental Interactions,
ed. R. Coquereaux et al, Worl Scientific, Singapore 1995, pp. 57-91
e-Print Archive: hep-th/9406204
Random-walk computation of similarities between nodes of a graph, with application to collaborative recommendation
Results 1 - 10 of 105
, 2004
"... Consistency is a key property of statistical algorithms, when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about
consistency of most clustering algorithms. In this paper we investigate consistency of a popular family of spe ..."
Cited by 286 (15 self)
Add to MetaCart
Consistency is a key property of statistical algorithms, when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about consistency
of most clustering algorithms. In this paper we investigate consistency of a popular family of spectral clustering algorithms, which cluster the data with the help of eigenvectors of graph Laplacian
matrices. We show that one of the two of major classes of spectral clustering (normalized clustering) converges under some very general conditions, while the other (unnormalized), is only consistent
under strong additional assumptions, which, as we demonstrate, are not always satisfied in real data. We conclude that our analysis provides strong evidence for the superiority of normalized spectral
clustering in practical applications. We believe that methods used in our analysis will provide a basis for future exploration of Laplacian-based methods in a statistical setting.
- SIAM J. Comput
"... We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph G = (V, E, w) and a parameter ǫ> 0, we produce a weighted
subgraph H = (V, ˜ E, ˜w) of G such that | ˜ E | = O(n log n/ǫ 2) and for all vectors x ∈ R V (1 − ǫ) ∑ (x ..."
Cited by 63 (4 self)
Add to MetaCart
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph G = (V, E, w) and a parameter ε > 0, we produce a weighted subgraph H = (V, Ẽ, w̃) of G such that |Ẽ| = O(n log n / ε²) and for all vectors x ∈ R^V,

(1 − ε) ∑_{uv∈E} (x(u) − x(v))² w_{uv} ≤ ∑_{uv∈Ẽ} (x(u) − x(v))² w̃_{uv} ≤ (1 + ε) ∑_{uv∈E} (x(u) − x(v))² w_{uv}.   (1)

This improves upon the sparsifiers constructed by Spielman and Teng, which had O(n log^c n) edges for some large constant c, and upon those of Benczúr and Karger, which only satisfied (1) for x ∈ {0, 1}^V. We conjecture the existence of sparsifiers with O(n) edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time.
"... Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and
implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency ..."
Cited by 38 (0 self)
Add to MetaCart
Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and
implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by
allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track
recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation
system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to
represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as
social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The
results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.
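For readers unfamiliar with the Random Walk with Restarts framework mentioned above, here is a minimal sketch (our illustration, not the authors' code) of the usual power-iteration form r = (1 − c) W r + c e:

import numpy as np

def rwr_scores(adj, seed, restart=0.15, tol=1e-8, max_iter=1000):
    # Steady-state visiting probabilities of a walker that follows
    # column-normalized edges and jumps back to the seed node with
    # probability `restart` at every step.
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)   # column-stochastic
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1.0 - restart) * (W @ r) + restart * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r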
- In WWW , 2009
"... christian.bauckhage ..."
- In KDD ’09: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining , 2009
"... This paper describes and evaluates privacy-friendly methods for extracting quasi-social networks from browser behavior on user-generated content sites, for the purpose of finding good audiences
for brand advertising (as opposed to click maximizing, for example). Targeting social-network neighbors re ..."
Cited by 23 (2 self)
Add to MetaCart
This paper describes and evaluates privacy-friendly methods for extracting quasi-social networks from browser behavior on user-generated content sites, for the purpose of finding good audiences for
brand advertising (as opposed to click maximizing, for example). Targeting social-network neighbors resonates well with advertisers, and on-line browsing behavior data counterintuitively can allow
the identification of good audiences anonymously. Besides being one of the first papers to our knowledge on data mining for on-line brand advertising, this paper makes several important
contributions. We introduce a framework for evaluating brand audiences, in analogy to predictive-modeling holdout evaluation. We introduce methods for extracting quasi-social networks from data on
visitations to social networking pages, without collecting any information on the identities of the browsers or the content of the social-network pages. We introduce measures of brand proximity in
the network, and show that audiences with high brand proximity indeed show substantially higher brand affinity. Finally, we provide evidence that the quasi-social network embeds a true social
network, which along with results from social theory offers one explanation for the increases in audience brand affinity.
"... We present a unified framework for learning link prediction and edge weight prediction functions in large networks, based on the transformation of a graph’s algebraic spectrum. Our approach
generalizes several graph kernels and dimensionality reduction methods and provides a method to estimate their ..."
Cited by 19 (2 self)
Add to MetaCart
We present a unified framework for learning link prediction and edge weight prediction functions in large networks, based on the transformation of a graph’s algebraic spectrum. Our approach
generalizes several graph kernels and dimensionality reduction methods and provides a method to estimate their parameters efficiently. We show how the parameters of these prediction functions can be
learned by reducing the problem to a one-dimensional regression problem whose runtime only depends on the method’s reduced rank and that can be inspected visually. We derive variants that apply to
undirected, weighted, unweighted, unipartite and bipartite graphs. We evaluate our method experimentally using examples from social networks, collaborative filtering, trust networks, citation
networks, authorship graphs and hyperlink networks.
- Proceedings of the 6th International Conference on Data Mining (ICDM 2006 , 2006
"... This paper presents a survey as well as a systematic empirical comparison of seven graph kernels and two related similarity matrices (simply referred to as graph kernels), namely the exponential
diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regul ..."
Cited by 19 (6 self)
Add to MetaCart
This paper presents a survey as well as a systematic empirical comparison of seven graph kernels and two related similarity matrices (simply referred to as graph kernels), namely the exponential
diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regularized Laplacian kernel, the commute-time kernel, the random-walk-with-restart similarity
matrix, and finally, three graph kernels introduced in this paper: the regularized commute-time kernel, the Markov diffusion kernel, and the cross-entropy diffusion matrix. The kernel-on-a-graph
approach is simple and intuitive. It is illustrated by applying the nine graph kernels to a collaborative-recommendation task and to a semisupervised classification task, both on several databases.
The graph methods compute proximity measures between nodes that help study the structure of the graph. Our comparisons suggest that the regularized commute-time and the Markov diffusion kernels
perform best, closely followed by the regularized Laplacian kernel.
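As one concrete instance of the kernels being compared, here is a minimal sketch of the commute-time kernel, the Moore-Penrose pseudoinverse of the graph Laplacian (our own illustration):

import numpy as np

def commute_time_kernel(adj):
    # Commute-time kernel: the pseudoinverse L+ of the Laplacian L = D - A.
    # The commute-time distance between nodes i and j is proportional to
    # L+[i, i] + L+[j, j] - 2 * L+[i, j].
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.pinv(laplacian)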
"... We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between
pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves t ..."
Cited by 16 (1 self)
Add to MetaCart
We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs
of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed
values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small "sketch" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.
- in Proceedings of the 14th SIGKDD International Conference on Knowledge Discovery and Data Mining
"... This work introduces a new family of link-based dissimilarity measures between nodes of a weighted directed graph. This measure, called the randomized shortest-path (RSP) dissimilarity, depends
on a parameter θ and has the interesting property of reducing, on one end, to the standard shortest-path d ..."
Cited by 14 (7 self)
Add to MetaCart
This work introduces a new family of link-based dissimilarity measures between nodes of a weighted directed graph. This measure, called the randomized shortest-path (RSP) dissimilarity, depends on a
parameter θ and has the interesting property of reducing, on one end, to the standard shortest-path distance when θ is large and, on the other end, to the commute-time (or resistance) distance when θ
is small (near zero). Intuitively, it corresponds to the expected cost incurred by a random walker in order to reach a destination node from a starting node while maintaining a constant entropy
(related to θ) spread in the graph. The parameter θ is therefore biasing gradually the simple random walk on the graph towards the shortest-path policy. By adopting a statistical physics approach and
computing a sum over all the possible paths (discrete path integral), it is shown that the RSP dissimilarity from every node to a particular node of interest can be computed efficiently by solving
two linear systems of n equations, where n is the number of nodes. On the other hand, the dissimilarity between every couple of nodes is obtained by inverting an n × n matrix. The proposed measure
can be used for various graph mining tasks such as computing betweenness centrality, finding dense communities, etc, as shown in the experimental section.
- In Proceedings of the 11th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2007). Lecture notes in Computer Science, LNCS , 2007
"... This work presents a kernel method for clustering the nodes of a weighted, undirected, graph. The algorithm is based on a two-step procedure. First, the sigmoid commute-time kernel (KCT),
providing a similarity measure between any couple of nodes by taking the indirect links into account, is compute ..."
Cited by 13 (6 self)
Add to MetaCart
This work presents a kernel method for clustering the nodes of a weighted, undirected, graph. The algorithm is based on a two-step procedure. First, the sigmoid commute-time kernel (KCT), providing a
similarity measure between any couple of nodes by taking the indirect links into account, is computed from the adjacency matrix of the graph. Then, the nodes of the graph are clustered by performing
a kernel k-means or fuzzy k-means on this CT kernel matrix. For this purpose, a new, simple, version of the kernel k-means and the kernel fuzzy k-means is introduced. The joint use of the CT kernel
matrix and kernel clustering appears to be quite successful. Indeed, this methodology provides good results, outperforming the spherical k-means, on a document clustering problem involving the
newsgroups database.
Integral Octonions (Part 5)
Posted by John Baez
I’m back from China. I saw lots of cool stuff:
(Click for more information.)
But I also had lots of time sitting in trains, buses and automobiles to think about the $\mathrm{E}_8$ root polytope. Calculations along these lines turn out to be a great antidote to boredom!
So now I’d like to show you how to calculate the number of vertices, edges, 2d faces, etc. of this 8-dimensional polytope. What’s interesting is not the answers so much as the technique, which
involves Dynkin diagrams.
If you take a bunch of equal-sized balls in 8 dimensions, and get as many as possible to all touch a central one, their centers will be the vertices of the $\mathrm{E}_8$ root polytope. Here are two
attempts to draw it—click for more information:
In its full 8-dimensional glory, this shape has:
• 240 vertices
• 6,720 edges
• 60,480 2d faces, which are all equilateral triangles
• 241,920 3d faces, which are all regular tetrahedra
• 483,840 4d faces, which are all regular 4-simplexes
• 483,840 5d faces, which are all regular 5-simplexes
• 207,360 6d faces, which are all regular 6-simplexes
• 19,440 7d faces, consisting of 17,280 regular 7-simplexes and 2,160 regular 7-orthoplexes
• a symmetry group with 696,729,600 elements, including rotations and reflections.
Remember, a 3-simplex is a tetrahedron:
while a 3-orthoplex is an octahedron:
Anyone can look up these crazy numbers on Wikipedia. But how can we see that they’re right?
That’s our challenge for today. It’s easiest if we define the $\mathrm{E}_8$ root polytope starting from this picture:
and use the theory of simply-laced Dynkin diagrams, which are Dynkin diagrams without any multiple edges between dots. (Most of what I’m about to say generalizes to other Dynkin diagrams, but I’ll
save a bit of time by focusing on this case.)
Suppose we've got such a diagram with $n$ dots. At an elementary level, this diagram tells us to take a bunch of unit vectors in $n$-dimensional Euclidean space, and make sure they lie at a $90^\circ$ angle if there's no edge connecting them, and a $120^\circ$ angle if there is an edge connecting them.
We get a lattice called the root lattice by taking all integer linear combinations of these vectors. The nonzero vectors closest to the origin are called roots. These roots are the vertices of an $n$
-dimensional convex polytope called the root polytope.
The roots include the vectors we started with, one for each dot in our Dynkin diagram: these are called the simple roots. But there are also a lot more.
Starting with the $\mathrm{E}_8$ Dynkin diagram, we can in principle figure out everything we want to know about the $\mathrm{E}_8$ root polytope, including how many faces of each dimension it has,
and the size of its symmetry group. But to do such things elegantly, it helps to use some tricks.
Coxeter groups
The main trick is to use the theory of Coxeter groups. Any simply-laced Dynkin diagram gives such a group, and it’s very easy to describe. It has one generator $s_i$ obeying
$s_i^2 = 1$
for each dot in the Dynkin diagram, one relation
$s_i s_j s_i = s_j s_i s_j$
for each pair of dots connected by an edge, and one relation
$s_i s_j = s_j s_i$
for each pair of dots not connected by an edge. Each generator $s_i$ corresponds to a reflection: the reflection that flips the $i$th simple root to its negative.
For example, starting with the Dynkin diagram called $\mathrm{A}_3$:
we get a group with 3 generators, which happens to be the symmetry group of a regular tetrahedron:
So, you should think of the three dots in the Dynkin diagram as standing for ‘vertex’, ‘edge’ and ‘face’. Why? Because the corresponding generators of the Coxeter group are reflections that:
• switch two neighboring vertices,
• switch two neighboring edges, and
• switch two neighboring faces.
The $\mathrm{A}_n$ diagram gives the symmetry group of a regular $n$-simplex in just the same way. A similar story applies to the symmetry group of the regular $n$-dimensional orthoplex, but the
Dynkin diagram for this is not simply laced, so I won’t get into the details. The cases we really need now are the $\mathrm{E}_8$ Dynkin diagram and the sub-diagrams of this.
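To make this concrete, here is a small sanity check in Python — well, my own sketch: the three generators of the $\mathrm{A}_3$ Coxeter group are realized as the adjacent transpositions of the tetrahedron's four vertices, rather than enumerated abstractly from the presentation.

s = [
    (1, 0, 2, 3),  # s1: swap vertices 0 and 1
    (0, 2, 1, 3),  # s2: swap vertices 1 and 2
    (0, 1, 3, 2),  # s3: swap vertices 2 and 3
]

def compose(p, q):
    # Permutation composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2, 3)

# Coxeter relations: squares, braid relations for neighbouring dots,
# and commutation for the non-adjacent pair (s1, s3).
assert all(compose(g, g) == e for g in s)
assert compose(s[0], compose(s[1], s[0])) == compose(s[1], compose(s[0], s[1]))
assert compose(s[1], compose(s[2], s[1])) == compose(s[2], compose(s[1], s[2]))
assert compose(s[0], s[2]) == compose(s[2], s[0])

# Generate the whole group by closure; it has 4! = 24 elements.
group, frontier = {e}, {e}
while frontier:
    new = {compose(g, h) for g in frontier for h in s} - group
    group |= new
    frontier = new
print(len(group))  # 24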
For example, if we pull off the end dot of $\mathrm{E}_8$ we get a sub-diagram called $\mathrm{D}_7$, which looks like this:
or, after prettying it up, this:
Reading from left to right, we can think of the dots here as standing for 'vertex', 'edge', '2d face', '3d face' and so on… but when we get to the right end of the diagram, obviously something funny
must happen!
There are two dots that mean ‘top-dimensional face’. The reason is that this Coxeter group is the symmetry group of a 7-orthoplex that has its 6-dimensional faces colored alternately white and black.
We only consider symmetries that carry faces to faces of the same color. Two 6d faces intersect in at most one 5d face, so there’s no dot for ‘5d face’. We can summarize the story like this:
All the $\mathrm{D}_n$ Dynkin diagrams look similar, with some number of dots in a row followed by two at the end… and they all work the same way. Their Coxeter groups are the symmetry groups of
orthoplexes with their top-dimensional faces colored alternately white and black.
In the case of $\mathrm{E}_8$, something even stranger happens! The top-dimensional faces are of two kinds: 7-simplex and 7-orthoplex:
Other subtleties arise for the sub-diagrams $\mathrm{E}_7$ and $\mathrm{E}_6$. So, you’re probably thinking that this subject is an elaborate quagmire. But it’s not.
Coxeter complexes
To deal with all Coxeter groups in a systematic way, it’s better to think of them as symmetry groups of certain simplicial complexes called ‘Coxeter complexes’. Roughly speaking, a simplicial complex
is a gadget made of 0-simplexes, 1-simplexes, 2-simplexes, 3-simplexes, and so on — all stuck together in a nice way.
If you have a Coxeter diagram with $n$ dots, the highest dimension of the simplexes in its Coxeter complex is $n-1$. There is one of these top-dimensional simplexes for each element of the Coxeter
group. For example, I’ve already said this Dynkin diagram:
gives the Coxeter group consisting of symmetries of a regular tetrahedron — by which I mean all reflections and rotations. This group has 4! = 24 elements, so the Coxeter complex is built from 24
triangles. And in fact, we get it by barycentrically subdividing the surface of a tetrahedron! We can draw it on a sphere, like this:
You can see there are 24 triangles. And, you can see that if you fix a given triangle, there’s a unique symmetry of the Coxeter group mapping it to any other triangle.
In general, the Coxeter group of a Dynkin diagram with $n$ dots always acts as linear transformations of $\mathbb{R}^n$. Each root gives a reflection that flips that root to its negative. So, this
group also acts on the $(n-1)$-sphere. If we take this sphere and chop it up along the hyperplanes corresponding to all the reflections in the Coxeter group, we get the Coxeter complex. See if you
can visualize this in the picture above.
Even better, if you pick any top-dimensional simplex in the Coxeter complex, there is always exactly one element of the Coxeter group that maps it to any other top-dimensional simplex. So the Coxeter
complex is the best possible thing made out of simplexes on which the Coxeter group acts as symmetries!
The size of the Coxeter group corresponding to a Dynkin diagram is always the product of some integers, one for each dot in the Dynkin diagram. These integers have lots of important properties, and
calculating them is really the key to all the problems we’ve set out to solve. To start with, let me just show you these numbers and what we can do with them.
Here they are. For any Dynkin diagram $D$, let $W(D)$ be its Coxeter group, also known as its Weyl group. Here are the sizes of these groups for all the simply-laced Dynkin diagrams:
$\begin{array}{lclcl} |W(\mathrm{A}_n)| &=& 2 \cdot 3 \cdot 4 \cdot \cdots \cdot (n+1) &=& (n+1)! \\ |W(\mathrm{D}_n)| &=& 2 \cdot 4 \cdot 6 \cdot \cdots \cdot 2(n-1) \cdot n &=& (2n)?! \\ |W(\mathrm{E}_6)| &=& 2 \cdot 5 \cdot 6 \cdot 8 \cdot 9 \cdot 12 &=& 51,840 \\ |W(\mathrm{E}_7)| &=& 2 \cdot 6 \cdot 8 \cdot 10 \cdot 12 \cdot 14 \cdot 18 &=& 2,903,040 \\ |W(\mathrm{E}_8)| &=& 2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30 &=& 696,729,600 \end{array}$
It’s easy to see that $W(A_n)$, the symmetry group of the $n$-dimensional simplex, has size equal to
$|W(\mathrm{A}_n)| = (n+1)!$
since the $n$-simplex has symmetries permuting its vertices any way you want. While it’s not a simply-laced example, it’s also easy to see that $W(B_n)$, the symmetry group of the $n$-dimensional
orthoplex, has size equal to
$|W(\mathrm{B}_n)| = 2^n n!$
This number is equal to the double factorial
$(2n)!! = 2 \cdot 4 \cdot \cdots \cdot 2n$
Finally, it’s also easy to see that $W(D_n)$, the symmetry group of the orthoplex with its top-dimensional faces colored alternately white and black, is half as big. So, its size is the half double
factorial of $2n$:
$(2n)?! = 2 \cdot 4 \cdots \cdot 2(n-1) \cdot n$
So, the only problem is computing the size of $W(\mathrm{E}_6)$, $W(\mathrm{E}_7)$, and $W(\mathrm{E}_8)$. I’ll talk about this later.
You might think it’s somewhat arbitrary how we wrote the size of these Coxeter groups as products of numbers—but it’s not! I’ll talk about this more at the end of the article. But here’s a little
taste of the fun:
Puzzle 1. If you double each of these numbers and subtract one, then add up the results, you get the dimension of the Lie group corresponding to this Dynkin diagram. Why?
Let’s check that it works for $\mathrm{E}_8$:
$2 \cdot (2 + 8 + 12 + 14 +18 + 20 + 24 + 30) - 8 = 2 \cdot 128 - 8 = 248$
Yes, this is the dimension of the Lie group $\mathrm{E}_8$!
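Just to see the pattern, here's a tiny numerical check of Puzzle 1, a sketch of my own in Python. The magic numbers are copied from the tables in this post, and the dimensions 24, 45, 78, 133 and 248 are the standard dimensions of the Lie groups $\mathrm{A}_4$, $\mathrm{D}_5$, $\mathrm{E}_6$, $\mathrm{E}_7$ and $\mathrm{E}_8$:

```python
magic = {
    "A4": [2, 3, 4, 5],
    "D5": [2, 4, 6, 8, 5],
    "E6": [2, 5, 6, 8, 9, 12],
    "E7": [2, 6, 8, 10, 12, 14, 18],
    "E8": [2, 8, 12, 14, 18, 20, 24, 30],
}
dim = {"A4": 24, "D5": 45, "E6": 78, "E7": 133, "E8": 248}

for name, ds in magic.items():
    # double each magic number, subtract one, and add up the results
    assert sum(2 * d - 1 for d in ds) == dim[name]
print("Puzzle 1 checks out for all five examples.")
```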
Getting ready to calculate
Assuming we know the size of the Coxeter groups for all the simply-laced Dynkin diagrams, let’s see if we can calculate how many faces of each kind the $\mathrm{E}_8$ root polytope has.
I explained the method in week187. Each Dynkin diagram $D$ describes a kind of ‘incidence geometry’ with different kinds of figures—points, edges, triangles, tetrahedra and so on—one for each dot in
the Dynkin diagram. These figures can be ‘incident’ to each other—e.g., a triangle can lie on a tetrahedron—and there’s one basic incidence relation for each edge in the Dynkin diagram.
The Coxeter group $W(D)$ acts transitively on the figures of any given kind. We can also work out the subgroup of $W(D)$ that preserves a figure of a given kind. To do this, we just remove the
corresponding dot from the Dynkin diagram, and get a sub-diagram $E \subset D$. Then $W(E) \subseteq W(D)$ is the subgroup we want!
So, the set of figures of a given kind is $W(D)/W(E)$. Thus, the number of figures of this kind is the ratio $\frac{|W(D)|}{|W(E)|}$.
To see if you understand this, let’s do an easy example! How many edges does a tetrahedron have? Here our Dynkin diagram $D$ is $\mathrm{A}_3$:
It describes a geometry, namely a tetrahedron, with figures of three kinds:
Suppose we want to count the number of 1-simplexes, or edges. We take the ‘1-simplex’ dot:
and we remove it, obtaining this sub-diagram:
Hmm, this isn’t connected! But that’s okay. This diagram is the disjoint union of two copies of the one-dot Dynkin diagram, $\mathrm{A}_1$. If you follow the rules, you’ll see its Coxeter group is
the product
$W(\mathrm{A}_1) \times W(\mathrm{A}_1)$
So, the number of edges in a tetrahedron is
$\frac{|W(\mathrm{A}_3)|}{|W(\mathrm{A}_1)| \times |W(\mathrm{A}_1)|} = \frac{4!}{2! \times 2!} = 6$
which is right!
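If you like to check such things by machine, here is a minimal sketch of the counting rule in Python. It's my own illustration, not part of the original argument:

```python
from math import factorial

def W_A(n):
    """Order of the Coxeter group of A_n: the symmetric group on n+1 letters."""
    return factorial(n + 1)

# Edges of a tetrahedron: D = A_3, and deleting the 'edge' dot leaves A_1 x A_1.
print(W_A(3) // (W_A(1) * W_A(1)))  # 6
```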
If you’ve never done this sort of stuff, I encourage you to play around with more examples. It’s lots of fun! But having spent years warming up with exercises like that, I’m now going to climb Mount
Everest and count the various figures in the $\mathrm{E}_8$ root polytope.
It starts out easy, but near the end, as you might imagine from this picture, it gets tricky—and I’ll slip and fall. Hanging from an icy cliff, I’ll ask for your help!
If we remove the ‘vertex’ dot from the $\mathrm{E}_8$ Dynkin diagram:
we’re left with $\mathrm{E}_7$:
So, the number of vertices of the $\mathrm{E}_8$ root polytope is
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{E}_7)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 6 \cdot 8 \cdot 10 \cdot 12 \cdot 14 \cdot 18} \\ &=& \frac{20 \cdot 24 \cdot 30}{6 \cdot 10} \\ &=& 240 \end{array}$
as we knew!
Just for fun, in the next to last step I cancelled all the numbers I could from the big fraction in the top line. This is worth doing because a $q$-deformed version of the same calculation lets us
count the number of points in a certain space called a ‘Grassmannian’ for the version of the group $\mathrm{E}_8$ defined over the finite field with $q$ elements, where $q$ is any prime power. I
explained how this works in week186 and week187.
In the $q$-deformed count we replace integers with $q$-integers, defined as follows:
$[n] = 1 + q + q^2 + \cdots + q^{n-1}$
So, for this particular Grassmannian the number of points is
$\frac{[2] \cdot [8] \cdot [12] \cdot [14] \cdot [18] \cdot [20] \cdot [24] \cdot [30]}{[2] \cdot [6] \cdot [8] \cdot [10] \cdot [12] \cdot [14] \cdot [18]}$
or, doing all the cancellations we can,
$\frac{ [20] \cdot [24] \cdot [30]}{[6] \cdot [10]}$
This, believe it or not, is a polynomial in $q$. But we can’t say it’s equal to $[240]$, because in general $[nm] \ne [n][m]$. Only when we set $q = 1$ does this equation hold, and then we get 240
points. So, our problem now, involving Coxeter groups, concerns the special case of the legendary but so far still mythical ‘field with one element’.
But if we wanted, we could look at the group $\mathrm{E}_8$ defined over the field with 2 elements. It acts in a transitive way on a set with
$\frac{ [20] \cdot [24] \cdot [30]}{[6] \cdot [10]}$
points. How many points is that? In this case
$[n] = 1 + 2 + 4 + \cdots + 2^{n-1} = 2^n - 1$
so the number of points is
$\frac{(2^{20} - 1)(2^{24} - 1)(2^{30} - 1)}{(2^6 - 1)(2^{10} - 1)} = 293,091,386,578,365,375$
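If you don't trust that 18-digit number, here is a quick check of my own, assuming only the definition of the $q$-integers above:

```python
def qint(n, q=2):
    # [n] = 1 + q + ... + q^(n-1) = (q^n - 1)/(q - 1)
    return (q**n - 1) // (q - 1)

points = qint(20) * qint(24) * qint(30) // (qint(6) * qint(10))
print(f"{points:,}")  # 293,091,386,578,365,375
```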
If we remove the ‘edge’ dot from the $\mathrm{E}_8$ Dynkin diagram:
we’re left with $\mathrm{A}_1 \times \mathrm{E}_6$:
So, the number of edges of the $\mathrm{E}_8$ root polytope is
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_1)| \times |W(\mathrm{E}_6)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \times (2 \cdot 5 \cdot 6 \cdot 8 \cdot 9 \cdot 12)} \\ &=& \frac{14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 5 \cdot 6 \cdot 9} \\ &=& 6,720 \end{array}$
A 2-simplex is just a triangle. If we remove the ‘2-simplex’ dot:
we’re left with $\mathrm{A}_2 \times \mathrm{D}_5$:
so the number of 2d faces is
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_2)| \times |W(\mathrm{D}_5)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3) \times (2 \cdot 4 \cdot 6 \cdot 8 \cdot 5)} \\ &=& \frac{12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 3 \cdot 4 \cdot 5 \cdot 6} \\ &=& 60,480 \end{array}$
This is a funny picture of a 3-simplex, or tetrahedron. If we remove the ‘3-simplex’ dot:
we’re left with $\mathrm{A}_3 \times \mathrm{A}_4$:
so the number of 3d faces is
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_3)| \times |W(\mathrm{A}_4)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3 \cdot 4) \times (2 \cdot 3 \cdot 4 \cdot 5)} \\ &=& \frac{8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 3 \cdot 3 \cdot 4 \cdot 4 \cdot 5} \\ &=& 241,920 \end{array}$
This is a picture of a 4-simplex. If we remove the ‘4-simplex’ dot from the $\mathrm{E}_8$ Dynkin diagram:
we’re left with $\mathrm{A}_4 \times \mathrm{A}_1 \times \mathrm{A}_2$:
so the number of 4d faces is
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_4)| \times |W(\mathrm{A}_1)| \times |W(\mathrm{A}_2)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3 \cdot 4 \cdot 5) \times 2 \times (2 \cdot 3)} \\ &=& \frac{8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 2 \cdot 3 \cdot 3 \cdot 4 \cdot 5} \\ &=& 483,840 \end{array}$
At this point we meet a mystery: there is no ‘5-simplex’ dot in the $\mathrm{E}_8$ Dynkin diagram!
This is a bit like how there is no 5-simplex dot in the $\mathrm{D}_7$ Dynkin diagram:
The resolution there is that if we have a ‘complete flag’ consisting of one figure of each kind shown in this picture, all incident, there is a unique 5-simplex that’s incident to all the other
figures. Indeed, there’s at most one 5-simplex touching any pair consisting of a white and a black 6-simplex. So, the 5-simplex is ‘redundant’.
The same sort of resolution must, I think, apply to $\mathrm{E}_8$. Understanding this in detail would make a good test of our understanding. But let’s move on!
If we remove the ‘6-simplex’ dot:
we’re left with $\mathrm{A}_6 \times \mathrm{A}_1$:
so the number of 6d faces is apparently:
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_6)| \times |W(\mathrm{A}_1)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7) \times 2} \\ &=& \frac{8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7} \\ &=& 69,120 \end{array}$
But this does not agree with Wikipedia’s answer! It says our polytope has 207,360 6-dimensional faces, which are all regular 6-simplexes. So, we have another mystery on our hands here, which we will
return to.
If we remove the ‘7-simplex’ dot:
we are left with $\mathrm{A}_7$:
so the number of 7-dimensional simplex faces is:
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_7)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7 \cdot 8} \\ &=& \frac{12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{3 \cdot 4 \cdot 5 \cdot 6 \cdot 7} \\ &=& 17,280 \end{array}$
If we remove the ‘7-orthoplex’ dot:
we are left with $\mathrm{D}_7$:
so the number of 7-dimensional orthoplex faces is:
$\begin{array}{ccl} \frac{|W(\mathrm{E}_8)|}{|W(\mathrm{D}_7)|} &=& \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2 \cdot 4 \cdot 6 \cdot 8 \cdot 10 \cdot 12 \cdot 7} \\ &=& \frac{12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{4 \cdot 6 \cdot 7 \cdot 10 \cdot 12} \\ &=& 2,160 \end{array}$
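Since all of these counts come from the same recipe, it may help to see them in one place. Here's a sketch that redoes the whole calculation in Python — again my own illustration; the Weyl group orders are the ones tabulated above:

```python
from math import factorial

def W_A(n): return factorial(n + 1)           # |W(A_n)| = (n+1)!
def W_D(n): return 2**(n - 1) * factorial(n)  # |W(D_n)| = (2n)?!
W_E8, W_E7, W_E6 = 696729600, 2903040, 51840

counts = {
    "vertices":      W_E8 // W_E7,
    "edges":         W_E8 // (W_A(1) * W_E6),
    "2-simplexes":   W_E8 // (W_A(2) * W_D(5)),
    "3-simplexes":   W_E8 // (W_A(3) * W_A(4)),
    "4-simplexes":   W_E8 // (W_A(4) * W_A(1) * W_A(2)),
    "6-simplex dot": W_E8 // (W_A(6) * W_A(1)),  # counts only some 6-simplexes!
    "7-simplexes":   W_E8 // W_A(7),
    "7-orthoplexes": W_E8 // W_D(7),
}
for name, n in counts.items():
    print(f"{name}: {n:,}")
# vertices: 240, edges: 6,720, 2-simplexes: 60,480, 3-simplexes: 241,920,
# 4-simplexes: 483,840, 6-simplex dot: 69,120, 7-simplexes: 17,280,
# 7-orthoplexes: 2,160
```

The ‘6-simplex dot’ line gives the puzzling 69,120 that we return to just below.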
Further mysteries
We’ve got some mysteries to solve.
First, our calculations seemed to show the number of 6-simplex faces of the root polytope of $\mathrm{E}_8$ is 69,120, while Wikipedia says the answer is 207,360. Luckily we can compute the answer
another way, which sheds a little light on this problem.
Our polytope has 17,280 7-simplex faces and 2,160 7-orthoplex faces. Each 7-simplex has 8 faces, all of which are 6-simplexes. Each 7-orthoplex has $2^7 = 128$ faces, all of which are 6-simplexes. So, we
could naively count the total number of 6-simplexes and get
$17,280 \cdot 8 + 2,160 \cdot 2^7 = 138,240 + 276,480 = 414,720$
But this is double-counting, since each 6-simplex lies on exactly two top-dimensional faces. (The top-dimensional faces of a convex polytope always meet in pairs along faces of the next highest dimension.) So the real count is half this:
$\frac{138,240}{2} + \frac{276,480}{2} = 69,120 + 138,240 = 207,360$
This matches the answer on Wikipedia. But it also sheds some light on our wrong answer. Our wrong answer, 69,120, is exactly the total number of faces of all the 7-simplexes. It’s also half the total
number of faces of the 7-orthoplexes. It’s also exactly 1/3 of the correct answer!
All this suggests that the 6-simplex faces of our polytope come in two kinds, which cannot be interchanged by symmetries of the polytope. 1/3 of them are the ones we counted, the kind mentioned in
this diagram:
2/3 of them are 6-simplexes of some other kind, which do not appear in the above diagram.
If this seems weird, remember that this diagram also does not include the 5-simplex faces of our polytope! Also remember that the $\mathrm{D}_7$ diagram, corresponding to the 7-orthoplex, gives
another example where some simplexes come in two kinds, while others do not appear at all, even though they exist as faces of the 7-orthoplex:
Even if this is true, it’s clearly not enough for a full understanding. If our polytope has two kinds of 6-simplex as faces, how do they differ? And where is the number 1/3 coming from?
Here’s my guess. The 7-orthoplex faces of our polytope have 6-simplex faces that can be colored alternately white and black, and all the symmetries of our polytope preserve this two-coloring. As
evidence, note that the symmetries preserving a 7-orthoplex form the group $W(\mathrm{D}_7)$, which preserves such a two-coloring. But I’m guessing we can do this two-coloring in a consistent way, so
every 6-simplex in our polytope is either white or black.
And I’m guessing we can do this so that each 7-orthoplex touches another 7-orthoplex on each of its black faces, and a 7-simplex on each of its white faces! On the other hand, every 7-simplex is
completely surrounded by 7-orthoplexes.
If we can do this:
• Each white 6-simplex is the face of a unique 7-simplex, so the number of white 6-simplexes is
$17,280 \cdot 8 = 138,240$
These are the 6-simplexes we counted.
• Each white 6-simplex is the face of a unique 7-orthoplex, so the number of white 6-simplexes is also
$\frac{2,160 \cdot 2^7}{2} = 138,240$
where we get a factor of $1/2$ because only half the 6-simplex faces of each 7-orthoplex are white. This is a consistency check!
• Each black 6-simplex is the face of two 7-orthoplexes, so the number of black 6-simplexes is
$\frac{2,160 \cdot 2^7}{4} = 69,120$
• The total number of 6-simplexes is
$69,120 + 138,240 = 207,360$
as it says on Wikipedia.
All this fits together nicely… so while I haven’t proved the existence of this ‘consistent two-coloring’ and this arrangement of 7-simplexes and 7-orthoplexes, I believe it.
Puzzle 2. Prove or disprove my guesses here!
Why does our polytope have 483,840 5d faces, which are all regular 5-simplexes? I haven’t figured out how to count them. With work I should be able to do it using my guesses in the previous section, but I made these guesses just a few minutes ago.
Notably, 483,840 is also the number of 4d faces.
Puzzle 3. Show how to count the 5d faces of the $\mathrm{E}_8$ root polytope.
The magic numbers
All my calculations rely on general results that I understand, together with some numbers:
$\begin{array}{lclcl} |W(\mathrm{E}_6)| &=& 2 \cdot 5 \cdot 6 \cdot 8 \cdot 9 \cdot 12 &=& 51,840 \\ |W(\mathrm{E}_7)| &=& 2 \cdot 6 \cdot 8 \cdot 10 \cdot 12 \cdot 14 \cdot 18 &=& 2,903,040 \\ |W(\mathrm{E}_8)| &=& 2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30 &=& 696,729,600 \end{array}$
In fact for my calculations I only need to know the sizes of these three Coxeter groups, but it’s irresistible to try understanding the ‘magic numbers’ that you can multiply to get their sizes.
Puzzle 4. What’s the best way to calculate these magic numbers?
Back in week186 and week187 I gave many characterizations of these magic numbers. Unfortunately, they don’t seem to make it easy to calculate these numbers for $\mathrm{E}_6$, $\mathrm{E}_7$ and $\mathrm{E}_8$.
I bet somebody knows an easy way. But to help you with this puzzle, I’ll tell you eight hard ways!
Start with any Dynkin diagram $D$, not necessarily simply-laced. Let $G(\mathbb{F})$ be the corresponding simple algebraic group over a field $\mathbb{F}$. In particular,
$G = G(\mathbb{C})$
is a complex Lie group. Suppose the diagram $D$ has $n$ dots. Then the magic numbers $d_1, \dots, d_n$ are natural numbers such that
$|W(D)| = d_1 \cdot \cdots \cdot d_n$
1) The real cohomology of $G$ is an exterior algebra on generators $x_1, \dots, x_n$, one for each dot of the Dynkin diagram. The degrees of these generators are all odd, and the degree of $x_i$ is
$2d_i - 1$.
2) The real cohomology of the classifying space $B G$ is a polynomial algebra on generators $y_1, \dots, y_n$. The degrees of these generators are all even, and the degree of $y_i$ is $2 d_i$.
3) The Weyl group $W(D)$ acts on $\mathbb{R}^n$. The algebra of polynomials on $\mathbb{R}^n$ invariant under this group action is a free commutative algebra on generators $z_1, \dots, z_n$. The
polynomial $z_i$ is homogeneous of degree $d_i$.
4) Define the q-polynomial of the Dynkin diagram $D$ to be the polynomial in $q$ where the coefficient of $q^k$ is the number of $k$-cells in the Bruhat decomposition of the flag variety $G/B$. Here
$B$ is the Borel subgroup of $G$, and the Bruhat decomposition is a standard way of writing $G/B$ as disjoint union of $k$-cells, that is, copies of $\mathbb{C}^k$. Then the $q$-polynomial equals
$[d_1] \cdot \cdots \cdot [d_n]$
where $[d_i] = 1 + q + \cdots + q^{d_i - 1}$.
5) If the coefficient of $q^k$ in the $q$-polynomial is $d$, the $(2k)$th homology group of $G/B$ is $\mathbb{Z}^d.$
6) The value of the $q$-polynomial when $q$ is a prime power is the cardinality of $G(\mathbb{F}_q)/B(\mathbb{F}_q)$. Here $\mathbb{F}_q$ is the field with $q$ elements, and $B(\mathbb{F}_q)$ is the
Borel subgroup of the simple algebraic group $G(\mathbb{F}_q)$.
7) The coefficient of $q^k$ in the $q$-polynomial is the number of Coxeter group elements of length $k$. Here the length of any element in the Coxeter group is its minimal length as a word when we write it as a product of the generators $s_1, \dots, s_n$ corresponding to dots in the Dynkin diagram. (There’s a little brute-force check of this description right after this list.)
8) The coefficient of $q^k$ in the $q$-polynomial is the number of top-dimensional simplexes of distance $k$ from a chosen top-dimensional simplex in the Coxeter complex. Here we measure ‘distance’
between top-dimensional simplexes in the hopefully obvious way, based on how many walls you need to cross to get from one to the other.
If you know enough stuff you may have fun proving all these descriptions of the magic numbers are equivalent.
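Here is the promised brute-force check of description 7 for $\mathrm{A}_2$, whose Coxeter group is $S_3$ and whose magic numbers are 2 and 3. It's a sketch of my own, using the standard fact that the Coxeter length of a permutation equals its number of inversions:

```python
from itertools import permutations

def length(p):
    # Coxeter length of a permutation = its number of inversions
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

coeffs = [0, 0, 0, 0]
for p in permutations(range(3)):
    coeffs[length(p)] += 1
print(coeffs)  # [1, 2, 2, 1] -- the coefficients of [2][3] = (1+q)(1+q+q^2)
```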
I have not managed to use any of these descriptions of the magic numbers to solve this problem:
Puzzle 5. Why is the sequence of magic numbers of any Dynkin diagram ‘symmetrical’?
To explain what I mean here, look at the pattern of spacings between magic numbers for $\mathrm{E}_6$:
$\begin{array}{ccccccccccc} \mathbf{2} & & & \mathbf{5} & \mathbf{6} & & \mathbf{8} & \mathbf{9} & & & \mathbf{12} \\ \bullet & \circ & \circ & \bullet & \bullet & \circ & \bullet & \bullet & \circ & \circ & \bullet \end{array}$
It doesn’t change when you reflect it! The same is true for $\mathrm{E}_7$, $\mathrm{E}_8$, and also the non-exceptional cases if you write the magic numbers in increasing order. Why? It’s reminiscent of Poincaré duality, but I don’t see how it could be that.
Puzzles 1 and 5 could be clues for Puzzle 4.
Posted at September 3, 2013 4:19 AM UTC
Re: Integral Octonions (Part 5)
I’ll need to think about it more, but I’m pretty sure that (1) the symmetry is more evident for the magic numbers minus 1, and (2) they’re the exponents appearing in the eigenvalues of the Coxeter
element. Then the symmetry comes from that guy being an orthogonal transformation of Lie(T).
There’s a lot more numerology about these exponents/degrees, and the place to read it is gray Humphreys (“Reflection groups and Coxeter groups”), which I don’t have with me.
Posted by: Allen K. on September 3, 2013 5:02 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Thanks! I’ll have to look at gray Humphreys again.
Here’s a nice thing about the magic numbers minus 1: their sum is the number of positive roots of our Lie algebra! So, I was hoping someone like you could tell me how to partition the positive roots
in some nice way that would let me calculate the magic numbers.
If this were possible, the ‘symmetry’ of the magic numbers could be a clue as to how.
But how can you take the positive roots of:
• $\mathrm{A}_2$ and partition them into bunches of size 1 and 2,
• $\mathrm{B}_2$ and partition them into bunches of size 1 and 3,
• $\mathrm{G}_2$ and partition them into bunches of size 1 and 5,
• $\mathrm{A}_3$ and partition them into bunches of size 1, 2 and 3,
• $\mathrm{B}_3$ and partition them into bunches of size 1, 3 and 5, and so on?
Posted by: John Baez on September 3, 2013 9:37 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
“Height” of the root, = sum of the coefficients when expanded in simple roots. Or, you could call it the weight multiplicities for the action of a principal SL(2).
Which shows you there’s a definition of these numbers for any representation, not just the adjoint representation!
Incidentally, under the geometric Satake correspondence telling you that each irrep V[λ] = IH(Gr^λ inside the affine Grassmannian for the Langlands dual group), the decomposition into weight spaces
for the principal SL(2) is exactly the grading on intersection homology.
Posted by: Allen K. on September 3, 2013 3:07 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
John wrote:
Here’s a nice thing about the magic numbers minus 1: their sum is the number of positive roots of our Lie algebra! So, I was hoping someone like you could tell me how to partition the positive
roots in some nice way that would let me calculate the magic numbers.
If this were possible, the ‘symmetry’ of the magic numbers could be a clue as to how.
But how can you take the positive roots of:
• $A_2$ and partition them into bunches of size 1 and 2,
• $B_2$ and partition them into bunches of size 1 and 3,
Allen wrote:
“Height” of the root, = sum of the coefficients when expanded in simple roots.
I’d considered that, but that doesn’t do the job, does it? For a rank $n$ group, I need a partition of its positive roots into $n$ parts, the size of each being one less than ‘magic number’.
For $A_2$ we have 2 positive roots of height one and 1 of height two… so we get the partition (2,1), which looks good, since the magic numbers are (3,2).
But for $B_2$ we have 2 of height one, 1 of height two and 1 of height three, so we get the partition (2,1,1). But we want the partition (3,1), since the magic numbers in this case are (4,2).
Am I confused?
Posted by: John Baez on September 4, 2013 3:25 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
I was not confused. Walking home I made an outrageous conjecture. I was hoping to become famous for having such a crazy but true idea. However, it turns out to be a theorem in Section 3.20 of Humphreys’ gray book.
To calculate the magic numbers, you calculate how many positive roots of each height there are, and use this to get a partition of the total number of positive roots, just like Allen said. For
example, $\mathrm{B}_2$ has 2 positive roots of height one, 1 of height two and 1 of height three, so we get the partition (2,1,1).
But then you write this partition as a Young diagram:
Then you reflect this Young diagram and get another Young diagram:
Then you read this as a partition, namely (3,1). And finally, you add one to each of these numbers, and you get the magic numbers! For $\mathrm{B}_2$ you get (4,2). But the procedure works in general
for any Dynkin diagram.
This is insane, but it’s true.
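Here's the recipe as a few lines of Python, in case anyone wants to try it on bigger examples. It's my own sketch; the height data for $\mathrm{B}_2$ and $\mathrm{A}_2$ are the counts mentioned above:

```python
def magic_numbers(height_counts):
    # height_counts[i] = number of positive roots of height i+1
    conjugate = [sum(1 for c in height_counts if c > i)
                 for i in range(max(height_counts))]
    return [c + 1 for c in conjugate]   # transpose the Young diagram, add one

print(magic_numbers([2, 1, 1]))  # B_2: (2,1,1) -> (3,1) -> [4, 2]
print(magic_numbers([2, 1]))     # A_2: (2,1)   -> (2,1) -> [3, 2]
```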
Posted by: John Baez on September 4, 2013 1:36 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Allen is correct; this symmetry is Lemma 3.16 in Humphreys. The quick explanation is that the numbers $\zeta^{d_i-1}$, for $\zeta$ a particular $h$th root of unity, are the eigenvalues of the Coxeter element. Since this is a real matrix, its eigenvalues come in complex conjugate pairs. Here $h$ is the Coxeter number, the order of the Coxeter element.
Posted by: Ben Webster on September 3, 2013 3:43 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Okay, Humphreys’ gray book Reflection Groups and Coxeter Groups is indeed a repository of occult wisdom on what I’m calling the ‘magic numbers’.
For example, in Section 3.20 he proves this:
Proposition. If $h$ is the Coxeter number of our Dynkin diagram $D$ and $1 \le m \le h$ is relatively prime to $h$, then $m+1$ is a magic number for $D$.
Recall that the Coxeter number is the number of roots divided by the number of dots in the Dynkin diagram. (For other descriptions, click the link!)
The dimension of the Lie algebra $\mathrm{E}_8$ is 248, and its Dynkin diagram has 8 dots, so the number of roots is 248 - 8 = 240, and its Coxeter number is 240/8 = 30. The numbers between 1 and 30
that are relatively prime to 30 are:
$1, 7, 11, 13, 17, 19, 23, 29$
So, adding one, we get some magic numbers for $\mathrm{E}_8$:
$2, 8, 12, 14, 18, 20, 24, 30$
But there are eight of them, and there’s always just one magic number for each dot in the Dynkin diagram, so these must be all the magic numbers!
This is black magic, and even the normally imperturbable Humphreys gives it an exclamation mark.
By the way, did you notice something funny about these numbers:
$1, 7, 11, 13, 17, 19, 23, 29 ?$
Except for 1, they’re all prime! 30 just happens to be the largest number for which every number besides 1 that’s smaller than it and relatively prime to it is actually prime. I don’t know if this is
important here, but it’s darn suspicious.
For $\mathrm{E}_7$ we have to work harder. The dimension of the Lie algebra $\mathrm{E}_7$ is 133, so the number of roots is 133 - 7 = 126, and the Coxeter number is 126 / 7 = 18. Here are the
numbers smaller than 18 that are relatively prime to 18:
$1, 5, 7, 11, 13, 17$
Adding one, we get magic numbers for $\mathrm{E}_7$:
$2, 6, 8, 12, 14, 18$
Unfortunately we only get six of the seven magic numbers this way. Luckily, Humphreys also proved that the sequence of magic numbers must be ‘symmetrical’ in the way I described. The only possibility
is that the seventh magic number is 10:
$\begin{array}{ccccccccccccccccc} 2 &&&& 6 && 8 && 10 && 12 && 14 &&&& 18 \\ \bullet & \circ & \circ & \circ & \bullet & \circ & \bullet & \circ & \bullet & \circ & \bullet & \circ & \bullet & \circ & \circ & \circ & \bullet \end{array}$
By the way, did you notice anything funny about these numbers:
$1, 5, 7, 11, 13, 17 ?$
For $\mathrm{E}_6$ we must work even harder for our supper, but I’ll leave that as a puzzle for you! To get you started, I’ll remind you that the dimension of the Lie algebra $\mathrm{E}_6$ is 78, so the number of roots is 78 - 6 = 72, so the Coxeter number is 72/6 = 12.
Puzzle 1 may come in handy here.
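If you'd rather let a machine do the modular arithmetic, here's a tiny sketch of Humphreys' recipe, my own illustration:

```python
from math import gcd

def coprime_magic(h):
    # every 1 <= m <= h relatively prime to the Coxeter number h
    # gives a magic number m + 1
    return [m + 1 for m in range(1, h + 1) if gcd(m, h) == 1]

print(coprime_magic(30))  # [2, 8, 12, 14, 18, 20, 24, 30]: all of E8's
print(coprime_magic(18))  # [2, 6, 8, 12, 14, 18]: six of E7's seven
print(coprime_magic(12))  # a start on the E6 puzzle
```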
Posted by: John Baez on September 4, 2013 1:11 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
So, we’ve seen that every integer $1 < m < 30$ that’s relatively prime to 30 is actually prime. We’ve seen that these numbers are deeply related to the Weyl group of $\mathrm{E}_8$.
We’ve seen that every integer $1 < m < 18$ that’s relatively prime to 18 is actually prime, and that these numbers are deeply related to the Weyl group of $\mathrm{E}_7$.
And if you did my last little puzzle, you’ll have seen that every integer $1 < m < 12$ that’s relatively prime to 12 is actually prime, and that these numbers are deeply related to the Weyl
group of $\mathrm{E}_6$.
Numbers with this property are called very round, and the only very round numbers are
$1, 2, 3, 4, 6, 8, 12, 18, 24, 30$
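Here's a quick brute-force confirmation of that list, a sketch of my own:

```python
from math import gcd

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def very_round(n):
    # every 1 < m < n that's relatively prime to n must be prime
    return all(is_prime(m) for m in range(2, n) if gcd(m, n) == 1)

print([n for n in range(1, 1000) if very_round(n)])
# [1, 2, 3, 4, 6, 8, 12, 18, 24, 30]
```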
Is the very roundness of 24 related to other interesting things about this famous number? Wikipedia points out:
The divisors of 24 — namely, $\{1, 2, 3, 4, 6, 8, 12, 24\}$ — are exactly those $n$ for which every invertible element of the commutative ring $\mathbb{Z}/n\mathbb{Z}$ is a square root of 1. Thus
the multiplicative group $\mathbb{Z}/24\mathbb{Z} = \{\pm 1, \pm 5, \pm 7, \pm 11\}$ is isomorphic to the additive group $(\mathbb{Z}/2\mathbb{Z})^3$. This fact plays a role in Monstrous Moonshine.
Unfortunately it does not explain this remark! The best explanation I’ve found lies in Noam Elkies’ remarks in the Addendum to “week172”.
Now I’m wondering if there is an important Lie group which is related to 24 in the same way that $\mathrm{E}_8$, $\mathrm{E}_7$ and $\mathrm{E}_6$ are related to 30, 18 and 12.
Naively we’d be looking for a simple Lie group whose Coxeter number is 24 and whose magic numbers include the numbers you get by taking the numbers less than 24 and relatively prime to it, and adding one:
$1+1, 5+1, 7+1, 11+1, 13+1, 17+1, 19+1, 23+1$
This list of numbers is symmetrical about its mean, which is a property any list of magic numbers must have. If I’m not making a mistake, the only simple Lie groups with Coxeter number 24 are $\mathrm{A}_{23} = \mathrm{SL}(24)$, $\mathrm{B}_{12} = \mathrm{SO}(25)$, $\mathrm{C}_{12} = \mathrm{Sp}(24)$, and $\mathrm{D}_{13} = \mathrm{SO}(26)$. I think all of these have magic numbers including the above list. Of these, I guess $\mathrm{SO}(26)$ looks the most interesting.
Posted by: John Baez on September 5, 2013 6:36 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Over on G+, Layra Idrani wrote:
You can prove the black-white thing by example, or at least I did.
Letting our vertices be $(\pm 1, \pm 1, 0, 0, 0, 0, 0, 0)$ and permutations, and $(\pm 1/2, \ldots, \pm 1/2)$ with an even number of -1/2s, we get that a pair of vertices are connected by an edge iff their inner product is 1.
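(Here's a quick brute-force check of this setup, a sketch of my own: build the 240 vertices just described and count pairs at inner product 1. It recovers the 6,720 edges counted earlier.)

```python
from itertools import combinations, product

roots = []
for i, j in combinations(range(8), 2):       # (+-1, +-1, 0, ..., 0) and permutations
    for si, sj in product([1, -1], repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))
for signs in product([1, -1], repeat=8):     # (+-1/2, ..., +-1/2), evenly many -1/2s
    if signs.count(-1) % 2 == 0:
        roots.append(tuple(s / 2 for s in signs))

print(len(roots))                            # 240
edges = sum(1 for u, v in combinations(roots, 2)
            if abs(sum(a * b for a, b in zip(u, v)) - 1) < 1e-9)
print(edges)                                 # 6720
```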
We can look at the following orthoplex:
Now we figure out which sets of seven can be extended to a simplex via adding another vertex.
Consider a vertex (1,-1,0,0,0,0,0,0). For another vertex to have an edge to it, i.e. to have inner product 1, its first entry must be positive or 0, and its second entry must be negative or 0,
since otherwise there would be a negative term in the inner product; both entries can’t be 0, and if both are non-zero then they must be 1/2 and -1/2 respectively.
Similarly, for any other vertex $v$ in the orthoplex, any vertex $w$ connected to $v$ must either match the sign of $v$ in precisely one of the two non-zero entries of $v$, or $w$ must match both
and all of its coordinates must have magnitude 1/2.
We take a set of seven vertices in our orthoplex such that except for the first entry, if $v$ has a non-zero entry somewhere then the other 6 vertices have 0 there. These are all our
6-dimensional simplices in our orthoplex, since any other set would contain a pair of vertices whose inner product is 0. Suppose that of these, $\{v_i\}$ for $i$ from 1 to $k$ each have a -1 in
position $p_i$. The point that has coordinate -1/2 in position $p_i$ and 1/2 in all other positions is of distance 2 from all the vertices given. Such a point is a vertex iff $k$ is even. So that
gives us 64 6-simplices that extend to 7-simplices. Moreover, these 6-simplices alternate with the remaining 6-simplices, since given a 6-simplex with an even number of vertices that have a -1 in
them, changing any of the vertices such that we still get a 6-simplex in the orthoplex yields a 6-simplex with an odd number of vertices with a -1 in them, and vice versa.
Now we just need to show that the other 64 6-simplices don’t extend to 7-simplices. Consider a set of seven vertices in the orthoplex such that an odd number have an entry of -1 and the seven
vertices form a 6-simplex.
Suppose we have a vertex $w$ with a 0 entry. If it has 1 as its first entry, then it must have an entry of 1 or -1 somewhere else, and thus either matches or is orthogonal to a vertex in our
6-simplex. If $w$ has 0 as its first entry, then it must have non-zero entries in two other places, but as there are then five remaining entries, there is a vertex in our 6-simplex that is 0 in
those two places and hence is orthogonal to $w$. If $w$ has only entries of ±1/2, then it must match each vertex in our simplex wherever that simplex is non-zero, but doing so yields a point with
an odd number of entries of -1/2, which is not a vertex. So these 6-simplices cannot be extended to 7-simplices, and hence must be connected to orthoplexes (orthoplices?).
So this particular orthoplex has precisely half of its 6-faces connected to simplices, and the other half to other orthoplexes, and such faces alternate. By symmetry, this applies to all the orthoplexes.
Posted by: John Baez on September 3, 2013 12:11 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Layra wrote:
These are all our 6-dimensional simplices in our orthoplex, since any other set would contain a pair of vertices whose inner product is 0. Suppose that of these, $\{ v_i\}$ for $i$ from 1 to $k$
each have a -1 in position $p_i$.
I don’t understand this. Is $v_i$ the $i$th component of a vertex, or the $i$th vertex? And what does ‘position $p_i$’ mean? What’s $p$? Or maybe by ‘a -1 in position $p_i$’ you mean that the $i$th
component of the vector $v$ is -1? Or maybe the $i$th component of the vector $v_i$ is $-1$? I’m lost….
Posted by: John Baez on September 5, 2013 3:30 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Sorry about not replying to this earlier.
The $p_i$ are just numbers from 2 to 8, and the intended construction is for the $p_i$th component of the vector $v_i$ to be $-1$.
Posted by: Layra on September 10, 2013 11:23 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
On other aspects of your octonion interests, such as here, did you see the recent Super Yang-Mills, division algebras and triality?
Posted by: David Corfield on September 4, 2013 9:52 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
I’d heard rumors of it but hadn’t seen it—thanks. In SUSY1 John Huerta and I described how to get a super-Yang–Mills theory from one division algebra: an idea not new to us, but we had to explain it
to understand it. That idea must be a special case of this one, where they get a super-Yang–Mills theory from two division algebras.
The idea of combining two division algebras is fundamental to the magic square description of exceptional Lie algebras. It would be fun if that were related to this new work: it would suggest that
the biggest, best Yang–Mills theory, the $\mathbb{O}, \mathbb{O}$ theory in ten dimensions, is related to $\mathrm{E}_8$—since that’s the case that gives $\mathrm{E}_8$ in the magic square.
I wish I could get to the bottom of this stuff, but it seems like a bottomless pool that’s somehow both dazzlingly beautiful and murky. What does it all mean? It’s as if our civilization is lacking
some concepts that would let us answer that question.
Posted by: John Baez on September 4, 2013 10:47 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
So maybe their ten super Yang-Mills theories correspond to the upper (or lower) triangle of the magic square. Wasn’t part of the ‘magic’ of the magic square to do with the symmetry of the entries?
Posted by: David Corfield on September 4, 2013 11:51 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Ah, no. They talk about their existing work – A magic square from Yang-Mills squared – on the magic square, and want to create a ‘magic pyramid’ on it as a base.
Posted by: David Corfield on September 4, 2013 11:55 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
And they’ve created the magic pyramid here.
Posted by: David Corfield on March 17, 2014 10:33 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Your theory as to what the naive counting method was actually counting for the 6-simplices was bothering me, so I checked the calculation and you actually lost a division by 2.
$\frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2! \cdot 7!} = 69,120$
which makes a lot more sense since the 6-simplex node indicates 6-simplices that extend to a 7-orthoplex but not to a 7-simplex, otherwise there ought to be a line connecting the 6-simplex node to
the 7-simplex node, which is what tipped me off.
Not having been aware of this method before, I actually panicked for a while, thinking that I had grievously misunderstood.
In terms of the 5-simplices, the fastest way I can think to do it is to count the number of 5-simplices per 7-simplex (28) and then multiply by the number of 7-simplices (17280). This gives the right
answer. Now we just have to show that this is the correct calculation to do.
We know that we only have to worry about 5-simplices that are contained in 7-simplices, since for any 5-simplex in an orthoplex, it can extend to a white 6-simplex, and uniquely so.
To show that each 5-simplex is in a unique 7-simplex, we look at 4-simplices. According to the diagram, each 4-simplex A is contained in exactly two 7-simplices, denoted B and C. Hence if a 5-simplex
D contains A, then it can only be contained in B or C, no other 7-simplices, and indeed must be contained in at least one of B or C. Since all 5-simplices are identical, if any 5-simplex is in two
7-simplices, each 5-simplex must be in two 7-simplices.
So we assume that D is in both B and C. Since any other D’ that contains A must be in B or C, and D is in two simplices, D’ must also be in both B and C. We know that D’ exists since B has more than
one vertex that isn’t in A.
D and D’ share 5 vertices and are both in the same 7-simplex B, and so thus together define a unique 6-simplex, and that 6-simplex is contained in B. But by assumption, they’re also both in C, and so
that same 6-simplex is also in C. Thus we get that B and C share a 6-simplex, which is impossible.
Hence D is in exactly one of B and C, and similarly every 5-simplex is in precisely one 7-simplex.
Posted by: Layra Idarani on September 4, 2013 10:36 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Layra wrote:
Your theory as to what the naive counting method was actually counting for the 6-simplices was bothering me, so I checked the calculation and you actually lost a division by 2:
$\frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{2! \cdot 7!} = 69,120$
Oh! Good catch! One great thing about blogging is that people catch your mistakes… but in this case, it took real concentration to notice that mistake. Thanks!
Also thanks for helping me count the mysterious 5-simplexes.
According to the diagram, each 4-simplex A is contained in exactly two 7-simplices, denoted B and C.
How do you get that from the diagram? I assume you mean this diagram:
Posted by: John Baez on September 5, 2013 6:56 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Okay, so technically I don’t actually know that the diagram says what I think it does. My general heuristic for how the naive counting method works is that the entire diagram corresponds to complete
flags, and that the subdiagrams correspond to partial flags. So that when you look at a dot, it’s really saying “take a shape that corresponds to this dot; removing the dot gives you a diagram of
which one part corresponds to flags contained in that shape, and the remaining stuff corresponds to flags that contain that shape.”
So if we remove the 4-simplex dot, we get that we have the
diagram corresponding to flags contained in the 4-simplex, and the
diagram corresponding to flags of a 6-simplex in a 7-orthoplex that contain the 4-simplex, and the
diagram corresponding to 7-simplices that contain the 4-simplex.
And then the formula comes from counting all the flags, and then dividing by the number of 0-1-2-3 flags contained in each 4-simplex, dividing by the number of 6-7o flags containing the 4-simplex, and dividing by the number of 7s flags containing the 4-simplex.
So if my heuristic is correct, the fact that the 7s node gives us a factor of 2 is because for each 4-simplex, there are 2 7-simplices that contain that 4-simplex.
Since all of my knowledge about this particular method is contained in your post, I have no idea if this heuristic is correct, but I can’t imagine this method working for all diagrams if it were not
the case that the subdiagrams show up in the denominator the way they do because they are actually counting partial flags.
Posted by: Layra on September 5, 2013 7:24 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Layra wrote:
So if my heuristic is correct, the fact that the 7s node gives us a factor of 2 is because for each 4-simplex, there are 2 7-simplices that contain that 4-simplex.
Okay. Your heuristic sounds like it should be correct, and I should be able to tell if it is.
Admittedly, I find the ‘non-linear’ Dynkin diagrams very mysterious compared to the ‘linear’ ones ($\mathrm{A}_n$, $\mathrm{B}_n$, $\mathrm{C}_n$, $\mathrm{G}_2$, $\mathrm{F}_4$). I spent a lot of time
pondering the $\mathrm{D}_n$ cases and getting to understand ‘2-colored orthoplexes’ before daring to study $\mathrm{E}_6, \mathrm{E}_7$ and $\mathrm{E}_8$. So, to see if your way of telling how many
figures of one type are incident to figures of another works, I should both think about it abstractly and also look at examples, starting with easy cases like $\mathrm{A}_n$ and $\mathrm{B}_n$, and
then non-linear cases like $\mathrm{D}_n$ and the $\mathrm{E}$ series.
Posted by: John Baez on September 5, 2013 3:37 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Layra wrote:
So if my heuristic is correct, the fact that the 7s node gives us a factor of 2 is because for each 4-simplex, there are 2 7-simplices that contain that 4-simplex.
You’re right: there are two 7-simplices containing each 4-simplex. Here’s how I think about it.
First I’ll count the pairs consisting of a 4-simplex and a 7-simplex that contains it. Then I’ll count the 4-simplexes. Then I’ll divide the first number by the second and get 2.
To count the pairs consisting of a 4-simplex and a 7-simplex that contains it, I mark those dots in the Dynkin diagram:
and then remove them:
leaving me with the Dynkin diagram for $\mathrm{A}_4 \times \mathrm{A}_2$. So, the number of these pairs is
$\frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_4)| \times |W(\mathrm{A}_2)|} = \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3 \cdot 4 \cdot 5) \times (2 \cdot 3)}$
On the other hand, I already counted the number of 4-simplexes. But just as a reminder: to do this, we remove just the 4-simplex dot from the Dynkin diagram:
leaving us with $\mathrm{A}_4 \times \mathrm{A}_1 \times \mathrm{A}_2$:
so the number of 4d faces is
$\frac{|W(\mathrm{E}_8)|}{|W(\mathrm{A}_4)| \times |W(\mathrm{A}_1)| \times |W(\mathrm{A}_2)|} = \frac{2 \cdot 8 \cdot 12 \cdot 14 \cdot 18 \cdot 20 \cdot 24 \cdot 30}{(2 \cdot 3 \cdot 4 \cdot 5) \times 2 \times (2 \cdot 3)}$
You can see this is half our previous answer. So, yes: there are two 7-simplexes containing each 4-simplex!
Posted by: John Baez on September 8, 2013 4:13 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
Before I forget:
Puzzle 1. If you double each of the magic numbers and subtract one, then add up the results, you get the dimension of the Lie group corresponding to this Dynkin diagram. Why?
Answer. A self-contained proof can be found in Humphreys’ book Reflection Groups and Coxeter Groups; see Theorem 3.9. Personally I prefer using the fact that the real cohomology of the Lie group $G$
corresponding to the Dynkin diagram is an exterior algebra on generators $x_i$ of degrees $2d_i - 1$. Thus, the highest-degree element in the cohomology of $G$ is the product $x_1 \cdots x_n$, which
has degree
$\sum_{i=1}^n (2d_i - 1)$
But the highest degree possible in the cohomology of $G$ is just the dimension of $G$.
Posted by: John Baez on September 5, 2013 11:34 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
It must have been asked before, and I’m sure it’s implicit somehow in the azimuth polytope series, but where does the apparent grading on the dots come from? That is, why should we draw $D_5$ as
* - * - *
instead of as
* - * - *
Posted by: Jesse McKeown on September 5, 2013 7:06 PM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
That’s a great question! There is no good reason to prefer one grading over another!
The Coxeter complex of a Dynkin diagram (or more precisely Coxeter diagram) treats all dots in an even-handed way. But then, starting from that, you can get different polytopes from choosing
different subsets of the dots. I have chosen one particular way to do this in my $\mathrm{E}_8$ example, by taking a subset consisting only of the leftmost dot in the $\mathrm{E}_8$ Dynkin diagram.
Maybe you can see the pattern in this $\mathrm{B}_3$ example. This example is not simply-laced, so I’ll write “3” on an edge when two roots lie at a 120° angle and “4” when they lie at a 135° angle.
Beware: the use of light and dark dots here is different than in the blog article above; now I’m using them to choose which polytope to look at, not to count faces of a fixed polytope.
truncated cube
truncated octahedron
truncated cuboctahedron
I’m scared to draw what happens when our subset is the empty set. When we put all the dots in our subset, we get the biggest and fanciest polytope, which is dual to the Coxeter complex:
Unfortunately the diagrams above are drawn ‘backwards’ relative to how I drew the $\mathrm{E}_8$ diagram in this post! So, of all the $\mathrm{B}_3$ polytopes shown above, it’s the octahedron:
with the rightmost dot marked, that most resembles the $\mathrm{E}_8$ root polytope, where the leftmost dot of this diagram is marked:
If you want to see what happens when we play this game for $\mathrm{E}_8$, go here. Unfortunately there aren’t pictures of all $2^8 - 1 = 255$ polytopes formed by choosing all possible nonempty
subsets of the 8 dots—just a few of the most famous ones.
If anyone wants more detail, hop on over to my posts on Azimuth where I’m explaining this stuff in loving detail! There I’m slowly going from 3 dimensions up to 4… while here I’m starting at
dimension 8 and aiming for 10.
Here’s the series: part 1, part 2, part 3, part 4, part 5, part 6, part 7, part 8 and part 9. There’s more, but these explain what you’re asking about.
Posted by: John Baez on September 6, 2013 4:41 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
The Gosset symmetries go down as far as two dimensions, where E_2 is represented by the body-centred 1:sqrt(7) lattice. The relevant Gosset series goes
2, 6, 10, 16, 27, 56, 200 (dec 240).
The product of these numbers gives the order of E_n.
The set of lattices represents (after 3d) the third trigonal group. The lattice A_n is represented by the 60-degree rhombotope, with ‘deep holes’ at x/(n+1) of the long diagonal. The vertex is in a
‘deep hole’ by symmetry.
If you take layers of spheres, of shape A(n-1), and stack them at the first, second or third holes, you get the lattices corresponding to the first, second and third trigonal groups (or An, Bn and En respectively). The zeroth one, Zn, which gives a prismatic stack of A(n-1) layers, also counts.
When the trigonal lattices are used to pack spheres of diameter sqrt(2) at the vertices, the volume of the Voronoi cell is the square root of the number of stations (or standing points): An = n+1, Bn = 4, En = 9-n, Zn = 2n. The stations are the ‘deep holes’ or centres of cells plus the vertices, and represent the different places where the next layer can be stacked.
The packing efficiency is then 1/volume, which is maximal for the first 8 dimensions in a trigonal packing.
In 3, 5, 6 and 7 dimensions, it is possible to have a non-lattice packing that is just as efficient. In 3d, this is the hexagonal close-pack. In 5, 6, and 7 dimensions, it is possible to construct E5, E6 and E7 from layers of Bn (which is the semicubic), by packing these as a lattice. But the layers are far enough apart that one can place the second layer in either position, which allows a non-lattice packing.
In 9 dimensions, the most efficient packing is a quarter-cubic, and if you jiggle it carefully, it will come apart! Strange but true.
Posted by: Wendy Krieger on November 10, 2013 10:09 AM | Permalink | Reply to this
Re: Integral Octonions (Part 5)
The way to derive the ‘surtope consist’ (the list of surface polytopes) of a polytope is to use the SA rule. The marked nodes are actually edges, and the vertex node is connected to every edge-node.
An S mirror is a ‘surround’ mirror, which reflects a surtope into itself, but changes the bits around.
An A mirror is an ‘around’ mirror, which leaves the surtope unchanged: the surtope is in the mirror.
A W mirror is a ‘wall mirror’, which reflects a surtope onto a different copy: that is the image does not coincide with the original.
The S and A mirrors live inside a room bounded by W mirrors, and operate on the same surtope. So if we count the number of rooms, as G/SA, we get the number of surtopes.
The ‘vertex node’ 0, is connected to every marked node. A surtope is made of the vertex-node, and 0 or more mirror nodes, which must be connected to the vertex node.
A polytope like x3o3x5o (the ‘runcinated 3,3,5’), has four nodes, numbered from 1 to 4. The 0 node is connected to 1 and 3.
The vertex is 0. This is directly connected to 1 and 3 (wall nodes), and not connected to 2 and 4. The A symmetry is then of order 4, and the S symmetry is 1, so the count is 14400/4/1 = 3600.
0 = vertex, W=1,3 A=2,4 giving 14400/1/4 = 3600
The edges are 01 and 03. The walls for 01 are 23, and for 03, 234. The A-nodes come to the rest (ie 4, and none), so there are 14400/4 of the former, and 14400/2 of the latter. ie
01 = edge, W=2,3 A=4, giving 14400/2/2 = 3600
03 = edge, W=1,2,4 A=- giving 14400/2/1 = 7200
The hedra (2d patches) are 012, 013, 023, and 024.
012 = triangle, W=3, A=4, giving 14400/6/2 = 1200.
013 = square, W=2,4 A=- giving 14400/4/1 = 3600
023 = triangle, W=1,4 A=- giving 14400/6/1 = 2400
024 = pentagon, W=1,2 A=- giving 14400/10/1 = 1440
The chora (3d patches) are 0123, 0134, and 0234. There are no A-nodes, the unreferenced nodes are all wall-nodes.
0123 = cuboctahedron, giving 14400/24/1 = 600
0134 = pentagonal prism, giving 14400/20/1 = 720
0234 = icosidodecahedron, giving 14400/120/1 = 120.
The tera (4d patches) are 01234.
01234 = cantellated 335, giving 14400/14400/1 = 1.
And there you go. No great mystery.
For something like the twelftycell o3o3o5x, we have that 0 is connected to node 4 only, so
surtope consist:
0 = vertex, W = 4, A=1,2,3 14400/1/24 = 600
04 = edge , W = 3, A=1,2 14400/2/6 = 1200
043 = pentagon, W=2, A=1. 14400/10/2 = 720
0432 = dodecahedron, W=1, A=- 14400/120/1 = 120
Terminology follows the polygloss at my web site.
Posted by: Wendy Krieger on November 10, 2013 10:34 PM | Permalink | Reply to this
Math Forum Discussions
Topic: unstable solution of a nonlinear system with linear constraints
Replies: 4 Last Post: Nov 28, 2013 10:54 PM
ARQ unstable solution of a nonlinear system with linear constraints
Posted: Nov 15, 2012 11:32 AM
Hello!
I am currently solving a system of 4 nonlinear equations with linear constraints.
I use fmincon to do that and always converge to a solution, but I expect (hope for) another unstable solution to exist. What is the best way to find it numerically (if possible)?
Thank you!
Date Subject Author
11/15/12 unstable solution of a nonlinear system with linear constraints ARQ
11/15/12 Re: unstable solution of a nonlinear system with linear constraints Matt J
11/15/12 Re: unstable solution of a nonlinear system with linear constraints Alan Weiss
11/16/12 Re: unstable solution of a nonlinear system with linear constraints Matt J
11/28/13 Re: unstable solution of a nonlinear system with linear constraints Leyla
Patent US20020152054 - Peak time detecting apparatus and method
[0001] 1. Field of the Invention
[0002] The present invention relates to a peak time detecting apparatus and a peak time detecting method and, more particularly, to a peak time detecting apparatus that detects a peak time of
time-series signals by using the wavelet transformation and a peak time detecting method that detects a peak time of time-series signals inputted by using the wavelet transformation.
[0003] 2. Description of the Related Art
[0004] Various peak time detecting methods of the aforementioned type have been proposed, including a method in which a differential coefficient is calculated with respect to input time-series signals and, based on fluctuations of the differential coefficient, a peak and a peak time are detected, and a method in which the maximum value of input time-series signals is tracked, the maximum value retained before the signal value decreases below a pre-set threshold is set as a peak value, and the time point of detection of the maximum value is detected as a peak time, etc.
[0005] However, in the method of detecting a peak time based on fluctuations of the differential coefficient, false detection due to noise is likely to occur, and the reliability is low. In the method in which the time of detection of the current maximum value is set as a peak time when the input time-series signal drops below a pre-set threshold, detection of a peak time cannot be performed until the input signal falls below the threshold, so detecting a peak time takes extra time.
[0006] A first peak of signals from a deceleration sensor used to activate an occupant protection apparatus that protects occupants at the time of a crash of the vehicle, such as an airbag apparatus or the like, is normally found when a bumper reinforcement provided forward of the side members of a vehicle yields to an impact. Input signals up to the proximity of the first peak are used to determine
a form of crash (a frontal collision, a diagonal collision, an offset collision, etc.), or to determine a timing of activating an occupant protection apparatus and a kind of the activation of the
occupant protection apparatus, although the situation may vary depending on the configuration of a vehicle. If a peak time is detected with respect to signals from the deceleration sensor used by the
occupant protection apparatus, the detection precision and the promptness in detecting a peak time become important factors.
[0007] It is an object of the peak time detecting apparatus and the peak time detecting method of the invention to reduce false detections caused by noise, so as to detect the peak time of input signals more precisely. It is another object of the peak time detecting apparatus and the peak time detecting method of the invention to promptly detect a peak time.
Furthermore, it is an object of the peak time detecting apparatus of the invention to determine the validity of peak time detection.
[0008] In order to achieve at least one of the aforementioned objects, the peak time detecting apparatus and the peak time detecting method of the invention adopt the following means.
[0009] A peak time detecting apparatus in accordance with a first aspect of the invention is a peak time detecting apparatus for detecting a peak time of a time-series
signal by using a wavelet transformation, including: signal input means for inputting the time-series signal; product-sum operation means for performing a product-sum operation with respect to the
time-series signal inputted, by using a predetermined complex function as an integral base; phase calculation means for calculating a phase based on a real number portion and an imaginary number
portion of a result of the product-sum operation; and peak time determination means for determining a peak time of the time-series signal based on the phase calculated.
[0010] In the peak time detecting apparatus of the first aspect of the invention, the product-sum operation means performs the product-sum operation with respect to the time-series signal inputted by
the signal input means, by using a predetermined complex function as a base of integral. The phase calculation means calculates a phase based on the real number portion and the imaginary number
portion of a result of the product-sum operation. The peak time determination means determines a peak time of the time-series signal based on the calculated phase. The wavelet transformation is
excellent for the analysis of a time-series signal in a time region and a frequency region, in comparison with a short-time Fourier transformation. If a transformation frequency and waveforms of the
real number portion and the imaginary number portion are suitably selected, the wavelet transformation allows analysis of a targeted signal. The peak time detecting apparatus of the first aspect
detects a peak time of a time-series signal through the use of a signal analysis based on the wavelet transformation.
[0011] Since the peak time detecting apparatus of the first aspect performs the product-sum operation with respect to the input time-series signal by using the predetermined complex function, and
does not perform a differential operation, the peak time detecting apparatus is able to avoid false detection due to noise. As a result, the detection precision can be improved. Furthermore, since
the determination of a peak time is performed based on the phase calculated based on the real number portion and the imaginary number portion of a result of the product-sum operation, the
determination can be made immediately after an actual peak. Therefore, the apparatus is able to detect a peak time quickly, in comparison with an apparatus that determines a peak time when the signal
becomes lower than a pre-set threshold. Furthermore, the arithmetic operations performed in the apparatus are the product-sum operation with respect to the time-series signal, the phase calculation
with respect to a result of the product-sum operation, etc., and can be quickly performed. Therefore, a peak time can be promptly detected.
[0012] In the peak time detecting apparatus of the first aspect of the invention, the product-sum operation means may be means that uses a Gabor function as the predetermined complex function.
Furthermore, the product-sum operation means may also be means that uses, as the predetermined complex function, a function that includes a real number portion having a localized waveform and an
imaginary number portion having a localized waveform that is delayed by π/2 in phase from the real number portion. In the thus-constructed peak time detecting apparatus of the invention, the peak
time determination means may be means for determining, as the peak time, a time point at which the phase calculated by the phase calculation means changes from 2π to zero. If a function that has a
real number portion having a localized waveform and an imaginary number portion having a localized waveform that is delayed by π/2 in phase from the real number portion, including the Gabor function,
is used as a base of integral, the product sum of the real number portion becomes a positive value when the real number portion is superimposed on a peak of the signal. In that case, the imaginary
number portion, being delayed by π/2 in phase, assumes zero, and therefore the product sum of the imaginary number portion is zero. Therefore, by suitably selecting signs of the real number portion
and the imaginary number portion, it becomes possible to determine a time point at which the phase calculated based on the real number portion and the imaginary number portion of a result of the
product-sum operation changes from 2π to zero, as a time at which the signal is at a peak.
[0013] The peak time detecting apparatus of the first aspect of the invention may further include validity determination means for determining a validity of a result of determination made by the peak
time determination means. Therefore, the validity of the detected peak time can be taken into account. In the thus-constructed peak time detecting apparatus of the invention, the phase calculation
means may be means for calculating a phase regarding a result of the product-sum operation with respect to a peak time detection-purposed transformation frequency and a phase regarding a result of
the product-sum operation with respect to a validity determination-purposed transformation frequency that is higher than the peak time detection-purposed transformation frequency, and the validity
determination means may be means for determining the validity based on the phase calculated regarding the result with respect to the validity determination-purposed transformation frequency. In the
wavelet transformation, increases in the transformation frequency make it possible to detect peaks at higher frequencies in addition to a peak at a frequency intended for the signal. Therefore, the
phase regarding a result of the product-sum operation using the validity determination-purposed transformation frequency that is higher than the peak time detection-purposed transformation frequency
allows more sensitive peak detection than the phase regarding a result of the product-sum operation using the peak time detection-purposed transformation frequency. Therefore, by comparing the peak
time detected through the use of the transformation frequency that allows more sensitive peak detection and the peak time determined by the peak time determination means, it is possible to determine
the validity of the peak time determined by the peak time determination means. The determination of validity includes determination as to whether a peak time has gone without being detected, etc. In
an example of such determination, if with regard to detection of a first peak time of a time-series signal, a peak is detected in the phase regarding a result of the product-sum operation using the
validity determination-purposed transformation frequency whereas a corresponding peak is not detected and the time of a second peak is detected as a peak time in the phase regarding a result of the
product-sum operation using the peak time detection-purposed transformation frequency, it is then determined that the determined peak time is uncertain as the first peak time in terms of validity, or
is undetected. Still further, in the thus-constructed peak time detecting apparatus of the invention, the validity determination-purposed transformation frequency may be 1.2 to 2.0 times the peak time detection-purposed transformation frequency.
[0014] In the peak time detecting apparatus of the first aspect that includes the validity determination means, the validity determination means may be means for determining that a valid
determination is made if a peak time is determined within a predetermined time before and after a time point at which the phase calculated regarding the result with respect to the validity
determination-purposed transformation frequency changes from 2π to zero.
[0015] Still further, in the peak time detecting apparatus of the first aspect of the invention, the time-series signal may be a signal formed by removing a high-frequency component from a signal
detected by deceleration detection means provided in a vehicle, and the phase calculation means may be means for calculating a phase regarding a result of the product-sum operation with respect to a
predetermined transformation frequency within a range of 100 to 150 Hz. Therefore, it becomes possible to regard the deceleration of the vehicle as a time-series signal and detect a peak time of the deceleration signal.
[0016] A peak time detecting method in accordance with a second aspect of the invention is a peak time detecting method for detecting a peak time of an input time-series signal by using a wavelet
transformation, including: performing a product-sum operation with respect to the input time-series signal by using a predetermined complex function as an integral base; calculating a phase based on
a real number portion and an imaginary number portion of a result of the product-sum operation; and detecting a peak time based on the phase calculated.
[0017] Since the peak time detecting method of the second aspect performs the product-sum operation with respect to the input time-series signal by using the predetermined complex function, and does
not perform a differential operation, the peak time detecting method is able to avoid false detection based on noises. As a result, the detection precision can be improved. Furthermore, since the
determination of a peak time is performed based on the phase calculated based on the real number portion and the imaginary number portion of a result of the product-sum operation, the determination
can be made immediately after an actual peak. Therefore, the method is able to detect a peak time quickly, in comparison with a method that determines a peak time when the signal becomes lower than a
pre-set threshold. Furthermore, the arithmetic operations performed in the method are the product-sum operation with respect to the time-series signal, the phase calculation with respect to a result
of the product-sum operation, etc., and can be quickly performed. Therefore, a peak time can be promptly detected.
[0018] In the peak time detecting method of the second aspect of the invention, a Gabor function may be used as the predetermined complex function, or a function that includes a real number portion
having a localized waveform and an imaginary number portion having a localized waveform that is delayed by π/2 in phase from the real number portion may be used as the predetermined complex function.
In the thus-constructed peak time detecting method of the invention, the peak time detecting step may be a step of detecting, as the peak time, a time point at which the phase calculated changes from
2π to zero.
[0019] The foregoing and further objects, features and advantages of the present invention will become apparent from the following description of preferred embodiments with reference to the
accompanying drawings, wherein like numerals are used to represent like elements and wherein:
[0020]FIG. 1 is a functional block diagram schematically illustrating a construction of a peak time detecting apparatus in accordance with an embodiment of the invention that accepts input of signals
from vehicle-installed G sensors;
[0021]FIG. 2 illustrates an example of the installation of the G sensors and the peak time detecting apparatus of the embodiment in a vehicle;
[0022]FIG. 3 is a diagram schematically illustrating a hardware construction of the peak time detecting apparatus of the embodiment;
[0023]FIG. 4 is a diagram illustrating an example of the expression of a Gabor function on a time axis;
[0024]FIG. 5 is a diagram indicating a relationship among the real number portion R, the imaginary number portion I, the magnitude P and the phase θ of the wavelet transformation X(a, b);
[0025]FIG. 6 is a diagram illustrating a relationship between the time-series signal X(t) and the phase θ(t) of the wavelet transformation X(a, b);
[0026]FIG. 7 is a diagram indicating a relationship between the window width and the window coefficient K;
[0027]FIG. 8 is a diagram indicating a relationship between the phase θ(t) and the signal (deceleration signal) detected by a G sensor during a crash of the vehicle;
[0028]FIG. 9 is a diagram indicating a relationship between the phase θ(t) and the signal (deceleration signal) detected by the G sensor during a crash of the vehicle;
[0029]FIG. 10 is a flowchart illustrating a peak time detecting process routine executed by the peak time detecting apparatus;
[0030]FIG. 11 is a diagram illustrating a process performed when successive peak times are detected; and
[0031]FIG. 12 is a flowchart illustrating a peak time validity determining process routine executed by the peak time detecting apparatus.
[0032] Preferred embodiments of the invention will be described hereinafter with reference to the accompanying drawings. FIG. 1 is a functional block diagram schematically illustrating a construction
of a peak time detecting apparatus 20 in accordance with an embodiment of the invention that accepts input of signals from vehicle-installed G sensors 12, 14, 16 for detecting acceleration. FIG. 2
illustrates an example of the installation of the G sensors 12, 14, 16 and the peak time detecting apparatus 20 of the embodiment in a vehicle. The peak time detecting apparatus 20 of the embodiment
is installed together with the G sensor 12 near a center console of the vehicle as shown in FIG. 2. As shown in FIG. 1, the peak time detecting apparatus 20 includes a signal input portion 22 for
inputting detection signals from the G sensor 12 and the G sensors 14, 16 installed at forward right and left locations in the vehicle at a predetermined sampling timing, a product-sum operation
portion 24 for performing a product-sum operation with respect to the input signal from each G sensor by using a complex function as a base of integral, a phase calculation portion 26 for calculating
a phase of the real number portion and the imaginary number portion of a result of the product-sum operation, a peak time detection portion 28 for detecting a peak time of the signal based on the
calculated phase, and a validity determination portion 29 for determining a validity of the peak time detected by the peak time detection portion 28.
[0033] The G sensors 12, 14, 16 are installed in the vehicle for detecting a timing of activating a vehicle-installed occupant protecting apparatus, such as an airbag apparatus or the like. The peak
time detecting apparatus 20 of this embodiment is used for processing prior to the determination of the road condition or the form of a crash (a frontal collision, a diagonal collision, an offset collision, etc.) based on signals obtained from the G sensors 12, 14, 16 when the vehicle runs on a rough road or crashes, and prior to the determination of the timing of activating the occupant protecting apparatus and the fashion of activating the apparatus (e.g., the airbag deploying speed in the case of an airbag
apparatus). The first peak in the waveform of deceleration (negative acceleration) detected by each G sensor 12, 14, 16 during a crash of the vehicle is normally found when a bumper reinforcement
disposed forward of side members of the vehicle yields to an impact, although this may not be the case depending on the form of the crash. The waveform detected by each G sensor 12, 14, 16 up to the
proximity of the first peak often varies depending on the form of the crash, although this may not be the case depending on the configuration of the vehicle.
[0034] In terms of a hardware construction, the peak time detecting apparatus 20 of this embodiment is formed as a microcomputer that includes a CPU 32 as a central component as shown in FIG. 3. The
peak time detecting apparatus 20 further includes a ROM 34 storing processing programs, a RAM 36 for temporarily storing data, and an input processing circuit 38 that forms a portion of the signal
input portion 22. The portions of the peak time detecting apparatus 20 of this embodiment exemplified in FIG. 1 function in a fashion in which the hardware and the software are integrated, when a
processing program stored in the ROM 34 is started.
[0035] Next, operation of the peak time detecting apparatus 20 of this embodiment constructed as described above and, in particular, an operation of detecting a peak time from an input signal will be
described together with the principle of the operation. First, the principle of detection of a peak of an input signal will be described. The peak time detecting apparatus 20 of this embodiment detects a peak of a
signal from each of the G sensors 12, 14, 16 and the time of the peak by using the wavelet transformation. The wavelet transformation X(a, b) of a time-series signal X(t) is a development as
exemplified in equation (2) in which a family of similar functions φa,b(t), obtained through an “a”-fold scale transformation of a basic wavelet function φ(t) localized in terms of time and frequency followed by a shift transformation (translational transfer) of the origin by “b” as expressed in equation (1), is a basis function. The scale transformation parameter “a” is proportional to the reciprocal of the frequency f.
φa,b(t) = a^(−1/2)·φ((t−b)/a)  (1)
X(a, b) = Σt X(t)·φ*a,b(t)  (2)
[0036] As for the basic wavelet function φ(t), the peak time detecting apparatus 20 of this embodiment uses a Gabor function as in equation (3), that is, a complex function where the imaginary number
portion is π/2 shifted in phase from the real number portion. In equation (3), ωo is a constant determined by the frequency f (ωo=2πf), and α is another constant.
φ(t) = exp(−αt^2 + iωot) = exp(−αt^2)·(cos(ωot) + i·sin(ωot))  (3)
[0037] An expression of the Gabor function on the time axis where α=π in equation (3) is exemplified in FIG. 4. As indicated in FIG. 4, the Gabor function is localized within the range of −T to T on
the time axis, and the waveforms of the real number portion and the imaginary number portion are π/2 shifted from each other. The wavelet transformation of the time-series signal X(t) is,
specifically, a product-sum operation of the time-series signal X(t) and a function including a suitably selected scale transformation parameter “a” (ωo in equation (3)). The interval of the arithmetic operation is the range in which the waveform is localized (the range of −T to T in FIG. 4). This range will be referred to as the “window”.
[0038] The wavelet transformation X(a, b) of the time-series signal X(t) based on the Gabor function is a complex number since the Gabor function is a complex function. FIG. 5 indicates a relationship among the real number portion R, the imaginary number portion I, the magnitude P and the phase θ of the wavelet transformation X(a, b). The magnitude P is calculated as in equation (4). The phase θ is determined by equation (5).
P = (R^2 + I^2)^(1/2)  (4)
θ = arg(R + iI), 0 ≤ θ < 2π  (5)
[0039] The magnitude P means an expedient magnitude of the wavelet transformation X(a, b), and is a non-dimensional quantity. The phase θ ranges between 0 and 2π depending on the magnitudes and signs
of the real number portion R and the imaginary number portion I.
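A minimal numerical sketch of the product-sum operation and of equations (2) to (5) follows (illustrative only; the normalization of time inside the Gaussian envelope and the use of the complex conjugate in the product-sum are assumptions, since the patent leaves both implicit):

import numpy as np

f, fs, alpha = 125.0, 2000.0, np.pi        # transformation frequency, sampling rate, α
T = 1.0 / f                                # period; the window spans −T..T (FIG. 4)
t = np.arange(-T, T + 1.0 / fs, 1.0 / fs)  # sampled window
w0 = 2.0 * np.pi * f                       # ωo = 2πf, as in equation (3)

# Gabor function of equation (3); time is normalized by T so that α = π
# localizes the envelope on −T..T (an assumption)
phi = np.exp(-alpha * (t / T) ** 2) * np.exp(1j * w0 * t)

x = np.cos(w0 * t)                         # toy in-band segment peaking at t = 0
X = np.sum(x * np.conj(phi))               # product-sum, equation (2)
R, I = X.real, X.imag
P = np.hypot(R, I)                         # magnitude, equation (4)
theta = np.arctan2(I, R) % (2 * np.pi)     # phase in [0, 2π), equation (5)
print(theta)                               # ≈ 0 (2π wrapping to zero) at a peak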
[0040]FIG. 6 is a diagram illustrating a relationship between the time-series signal X(t) and the phase θ(t) of the wavelet transformation X(a, b). In the diagram, the frequency of the time-series
signal X(t) is 50 Hz in an interval A, 100 Hz in an interval B, and 200 Hz in an interval C. The sampling frequency of the time-series signal X(t) is 2 kHz. With regard to the phase θ(t), transformation frequencies f are set as follows. That is, a range of frequencies is set to 1.5 octaves above and below 125 Hz, and transformation frequencies f are set at increments of ½ octave within the range. The window width is set to twice the period T of each frequency f (T=1/f). Since the transformation frequency f and ωo in the Gabor function in equation (3) have a relationship of ωo
=2πf as mentioned above, the product-sum operation for determining the phase θ(t) is an arithmetic operation using the Gabor function in equation (3) obtained by substitution of the constant ωo
determined by the transformation frequency f with respect to the time-series signal X(t).
[0041] As indicated in the diagram of FIG. 6, the phase θ(t) of the transformation frequency f close to the frequency of the time-series signal X(t) changes from 2π to zero at time points (t1, t3, t5
in the diagram) when the amplitude of the time-series signal X(t) reaches a local maximum (peak). The phase θ(t) becomes π at time points (t2, t4, t6 in the diagram) when the amplitude reaches a
local minimum (bottom). This is explained as follows. In the Gabor function expressed in equation (3), the waveform of the imaginary number portion I is π/2 shifted with respect to the waveform of
the real number portion R as exemplified in FIG. 4. Let it be assumed that the transformation frequency f is presently substantially equal to the frequency of the time-series signal X(t). When the amplitude of the time-series signal X(t) is maximum, the waveform of the time-series signal X(t) and the waveform of the real number portion R of the Gabor function match in such a manner that they are substantially superimposed on each other, and therefore the product-sum operation of the real number portion R provides a positive value. In contrast, the waveform of the imaginary number portion
I, shifted by π/2, produces a value of zero through the product-sum operation. Therefore, the calculation of the phase θ based on equation (5) provides a value of 2π or zero. If suitable signs of the
real number portion R and the imaginary number portion I in equation (3) are selected, the phase θ(t) will change from 2π to zero in the proximity of each maximum. When the amplitude of the
time-series signal X(t) is minimum, the waveform of the time-series signal X(t) and the waveform of the real number portion R of the Gabor function superimpose in opposite signs, and therefore the
product-sum operation of the real number portion R provides a negative value, and the product sum of the imaginary number portion I becomes “0”. Therefore, the phase θ is calculated as π in equation (5). With regard to the above-described relationship, it is not necessary that the frequency of the time-series signal X(t) and the wavelet transformation frequency f be perfectly equal, but a satisfactory result can be obtained if a frequency close to or higher than the frequency of the time-series signal X(t) is set as the transformation frequency f, as can be
understood from FIG. 6.
[0042] A detection time delay td with respect to a peak time equals half the time that is needed for the arithmetic operation when the peak accords with the waveform of the real number portion R of
the Gabor function (i.e., the arithmetic operation interval), that is, the period T, as can be understood from the waveform shown in FIG. 4. For example, if the transformation frequency f is 125 Hz, the
detection time delay td is 8 msec. If detection of a peak time of the time-series signal X(t) is the only purpose, the arithmetic operation interval does not need to be set to the entire range of the
window indicated in FIG. 4, but may be set to a reduced range centered at the peak of the waveform of the real number portion R. In this case, the ratio of the arithmetic operation interval to the
width of the window is termed window coefficient K. FIG. 7 indicates a relationship between the window width and the window coefficient K. The detection time delay td and the window coefficient K
have a relationship of td=K·T. For example, if the transformation frequency f is 125 Hz and the window coefficient K is 0.125, the detection time delay td is 1 msec. Thus, decreases in the window
coefficient K decrease the detection time delay td, and decrease the amount of operation as well.
[0043] Next described will be the selection of transformation frequencies f for signals from the G sensors 12, 14, 16. FIGS. 8 and 9 are diagrams each illustrating a relationship between the phase θ(t) and a signal detected by the G sensor 12 when the vehicle crashes (i.e., a deceleration signal). Normally, the deceleration signal X(t) exhibits various waveforms depending on the shape and construction
of the vehicle, the form of crash, etc., and contains considerable high-frequency components due to vibrations of the vehicle. Therefore, high-frequency components are removed from signals from the G
sensor 12 by using a Kalman filter, and a moving average value of high-frequency component-removed signals is determined as an input signal. As can be understood from FIGS. 8 and 9, the peak time
detection becomes sensitive in phases θ(t) of relatively high transformation frequencies f, for example, the phases θ(t) of f=250 Hz and 354 Hz, in which peaks that cannot be considered as peaks are
detected (see the right-hand side of the peak time tp in FIG. 8). In contrast, in phases θ(t) of relatively low transformation frequencies f, for example, the phases θ(t) of f=44 Hz and 63 Hz, the
peak time detection becomes dull, and detection of a peak fails in some cases (see the peak time tp in FIG. 9). Therefore, it is desirable that transformation frequencies f be set through crash tests
and the like using vehicles equipped with peak time detecting apparatuses 20. As can be seen from FIGS. 8 and 9, a transformation frequency f of about 100 to 150 Hz is considered appropriate for
passenger vehicles.
[0044] The principle of detection of a peak time of the time-series signal X(t) based on the phase θ(t) and the selection of a window coefficient K and a transformation frequency f have been
described. The peak time detecting apparatus 20 of this embodiment is able to detect a peak time of a signal from each of the G sensors 12, 14, 16 by forming a time-series signal X(t) from the signal
from each of the G sensors 12, 14, 16, and by using a transformation frequency f set through a vehicle crash test or the like. The peak time detecting apparatus 20 of the embodiment samples the
signal from each G sensor 12, 14, 16 at a sampling frequency of 2 kHz, and performs a Kalman filter process on the sampled signals to remove high-frequency components, and performs a moving average
process using ten sample values, thereby forming a deceleration signal X(t). The product-sum operation portion 24 performs, for peak detection, a product-sum operation of the deceleration signal X(t)
from the signal input portion 22 by using the Gabor function of equation (3), with the transformation frequency f set to 125 Hz and the window coefficient K set to 0.125. Furthermore,
for validity determination, the product-sum operation portion 24 performs a product-sum operation of the deceleration signal X(t) by setting the transformation frequency f to 200 Hz. The validity
determination will be described below. After that, the phase calculation portion 26 calculates a phase θ from the real number portion R and the imaginary number portion I of the result of the
product-sum operation, that is, the wavelet transformation X(a, b), as in equation (5). The peak time detection portion 28 then detects the time at which the phase θ changes from 2π to zero, determines that time as a peak time, and outputs it.
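As a rough sketch of this processing chain (hedged: the Kalman filter is omitted and only the ten-sample moving average is kept, window handling is non-causal for clarity, and all names are invented for illustration):

import numpy as np

def phase_series(raw, f=125.0, fs=2000.0, K=0.125, alpha=np.pi):
    # Phase θ(t) of the pre-processed deceleration signal at frequency f
    x = np.convolve(raw, np.ones(10) / 10.0, mode="same")  # moving average of 10
    T = 1.0 / f
    half = int(round(K * T * fs))           # window coefficient K: delay td = K·T
    t = np.arange(-half, half + 1) / fs
    phi = np.exp(-alpha * (t / T) ** 2) * np.exp(1j * 2 * np.pi * f * t)
    theta = np.full(len(x), np.nan)
    for n in range(half, len(x) - half):    # centered (non-causal) for simplicity
        Xab = np.sum(x[n - half:n + half + 1] * np.conj(phi))
        theta[n] = np.arctan2(Xab.imag, Xab.real) % (2 * np.pi)
    return theta

Running this at f = 125 Hz yields the θp(t) series used for peak detection; calling it again with f = 200 Hz yields the θj(t) series used for the validity determination described below.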
[0045] The determination regarding the validity of the peak time will be described. As described above with reference to FIGS. 8 and 9, increases in the transformation frequency f increase the peak
time detection sensitivity, and decreases in the transformation frequency f decrease the peak time detection sensitivity. Even if a more precise transformation frequency f is set through a crash
test, it is not ensured that reliable peak detection is possible with respect to all the crash deceleration waveforms. For example, in FIG. 9, the peak time tp is detected if the transformation
frequency f is 177 Hz or higher, but the peak time is not detected if the transformation frequency f is 125 Hz. Therefore, it is possible to determine whether a peak time tp has gone undetected or to
determine whether a detected peak time tp is valid, by comparing a result obtained by using a transformation frequency f set through an experiment or the like (hereinafter, referred to as
“detection-purposed transformation frequency fp”) with a result obtained by using a transformation frequency f (“determination-purposed transformation frequency fj”) that is about 1.2 to 2.0 times
the detection-purposed transformation frequency fp. That is, if a peak time tp is not detected in the result obtained by using the detection-purposed transformation frequency fp whereas a peak time
tp is detected in the result obtained by using the determination-purposed transformation frequency fj, it can be determined that the detection of a peak time tp is uncertain. If a peak time tp is
detected in both the result obtained by using the detection-purposed transformation frequency fp and the result obtained by using the determination-purposed transformation frequency fj, it can be
determined that the detected peak time tp is valid. The validity determination portion 29 of the peak time detecting apparatus 20 of this embodiment receives the phase calculated by the phase calculation portion 26 based on the result of the product-sum operation performed by the product-sum operation portion 24 using 200 Hz as the determination-purposed transformation frequency fj. Then,
the validity determination portion 29 detects a peak time similarly to the peak time detection portion 28, and receives the peak time detected by the peak time detection portion 28, and determines
the validity of the detected peak time, and outputs the result of determination to the peak time detection portion 28. The peak time detection portion 28 outputs a detected peak time if the result of
the determination is “valid”. If the result of the determination is “invalid”, the peak time detection portion 28 outputs a signal indicating that a peak time is not detected.
[0046] The above-described detection of a peak time is performed by executing a peak detecting process routine exemplified in FIG. 10. This routine is executed with respect to both the phase θp(t) as
a result of the detection-purposed transformation frequency fp and the phase θj(t) as a result of the determination-purposed transformation frequency fj. The routine is started when the signal
detected by the G sensor 12 exceeds a predetermined value, for example, 2G, 3G or the like.
[0047] When the peak detection process routine is executed, the CPU 32 first assigns a value “0” to the time t as an initializing process (step S100), and increments the time t by a sampling time Ts
(step S102). Subsequently, the CPU 32 determines whether the phase θ(t) is greater than a predetermined value Δ1, and determines whether the phase θ(t+Ts) is smaller than a predetermined value Δ2
(steps S104, S106). Since the detection of a peak time is the detection of a time at which the phase θ changes from 2π to zero as mentioned above, the detection can be accomplished by determining
whether such a phase change is present between a time point t and a time point “t+Ts”. Therefore, taking this into account, the predetermined value Δ1 and the predetermined value Δ2 are pre-set. In
this embodiment, the predetermined value Δ1 is “2π−1”, and the predetermined value Δ2 is “2π−2”.
[0048] If the phase θ(t) is at most Δ1, or if the phase θ(t+Ts) is at least Δ2, the CPU 32 returns to step S102, determining that there is no peak. Conversely, if the phase θ(t) is greater than Δ1 and the phase θ(t+Ts) is less than Δ2, the CPU 32 then determines whether the phase θ(t−Td) is greater than a predetermined value Δ3 (step S108). In the phase θ(t−Td), Td is a predetermined length of time. In this embodiment, Td is set to three times the sampling time Ts (Td=3Ts). This processing of determining whether the phase at the predetermined length of time prior to the time point t is greater than the predetermined value Δ3 is provided for determining whether the peak at the time point t is a clear peak. In this embodiment, the predetermined value Δ3 is π.
[0049] If the phase θ(t−Td) is at most the predetermined value Δ3, the CPU 32 determines that the peak at the time point t is not a clear peak, and returns to step S102. If the phase θ(t−Td) is
greater than the predetermined value Δ3, the CPU 32 determines that the peak at the time point t is a clear peak. Then, the CPU 32 determines whether the value occurring at the time point t, that is,
the peak value X(t), is greater than the value X(t+Tb) occurring at the elapse of a time Tb after the time point t (step S110). In the value X(t+Tb), Tb is a predetermined length of time. This
processing is a processing for detecting a greater peak if the deceleration signal exhibits successive peaks. FIG. 11 indicates a case where peaks are successively detected. If, as indicated in FIG.
11, the deceleration signal X(t) has successive peaks at a time point t1 and a time point t2 and the peak value X(t1) is less than the value X(t1+Tb), the peak at the time point t1 is not determined as a peak, but the peak at the time point t2 is determined as a peak. In this embodiment, Tb is set to 10 times the sampling time Ts (Tb=10Ts).
[0050] If the peak value X(t) is less than or equal to the value X(t+Tb), the CPU 32 determines that a greater peak immediately follows, and returns to step S102. If the peak value X(t) is greater
than the value X(t+Tb), the CPU 32 determines that the peak at the time point t is a peak to be detected, and then outputs the peak time t and the peak value X(t) (step S112). Subsequently, the CPU
32 returns to step S102.
[0051] Through the above-described processing, it is possible to detect the time point of a greater peak and the value of the peak if the signal has successive clear peaks.
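A compact sketch of the routine of FIG. 10 follows (illustrative only; the thresholds are taken from paragraphs [0047] and [0048], the loop works in samples, and function and variable names are invented):

import numpy as np

def detect_peaks(x, theta, Ts=1.0 / 2000.0):
    # Steps S100–S112 of FIG. 10, with Td = 3·Ts and Tb = 10·Ts in samples
    d1, d2, d3 = 2 * np.pi - 1, 2 * np.pi - 2, np.pi   # Δ1, Δ2, Δ3
    Td, Tb = 3, 10
    peaks = []
    for t in range(Td, len(x) - Tb - 1):
        if not (theta[t] > d1 and theta[t + 1] < d2):  # S104/S106: 2π→0 drop?
            continue
        if not (theta[t - Td] > d3):                   # S108: clear peak?
            continue
        if not (x[t] > x[t + Tb]):                     # S110: larger peak follows?
            continue
        peaks.append((t * Ts, x[t]))                   # S112: output time and value
    return peaks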
[0052] The peak time detecting apparatus 20 of this embodiment determines validity of the peak time tp by using the peak time tp with respect to the phase θp(t) as a result of the detection-purposed
transformation frequency fp and the peak time tj with respect to the phase θj(t) as a result of the determination-purposed transformation frequency fj in the peak detecting process routine of FIG.
10. FIG. 12 is a flowchart illustrating a peak time validity determining process routine executed by the peak time detecting apparatus 20. When this routine is executed, the CPU 32 first waits for
detection of a peak time tj with respect to the phase θj(t) as the result of the determination-purposed transformation frequency fj (step S120), and then determines whether a peak time tp with respect
to the phase θp(t) as the result of the detection-purposed transformation frequency fp is detected before the elapse of a time Δt following the peak time tj (step S122). If a peak time tp is so detected, the CPU 32 determines whether the time of the detected peak time tp is after a time point determined by subtracting the time Δt from the determination-purposed peak time tj (step S124). That is, the processing of steps S122 and S124 is a process of determining whether a peak time tp is detected within the time range of ±Δt from the determination-purposed peak time tj. As described above
with reference to FIGS. 6 and 9, the determination-purposed transformation frequency fj is higher than the detection-purposed transformation frequency fp, and provides a higher sensitivity for peak
time detection. Therefore, the determination-purposed transformation frequency fj allows more peak times to be detected than the detection-purposed transformation frequency fp. Although the
detection-purposed transformation frequency fp is pre-set so as to match the vehicle through experiments and the like, a real crash is not necessarily the same as a test crash. Therefore, the
waveform of the deceleration signal X(t) cannot be uniquely determined, and a peak time may go undetected in some cases. The above-described processing is able to determine whether a peak time has
gone undetected. The time Δt may be set as an allowable value of deviation between the detection of a peak time tj based on the determination-purposed transformation frequency fj and the detection of
a peak time tp based on the detection-purposed transformation frequency fp, and may be determined in accordance with the sampling time, the characteristic of the deceleration signal X(t), etc. In
this embodiment, the time Δt is 2 msec.
[0053] If a peak time tp is detected within ±Δt from the determination-purposed peak time tj, the CPU 32 determines that the peak time is valid, and outputs the peak time tp and the peak value X(tp) as results (step S126). Then, the CPU 32 ends this routine. Conversely, if no peak time tp is detected within ±Δt from the determination-purposed peak time tj, the CPU 32 determines that the peak time is invalid or undetected, and produces an output indicating invalid detection (e.g., sets a flag) (step S126). After that, the CPU 32 ends this routine.
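A sketch of this validity determining routine (again illustrative; the ±Δt matching follows steps S120 to S126 of FIG. 12 with Δt = 2 msec, and the helper name is invented):

def validate_peaks(tp_times, tj_times, dt=0.002):
    # For each determination-purposed peak time tj (S120), look for a
    # detection-purposed peak time tp within tj ± Δt (S122/S124)
    results = []
    for tj in tj_times:
        tp = next((t for t in tp_times if tj - dt <= t <= tj + dt), None)
        if tp is not None:
            results.append(("valid", tp))       # S126: output tp and X(tp)
        else:
            results.append(("undetected", tj))  # produce an invalid-detection flag
    return results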
[0054] The peak time tp and the peak value X(tp) detected as described above or the output indicating invalid detection are used in processes afterwards, for example, a process of determining a form
of crash, a process of starting an occupant protection apparatus, etc. Such processes are not the gist of the invention, and will not be described any further.
[0055] The above-described peak time detecting apparatus 20 of the embodiment is able to detect a peak time and a peak value of the signal from each of the G sensors 12, 14, 16. Furthermore, since the
wavelet transformation is employed, the peak time detecting apparatus 20 is able to promptly accomplish the detection of a peak time within a short time following the peak time. Still further, since
the peak time detection does not employ a differential operation but employs the product-sum operation, false detection of a peak time due to noise can be prevented. Further, since it is determined whether a peak time is valid or undetected, a highly reliable peak time can be acquired, which makes subsequent processes more precise.
[0056] The peak time detecting apparatus 20 of this embodiment performs the processing of step S108 in the peak time detecting process routine exemplified in FIG. 10 in order to detect a peak time
of a clear peak. Furthermore, in order to acquire a peak time of a greater peak value if the deceleration signal X(t) has successive peaks, the peak time detecting apparatus 20 performs the
processing of step S110 in the peak time detecting process routine. However, the processing of step S108 and the processing of step S110 may be omitted if all peak times are acquired. It is also
possible to perform only one of the processing of step S108 and the processing of step S110.
[0057] Although the peak time detecting apparatus 20 of the embodiment determines whether a peak time is undetected or valid, the determination regarding validity may be omitted. In such a case, the
validity determination portion 29 shown in FIG. 1 becomes unnecessary, and the processing performed by the product-sum operation portion 24 and the phase calculation portion 26 based on the determination-purposed transformation frequency fj also becomes unnecessary.
[0058] Furthermore, the peak time detecting apparatus 20 of this embodiment performs the Kalman filter process or the moving average process as a pre-process for the product-sum operation in order to
remove high-frequency components from the signals from the G sensors 12, 14, 16. However, since these processes depend on the characteristic of the input time-series signal, it is also possible to
omit these processes as a pre-process for the product-sum operation or replace the processes with other processes depending on the input time-series signal.
[0059] Although the peak time detecting apparatus 20 of the embodiment detects a peak time of the signal from each G sensor 12, 14, 16, the embodiment is not limited to the detection regarding the
signals from the G sensors 12, 14, 16, but is applicable to detection of a peak time of any time-series signal. In such an application, the transformation frequency f may be suitably selected in
accordance with the characteristic of the time-series signal.
[0060] While the present invention has been described with reference to what is presently considered to be a preferred embodiment thereof, it is to be understood that the invention is not limited to
the disclosed embodiment or constructions. To the contrary, the invention can be embodied in various forms without departing from the spirit of the invention. | {"url":"http://www.google.com/patents/US20020152054?ie=ISO-8859-1","timestamp":"2014-04-21T03:34:53Z","content_type":null,"content_length":"102207","record_id":"<urn:uuid:a651b3f3-2bb7-4d3d-a72a-0e2c49f15398>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
An introduction to Markov chain analysis
AN INTRODUCTION TO MARKOV CHAIN ANALYSIS 3
iii The limiting matrix 9
Common terms and phrases
absorbing Markov aspatial assumed assumption bability Central Limit theorem component computed concept cubic matrix degrees of freedom dependent derived diagonal diagonal matrix Drewett and Ferguson
elements entries equilibrium estimated transition probabilities first-order Markov chain first-order property fundamental matrix homogeneity identity matrix individual movements Kemeny and Snell
Krenz large number Lever likelihood ratio criterion limiting matrix loge London manufacturing establishments Markov chain analysis Markov chain models Markov model Markov process Markov property
mathematical proof Maximum likelihood ratio mean first passage migrant distance migration null hypothesis number of steps observations Pacific percent period Pijk plants population pyramids
predictive Principal Components Analysis probability of moving probability vector regular Markov chain relocate S2 S3 sl ships Sl S2 S3 specific-order stationarity statistical tests stochastic
matrix stochastic process suburbs system of spatial Table tally matrix three oceans total number trends typify a first-order underlying fixed probabilities values variables Zone 2 Zone
Bibliographic information | {"url":"http://books.google.com/books?id=rBENAQAAMAAJ&dq=related:UOM39015015629655","timestamp":"2014-04-18T23:21:14Z","content_type":null,"content_length":"103280","record_id":"<urn:uuid:2142c424-6638-4e77-9f4f-4f5f94229a3b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cocomplete but not complete abelian category
This is a duplicate of the following question to which I did not receive any answer: http://math.stackexchange.com/questions/238247/complete-but-not-cocomplete-category
Let $\mathfrak C$ be an abelian, cocomplete category. If $\mathfrak C$ has a generator and colimits are exact (i.e., $\mathfrak C$ is Grothendieck) then $\mathfrak C$ is the torsion-theoretic
localization of a full category of modules (by the Gabriel-Popescu Theorem) and so it is also complete. Anyway I'm not aware of any counter-example showing that a cocomplete abelian category may not
be complete. So my question is: could you provide such example or a reference to a proof of the bicompleteness of cocomplete abelian categories?
My first idea was to look for counterexamples in non-Grothendieck subcategories of a Grothendieck category. After some attempt I realized the following
Lemma. Let $\mathfrak C$ be a Grothendieck category and $\mathcal T$ a full hereditary torsion subcategory (i.e. $\mathcal T$ is closed under taking sub-objects, quotient objects, extensions and
coproducts). Then $\mathcal T$ is bicomplete.
Proof. Let $T:\mathfrak C\to \mathcal T$ be the hereditary torsion functor associated to $\mathcal T$. Now, given a family {$C_i:i\in I$} of objects in $\mathcal T$ we can take the product $(P,\
pi_i:P\to C_i)$ of this family in $\mathfrak C$. We claim that $(T(P), T(\pi_i))$ is a product in $\mathcal T$. Indeed, let $X\in \mathcal T$ and choose maps $\phi_i:X\to C_i$. By the universal
property of products in $\mathfrak C$, there exists a unique morphism $\phi:X\to P$ such that $\pi_i\phi=\phi_i$ for all $i\in I$. Now, since $X\in\mathcal T$, there is an induced map $T(\phi):X\to T
(P)$ which is clearly the unique possible map satisfying $T(\pi_i)T(\phi)=T(\phi_i)=\phi_i$. ∎
Thus there are lots of non-Grothendieck bicomplete abelian categories.
EDIT: notice that in the lemma we never use the hypothesis that the subcategory $\mathcal T$ is closed under taking extensions or subobjects. In fact, if $\mathcal T$ is just closed under taking
coproducts and quotients, one defines the functor $T:\mathfrak C\to \mathcal T$ such that, for all object $X\in\mathfrak C$, $T(X)\in \mathcal T$ is the direct union of all the subobjects belonging
to $\mathcal T$ (the image (which is a quotient) of the coproduct of all the subobjects of $X$ belonging to $\mathcal T$ under the universal map induced by the inclusions of the subobjects in $X$).
Clearly $T(X)$ is fully invariant as a subobject of $X$ (by the closure of $\mathcal T$ under taking quotients and the construction of $T$) and so $T$ can be defined on morphisms by restriction. It
is also clear that $T(X)=X$ if $X\in\mathcal T$ so the proof of the lemma can be easily adapted to this case.
REMARK: the new relaxed hypotheses of the lemma allow us to exclude other "exotic" examples... in particular, if you want to take the abelian subcategory of all the semisimple objects in a given
Grothendieck category, this is closed under coproducts and quotients.
ct.category-theory abelian-categories
1 @Andrej Bauer: It is what I wrote in the question: an abelian category is Grothendieck if it is cocomplete, colimits are exact and it has a set of generators. The canonical example is that of a
category of modules. A deep result of Gabriel and Popescu characterizes Grothendieck categories as the torsion-theoretic localizations of module categories. (see also en.wikipedia.org/wiki/
Grothendieck_category) – Simone Virili Nov 16 '12 at 14:02
2 Your lemma is a special case of the general observation that coreflective subcategories of complete categories are complete. – Martin Brandenburg Nov 17 '12 at 14:46
1 Whoops, I am withdrawing my answer as it is obviously wrong. Thanks for pointing it out... – Bugs Bunny Nov 23 '12 at 6:22
1 Wrong answers are also helpful. They prevent us from falling into the same mistakes again and again. – Fernando Muro Nov 25 '12 at 23:37
4 This question is quite challenging, despite being quite basic at first sight. I have played around with a lot of cocomplete abelian categories, but somehow they always turn out to be complete.
Meanwhile I suspect that counterexamples will be quite strange ... – Martin Brandenburg Nov 26 '12 at 2:04
5 Answers
My previous answer was indeed not correct, so what remains is just the following comment.
There is a way to consider the free colimit completion in the setting of additive categories; I will essentially use constructions and results which can be found in this article of B. Day and S.
Lack (in the particular case of additive categories): arXiv:math/0610439. Let $C$ be a locally small additive category. I will write $\widehat C$ for the free completion of $C$ by small
colimits (in the enriched sense). An explicit construction of $\widehat C$ is the following: if $Ab$ denotes the symmetric monoidal category of abelian groups, $\widehat C$ is the full
subcategory of the category of additive functors $$F: C^{op}\to Ab$$ which consists of those $F$'s which are small, i.e. such that there exists a short exact sequence of the form $$F''\to F'
\to F \to 0$$ with $F''$ and $F'$ isomorphic to small sums of representable presheaves. The category $\widehat C$ is always locally small and cocomplete (and the Yoneda embedding $C\to\widehat C$ is the universal additive functor from $C$ to a cocomplete additive category). I claim that, if $C$ is abelian, then $\widehat C$ is abelian as well: indeed, it has finite limits (see Prop. 4.4 in Day & Lack's paper) and the Yoneda embedding $C\to\widehat C$ commutes with limits; moreover, for any object $X$ of $C$ the evaluation at $X$ functor has both a left and a right adjoint (I leave as an exercise their explicit description) and thus is exact; as these evaluation functors form a conservative family, one deduces that $\widehat C$ is abelian whenever $C$ is abelian (in fact, it is sufficient for this that $C$ is finitely complete). Note also that, whenever they exist, limits of $\widehat C$ can be computed termwise. In particular, to check that a limit is not representable in $\widehat C$, it is sufficient to check that the corresponding presheaf over $C$ is not small.
The abelian category $\widehat C$ is known to be complete if one of the following conditions is satisfied: $C$ is essentially small, or $C$ is itself complete. Furthermore, if ever $C$ is
cocomplete, then it is a reflexive subcategory of $\widehat C$, so that the completeness of $\widehat C$ implies the same property for $C$. In conclusion, if ever there is an example of a
cocomplete abelian category which is not complete, there must be one of the form $\widehat C$ for an additive category $C$ which is not essentially small and which is not complete.
Furthermore, if there is a counterexample, there must exist a small family of representable presheaves over $C$ whose product in the category of presheaves is not small. So far, I have not succeeded in finding such a thing.
Can you explain why $A^{(X)}$ is a category at all? I have tried similar examples, and the problem always was that it is not clear at all if (the set of objects in) $A^{(X)}$ can be
defined within the same universe as $A$. So this is an issue of set theory. – Martin Brandenburg Nov 26 '12 at 2:24
2 Denis-Charles -- I had the same idea of using the category of small presheaves valued in $Ab$ of a large set, but I couldn't understand why this wasn't closed under small products. Could
you not leave it as an exercise and spell out the details, please? Perhaps I am missing something simple. – Todd Trimble♦ Nov 26 '12 at 2:36
As he points out, it looks like A^(X) is just locally small, so the "set" of objects is actually a class. Meanwhile Hom-sets are honest sets. – Dylan Wilson Nov 26 '12 at 2:38
I modified the definition of $A^{(X)}$ because, with the previous one, there were no room for the identity. – Denis-Charles Cisinski Nov 26 '12 at 2:45
Wait, why can there be no such epimorphism? Since any object in $C$ is "finitely supported," any cone in $C$ factors through a finite direct sum of the $M_n$ (which exists in $C$).
3 Doesn't that imply that the desired functor admits an epimorphism from the small sum of functors represented by the finite sums of the $M_n$? Where is my mistake? – Daniel Schäppi Nov 26
'12 at 4:32
The following doesn't give an example as required, but eliminates some candidates. If $\scr A$ is an additive (not necessarily abelian) category which is complete, well powered and has a cogenerator then $\scr A$ has coproducts. Indeed, $\scr A$ satisfies the hypotheses of Freyd's Special Adjoint Functor Theorem (see Adamek Rosicky, Locally Presentable and Accessible Categories, Section 0.7). Thus every functor which preserves limits is a right adjoint. Let $X_i\in\scr A$, $i\in I$ be a set of objects. The functor $F=\prod_{i\in I} \scr {A}(X_i,-):{\scr A}\to \mathfrak {A}\mathfrak{b}$ preserves limits, so it has a left adjoint $G$. Straightforwardly $F$ is represented by the object $G(\mathbb Z)$ which has to be isomorphic to $\coprod_{i\in I}X_i$. The same is also true if we work with non-additive categories, but replacing the category of abelian groups with the category of sets.
In conclusion, if $\scr A$ has in addition push-outs (or equivalently coequalizers) then $\scr A$ is cocomplete. Note that an abelian category has always cokernels, therefore coequalizers.
Actually I think the last hypothesis that $\scr A$ has push-outs is superfluous; it is enough to replace products with arbitrary limits in the definition of the functor F above, in order to
derive directly the existence of colimits. – George C. Modoi Mar 29 '13 at 16:07
The settings in which I worked are rather dual to those of the question. Dualizing again we obtain: A cocomplete well copowered additive category with a generator is also complete. In
particular this gives another proof for the fact that Grothendieck categories are complete. – George C. Modoi Mar 29 '13 at 16:35
@ Dominic Michaelis: Why are you editing my answer by replacing a capital $X$ with a script $\scr X$ without any relation with the rest? – George C. Modoi Jul 25 '13 at 6:45
This is nothing new and can be found in any treatment of category theory. It is just the usual proof that Grothendieck abelian categories are complete. – Martin Brandenburg Nov 1 '13 at
Here's an idea. Maybe somebody can either use it to construct an example or prove it can't work?
Let ${\mathcal A}$ and ${\mathcal B}$ be complete and cocomplete abelian categories, and $F:{\mathcal A}\to{\mathcal B}$ an exact functor that preserves coproducts but not products.
Then the comma categories $F\downarrow id_{\mathcal B}$ and $id_{\mathcal B}\downarrow F$ are cocomplete abelian categories (with the obvious kernels, cokernels and coproducts). They are
also both complete, but for different reasons. In the first case, products are constructed using compositions of natural maps $F(\Pi X_i)\to\Pi FX_i\to\Pi Y_i$, but in the second case they
are constructed by taking pullbacks of $\Pi Y_i\rightarrow \Pi FX_i\leftarrow F(\Pi X_i)$.
What if we combine the two constructions?
Let ${\mathcal C}$ be the category with objects quadruples $(X, Y, \alpha:Y\to FX, \beta:FX\to Y)$ and maps $(X,Y,\alpha,\beta)\to(X',Y',\alpha',\beta')$ consisting of pairs of maps $X\to X'$ and $Y\to Y'$ making the obvious diagrams commute.
Then ${\mathcal C}$ is a cocomplete abelian category, with the obvious kernels, cokernels and coproducts, but I don't see any obvious construction of products. Is it complete?
For an explicit example, take ${\mathcal A}$ to be the category of abelian groups, ${\mathcal B}$ the category of rational vector spaces, and $F$ the functor $-\otimes_{\mathbb Z}{\mathbb
Q}$. Is the category ${\mathcal C}$ constructed from this data complete? Specifically, let $S$ be the object $({\mathbb Z},{\mathbb Q},\alpha,\beta)$, where $\alpha$ and $\beta$ are the
obvious isomorphisms. Is there a product of infinitely many copies of $S$?
I think what you are describing is a lax 2-limit construction. If that's true, then your category will be locally presentable if the two starting categories are. Indeed, a category is
4 locally presentable if and only if it is cocomplete and accessible, and the 2-category of accessible categories and accessible functors is closed under lax limits (see e.g.
Adamek-Rosicky, Theorem 2.77). Note that any functor which preserves colimits is accessible since it preserves filtered colimits. So, to get a counterexample from this construction, I
think you would have to start with one. – Daniel Schäppi Nov 22 '12 at 16:57
I'm not totally sure (as ever); if not, I hope this could suggest some ideas.
Let $fAb$ be the abelian category of finite abelian groups, and let $\mathcal{C}:= Ind(fAb)$ be its ind-category, that is, the full category of presheaves on $fAb$ isomorphic to a filtered diagram of representables. From the usual literature (e.g. Artin–Mazur, "Etale Homotopy", appendix), $\mathcal{C}$ is abelian (hence has finite sums) and has filtered colimits, hence has (small) sums, and is therefore cocomplete.
Consider a countable enumeration of finite cyclic groups $(C_n)_{n\in \mathbb{N}}$, and suppose that the product $P:= \prod_n h_{C_n}$ of the associated representables of the $C_n$'s exists in $\mathcal{C}$; let $P\cong \varinjlim_{i\in I} h_{G_i}$ for some directed diagram of finite abelian groups $G_i$. We have a split monomorphism $\delta: \sum_n h_{C_n}\to P$. I claim that the family of maps $h_{C_n}\to \sum_n h_{C_n}\to P$ is epimorphic; this follows because the family $G_i\to P$ is epimorphic, and any $G_j$ is a (finite) sum of cyclic groups. But then $\delta$ is an epimorphism, hence an isomorphism. Now fix a cyclic group $C_{m}\neq 0$, and consider $(h_{C_m}, \sum_n h_{C_n})\cong \bigoplus_n fAb(C_m, C_n)$ (the sum is a direct colimit of finite sums, and finite sums are representable by a biproduct): each element of this sum has all but finitely many components $0$, but this isn't true for $(h_{C_m}, P)\cong \prod_n fAb(C_m, C_n)$; and considering that $1_{C_n}:= \pi_i\circ \delta\circ \epsilon_i : h_{C_n}\to \sum_n h_{C_n}\to P \to h_{C_n}$, we get a contradiction.
6 Once again, the Ind-completion of a small finitely cocomplete category is finitely accessible and cocomplete, hence locally finitely presentable, hence complete. Every answer that has
appeared thus far has failed basically for this reason. – Todd Trimble♦ Nov 22 '12 at 21:48
1 Thank you Todd Trimble, quite right; I also found where my answer is wrong. – Buschi Sergio Nov 23 '12 at 14:32
According to Weibel, the category of torsion abelian groups is cocomplete but not complete. No proof is offered (p. 426 of his book).
14 People keep making this same mistake. The fact that the product of some torsion abelian groups in the category of all abelian groups is not torsion does not imply that there is no
product of these same objects in the subcategory. In fact, in the product of all the groups Z/n the subgroup consisting of torsion elements is a product (in the category of torsion
abelian groups). – Tom Goodwillie Nov 26 '12 at 0:55
@William: Weibel is wrong here. To anyone who has submitted an answer which has not resolved the question: please consider deleting your answer, because this silly MO mechanism will
2 automatically award an "accepted answer" to the highest voted answer after the bounty is over and the OP hasn't accepted anything. In this case, that would be the present answer. :-) I
might also mention that I am trying to email people who might well be able to answer this. (Pity that there aren't a huge number of categorists who tune in here.) – Todd Trimble♦ Nov 26
'12 at 1:18
2 -1. The full subcategory of abelian groups consisting of torsion abelian groups is coreflective, hence complete (see also the other comments). By the same reason, many other potential
examples you first think of don't work. It is a common false belief (which also gets repeated again and again on mathoverflow) that limits and colimits are reflected/preserved/created
by forgetful functors, see also mathoverflow.net/questions/23478/… – Martin Brandenburg Nov 26 '12 at 2:01
Whoops! Silly me (comment removed, as well as the fake proof) – Dylan Wilson Nov 26 '12 at 2:33
Not the answer you're looking for? Browse other questions tagged ct.category-theory abelian-categories or ask your own question. | {"url":"http://mathoverflow.net/questions/112574/cocomplete-but-not-complete-abelian-category?sort=newest","timestamp":"2014-04-16T04:44:41Z","content_type":null,"content_length":"106000","record_id":"<urn:uuid:b31fb234-4379-438b-9cea-fbdaed5e54ec>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions in need of checking.
December 20th 2007, 07:33 AM #1
Dec 2007
Questions in need of checking.
I was given a few problems to do for my pre-calc class, and I solved pretty much everything on my own. All I need is for someone to check the answers to make sure they're correct. Anyone willing?
1. f(x) = x + 2 and g(x) = x^2 + 2x + 3. Find f(g(x)) and g(f(x)). Find all values of x for which f(g(x)) = g(f(x)).
□ f(g(x)) = x^2 + 2x + 5.
□ g(f(x)) = x^2 + 6x + 11.
□ f(g(x)) = g(f(x)), when x = -6/4.
2. Find the domain of f(x) = sqrt(x+5) / (x^2 - 25).
□ The domain is (-5,5) U (5, + infinity).
3. The vertex of f(x) is (-2,5) and the point (1,-1) is on f(x). Find f(x), domain of f(x) and the range of f(x).
□ f(x) = (5/3)(x+2)-5
□ The domain of the function is (- infinity, + infinity).
□ The range of the function is (- infinity, + infinity).
4. f(x) = x^2 + 2x + 3. Find the average rate of change between x=-2 and x=0.
□ The average rate of change is 0/2.
5. f(x) = ax^2 + bx + 2. f(1)=4 and f(-1)=-2. Find the value of a and b.
□ a = 3.
□ b = -1. (I'm pretty sure these are both wrong.)
6. f(x) = 1/x^2. Find the average rate of change between x=x+h and x=x. Find the slope of f(x) at x=-1. Write an equation of a line tangent to f(x) at x=-1.
□ The average rate of change is 2x/x^4.
□ The slope at x=-1 is -2.
□ The tangent line at x=-1 is (y-1) = (1/2)(x+1).
7. f(x) = 3 - 2x. Find the inverse function of f(x).
8. A wire 10 cm long is cut into two pieces, one of length x and the other of length 10-x. Each piece is bent into the shape of a square. Find the area of both squares. What value of x
will minimize the total area of the two squares?
□ The area of the square made from wire x is x^2.
□ The area of the square made from wire 10-x is 100-20x+x^2.
□ The value of x should be 5 in order to minimize the total area of the two squares. (For this one, I got the answer x=5, but in a really strange way. Can anyone show me the steps for this one?)
I know these are a lot to do, but I'm only asking for someone to tell me if I'm correct or not. A simple correct/incorrect answer for each problem will do (except for #8, but that's up to you).
2) The domain also spans from negative infinity to negative five.
3) The function you gave doesn't contain the point (1, -1)
5) You have a and b switched
6) You are missing a negative sign
8) No, piece x will be the perimeter of the square, so a side is x/4, area is then side^2
Thank you so much! That really helped a lot. But can you explain some things to me?
2. Wouldn't making the domain span from negative infinity to -5 make the numerator a complex or imaginary number?
3. Is f(x) = -2(x+2)+5 the right answer?
6. In which part of the solution(s) am I missing a negative?
Last edited by greenhighlighter; December 20th 2007 at 08:22 AM.
2) Actually, you were right here; I missed the square root symbol. Forget my original comments here.
3) A vertex is a local extremum. In this case it is at (-2, 5). If the coordinates (1, -1) are on the graph, then as you move to the right of the vertex, the curve goes down. This function must be
a polynomial of degree at least two, and it must be facing downward. f(-2) = 5, so $f(x) = a(x+2)^2+5$, and we need to solve for a. Use the point (1, -1): -1 = 9a + 5, so a = -2/3, giving $f(x) = -\frac{2}{3}(x+2)^2+5$.
6) Everywhere... $f'(x)=-2x^{-3}$, so the slope at -1 is positive 2. Adjust your tangent line accordingly.
Ohhhh. I get it now. For #3, I missed the exponent 2.
Okay, thanks! I understand all the topics.
December 20th 2007, 07:58 AM #2
December 20th 2007, 08:08 AM #3
Dec 2007
December 20th 2007, 09:07 AM #4
December 20th 2007, 09:17 AM #5
Dec 2007 | {"url":"http://mathhelpforum.com/pre-calculus/25137-questions-need-checking.html","timestamp":"2014-04-16T14:42:11Z","content_type":null,"content_length":"45083","record_id":"<urn:uuid:4a67b5e7-7061-4aaf-8a48-46472bfe662c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tips for analyzing a combination circuit
One tries to simplify the resistances into equivalent resistances.
Parallel components share the same voltage (potential) since they share common nodes, while series resistances carry the same current; their voltage drops are equal only if the resistances are
equal.
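As a concrete sketch of the first tip, here is a minimal Python example (the resistor values are made up, not from any particular circuit) that reduces a series/parallel combination to a single equivalent resistance:

from math import inf

def series(*rs):
    # series resistances simply add: Req = R1 + R2 + ...
    return sum(rs)

def parallel(*rs):
    # parallel resistances combine by reciprocals: 1/Req = sum(1/Ri)
    return 1.0 / sum(1.0 / r for r in rs)

# hypothetical example: R1 in series with (R2 parallel to (R3 in series with R4))
R1, R2, R3, R4 = 100.0, 220.0, 150.0, 330.0   # ohms (made-up values)
R_eq = series(R1, parallel(R2, series(R3, R4)))
print(f"equivalent resistance: {R_eq:.1f} ohms")   # about 250.9 ohms here

Once the network is collapsed this way, the shared-voltage and shared-current rules above let you work back out to each element.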
A battery raises the potential from - terminal to + terminal when the current i is oriented from - to +, and lowers the potential when the current i is oriented from + to - terminal.
i.e. if i -> then - || + raises potential, and lowers it if the battery is oriented + || - | {"url":"http://www.physicsforums.com/showthread.php?t=297875","timestamp":"2014-04-19T22:59:14Z","content_type":null,"content_length":"25871","record_id":"<urn:uuid:5a22bbae-d5de-4544-9e9c-5221595a1fc0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bunker Hill Village, TX Algebra 2 Tutor
Find a Bunker Hill Village, TX Algebra 2 Tutor
...Primarily I love to practice my violin. I have been playing since I was two years old. My parents put a stick in one hand and a cardboard box in the other till I could handle the real thing.
38 Subjects: including algebra 2, chemistry, reading, physics
...In high school, I took AP World History and earned a 5 on the AP exam. I have an immensely broad background in and knowledge of the physical sciences. From geology to meteorology,
physics, biology, chemistry, cosmology, etc.
37 Subjects: including algebra 2, chemistry, geometry, physics
I am a Texas certified math teacher for grades 6-12. For 9 years, I have worked with H.I.S.D. teaching all levels of middle school math. In addition, I was the Mathcounts competition coach for 6
24 Subjects: including algebra 2, calculus, geometry, biology
...I am a Registered Professional Engineer in the State of Texas. My career was spent in the engineering design of chemical processes to obtain the best plant process and plant design, and then
designing, building, and operating this plant. I have taught chemistry, physics, and chemical process technology at Alvin Community College for 6 years, and I currently also tutor High School
11 Subjects: including algebra 2, English, chemistry, geometry
...My first year teaching, my classrooms TAKS scores increased by 40%. This last year I had a 97% pass rate on the Geometry EOC and my students still contact me for math help while in college. I
know I can help you.I currently teach Algebra 1 on a team that was hand selected because of our success...
8 Subjects: including algebra 2, physics, geometry, biology | {"url":"http://www.purplemath.com/Bunker_Hill_Village_TX_algebra_2_tutors.php","timestamp":"2014-04-19T12:46:48Z","content_type":null,"content_length":"24730","record_id":"<urn:uuid:4e6ab586-8d38-4726-9a79-8d4e49eaadfe>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equivalent of
My spreadsheet's VBA code needs editing so that it sends EMAILS for the rows that are NOT BLANK in the worksheet RESULTS, excluding the HEADER.
See sample file http://www.srands.co.uk/exoftable3.xls
Issue1: The VBA code below needs editing (see the 9th line, For r = 2 To 2 'Needs editing so that data in 'not blank' rows only is included. Also r = row, not to be confused with column r). At present it
returns an email for only the 2nd row (the row after the HEADER) on the RESULTS page. However the number of not-blank rows is a variable, and will depend upon the rows that meet the criteria in
'WORKSHEET' in column U.
Issue2: If I expand the row range to a full page up to row 51, many BLANK emails are generated (because of blank rows, the auto-generated fields would be BLANK).
I don't know what VBA code to use instead though.
In formulas for the 'RESULTS' page I would use a command that checks if the row is not blank, something like =IF(AND(A2=0),"",'email command'). WHAT IS THE VB EQUIVALENT OF SOMETHING LIKE THIS?
For this spreadsheet the number of RESULTS will be unknown depending on the information/data available, hence I want to include NOT BLANK entries from rows 2 to 51.
CODE NEEDS EDITING, JUST TO COUNT NOT BLANK ROWS IN THE WORKSHEET 'RESULTS':
Private Declare Function ShellExecute Lib "shell32.dll" _
Alias "ShellExecuteA" (ByVal hwnd As Long, ByVal lpOperation As String, _
ByVal lpFile As String, ByVal lpParameters As String, ByVal lpDirectory As String, _
ByVal nShowCmd As Long) As Long
Sub SendEMail()
Dim Email As String, Subj As String
Dim Msg As String, URL As String
Dim r As Integer, x As Double
For r = 2 To 2 'Needs editing so that data in 'not blank' rows only is included
' Get the email address
Email = Cells(r, 10)
' Message subject
Subj = "Your car for sale. " & Cells(r, 1).Text & "."
' Compose the message
Msg = ""
Msg = Msg & "Dear " & Cells(r, 11) & "," & vbCrLf & vbCrLf
Msg = Msg & "I like your car, the " & Cells(r, 1).Text & "." & vbCrLf & vbCrLf
Msg = Msg & "Please call me back. "
Msg = Msg & "It is " & Cells(r, 2).Text & "." & vbCrLf & vbCrLf
Msg = Msg & "Cheers " & vbCrLf & vbCrLf
Msg = Msg & "Stephan Rands" & vbCrLf
Msg = Msg & "07772000679" & vbCrLf
Msg = Msg & "[EMAIL="mail@srands.co.uk"]mail@srands.co.uk[/EMAIL]"
' Replace spaces with %20 (hex)
Subj = Application.WorksheetFunction.Substitute(Subj, " ", "%20")
Msg = Application.WorksheetFunction.Substitute(Msg, " ", "%20")
' Replace carriage returns with %0D%0A (hex)
Msg = Application.WorksheetFunction.Substitute(Msg, vbCrLf, "%0D%0A") ' Create the URL
URL = "mailto:" & Email & "?subject=" & Subj & "&body=" & Msg
' Execute the URL (start the email client)
ShellExecute 0&, vbNullString, URL, vbNullString, vbNullString, vbNormalFocus
' Wait two seconds before sending keystrokes
Application.Wait (Now + TimeValue("0:00:02"))
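' Alt+S is the Send shortcut in most mail clients (this assumes your client supports it)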
Application.SendKeys "%s"
Next r
End Sub
See sample file http://www.srands.co.uk/exoftable3.xls
The WORKSHEET 'RESULTS' has the VISUAL BASIC code called 'Send EMail'. Obviously to view 'RESULTS' worksheet VISUAL BASIC code, View, Tools Bars, Visual Basic, then on the Toolbar press the play
symbol (R/H arrow), Step into.
Or to play the MACRO of the rows that meet all criteria in 'WORKSHEET', shown in 'RESULTS', View, Tools Bars, Visual Basic, then on the Toolbar press the play symbol (R/H arrow), Run.
Cross threads: | {"url":"http://www.knowexcel.com/view/1426730-equivalent-of-vba-mod-fn-for-non-integer-arguments.html","timestamp":"2014-04-20T18:23:40Z","content_type":null,"content_length":"63350","record_id":"<urn:uuid:1717bf57-9dfd-4745-93a4-5e9db12c6fa9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Snohomish Geometry Tutor
Find a Snohomish Geometry Tutor
...If you don't understand something, such as a math problem, I can simplify it until you get it and solve it all by yourself. I'm a native Cantonese and Mandarin Chinese speaker. I'd studied
Chinese for over ten years, before I moved to the U.S.
13 Subjects: including geometry, Chinese, algebra 1, algebra 2
...I received a 5 on the physics AP test, and later a 4.0 on general college-level physics. I find physics super fun and satisfying, and I'm always eager to tackle problems. I hold a BA in
Computer Science from UC Berkeley.
18 Subjects: including geometry, chemistry, biology, algebra 2
...Later, I taught piano in Hawaii to several students from age 10 to adult. Born and raised in Russia, I had some of the best Russian language and literature teachers a student may wish for. I
was often selected for literary competitions and won high prizes.
20 Subjects: including geometry, reading, calculus, statistics
...Challenges such as ropes courses, where we encouraged others to cross the rope and did not stop our support until they reached the end. I have an immense love for math and Spanish and have been
very successful in these subjects for a big part of my life. Being ahead of others in my grade at math shows my love and able to understand it.
15 Subjects: including geometry, reading, Spanish, piano
...Personally, I have scored in the 700s for SAT math section and understand the strategies behind taking the tests. I also have finished up to college level calculus. For sciences, I would enjoy
tutoring chemistry, biology, and earth sciences.
23 Subjects: including geometry, chemistry, biology, algebra 1 | {"url":"http://www.purplemath.com/snohomish_wa_geometry_tutors.php","timestamp":"2014-04-20T07:03:07Z","content_type":null,"content_length":"23801","record_id":"<urn:uuid:7e2c5bfc-e4f6-4744-8897-62ca6d7f817a>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
IP 2. An envelope for the spectrum of a matrix
Chair: Steve Kirkland
An envelope-type region E(A) in the complex plane that contains the eigenvalues of a given n × n complex matrix A is introduced. E(A) is the intersection of an infinite number of regions defined by
cubic curves. The notion and method of construction of E(A) extend those of the numerical range of A, which is known to be an intersection of an infinite number of half-planes; as a consequence, E(A)
is contained in the numerical range and represents an improvement in localizing the spectrum of A.
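For context, the standard definitions behind the abstract (stated from general knowledge, not from the talk): the numerical range of $A$ is $W(A) = \{x^*Ax : x \in \mathbb{C}^n,\ \|x\|_2 = 1\}$, and every eigenvalue lies in it, $\sigma(A) \subseteq W(A)$; the envelope $E(A) \subseteq W(A)$ is therefore a sharper outer bound on the spectrum.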
Michael Tsatsomeros
Dept. of Mathematics, Washington State University, US | {"url":"http://www.siam.org/meetings/la12/index.php/ip-2/index.html","timestamp":"2014-04-16T16:00:32Z","content_type":null,"content_length":"10668","record_id":"<urn:uuid:67c175fb-516f-4f9a-989c-e7a7fb9f5bd0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jesse Ernest Wilkins Jr.
Born: 27 November 1923 in Chicago, Illinois, USA
Died: 1 May 2011 in Fountain Hills, Arizona, USA
J Ernest Wilkins Jr was born into an African-American Methodist family, being the son of J Ernest Wilkins Sr and Lucile Beatrice Robinson. His father J Ernest Wilkins Sr was a lawyer who went on to
become President of the Cook County Bar Association in the early 1940s (Cook County is in northeastern Illinois and Chicago is in that County), and Assistant Secretary of Labor in the Eisenhower
administration of the 1950s. His mother Lucile Robinson had been educated to the level of a Master's Degree and was trained as a schoolteacher. In [6] the strong influence of his parents is described:-
Responding to the influence, nurture and guidance of his parents, and developing his talents, he achieved much.
Ernest entered the University of Chicago in 1936 when only 13 years old and in so doing he became the youngest ever student at that university. His university career was remarkable and he received
much publicity when he graduated with his A.B. in mathematics from the University of Chicago in 1940 at the age of only 17 years. He then continued to study mathematics at Chicago for his Master's
Degree and in the following year he was awarded an M.S. He then continued with his doctoral studies at Chicago and submitted his dissertation Multiple Integral Problems in Parametric Form in the
Calculus of Variations which led to his being awarded a Ph.D. in December 1942, only a few days after his 19^th birthday.
Few teenagers can have won a scholarship and studied at the Institute for Advanced Study in Princeton but this is exactly what Wilkins did in 1942 with a Rosenwald Scholarship. He wrote his first
papers in 1942, both on geometry, and they were published in 1943. They are The first canonical pencil and A special class of surfaces in projective differential geometry both published in Duke
Mathematical Journal. In the first he gives certain geometric definitions for lines of the first canonical pencil through a point of a nonruled surface. In the second he considers a certain class of
surfaces and expresses characteristic properties of these surfaces in terms of standard projective elements.
In 1943-44 Wilkins taught at the Tuskegee Institute, where most of the students were black. This was the first year that this Institute offered graduate-level instruction (it is now called Tuskegee
University). He then returned to the University of Chicago where he worked on the Manhattan Project in the Metallurgical Laboratory from 1944 to 1946. The only method for the production of the
fissionable material plutonium 239, required for making a nuclear bomb, was being developed in this Laboratory under the direction of Arthur Holly Compton and Enrico Fermi.
Wilkins continued to produce a remarkable number of mathematical papers on a wide variety of different topics. In 1944 four of his papers appeared: On the growth of solutions of linear differential
equations; Definitely self-conjugate adjoint integral equations; Multiple integral problems in parametric form in the calculus of variations; and A note on skewness and kurtosis. The last of these is
on statistics. In the following year he published The differential difference equation for epidemics in the Bulletin of Mathematical Biophysics.
After leaving the Manhattan Project in 1946, Wilkins worked in industry. He was a mathematician at the American Optical Company in Buffalo, New York from 1946 to 1950. During this time he married
Gloria Stewart in 1947; they had two children, Sharon and J. Ernest III. From 1950 he worked as a mathematician at the United Nuclear Corporation of America in White Plains, New York for ten years. He
became Manager of the Mathematics and Physics department there in 1955, and then later Manager of Research and Development. It was during this time that Wilkins earned himself further degrees when he
was awarded a Bachelor of Mechanical Engineering from New York University in 1957, and a Master of Mechanical Engineering three years later.
After this Wilkins held a number of academic and non-academic appointments. He worked at the General Atomic Company in San Diego in the 1960s and in the 1970s he was appointed to Howard University as
Distinguished Professor of Applied Mathematical Physics. He established a Ph.D. programme in mathematics at Howard University, and in doing so it became the first traditionally Black university to have
such a programme.
From 1977 to 1984 Wilkins worked at EG&G Idaho, becoming Vice President and Deputy General Manager for Science and Engineering. Then from 1984 he spent the last year before he officially retired as a
Fellow at the Argonne National Laboratory of the U.S. Department of Energy in Argonne, Illinois which carries out basic research and development of the peaceful uses of nuclear energy. He remained a
consultant at the Argonne National Laboratory after he retired in 1985. He became Distinguished Professor of Applied Mathematics and Mathematical Physics at Clark Atlanta University in 1990.
We have looked above at some of the pure mathematical topics which Wilkins looked at early in his career. He has continued to produce mathematics papers and he has over 50 papers on mathematics and
its applications. He also wrote papers on nuclear engineering and optics. His work on the penetration of gamma rays published in 1953 in the Physical Review is used in the design of nuclear reactors
and radiation shields:-
He developed mathematical models by which the amount of gamma radiation absorbed by a given material can be calculated. This technique of calculating radiative absorption is widely used among
researchers in space and nuclear science projects.
Other work which he has done has been related to heat transfer and in January 1992 he was invited to give a joint American Mathematical Society - Mathematical Association of America lecture in
Baltimore, Maryland. The American Mathematical Society has published a videocassette of this lecture and an interview. Here is an extract from the description:-
Wilkins has worked on a variety of mathematical problems throughout his distinguished career. A member of the National Academy of Engineering who received his doctorate in mathematics from the
University of Chicago at the age of nineteen, Wilkins has worked in academia, industry, and government. ... In the interview, Wilkins describes some of the mathematical problems he has worked on
and discusses some of the difficulties in trying to improve the participation of members of underrepresented groups in science and mathematics. His lecture explores a fascinating problem about
heat transfer that arises in a variety of settings. With any heat engine, it is necessary to expel heat to the surroundings. One way to do this is to attach 'fins' to the outer wall of the
engine. The shape of the fins has a large impact on how efficiently they are able to expel heat. Wilkins examines the mathematical aspects of determining the optimal shape of such fins.
Wilkins has received a large number of honours for his work. He was elected to: the American Association for the Advancement of Science (1956); a Fellowship of the American Nuclear Society (1964);
the National Academy of Engineering (1976); and Honorary membership of the National Association of Mathematicians (1994). He served as President of the American Nuclear Society in 1974-75 and on the
Council of the American Mathematical Society from 1975 to 77.
He was awarded the Outstanding Civilian Service Medal by the U.S. Army in 1980.
Article by: J J O'Connor and E F Robertson
List of References (6 books/articles)
JOC/EFR © April 2002 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: http://www-history.mcs.st-andrews.ac.uk/Biographies/Wilkins_Ernest.html | {"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Wilkins_Ernest.html","timestamp":"2014-04-20T08:43:38Z","content_type":null,"content_length":"15856","record_id":"<urn:uuid:14f708e7-5ed5-4361-8c93-7ade3f1c90d4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
• A Course in Mathematical Statistics, Third Edition
Book, October 2014, by Roussas
• A Course in Probability Theory, Revised Edition
Book, October 2000, by Chung
• Advanced Statistics from an Elementary Point of View
Book, October 2005, by Panik
• A First Course in Stochastic Processes
Book, March 1975, by Karlin
• An Introduction to High-Frequency Finance
Book, April 2001, by Gençay
• An Introduction to Measure-theoretic Probability
Book, April 2014
• An Introduction to Measure-theoretic Probability
Book, October 2004, by Roussas
• An Introduction to Probability and Statistical Inference
Book, December 2002, by Roussas
• An Introduction to Stochastic Modeling
Book, December 2010, by Pinsky
• An Introduction to Time Series Analysis and Forecasting
Book, April 2000, by Yaffee
• An Introduction to Wavelets and Other Filtering Methods in Finance and Economics
Book, September 2001, by Gençay
• Applications of Random Process Excursion Analysis
Book, July 2013, by Brainina
• Applications of Variational Inequalities in Stochastic Control
Book, January 1982, by Bensoussan
• Applying Contemporary Statistical Techniques
Book, December 2002, by Wilcox
• A Second Course in Stochastic Processes
Book, April 1981, by Karlin
• Asymptotic Methods in Probability and Statistics
Book, November 1998, by Szyszkowicz
• Biostatistics
Book, December 2006, by Forthofer
• Boole's Logic and Probability
Book, October 1986, by Hailperin
• Boundary Value Problems in Queueing System Analysis
Book, January 1983, by Cohen
• Categorical Variables in Developmental Research
Book, January 1996, by von Eye
• Chi-Squared Goodness of Fit Tests with Applications
Book, January 2013, by Balakrishnan
• Clocking the Mind
Book, July 2006, by Jensen
• Comprehensive Chemometrics
Book, March 2009, by Tauler
• Convex Functions, Partial Orderings, and Statistical Applications
Book, April 1992, by Pečarić
• Data Insights
Book, November 2012, by Whitney
• Data Mining Applications with R
Book, December 2013, by Zhao
• Design and Optimization in Organic Synthesis
Book, November 1991, by Carlson
• Dictionary of Distances
Book, October 2006, by Deza
• Dynamic Random Walks
Book, February 2006, by Guillotin-Plantard
• Econophysics
Book, October 2012, by Savoiu
• Essential Methods for Design Based Sample Surveys
Book, November 2010, by Pfeffermann
• Essential Statistical Methods for Medical Statistics
Book, November 2010, by Miller
• Essential Statistics, Regression, and Econometrics
Book, June 2011, by Smith
• Estimation Theory in Hydrology and Water Systems
Book, June 1993, by Nacházel
• Experimental Design Techniques in Statistical Practice
Book, January 1998, by Gardiner
• Explorations With Texas Instruments TI-85
Book, November 1992, by Kenelly
• Exploratory and Multivariate Data Analysis
Book, August 1991, by Jambu
• Foundations of Estimation Theory
Book, November 1987, by Kubacek
• Functional Equations in Applied Sciences
Book, November 2004, by Castillo
• Functional Inequalities Markov Semigroups and Spectral Theory
Book, February 2006, by Wang
• Fundamentals of Applied Probability and Random Processes
Book, October 2005, by Ibe
• Handbook of Applied Multivariate Statistics and Mathematical Modeling
Book, April 2000, by Tinsley
• Handbook of Dynamical Systems
Book, August 2002, by Hasselblatt
• Handbook of Dynamical Systems
Book, November 2005, by Katok
• Handbook of Dynamical Systems
Book, February 2002, by Fiedler
• Handbook of Econometrics
Book, November 1984, by Griliches
• Handbook of Econometrics
Book, November 2001, by Heckman
• Handbook of Econometrics
Book, December 1994, by Engle
• Handbook of Econometrics
Book, June 1986, by Intriligator
• Handbook of Econometrics
Book, November 1983, by Intriligator
• Handbook of Game Theory with Economic Applications
Book, November 1992, by Aumann
• Handbook of Game Theory with Economic Applications
Book, December 1994, by Aumann
• Handbook of Heavy Tailed Distributions in Finance
Book, March 2003, by Rachev
• Handbook of Latent Variable and Related Models
Book, February 2007, by Lee
• Handbook of Longitudinal Research
Book, October 2007, by Menard
• Handbook of Measure Theory
Book, October 2002, by Pap
• Handbook of Statistical Analysis and Data Mining Applications
Book, May 2009, by Nisbet
• Handbook of Statistics
Book, May 2005, by Rao
• Handbook of Statistics
Book, May 2013, by Rao
• Handbook of Statistics
Book, November 2007, by Rao
• Handbook of Statistics
Book, August 2012, by Rao
• Handbook of Statistics
Book, November 2005, by Dey
• Handbook of Statistics
Book, November 2006, by Rao
• Handbook of Statistics
Book, May 2012, by Rao
• Handbook of Statistics
Book, January 2004, by Balakrishnan
• Handbook of Statistics 10: Signal Processing and its Applications
Handbook, September 1993, by Bose
• Handbook of Statistics 11: Econometrics
Handbook, November 1993, by Maddala
• Handbook of Statistics 12: Environmental Statistics
Handbook, February 1994, by Patil
• Handbook of Statistics 13: Design and Analysis of Experiments
Handbook, November 1996, by Ghosh
• Handbook of Statistics 14: Statistical Methods in Finance
Handbook, December 1996, by Maddala
• Handbook of Statistics 15: Robust Inference
Handbook, May 1997, by Maddala
• Handbook of Statistics 16: Order Statistics: Theory & Methods
Handbook, July 1998, by Balakrishnan
• Handbook of Statistics 17: Order Statistics: Applications
Handbook, July 1998, by Balakrishnan
• Handbook of Statistics 18: Bioenvironmental and Public Health Statistics
Handbook, April 2000, by Sen
• Handbook of Statistics 19: Stochastic Processes: Theory and Methods
Handbook, January 2001, by Shanbhag
• Handbook of Statistics 1: Analysis of Variance
Handbook, January 1984, by Krishnaiah^†
• Handbook of Statistics 20: Advances in Reliability
Handbook, August 2001, by Balakrishnan
• Handbook of Statistics 21: Stochastic Processes: Modeling and Simulation
Handbook, February 2003, by Shanbhag
• Handbook of Statistics 22: Statistics in Industry
Handbook, June 2003, by Khattree
• Handbook of Statistics_29A
Book, September 2009, by Pfeffermann
• Handbook of Statistics_29B
Book, September 2009, by Pfeffermann
• Handbook of Statistics 2: Classification, Pattern Recognition and Reduction of Dimensionality
Handbook, March 1983, by Krishnaiah^†
• Handbook of Statistics 3: Time Series in the Frequency Domain
Handbook, February 1984, by Brillinger
• Handbook of Statistics 4: Nonparametric Methods
Handbook, May 1985, by Krishnaiah^†
• Handbook of Statistics 5: Time Series in the Time Domain
Handbook, August 1985, by Hannan^†
• Handbook of Statistics 6: Sampling
Handbook, July 1988, by Krishnaiah^†
• Handbook of Statistics 7: Quality Control and Reliability
Handbook, July 1988, by Krishnaiah^†
• Handbook of Statistics 8: Statistical Methods in Biological and Medical Sciences
Handbook, November 1991, by Rao
• Handbook of Statistics 9: Computational Statistics
Handbook, September 1993, by Rao
• Handbook of the Geometry of Banach Spaces
Book, May 2003, by Johnson
• Hybrid Censoring: Models, Methods and Applications
Book, October 2014
• Information-Theoretic Methods for Estimating of Complicated Probability Distributions
Book, August 2006, by Zong
• Introduction to Applied Statistical Signal Analysis
Book, December 2006, by Shiavi
• Introduction to Probability
Book, December 2013, by Roussas
• Introduction to Probability and Statistics for Engineers and Scientists
Book, January 2009, by Ross
• Introduction to Probability Models
Book, January 2014, by Ross
• Introduction to Probability Models
Book, December 2009, by Ross
• Introduction to Probability Models, ISE
Book, November 2006, by Ross
• Introduction to Robust Estimation and Hypothesis Testing
Book, December 2004, by Wilcox
• Introduction to Robust Estimation and Hypothesis Testing
Book, December 2011, by Wilcox
• Introductory Statistical Thermodynamics
Book, December 2010, by Dalarsson
• Introductory Statistics
Book, February 2010, by Ross
• Introductory Statistics for Engineering Experimentation
Book, August 2003, by Nelson
• Kohonen Maps
Book, June 1999, by Oja
• Linear Models
Book, October 1996, by Sidak
• Markov Chains
Book, May 1984, by Revuz
• Markov Processes
Book, October 1991, by Gillespie
• Markov Processes for Stochastic Modeling
Book, September 2008, by Ibe
• Mathematical Modelling
Book, August 2007, by Haines
• Mathematical Modelling
Book, June 2003, by Lamon
• Mathematical Modelling in Education and Culture
Book, May 2003, by Ye
• Mathematical Models for Society and Biology
Book, June 2013, by Beltrami
• Mathematical Models for Society and Biology
Book, December 2001, by Beltrami
• Mathematical Statistics with Applications
Book, March 2009, by Ramachandran
• Mathematical Tools for Applied Multivariate Analysis
Book, September 1997, by Chaturvedi
• Matlab
Book, June 2013, by Attaway
• Modelling and Mathematics Education
Book, November 2001, by Matos
• Modelling Stock Market Volatility
Book, November 1996, by Rossi
• Multiparametric Statistics
Book, September 2007, by Serdobolskii
• Multivariate Analysis
Book, January 2014, by Kent
• Multivariate Environmental Statistics
Book, January 1994, by Patil
• Neutron Fluctuations
Book, October 2007, by Pazsit
• New Trends in System Reliability Evaluation
Book, November 1993, by Misra
• Parameter Estimation and Inverse Problems
Book, January 2012, by Aster
• Practical Business Statistics
Book, January 2011, by Siegel
• Practical Business Statistics with STATPAD
Book, September 2011
• Practical Data Analysis in Chemistry
Book, July 2007, by Maeder
• Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications
Book, January 2012, by Miner
• Probabilistic Approach to Mechanisms
Book, January 1984, by Sandler
• Probabilities and Potential, A
Book, January 1979, by Dellacherie
• Probabilities and Potential, B
Book, January 1982, by Dellacherie
• Probabilities and Potential, C
Book, January 1988, by Dellacherie
• Probability & Measure Theory
Book, December 1999, by Ash
• Probability and Random Processes
Book, September 2004, by Childers
• Probability and Random Processes
Book, January 2012, by Childers
• Probability and Random Variables
Book, March 2005, by Beaumont
• Probability Models for Computer Science
Book, June 2001, by Ross
• Probability Theory with Applications
Book, February 1984, by Rao
• Providing Quality of Service in Heterogeneous Environments
Book, August 2003, by Charzinski
• Psychological Experiments on the Internet
Book, March 2000, by Birnbaum
• Quantum Probability
Book, August 1988, by Gudder
• R and Data Mining
Book, December 2012, by Zhao
• Random Matrices
Book, October 2004, by Lal Mehta
• Recent Advances and Trends in Nonparametric Statistics
Book, October 2003, by Akritas
• Regenerative Stochastic Simulation
Book, October 1992, by Shedler
• Regression Analysis
Book, March 2006, by Freund
• Regression Analysis for Social Sciences
Book, June 1998, by von Eye
• Simulation
Book, October 2012, by Ross
• Sobolev Spaces
Book, June 2003, by Adams
• Spectral Analysis and Time Series, Two-Volume Set
Book, October 1982, by Priestley
• Statistical Aspects of Quality Control
Book, October 1996, by Cyrus
• Statistical Bioinformatics
Book, January 2010, by Mathur
• Statistical Data Analysis for Ocean and Atmospheric Sciences
Book, November 1994, by Thiebaux
• Statistical Design - Chemometrics
Book, January 2006, by Bruns
• Statistical Mechanics
Book, February 2011, by Pathria
• Statistical Method for Meta-Analysis
Book, July 1985, by Hedges
• Statistical Methods
Book, July 2010, by Freund
• Statistical Methods for Physical Science
Book, November 1994, by Stanford
• Statistical Methods for Social Scientists
Book, January 1977, by Rossi
• Statistical Methods in Food and Consumer Research
Book, November 2008, by Gacula, Jr.
• Statistical Methods in Longitudinal Research
Book, October 1990, by von Eye
• Statistical Methods in Longitudinal Research
Book, October 1990, by von Eye
• Statistical Methods in the Atmospheric Sciences
Book, May 2011, by Wilks
• Statistical Methods in the Atmospheric Sciences, 59
Book, January 1995, by Wilks
• Statistical Methods in Water Resources
Book, April 1992, by Helsel
• Statistical Optimization for Geometric Computation: Theory and Practice
Book, March 1996, by Kanatani
• Statistics for Physical Sciences
Book, January 2012
• Statistics in Medicine
Book, August 2005, by Riffenburgh
• Statistics in Medicine
Book, July 2012, by Riffenburgh
• Stochastic Analysis
Book, October 1984, by Itô
• Stochastic Control by Functional Analysis Methods
Book, January 1982, by Bensoussan
• Stochastic Differential Equations and Diffusion Processes
Book, January 1981, by Ikeda
• Stochastic Methods in Economics and Finance
Book, December 1981, by Malliaris
• Stochastic Modelling in Process Technology
Book, July 2007, by Dehling
• Stochastic Models in Queueing Theory
Book, October 2002, by Medhi
• Stochastic Wave Propagation
Book, May 1985, by Sobczyk
• Student Solutions Manual for Introductory Statistics
Book, October 2005, by Ross
• Teletraffic Engineering in the Internet Era
Book, August 2001, by de Souza
• The Computation of Style
Book, June 1982, by Kenny
• The Data Analysis Handbook
Book, September 1994, by Frank
• Theory and Applications of Fractional Differential Equations
Book, January 2006, by Kilbas
• Theory of Rank Tests
Book, March 1999, by Sidak
• The Single Server Queue
Book, January 1982, by Cohen
• The Spectral Analysis of Time Series
Book, May 1995, by Koopmans
• The Theory of Gambling and Statistical Logic
Book, November 2012, by Epstein
• The Theory of Gambling and Statistical Logic
Book, September 2009, by Epstein
• Visualization of Categorical Data
Book, January 1998, by Blasius
• Wave Mechanics for Ocean Engineering
Book, July 2000, by Boccotti | {"url":"http://www.elsevier.com/books/subjects/mathematics/statistics-and-probability","timestamp":"2014-04-16T11:03:04Z","content_type":null,"content_length":"71916","record_id":"<urn:uuid:5d0312b2-bcad-42e5-bc45-6f978330e73b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Illinois PER Interactive Examples: v versus t
Website Detail Page
written by Gary Gladding
published by the University of llinois Physics Education Research Group
This interactive homework problem shows a graph of velocity versus time for a moving object and the student is asked to find the final position. The problem is accompanied by a sequence
of questions designed to encourage critical thinking and engage students beyond traditional textbook problems. The questions are user-activated, and carefully designed to guide beginning
students through conceptual analysis before attempting the mathematics.
This item is part of a larger collection of interactive problems developed by the Illinois Physics Education Research Group.
Editor's Note: Physics education research shows that students often enter college courses with limited understanding of the meaning behind velocity vs. time graphs and position vs. time
graphs. This particular tutorial will help students realize that velocity is the time derivative of distance and acceleration is the time derivative of velocity, an important distinction
for understanding the physics of motion.
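A minimal Python sketch of the idea the tutorial builds on (the velocity samples below are made up, not taken from the tutorial): the final position is the initial position plus the signed area under the v-t graph, which for straight-line segments is just the trapezoid rule.

# hypothetical piecewise-linear velocity record
times      = [0.0, 2.0, 5.0, 8.0]   # s
velocities = [0.0, 4.0, 4.0, -2.0]  # m/s

x = 0.0  # initial position, m
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    # signed trapezoid area under this segment of the v-t graph
    x += 0.5 * (velocities[i] + velocities[i - 1]) * dt
print(f"final position: {x:.2f} m")   # 19.00 m with these numbers

Note that the last segment contributes a partly negative area, which is exactly the distinction between displacement and distance traveled that such tutorials emphasize.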
Subjects: Classical Mechanics - Motion in One Dimension (Acceleration, Position & Displacement, Velocity); Education Practices - Active Learning (Problem Solving)
Levels: High School, Lower Undergraduate
Resource Types: Collection; Instructional Material (Activity, Best practice, Problem/Problem Set, Tutorial)
Appropriate Courses: Physical Science, Physics First, Conceptual Physics, Algebra-based Physics, AP Physics
Categories: Activity, Assessment, New teachers
Ratings: (none listed)
Intended User:
Access Rights:
Free access
© 2006 University of Illinois Physics Education Research Group
graph analysis, kinematics, motion, position vs time, problem solving, socratic questions, v-t graph, velocity
Record Cloner:
Metadata instance created January 25, 2008 by Alea Smith
Record Updated:
March 12, 2013 by Lyle Barbato
Last Update when Cataloged: June 16, 2006
Other Collections:
velocity graphs
Author: KJ
Posted: February 4, 2011 at 11:01AM
use link for review of velocity graph
AAAS Benchmark Alignments (2008 Version)
2. The Nature of Mathematics
2A. Patterns and Relationships
• 9-12: 2A/H1. Mathematics is the study of quantities and shapes, the patterns and relationships between quantities or shapes, and operations on either quantities or shapes. Some of
these relationships involve natural phenomena, while others deal with abstractions not tied to the physical world.
2C. Mathematical Inquiry
• 9-12: 2C/H3. To be able to use and interpret mathematics well, it is necessary to be concerned with more than the mathematical validity of abstract operations and to take into account
how well they correspond to the properties of the things represented.
4. The Physical Setting
4F. Motion
• 6-8: 4F/M3a. An unbalanced force acting on an object changes its speed or direction of motion, or both.
9. The Mathematical World
9C. Shapes
• 9-12: 9C/H1. Distances and angles that are inconvenient to measure directly can be found from measurable distances and angles using scale drawings or formulas.
12. Habits of Mind
12B. Computation and Estimation
• 6-8: 12B/M3. Calculate the circumferences and areas of rectangles, triangles, and circles, and the volumes of rectangular solids.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.1 Make sense of problems and persevere in solving them.
Geometry (K-8)
Solve real-world and mathematical problems involving area, surface area, and volume. (6)
• 6.G.1 Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these
techniques in the context of solving real-world and mathematical problems.
Solve real-life and mathematical problems involving angle measure, area, surface area, and volume. (7)
• 7.G.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes,
and right prisms.
Understand and apply the Pythagorean Theorem. (8)
• 8.G.7 Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions.
High School — Functions (9-12)
Interpreting Functions (9-12)
• F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
High School — Geometry (9-12)
Expressing Geometric Properties with Equations (9-12)
• G-GPE.7 Use coordinates to compute perimeters of polygons and areas of triangles and rectangles, e.g., using the distance formula.
This resource is part of 3 Physics Front Topical Units.
Kinematics: The Physics of Motion
Unit Title:
This is a web-based homework problem that helps students understand velocity vs. time graphs (v vs. t). A sequence of user-activated questions guides beginners through a full conceptual
analysis before introducing the math. Based on PER principles (physics education research).
Link to Unit:
Kinematics: The Physics of Motion
Unit Title:
Motion in One Dimension
This is a web-based homework problem that helps students understand velocity vs. time graphs (v vs. t). A sequence of user-activated questions takes beginners through a full conceptual
analysis before introducing the math. It was developed using principles of physics education research. Appropriate for gifted/talented middle school students.
Links to Units:
Kinematics: The Physics of Motion
Unit Title:
Motion in One Dimension
This is a web-based homework problem that helps students understand velocity vs. time graphs (v vs. t). A sequence of user-activated questions guides beginners through a full conceptual
analysis before introducing the math. Based on PER principles (physics education research).
Link to Unit:
Illinois PER Interactive Examples: v versus t:
Is Part Of Illinois PER: Interactive Examples
A link to the full collection of interactive homework tutorials by the same author. Included are problems designed for a variety of physics courses, including preparatory, algebra-based,
and calculus-based. Topics include mechanics, light, electricity and magnetism, thermal physics, waves and quantum mechanics, and modern physics.
relation by Caroline Hall | {"url":"http://www.thephysicsfront.org/items/detail.cfm?ID=6384","timestamp":"2014-04-17T21:31:35Z","content_type":null,"content_length":"58357","record_id":"<urn:uuid:d8866ee6-739d-4bdc-979d-69e128ed79dc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Ben Andrews
BSc PhD ANU
Fellow of the Australian Academy of Science
Fellow of the American Mathematical Society
Fellow of the Australian Mathematical Society
Senior Fellow
Centre for Mathematics and its Applications
Office: Room 2131B, John Dedman Mathematical Sciences Building
Telephone: 6125 3458
Fax: 6125 5549
email: Ben.Andrews@anu.edu.au
I am a member of the Applied and Nonlinear Analysis Research Group of the Mathematical Sciences Institute.
Research interests:
Differential Geometry: I am interested in many areas of differential geometry, including
□ geometry of curves, surfaces and hypersurfaces;
□ Riemannian geometry;
□ effects of local curvature conditions on global geometry and topology;
□ geodesics, minimal surfaces, and related problems;
□ isoperimetric inequalities;
□ differential geometry associated with conformal transformations, projective transformations, affine transformations and other groups;
□ general relativity and semi-Riemannian geometry;
□ geometry of convex bodies;
□ Finsler geometry.
Partial Differential Equations:
□ Partial differential equations applied to differential geometry
☆ Heat flows applied to prove results in global differential geometry;
○ Moving curves, surfaces and hypersurfaces;
○ Deforming Riemannian metrics or connections;
○ Smoothing effects of heat flows;
☆ Minimal surfaces, surfaces of prescribed curvature, and related problems;
☆ Harmonic functions and harmonic maps, related variational problems;
□ Fully nonlinear elliptic and parabolic equations
☆ regularity theory;
☆ qualitative and asymptotic behaviour
□ Nonlinear wave equations, general relativity;
□ Eigenvalue problems arising in geometry;
□ Integrable systems arising in geometry.
□ Tumbling stones and their eventual shapes;
□ Image processing;
□ Moving interfaces, phase changes, and related problems;
□ Reaction-Diffusion systems in mathematical biology;
Personal interests
ACT Rogaining association
Orienteering ACT
ACT Hockey Association | {"url":"http://maths-people.anu.edu.au/~andrews/","timestamp":"2014-04-16T04:11:47Z","content_type":null,"content_length":"5932","record_id":"<urn:uuid:ae1ec931-798d-4de9-946e-7144c5d66b65>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pi and Polygons
Date: 03/14/99
From: Anonymous
Subject: Pi and Polygons
A simple iterative derivation inscribes an n-sided polygon inside a
circle until, in the limit, its perimeter approaches pi (the
circumference of a circle of unit diameter). Is there a formula to
find the angle of a polygon given that it has n sides? This formula
is obviously convergent: take the limit as theta -> 180 degrees (or
pi radians), and the infinity*0 form gives pi?
Date: 03/14/99
From: Doctor Ken
Subject: Re: Pi and Polygons
This method of finding successive approximations to Pi is one of the
oldest methods known, because it is one of the easiest to understand,
and you can draw nice pictures to explain it.
If a regular polygon has n sides, then we can draw lines from its
center to all the vertices, and these lines will divide the pie-shaped
picture into n wedges. Each wedge's central angle will have a measure
of 360/n degrees. So, we can draw this picture of one wedge, where C
is at the center of the polygon:
           A
          /|
         / |
        /  |
       /   |
    C /____| D
       \   |
        \  |
         \ |
          \|
           B
Segment AB here is one side of the original regular polygon.
Since angle ACB is 360/n, angle ACD is 180/n. Therefore, if the length
of AC is 1/2, the length of AD is sin(180/n)/2. Therefore, the length
of AB is sin(180/n), and the perimeter of the entire polygon is

     n sin(180/n).
So, you are correct: when you let n go toward infinity, sin(180/n)
will tend towards zero. Since we know that a circle whose radius is
1/2 has a circumference of Pi, the 0 and infinity balance each other
in the limit.
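In modern notation (radians), the way the 0 and the infinity balance is
the standard limit $\lim_{n\to\infty} n\sin(\pi/n) =
\lim_{n\to\infty} \pi\,\frac{\sin(\pi/n)}{\pi/n} = \pi$.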
Of course, there is a practical problem with all of this. In order to
calculate the sines, you need to know a thing or two about Pi. It is
true that for some special angles like 30, 45, and 60 degrees (and
their sums and differences, etc.) you can write down an explicit
elementary expression for their sines, but this is not true of some other
angles like 180/11.
So, how do we calculate the perimeters without knowing Pi already? We
need to find some trick, or we need to find some other method entirely
of approximating Pi. And that is where a great part of the glorious
history of mathematics starts.
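One classical trick is Archimedes' side-doubling. Here is a minimal
Python sketch of it (floating-point arithmetic stands in for the exact
square-root work he did by hand): start from a hexagon inscribed in a
circle of unit diameter, whose side is exactly 1/2, and let the
half-angle identity produce each 2n-gon's side from the n-gon's side,
with no prior knowledge of Pi anywhere.

from math import sqrt

# side-doubling for a circle of radius 1/2 (circumference = Pi);
# the inscribed hexagon has side length exactly 1/2
n, s = 6, 0.5
for _ in range(11):
    print(f"{n:6d}-gon perimeter: {n * s:.10f}")
    # half-angle identity sin(t/2)^2 = (1 - cos t)/2 with
    # cos t = sqrt(1 - sin(t)^2), rearranged to avoid cancellation
    s = sqrt(s * s / (2 * (1 + sqrt(1 - s * s))))
    n *= 2

The 6144-gon at the end of this loop already agrees with Pi to about
seven decimal places.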
For more about Pi, its role in history, and the various attempts to
know it better, check out the excellent book _A History of Pi_ by Petr
Beckmann. People have done some pretty clever things to get to know it.
- Doctor Ken, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/53912.html","timestamp":"2014-04-17T15:56:44Z","content_type":null,"content_length":"7604","record_id":"<urn:uuid:b0479839-0061-4dd4-88ab-f5f18f9cebd2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Important Messages from the Instructors
Sunday, November 28th, 2010
Your presentation must be a white board lecture.
Tuesday, November 23rd, 2010
Preparing Your Presentation
Your presentations should consist of two parts: a background section and a work section. The former should bring across the essence of the chosen paper; the latter is about your work, your model,
and the insights you derived from your work. The transition between the two sections is about what questions the paper left you with and which parts of the paper you decided not to include in
your work/model and why.
Your talk should fit into the normal 30-minute conference slot. Do not think of the two sections of your presentation as two halves, i.e., 15 minutes each. Indeed, it is unlikely that you will
dedicate 15 minutes to each with a bridge segment of 0 seconds. Your challenge is to figure out how to allocate the time on the two sections and the bridge.
The key to organizing a talk is to develop a mental model of the audience. You know what your classmates know about PL. Base your talk on that. For some papers, you need to bring in additional
background, e.g., how the http protocol works. To assume less background in your audience is a better mistake to make than to assume too much.
If you are presenting with a partner, you are welcome to split the presentation in any way you want between the two of you. Each of you should speak for half the time (some 15 minutes).
Finally, when you give conference talks, you must answer questions. In PPL, the audience will be allowed to ask questions at any point, and you will respond in turn. In other words, each of you
must be ready to answer all questions not just questions about your part.
Sunday, November 21st, 2010
Problem 9(1) was overly ambitious for a plain homework problem. I have simplified it.
Sunday, November 14th, 2010
After grading the solutions to problem set 8, I thought I should explain how you might have come up with a clean design (some did, some didn't).
For problem 5/2, you added recursive functions to Iswim like this:
(rec x (lambda (x) e) in e)
Since problem 8/1 describes
(loop x_loop (x_init e_init) e_body)
as a construct that sets up a recursive function and calls it on e_init, it is possible to implement it with rec like this:
(rec x_loop (lambda (x_init) e_body) in (x_loop e_init))
Also in class I had described the solution for rec as a use of Y_v, which would get you this much:
((Y_v (lambda (x_loop) (lambda (x_init) e_body))) e_init)
Turning this last insight into an instruction for the CEK machine is straightforward:
< (loop x_loop (x_init e_init) e_body), rho, k >
|-->
< ((Y_v (lambda (x_loop) (lambda (x_init) e_body))) e_init), rho, k >
Alternatively, you could use the fixpoint equation for Y_v and step-for-step derive CC, CK, and CEK instructions. Either way you come up with a simple and elegant design for loop instructions on
the CEK machine.
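For reference, the fixpoint equation meant here, stated from general knowledge of the call-by-value fixed-point combinator since the course's exact notation may differ, is $Y_v\,f = f\,(\lambda x.\,((Y_v\,f)\,x))$.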
The questions concerning loop should have clarified that x_loop is an explicitly named continue statement.
As for the introduction of the break, the minimum to recognize is that it demands some way to mark the entrance to the loop on the control stack. Popping the stack to that point gives you dynamic
breaks; creating a fresh marker for the stack and associating it with the loop's name gives you properly scoped breaks. The problem is that the name is now used for both the recursive function and the marker on the stack.
Solving this problem required some creativity and was worth five out of 40 points (or some 12%) on this set.
Friday, November 12th, 2010
Problem set 10 is in final form.
The due date for problem set 9 is now 11/23 to give you some additional time for the project.
Here is the promised sketch of a solution for the break feature:
[--> ((loop x_L (x e_0) e_L) rho k)
(e_0 rho (init x_L x e_L rho k))]
[--> (v rho_v (init x_L x e_L rho k))
(e_L (extend (extend rho x_L (LL x_tag (lambda (x) e_L) rho)) x (v rho_v))
(loop x_tag k))
(fresh x_tag)]
[--> ((break x_L e_exn) rho k) (e_exn rho (cut k x_tag))
(where (LL x_tag v rho_v) (lookup rho x_L))]
You should be able to guess at the modification to continuations and closure values.
Wednesday, November 10th, 2010
The solutions for problem set 7 are graded. You may pick up your assignment between 4:00pm and 5:30pm in case you need to see the feedback before Friday.
The interpreter for B seems to be a small problem for everyone, so I wrote down two solutions:
;; the basic B language
(define-language B
(e true
false
(* e e))
(v true
false))
Here is an interpreter in plain Racket:
;; B/e -> E/v
;; evaluate programs from B according to rules
(define (eval-b-racket t)
[(eq? 'true t) t]
[(eq? 'false t) t]
(define lft (eval-b-racket (second t)))
[(eq? 'true lft) 'true]
[(eq? 'false lft) (eval-b-racket (third t))])]))
And here is one in mostly Redex:
;; B/e -> E/v
;; evaluate programs from B according to rules
(define (eval-b-racket t)
(define-metafunction B
evalB : e -> v
[(evalB v) v]
[(evalB (* e_1 e_2)) true
(side-condition (eq? (term (evalB e_1)) (term true)))]
[(evalB (* e_1 e_2)) (evalB e_2)])
(term (evalB ,t)))
Tuesday, November 2nd, 2010
Here is my primary reaction to your project memos.
A project memo should consist of two distinct parts: a summary of the background and a goal statement. Since the goal of the course project is to digest the content of an existing research paper
via the construction of a Redex model, the background summary should recapitulate in your own words what you currently think the paper reports. The goal statement should specify with respect to
this paper summary, what you wish to understand in detail.
The (boring but) safest way is to explicitly label the parts of your memo as "summary" and as "goal" statement and to present them in that order. If you are confident about your writing skills,
you may naturally deviate from this rudimentary presentation and mix-and-match pieces.
Avoid "weasel words" and "weasel paragraphs" in both the summary and the goal statement. They simply indicate that you are insecure or lack understanding. Instead, express what you are insecure
about and make it a part of the goal.
Use complete sentences for the summary and the goal statement. It is acceptable to have an enumeration of phrases, but when you use this technique make sure to connect the phrases to the context
so that you get complete sentences.
For your project report, follow the same outline but replace the goal statement with two parts: what you have learned and what you still fail to grasp from the paper.
Saturday, October 30th, 2010
Most solutions for problem 6(1) suffer from basic organization problems. Here is a Racket version in a syntax according to the first two weeks of this course:
;; ISWIM/e -> ISWIM/answer
(define (evaluate program)
  (unload-final-state
   (evaluate-according-to-machine
    (create-initial-state program))))

;; ISWIM/C??State -> ISWIM/C??State
(define (evaluate-according-to-machine s)
  (if (final? s)
      s
      (evaluate-according-to-machine (transition-function s))))
(No, an experienced Racketeer would not write quite this code.)
Here is a BraceLanguage-style piece of code:
ISWIM/answer evaluate(ISWIM/e program) {
  State s = createInitialState(program);
  while (!isFinalState(s)) {
    s = transitionFunction(s);
  }
  return unloadFinalState(s);
}
From here on, you just develop and present the transition function.
Of course, after an hour of covering random checking in Redex, I did expect some attempt to relate the evaluator program to the Redex model from which you started.
Problem 6(2) called for a revision of a REDEX model. If you wrote an evaluator in Racket instead, I reduced 10 points and graded out of these 10 points, if there was an attempt to deal with
full-fledged define and the rest of the evaluator looked 'decent'.
Problem 6(3) must have stumped people in ways that I couldn't foresee. Since only one pair got this 100% correct and all others failed completely, I have graded the problem set on a 40-point
basis and have assigned the successful pair 10 extra points.
(I am grateful that some people came to see me and asked me questions. BUT, I think I heard the question I wanted to hear and the answers I gave were answers that you thought related to the
questions that were asked.)
The lemma requested in problem 6(3) came up as a "simple" step during the proof of consistency and Church-Rosser. After explaining why it should be true (not provable) and seeing some doubts in
your faces, I said "this is a good problem to add to the homework" and I did so. Given that the lemma is needed to prove consistency and CR, you can't use the latter to prove it. Keep the context
in mind!
The Claim:
e e_1 e_3 e_5 e'
\ / \ / \ / ... .... ... \ /
\ / \ / \ / \ /
e_0 e_2 e_4 e_n
(where each pair of subsequent lines is
a pair of -->> and <<-- or
a pair of <<-- and -->> relations)
The Example: When you're stumped, make examples. The design recipe tells you so.
\x.((\y.y)2+1) = \x.1+2
Here is a bunch of proof steps:
[1] (\x.(\y.y)2+1) = (\x.(\y.y)3) by -> base
[2] (\x.(\y.y)3) = (\x.3) by -> base
[3] (\x.(\y.y)2+1) = (\x.3) by [1]/[2] transitivity
[4] (\x.2+1) = (\x.3) by -> base
[5] (\x.3) = (\x.2+1) by [4] symmetry
[6] (\x.1+2) = (\x.3) by -> base
[7] (\x.3) = (\x.1+2) by [6] symmetry
[8] (\x.(\y.y)2+1) = (\x.2+1) by [3]/[5] transitivity
[9] (\x.(\y.y)2+1) = (\x.3) by [8]/[4] transitivity
[A] (\x.(\y.y)2+1) = (\x.1+2) by [9]/[7] transitivity
(now this is not the only possible way to use the def of =)
Here are the same proof steps arranged as the desired wave:
(\x.(\y.y)2+1) (\x.2+1) (\x.1+2)
\ -> / \ /
(\x.(\y.y)3) / <- \ -> / <-
\ -> / \ /
(\x.3) (\x.3)
The Proof:
Assume e = e'.
Prove that there is a sequence e_0 .. e_n of terms/expressions that are
chained via a pair of -->> and <<-- or a pair of <<-- and -->> relations.
The = relation is the symmetric-transitive-reflexive closure of the
reduction relation -->.
We proceed by induction on the size of the argument that establishes e = e'
and by cases on the last step
* reflexivity
e = e' because e is identical to e'
pick the empty chain
* transitivity
e = e' because e = e" and e" = e'
by inductive hypothesis, there exist chains
e, ..., e_i, e"
e", e_j, ..., e'
that satisfy the desired 'wave' criteria
if e_i -->> e" and e" -->> e_j, pick e_0, ..., e_i, e_j, ... e'
if e_i -->> e" and e" <<-- e_j, pick e_0, ..., e_i, e", e_j, ..., e'
ditto for e_i <<-- e" and e" -->> e_j
* symmetry
e = e' because e' = e
by inductive hypothesis, there exists a chain; reverse and use it
* base
e = e' because e -> e'
pick the chain e, e' (yes, degenerate chains are fine, too)
Friday, October 29th, 2010
The partner switch is effective as of today.
Problem 7(2) calls for an evaluator for language B. Implement this evaluator in RACKET.
Problem 7(2) also calls for an evaluator based on the CK machine for language B. Implement this evaluator in REDEX.
Finally, problem 7(2) demands a thorough comparison between the two. Use redex-check to implement this check. NOTE: random testing is making an appearance in a wide spectrum of languages, and it
is good for you to explore it at least once in your grad student career.
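Here is a hedged sketch of such a comparison; eval-b-ck is a made-up name for your CK-based evaluator:
(redex-check B e
  (equal? (eval-b-racket (term e))
          (eval-b-ck (term e))))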
Wednesday, October 27th, 2010
Jose points out that the pair of mutually recursive functions oddP and evenP should be written like this:
(define evenP (lambda (x) (if0 x 1 (oddP (- x 1)))))
(define oddP (lambda (x) (if0 x 0 (evenP (- x 1)))))
It avoids the infinite loop for
(oddP 0)
I will be happy if you get either version to work.
Tuesday, October 26th, 2010
Someone asks:
We don't know if we should model the first problem using Racket only (i.e. without using Redex). Can we use a combination of Racket and Redex?
The answer is no, the evaluator must be in plain Racket.
Someone else asks:
Is the transition relation for the CC machine really a total function? It doesn't seem to work on states such as < v , E > ?
I meant to say that the transition relation is a total function for proper expressions, i.e., non-values. Since states such as the ones mentioned in the question are states too, the relation should
be defined on them.
Monday, October 25th, 2010
I have released a complete set of problems for the rest of the semester so that you can calibrate how to allocate your efforts.
Just in case you have already read ahead, I had to change the statements for problem set 7 to fit into the overall plan.
Email me if you discover inconsistencies/typos/etc.
Saturday, October 23rd, 2010
I have modified the project page, mostly to include a grading guide.
I graded solutions for problem 5(4) liberally and the scores are all over the map. The purpose of the problem was to figure out the essence of a standardization theorem---that it is about an
algorithm (a function) for reducing programs to values. The goal here isn't to mimic and adapt proofs as they are but to determine what is really needed. A proof follows.
;; -----------------------------------------------------------------------------
Problem 5.4:
def. language:
b = tt | ff | b * b
def. notion of reduction:
tt * b r tt
ff * b r b
def. general context:
C = [] | C * b | b * C
def. reduction relation:
C[tt * b] -> C[tt]
C[ff * b] -> C[b]
->> is the transitive reflexive closure of ->
def. evaluation
eval-b(b) = tt iff b ->> tt
eval-b(b) = ff iff b ->> ff
theorem 2.6 (in book): eval is a total function, i.e., for all b, (b,*) in eval
;; -----------------------------------------------------------------------------
The reduction relation is a proper relation, meaning for any given b, there are
usually many different reduction sequences from b to tt or ff. The purpose of a
standard reduction theorem is to identify one of these paths as the canonical
one and to provide an algorithm for detecting this path.
def. evaluation context
E = [] | E * b
def. standard reduction
E[tt * b] |--> E[tt]
E[ff * b] |--> E[b]
|-->> is the transitive reflexive closure of |-->
def. standard evaluation
eval-s(b) = tt iff b |-->> tt
eval-s(b) = ff iff b |-->> ff
THEOREM: eval-b = eval-s
Proof: Show that
(b,tt) in eval-b iff (b,tt) in eval-s [i]
(b,ff) in eval-b iff (b,ff) in eval-s [ii]
Wlog, pick [i] and show both directions.
;; -----------------------------------------------------------------------------
R2L: assume (b0,tt) in eval-s (in excessive detail)
.: b0 |-->> tt (by def of eval-s)
.: b0 |--> b1 |--> b2 |--> ... |--> tt (by def of trans-refl. closure)
.: E_0[b0_r] |--> E_1[b1_r] |--> E_2[b2_r] |--> ... |--> tt (by def of |-->)
for some E_i, bi_r
.: E_0[b0_r] --> E_1[b1_r] --> E_2[b2_r] --> ... --> tt (all Es are Cs)
.: b0 --> b1 --> b2 --> ... --> tt (by def of -->)
.: b0 -->> tt (by def of trans-refl closure)
.: (b0,tt) in eval-b (by def of eval-b)
;; -----------------------------------------------------------------------------
L2R: assume (b0,tt) in eval-b
.: b0 -->> tt (by def of eval-b)
.: b0 |-->> tt or b0 |-->> ff (by lemma below)
.: the second result is impossible, as we show by contradiction.
assume b0 |-->> ff
.: b0 -->> ff (by R2L direction of proof)
but this is a contradiction to the global assumption.
.: so this nested assumption can't work.
Hence, b0 |-->> tt, because it is the only possibility left
.: (b0,tt) in eval-s
Lemma: for all b, b |-->> tt or b |-->> ff
(1) by induction on the structure of b, b is either tt, ff, or |-->
reducible. Assume b is b1 * b2. Either b1 is tt or ff, in which case
E = []. Or by induction hypothesis, there is a b1_s such that b1 |--> b1_s. By definition
of E, b1 * b2 |--> b1_s * b2. QED
(2) by definition of |-->, if b1 |--> b2, then b2 has fewer *s than b1.
Putting (1) and (2) together, every b starts a finite reduction sequence of
as many steps as there are *s in b and the end is either tt or ff. QED
Friday, October 22nd, 2010
I have marked a part of problem 6(2) as challenge so that you can allocate your time properly.
Rather than provide you with a library for your work, I decided to supply source code that you can use for evaluating S-expressions as Racket programs:
;; ISWIM/e -> Any
;; evaluate an S-expression that represents an ISWIM program in RACKET
(define (eval-racket e)
(define raw-result
(with-handlers ((exn:fail? (lambda (x) 'STUCK)))
(eval e racket-ns)))
(if (procedure? raw-result) 'function raw-result))
;; a name space for evaluating Racket programs
(define racket-ns
(parameterize ([current-namespace (make-base-empty-namespace)])
(namespace-require 'racket)
The one serious problem left with this code is that it goes into an infinite loop for ISWIM programs that have infinite reduction sequences. You have seen how to limit reduction sequences in
Redex; to limit evaluations in Racket, see
call-with-limits and with-limits
in the Racket sandbox library.
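For instance, a time-limited wrapper could look like this (a sketch that assumes the eval-racket definition above is loaded):
(require racket/sandbox)

;; ISWIM/e -> Any
;; like eval-racket, but give up after 5 seconds of evaluation
(define (eval-racket/limited e)
  (with-handlers ((exn:fail:resource? (lambda (x) 'TIMEOUT)))
    (call-with-limits 5 #f (lambda () (eval-racket e)))))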
Start looking for a partner with whom you will do problem sets 7 through 10.
I will write up grading guidelines for the project over the weekend. These guidelines are not a checklist for getting a perfect grade, but a tool for understanding what a research project is like
and how to allocate your time for the last six weeks of the semester.
Tuesday, October 19th, 2010
Someone asked:
Is the ISWIM0 term (if0 ok (* 3 4) (^ 3 4)) considered stuck? Here ok is some variable name. The answer is that -- as defined in the text book -- programs are closed
terms, i.e., terms without free variables.
Upon request, I have slightly modified problems 2 and 3 for assignment 5 so that you may either extend Lambda0 or the preceding solutions.
Monday, October 18th, 2010
This week's lectures will use the ISWIM flavor of problem 4(3). I have therefore posted a solution on-line so that you can explore the ideas from lecture interactively without having to worry
about accidental mistakes in your language and reduction systems.
Robby Findler just presented Redex to the FOOL Workshop co-located with OOPSLA. His presentation is on-line as a pdf and I encourage you to flip through the slides over the next week or so. His
FOOL Keynote complements my lectures with a slightly different and extended example.
Saturday, October 9th, 2010
Please stay on top of your readings.
After grading the third problem set, I realized that we introduced 'contracts' for metafunctions after we published the book. So here is a way to specify the depth function with a header that
ensures proper usage:
;; determine the depth of a context
(define-metafunction L
depth : C -> number
[(depth hole) 0]
[(depth (C * e)) ,(+ 1 (term (depth C)))]
[(depth (e * C)) ,(+ 1 (term (depth C)))])
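A usage example, assuming tt is an e in L:
(term (depth ((hole * tt) * tt)))  ;; ==> 2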
Naturally, I did not subtract points for missing such a contract. An informal contract suffices.
Wednesday, October 6th, 2010
Someone asks about problem 0, problem set 3:
this will not always return the first irreducible form of any arbitrary t with respect to any arbitrary relation R. If R applied to t leads in ANY step to an infinite chain of computations,
apply-reduction-relation* will never return.
Consider the problem in the context of problem set 3 only. Use the purpose statement to explain why your answer solves the problem or why it only approximates a solution.
Tuesday, October 5th, 2010
Someone requested a posting of the Redex model of addition from Friday (1 Oct 2010). It is now on-line.
Some of you still haven't specified a project partner. Also don't forget the due date for choosing a paper.
Sunday, October 3rd, 2010
A draft of problem set 4 is now available.
The general information about the course has been revised to specify a grade policy.
Saturday, October 2nd, 2010
Here is a solution for problem 2 from the second problem set. Note that the problem statement contained the question "how would you prove that your implementation is equivalent to the specified
semantics" and that truly called for a proof idea, not a full proof (difficult!). The challenge of sketching a proof was also supposed to make you experiment with alternative ways of designing
this function.
The idea of proving the correctness of a programming language implementation is no longer far fetched. Indeed, some projects tackle the entire implementation stack: hardware, os, compiler. The
purpose of such projects is to create highly reliable -- and especially easily defendable -- systems that are neither prone to random crashes nor easy attacks.
Friday, October 1st, 2010
Partners for course project: Choose a partner for your course project. One of you must send me an email by Tuesday, October 5 (before class) and cc the other one to let me know of your choice.
Choice of paper for course project: Read the abstracts and introductions of PL papers that might interest you. One of the partners must send me email to inform me of the chosen paper (with a cc
to the partner). The papers on the project web page will be assigned on a first-come/first-serve basis.
Partners for problem sets 3 and 4: Choose a partner for your next two problem sets. One of you must send me an email by Tuesday, October 5 (before class) and cc the other one to let me know of
your choice. This task is distinct from the first one.
Handing in solutions: Please hand in a paper copy of your complete solutions at the beginning of class, and email me an attachment of Racket files for all problems that call for programming.
Friday, October 1st, 2010
Most of you mangled problem 1.4 badly. Here is a solution. Read it, study it, experiment. You should understand this solution properly.
Just because someone shows you hashtables during a lecture does not mean that you should use hashtables in your programs. My intention was to see whether you thought through the problem or
whether you are still in the mode of copying existing programs and changing them until they (kind of) do what you want; your use of hashtables informs the grader into which class of program
designer you fall.
Saturday, September 25th, 2010
The first draft of problem set 3 has been posted.
Please choose a new and distinct partner for problem set 3. If you can't find a new partner, speak up at the beginning of class on 10/1. If you hand in solutions for problem set 3 with the same
partner, you will get no credit.
It is also time to choose a project partner and topic. For the selection of a project topic, read the abstracts and introductions of (some, all) papers (whose titles, looks, authors) you find
intriguing. Ask yourself whether you understand the problem that the paper addresses and whether you find the problem interesting.
This is also how you select initial research topics and advisors; eventually you end up with a dissertation topic that way.
The schedule now comes with reading assignments so that you can prepare yourself for the lectures.
Tuesday, September 21st, 2010
The lecture notes contain the source files for accumulator and intermediate-data structure programming. I will add notes tomorrow.
Homework #2 is final. If you're ready, tackle it.
In lecture I used the notation
to denote the transitive closure. Someone reminded me after the lecture that the conventional notation is
(Even if the notation is just to say "this is a different relation", I should stick to convention, because these conventions are used widely -- even in the Redex book.)
Tuesday, September 14th, 2010
The first lecture is now available on-line. See tab on the left.
Sunday, September 12th, 2010
The web pages are now set up. Most importantly, the first assignment is released, and I urge you to get started immediately. It is extremely labor intensive, making up for significant gaps in
your undergraduate education. Other assignments will be released in a timely fashion, matching the progress we make in lecture.
Also, the description and instructions for the project are out. You should immediately start reading abstracts and introduction to determine which paper(s) appeal to you. I will explain some
points in class again, and you will have a chance to ask questions.
Finally, check the schedule to get an idea of what we cover and how quickly. Due to assignment 1, the reading load in this course will be substantial.
Friday, September 10th, 2010
I will share the results of the first quiz with you on Tuesday. In the meantime, I have posted a brief introduction to proof by induction from Matthew Flatt (UUtah).
I have also written up two solutions of the programming problem,
□ one in Java, because several people tried to solve the quiz problem in this language and failed;
□ another one in Typed Racket, which is my primary programming language. (Racket is derived from Lisp and Scheme.)
Tuesday, September 7th, 2010
Welcome to the PhD-level course on programming languages.
Our organizational meeting will take place on Friday 9/10 in WVH 166. Note that this is not the regular location. If you cannot attend this meeting, please send me email as soon as possible.
In the meantime, here is an entertaining essay from the Wall Street Journal weekend edition (30 Jul 2010) on (natural) language and thinking:
Does Language Influence Culture
For the bi-lingual among you, this essay may be of particular interest. (A similar but longer piece appeared more recently in the NYT.) | {"url":"http://www.ccs.neu.edu/home/matthias/7400-f10/blog.html","timestamp":"2014-04-20T13:20:20Z","content_type":null,"content_length":"35246","record_id":"<urn:uuid:46fbfd6c-a7c5-4ed5-9e55-6e5690749dcf>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Let's make triangular numbers.
Star Member
Re: Let's make triangular numbers.
a triangle with 26 rows, pair them up for 13 pairs of 27.
270 + 81 = 351, yup.
igloo myrtilles fourmis
Re: Let's make triangular numbers.
378 is the sum of two other triangular numbers:
378 = Triangular(12) + Triangular(24)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
I see what you’re up to here.
Everyone is holding off from posting because you all want to hit post #31.
So I’m going to have it this way.
The North Circular Road around inner London = 406
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Let's make triangular numbers.
(root(4) x 10 + (4 + 5)) x 3 x 5 = 435
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Let's make triangular numbers.
Triangular 45 with a triangular 6 between = 465
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Let's make triangular numbers.
496 Arrrrhh! That’s perfect!
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Let's make triangular numbers.
2^9 + 2^4 = 528
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
if HCF(561,b)=1 then b^560 is congruent to 1 modulo 561
Last edited by anonimnystefy (2011-08-15 20:39:57)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
630=2*3*3*5*7=97+101+103+107+109+113 which are consecutive primes
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
The mark of the beast.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
703-a Kaprekar number
Last edited by anonimnystefy (2011-05-26 20:22:06)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
740-a non-totient number
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
741 a triangular number.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
oh yes you are right
catching up:
780-also a hexagonal number,Harshad number,sum of ten consecutive primes:
780=59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97 + 101
820-Harshad number:
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Star Member
Re: Let's make triangular numbers.
861 = 41 + 820
41 by 21 = 800 + 1 + 40 + 20
igloo myrtilles fourmis
Re: Let's make triangular numbers.
903 not a prime.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
946-not a prime either.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
990 almost a prime.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
1035-first triangular number greater than 1000.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
1081 also not a prime.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
1128-not not a not-prime
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Let's make triangular numbers.
1176 add 600 and it has meaning today!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Let's make triangular numbers.
The next triangular number is 1275.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=175180","timestamp":"2014-04-18T23:39:51Z","content_type":null,"content_length":"37512","record_id":"<urn:uuid:ca30164f-b7eb-4d07-b9bb-cd8819860042>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solution on the Whole Plane
August 31st 2008, 11:23 AM #1
Global Moderator
Nov 2005
New York City
Solution on the Whole Plane
Consider the equation, $u_x+2xy^2u_y=0$ on $D = \mathbb{R}^2 - \{ 0 \} \times \mathbb{R}$.
Say that $y$ solves the equation $y' = 2xy^2$ on $(-\infty,0)\cup (0,\infty)$.
Then it must mean $y=0$ or $y=\tfrac{1}{C - x^2}$.
By the method of characteristics it means $u(x,y) = f(x^2 + \tfrac{1}{y})$ where $f$ is some differentiable function.
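As a quick check of this form: writing $w = x^2 + \tfrac{1}{y}$, we get $u_x = 2x f'(w)$ and $u_y = -\tfrac{1}{y^2} f'(w)$, so $u_x + 2xy^2 u_y = 2x f'(w) - 2x f'(w) = 0$.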
However, this is just a solution on $D$.
Is there a non-trivial solution on $\mathbb{R}^2$?
| {"url":"http://mathhelpforum.com/calculus/47258-solution-whole-plane.html","timestamp":"2014-04-19T05:03:32Z","content_type":null,"content_length":"32348","record_id":"<urn:uuid:b65b4213-202f-4b09-9bc4-e5785b2080c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve the polynomial equation... (inside!) Please show work!! and explain! Thanks!! :)
| {"url":"http://openstudy.com/updates/5098aa60e4b02ec0829c8eeb","timestamp":"2014-04-18T18:44:13Z","content_type":null,"content_length":"49966","record_id":"<urn:uuid:b0dba8bf-fd65-4627-8e7c-90c862763ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Entropy of geometric structures
Last updated: 22/Oct/2010
(This is the first draft of a note to be submitted to a special issue of the Bulletin of the Brazilian Mathematical Society)
Geometric structures
In this note, by a geometric structure, we mean a normed vector bundle $A \to M$ over a manifold $M$ (i.e. on each fiber there is a Banach norm, which varies continuously from fiber to fiber),
together with a bundle map $\sharp: A \to TM$ from $A$ to the tangent bundle of $M$, which we will call the anchor map.
Many fundamental structures in geometry are indeed geometric structures.
- A Riemannian metric is a geometric structure, where $A = TM$, the anchor map is the identity map, and the norm on $A$ is given by the metric.
- Each vector field can be viewed as a geometric structure with $A = M \times \mathbb{R}$ together with a norm (up to isomorphisms such a norm is unique). The anchor map maps the section of norm 1 in
$A$ to the vector field in question.
- A foliation or distribution on a manifold can also be viewed as a geometric structure: the bundle $A$ is a subbundle of $TM$, the anchor map is the inclusion map. The norm on $A$ gives a metric on
the foliation/distribution.
- More generally, a singular foliation or distribution can often be viewed as arising from a geometric structure (where one retains only the image of the anchor map, and forget about the original
normed vector bundle $A$)
- If $(M,\Pi)$ is a Poisson manifold, then one can take $A = T^*M$ (the associated cotangent algebroid), with the anchor map being the contraction map with the Poisson tensor $\Pi$, and choose a norm
on $T^*M$.
- Rudolf Clausius (circa 1865?) invented the word entropy (for thermodynamics). From Greek origin: “en” means “internal”, and “tropy” means “transformations”. (Degree of mixing, chaos, …)
- Boltzmann (1870s): formula $S = k. \ln \Omega$ for Clausius’ entropy (entropy of a system is equal to a constant times the logarithm of the number of indistinguishable micro-states which have the
same macro-state of the system).
- Shannon (1940s): entropy in information theory, with a formula similar to Boltzmann’s. In Shannon’s theory, entropy means the amount of information.
- Kolmogorov (1950s): defined measure-theoretic entropy (also called metric entropy) for dynamical systems, by a formula similar to that of Boltzmann and Shannon. Kolmogorov’s entropy may be viewed
as the speed of loss of information in a (deterministic) dynamical system due to mixing phenomena of the system. Sinai (late 1950s?) proved a famous result which says that two Bernoulli shifts are
isomorphic if and only if they have the same metric entropy.
- Adler et al. gave a notion of topological entropy for dynamical systems. Dinaburg, Bowen , etc. studied it. (1960s-1970s ..)
- Ghys, Langevin, Walczak (1988): geometric entropy of foliations
- Bis (2004): entropy of regular distributions
Many other definitions of entropy in physics, mathematics, etc.
The aim of this note is to define an invariant, called (geometric) entropy, for an arbitrary geometric structure, and study some of its basic properties. Our notion extends that of Ghys, Langevin,
Walczak and Bis in a natural way (to a more general situation, with singularities). In particular, for foliations with a metric, our entropy coincides with geometric entropy of G-L-W.
Definition of entropy
A-admissible paths of speed at most $r$. An A-admissible path on $M$ is a (piecewise-smooth) path $\gamma: [0,1] \to M$ which can be lifted to a piecewise-smooth path $\hat{\gamma}: [0,1] \to A$ such
that $\pi(\hat{\gamma}(t)) = \gamma(t)$ for all $t$, where $\pi: A \to M$ is the projection map, and such that $\sharp \hat{\gamma} = d \gamma / dt$ (for all $t$ except maybe a finite number of
points). The path is said to have speed at most $r$ if we can choose its A-lift $\hat{\gamma}$ so that $\| \hat{\gamma}\| \leq r$ (almost everywhere).
Remark. The above definition of A-admissible paths is similar to that of A-paths in the theory of Lie algebroids. The difference is that here we talk about speed, and look at the paths on $M$ instead
of the lifted paths.
For each $x \in M$ and $r \geq 0$ we will denote by $P(x)$ the set of A-admissible paths starting at $x$, and by $P_r(x)$ the set of A-admissible paths starting at $x$ whose speed does not exceed $r$.
Metric $d_r$. Fix a metric $d$ on $M$ (the entropy that we define will not depend on the choice of $d$). For each $r > 0$ define a new metric $d_r$ on $M$ as follows:
$d_r(x,y) = \sup_{\gamma \in P_r(x)} \inf_{\lambda \in P_r(y)} \sup_{t \in [0,1]} d(\gamma(t),\lambda(t)) + \sup_{\lambda \in P_r(y)} \inf_{\gamma \in P_r(x)} \sup_{t \in [0,1]} d(\gamma(t),\lambda(t))$
Lemma. $d_r$ satisfies the axioms of a metric, and moreover $d_r \geq d_{s}$ for any $r \geq s \geq 0$, and $d_0 = d$.
The proof is rather straightforward.
The compact case. Let us first assume that the manifold $M$ is compact (to make the definition simple). For each $\epsilon > 0$ denote by $N_\epsilon (r)$ the maximal number of points in $M$ which
are at least $\epsilon$-separated by the metric $d_r$. Then put:
$h_\epsilon = \limsup_{r \to \infty} \frac{1}{r} \ln N_\epsilon (r)$
$h = \lim_{\epsilon \to 0+} h_\epsilon$
By definition, the above number $h$ is the entropy of our geometric structure.
The noncompact case.
$h(M) = \sup_U h(U)$
where $U$ are precompact subsets of $M$.
The local case. Here $B$ is a compact subset of $M$ (for example $B$ is just a point)
$h_{local}(B) = \inf_U h(U)$
where $U$ are precompact neighborhoods of $B$
Comparison to previous notions of entropy
Theorem. In the case of regular foliation with a Riemannian metric, the above definition is equivalent to the definition of metric entropy by Ghys-Langevin-Walczak.
Theorem. In the case of vector fields, the above definition gives the topological entropy (multiplied by 2).
Theorem. In the case of regular distributions, the above definition is equivalent to the definition given by Bis.
Zero vs positive entropy
Theorem. The entropy depends on the norm of $A$ (the bigger the norm, the bigger the entropy, and if we multiply the norm by a positive constant, then the entropy will be multiplied by the same
constant). However, the fact that it is zero or non-zero doesn’t depend on the choice of the norm.
Theorem. If the anchor map is surjective then the entropy is zero. In particular, symplectic structures have zero entropy.
Theorem. If a regular distribution satisfies the bracket-generating condition (and so is controllable) then its entropy is 0. (The case of contact structures was proved by Bis).
Proof: Use the ball-box theorem in the general case.
Sub-additivity property
(to be added)
Entropy of Poisson structures
In dynamics, integrable systems often have zero entropy.
However, zero entropy and integrability for Poisson manifolds are two different notions.
Example: an integrable Poisson structure with positive entropy (take the wedge product of a hyperbolic vector field with a constant vector field in an additional dimension)
Example: Non-integrable Poisson structures whose leaves are $S^2$ in $\mathbb{R}^3$. | {"url":"http://zung.zetamu.net/2010/10/entropy-of-geometric-structures/","timestamp":"2014-04-16T22:53:41Z","content_type":null,"content_length":"126418","record_id":"<urn:uuid:cef90553-d0ac-41f0-aefb-6bbbe3d37a39>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Occoquan Prealgebra Tutor
Find an Occoquan Prealgebra Tutor
...We helped launch a tutor lab, from which I gained teaching experience. I'm familiar with many concepts that students often struggle with. I can adapt to various learning styles by explaining
with pictures, using analogies of real-life situations, or combining content from various sources to convey the large picture.
10 Subjects: including prealgebra, chemistry, Spanish, algebra 1
...I have a degree in Mechanical Engineering and my math skills got me through it; that material is far tougher than discrete math, but more importantly I explain mathematics well. I can
effectively tutor the math and logic related questions for the GRE - I am a former full time high school math te...
28 Subjects: including prealgebra, chemistry, calculus, physics
...I have been tutoring Anatomy and Physiology, Cell Structure and Function, General Biology, and Microbiology since 2012. Tutoring similar college level biology courses as listed are my
specialty. Prior to 2012, I have had varying other tutoring experiences such as algebra one and high school chemistry and writing.
9 Subjects: including prealgebra, writing, biology, ESL/ESOL
...I've taught third grade for the last three years and have loved every minute doing so. On for the last several years I have also worked to help students in after school programs in reading,
math and science. I believe in helping children by linking their living to their learning and vise versa.
22 Subjects: including prealgebra, reading, geometry, grammar
...D. Continuity as a Property of Functions. II.
21 Subjects: including prealgebra, calculus, statistics, geometry | {"url":"http://www.purplemath.com/occoquan_prealgebra_tutors.php","timestamp":"2014-04-21T02:15:54Z","content_type":null,"content_length":"23750","record_id":"<urn:uuid:6fea789d-4e08-4276-bac6-fce19ca76d85>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Incorrect Application of PEMDAS and Order of Operations
Date: 09/14/2006 at 16:13:38
From: Monica
Subject: Understanding of the Order of Operations
I was working with students on the order of operations today and
explained that multiplication and division are done from left to
right, as are addition and subtraction. Apparently, they believe they
were taught in the past to do all addition and then all subtraction.
I tried to show examples of why that wouldn't work, but they simply
did the problem their way, obtained a different answer and asked why
it was incorrect.
Are there any examples or explanations that would clearly explain why
they must be done from left to right?
Date: 09/14/2006 at 16:37:38
From: Doctor Peterson
Subject: Re: Understanding of the Order of Operations
Hi, Monica.
It's impossible to show that they MUST be done from left to right;
that is nothing more than a convention we all agree on. Your class
has shown that it makes a difference which order you use; that proves
that we MUST make some choice that we can all follow. What that
choice is, is not so definite. But it makes a lot of sense to go left
to right, for the following reason.
We define subtraction this way:
a - b = a + -b
This allows us to think of any subtraction as an addition; we
essentially just attach the negative sign to the number following it,
rather than taking it as a different operation. The subtraction
requires no extra rules, just the rules we already have for addition.
If we do this, then
2 - 3 + 4 = 2 + -3 + 4 = 3
That is the same result we get if we do the operations from left to
right (and it doesn't depend on whether we do the ADDITIONS from left
to right, since addition is commutative!). If we did the addition
first, we would get
2 - 3 + 4 = 2 - (3 + 4) = 2 + -(3 + 4) = 2 + -7 = -5
Note that this time, the negative sign ended up applying to ALL the
following numbers, rather than just to the one after it.
So doing additions and subtractions from left to right makes it easier
to transform an expression into one involving only addition; and since
addition is commutative and associative, it is MUCH nicer to work with.
The rule, therefore, arises from the wish to make expressions easier
to handle. Without it, a lot of algebra would turn out to be a lot
harder. So your students should thank whoever first made this choice!
Now, your student's misunderstanding of the rule very likely comes
from the use of PEMDAS or something equivalent, which is meant to be
only a summary of the rules. It sounds as if A comes before S, but
that twists the intended meaning of the mnemonic. See this page for
another thought:
Confusion over Interpretation of PEMDAS
That says essentially the same thing I just said, but about
multiplication and division, which is an even bigger problem. (Did you
know that in other countries they use BODMAS instead of PEMDAS, so
students often think division should be done first?)
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/69146.html","timestamp":"2014-04-19T02:37:04Z","content_type":null,"content_length":"8646","record_id":"<urn:uuid:1107c3c7-b4a2-430d-b63c-ef0340097ca8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Central Limit Theorem Under Metric Entropy with $L_2$ Bracketing
The Annals of Probability
Let $(\mathbf{S}, \rho)$ be a metric space, $(\mathbf{V}, \mathscr{V}, \mu)$ be a probability space, and $f: \mathbf{S} \times \mathbf{V} \rightarrow \mathbb{R}$ be a real-valued function on $\mathbf
{S} \times \mathbf{V}$ which has mean zero and is Lipschitz in $L_2(\mu)$ with respect to $\rho$. Let $V$ be a random variable defined on $(\mathbf{V}, \mathscr{V}, \mu)$, and let $\{V_i: i \geq 1\}$
be a sequence of independent copies of $V$. The limiting behavior of the process $S_n(s) = n^{-1/2}\sum^n_{i=1} f(s, V_i)$ is studied under an integrability condition on the metric entropy with
bracketing in $L_2(\mu)$. This metric entropy condition is analogous to one which implies the continuity of the limiting Gaussian process. A tightness result is derived which, in conjunction with the
results of Andersen and Dobric (1987), shows that a central limit theorem holds for the $S_n$-process. This result generalizes those of Dudley (1978), Dudley (1981) and Jain and Marcus (1975). | {"url":"http://projecteuclid.org/euclid.aop/1176992072","timestamp":"2014-04-20T15:56:57Z","content_type":null,"content_length":"29911","record_id":"<urn:uuid:32554983-f01b-4e40-abe9-304b4cea829c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Epsilon delta definition of a limit and application of it for definite integrals (Riemann sums)
Anyone know of resources I could learn more about this topic?
Khan academy?
@RnR, I do not believe that Khan academy does a rigorous justification of Riemann sums in combination with the epsilon-delta limit definition. @Kanwar245, that's kind of trivial to mention, but I
guess I might recheck the pages. Wikipedia is often very mathematically rigorous but not friendly to the layman like me...
In that case refer to some book
I had to do these this year and last year, mostly I just googled around and "borrowed" notes from other Universities on it (sharing is caring!). But if you have any questions on it I could try
and answer them, though it is an awfully boring topic in my opinion haha
We know that integration means area under the graph. How can we approximate that? -> say, using rectangles..
definition of a derivative \[ \lim_{\Delta x\to0}\frac{\Delta y}{\Delta x}={dy\over dx} \]
Well I'm reading Thomas and Finney, and they introduce both topics, but I'm hoping for some sort of more rigorous justification so that calculus feels more solid. Also, I'm hoping understanding
the proofs for the FTC will help me comprehend the applications of integrals in Calc II that I'm struggling with.
@inkyvoyd are you looking only for a reference?
Yes @electrokid . I'm wondering if I could find a nice book on calculus that was fairly rigorous. I've tried reading a real analysis book but that hurt my poor brain with rigor.
but it would not be a bad idea to engage in some hardcore discussion here. I am sure people would not bother to answer such topics unless they know what they are talking about!
@electrokid okay I will copy down what the textbook mentions and what I don't understand. Thanks!
One way of looking at a Riemann sum is: [drawing omitted] The limit of the bigger rectangles, the Upper Sum, and the Lower Sum as delta x tends to 0. The inequality is as such: L ≤ the integral ≤ U. Though this isn't very rigorous. The upper and lower sums themselves can be defined as the supremum and infimum of a summation. | {"url":"http://openstudy.com/updates/514a3c20e4b07974555d63e2","timestamp":"2014-04-17T19:00:47Z","content_type":null,"content_length":"1048827","record_id":"<urn:uuid:b5967520-bf2b-4cb2-a2a4-3a98a80000c0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Which options gives f(7-x)
I simply worked f(7-x) out and got the answer exactly described in (d). And x ≠ 19/3 and 0 ≤ x ≤ 7 seem appropriate too.
Now I wonder if there is any tricky thing in other areas. Would any fellow please take a look when you have time. | {"url":"http://mathhelpforum.com/algebra/121692-options-gives-f-7-x.html","timestamp":"2014-04-17T04:00:12Z","content_type":null,"content_length":"38443","record_id":"<urn:uuid:a409a3a7-ca89-42a5-a338-edbcca353a15>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework - September 26, 2001
Topics Covered: Principle of Mathematical Induction, proofs by contradiction, vector spaces (Sec. 2.1) Reading Assignment:
1. Appendix C - Fields pp. 510-513,
2. Handout
3. Sect. 1.2 and 1.3
Homework Problems:
1. In no more than one page write a summary of what induction is and why it works as a method of proof. Assume your audience is someone with very little mathematical sophistication.
2. Use induction to prove that
3. Use induction to prove that
4. Find an error in the following inductive "proof" that all positive integers
5. Use the strong form of induction to prove that any integer
6. Sect. 1.2
Math 24 Fall 2001 2001-09-24 | {"url":"http://www.math.dartmouth.edu/archive/m24f01/public_html/sept26/sept26.html","timestamp":"2014-04-19T17:57:04Z","content_type":null,"content_length":"4677","record_id":"<urn:uuid:8b2ba13a-b8cd-4543-95a5-9ba53d74e320>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ronja, a solution to the Fresnel Zone problem
Solving the Fresnel Zone problem with Ronja
What is the Fresnel Zone?
Line of sight is not enough for radio wireless networks. The wave needs a certain amount of space around the line of sight to travel. If there are obstacles within this space, the link suffers
signal loss, or the signal may even disappear completely. According to Wikipedia, there is a critical zone in which no obstruction is to be tolerated during planning. Keeping even more clearance is
actually recommended.
The obstruction-intolerable space has ellipsoid shape and is thickest in the middle between receiver and transmitter. The radius of the ellipsoid in the middle can be calculated according to the
formula r = 5.196 × sqrt(d / f) where d is distance between receiver and transmitter in kilometers, f frequency in gigahertz and sqrt is square root. The resulting radius is in meters.
│Distance │Fresnel zone radius for 2.45 GHz │
│100 m │1 m │
│300 m │1.8 m │
│700 m │2.8 m │
│1.4 km │3.9 m │
│3 km │5.7 m │
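For example, the 1.4 km row works out as r = 5.196 × sqrt(1.4 / 2.45) = 5.196 × 0.756 ≈ 3.9 m.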
Fresnel Zone Disruption
Fresnel zone can be disrupted by various stationary objects - ground, terrain features, houses and trees. Such disruption manifests as permanently weak or even absent reception. Moving objects can
disrupt Fresnel zone as well - buses, trams, trains, lorries, cars or even pedestrians. These cause temporary losses of function.
Fresnel Zone and Ronja
Ronja has a Fresnel Zone as well, but it is very small. If we use the above formula for Ronja's range 1.4km and frequency 476 THz (630 nm), we get 9mm. The beam width is 130mm. After adding the 9mm
on each side, we get 148mm thick sausage that we need to keep clear. That's practically equivalent with a line of sight.
With such small Fresnel zone, Ronja is a solution for cases where Fresnel zone for microwave would be obstructed - when the line of sight goes over roofs of houses, tips of tree or above a road where
the traffic can cause dropouts in the signal.
Ronja installations taking advantage of the tiny Fresnel zone
In this particular installation (left), the beam is closely passing by electric power line cables. But because the Fresnel zone of Ronja is only 9mm wide, it doesn't matter. If the link were realized
by a microwave, the power cable in the way would pose an exemplary violation of the Fresnel zone rule.
Below you can see 4 other installations where the beam is close to an obstacle. Click pictures to zoom if you cannot see the red light on the other side. | {"url":"http://ronja.twibright.com/fresnel.php","timestamp":"2014-04-18T15:39:41Z","content_type":null,"content_length":"7340","record_id":"<urn:uuid:c9a09fdc-37ea-40d3-9b96-a443ff458cba>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining Heat Energy Requirements
The objective of any heating application is to raise or maintain the temperature of a solid, liquid or gas to or at a level suitable for a particular process or application. Most heating applications
can be divided into two basic situations; applications which require the maintenance of a constant temperature and applications or processes which require work product to be heated to various
temperatures. The principles and calculation procedures are similar for either situation.
Constant Temperature Applications
Most constant temperature applications are special cases where the temperature of a solid, liquid or gas is maintained at a constant value regardless of ambient temperature. Design factors and
calculations are based on steady state conditions at a fixed difference in temperature. Heat loss and energy requirements are estimated using “worst case” conditions.
For this reason, determining heat energy requirements for a constant temperature application is relatively simple. Comfort heating (constant air temperature) and freeze protection for piping are
typical examples of constant temperature applications. The equations and procedures for calculating heat requirements for several applications are discussed later in this section.
Variable Temperature Applications
Variable temperature (process) applications usually involve a start-up sequence and have numerous operating variables. The total heat energy requirements for process applications are determined as
the sum of these calculated variables. As a result, the heat energy calculations are usually more complex than for constant temperature applications. The variables are:
Total Heat Energy Absorbed — The sum of all the heat energy absorbed during start-up or operation including the work product, the latent heat of fusion (or vaporization), make up materials,
containers and equipment.
Total Heat Energy Lost — The sum of the heat energy lost by conduction, convection, radiation, ventilation and evaporation during start-up or operation.
Design Safety Factor — A factor to compensate for unknowns in the process or application.
Process Applications
The selection and sizing of the installed equipment in a process application is based on the larger of two calculated heat energy requirements. In most process applications, the start-up and
operating parameters represent two distinctly different conditions in the same process. The heat energy required for start-up is usually considerably different than the energy required for operating
conditions. In order to accurately assess the heat requirements for an application, each condition must be evaluated. The comparative values are defined as follows:
• Calculated heat energy required for process start-up over a specific time period.
• Calculated heat energy required to maintain process temperatures and operating conditions over a specific cycle time.
Determining Heat Energy Absorbed
The first step in determining total heat energy requirements is to determine the heat energy absorbed. If a change of state occurs as a direct or indirect part of the process, the heat energy
required for the change of state must be included in the calculations. This rule applies whether the change occurs during startup or later when the material is at operating temperature. Factors to be
considered in the heat absorption calculations are shown below:
Start-Up Requirements (Initial Heat-Up)
• Heat absorbed during start-up by:
□ Work product and materials
□ Equipment (tanks, racks, etc.)
• Latent heat absorption at or during start-up:
□ Heat of fusion
□ Heat of vaporization
• Time factor

Operating Requirements (Process)
• Heat absorbed during operation by:
□ Work product in process
□ Equipment loading (belts, racks, etc.)
□ Make up materials
• Latent heat absorption during operation:
□ Heat of fusion
□ Heat of vaporization
• Time (or cycle) factor, if applicable
Determining Heat Energy Lost
Objects or materials at temperatures above the surrounding ambient lose heat energy by conduction, convection and radiation. Liquid surfaces exposed to the atmosphere lose heat energy through
evaporation. The calculation of total heat energy requirements must take these losses into consideration and provide sufficient energy to offset them. Heat losses are estimated for both start-up and
operating conditions and are added into the appropriate calculation.

Heat Losses at Start-Up — Initially, heat losses at start-up are zero since the materials and equipment are all at ambient temperature. Heat losses increase to a maximum at operating temperature. Consequently, start-up heat losses are usually based on an average of the loss at start-up and the loss at operating temperature.

Heat Losses at Operating Temperature — Heat losses are at a maximum at operating temperature. Heat losses at operating temperature are taken at full value and added to the total energy requirements.
The heat losses just discussed can be estimated by using factors from the charts and graphs provided in this section. Total losses include radiation, convection and conduction from various surfaces
and are expressed in watts per hour per unit of surface area per degree of temperature (W/hr / ft^2 / °F).
Note — Since the values in the charts are already expressed in watts per hour, they are not influenced by the time factor “t” in the heat energy equations.
Design Safety Factors
In many heating applications, the actual operating conditions, heat losses and other factors affecting the process can only be estimated. A safety factor is recommended in most calculations to
compensate for unknowns such as ventilation air, thermal insulation, make up materials and voltage fluctuations. As an example, a voltage fluctuation (or drop) of 5% creates a 10% change in the
wattage output of a heater.
Safety factors vary from 10 to 25% depending on the level of confidence of the designer in the estimate of the unknowns. The safety factor is applied to the sum of the calculated values for heat
energy absorbed and heat energy lost.
Total Heat Energy Requirements
The total heat energy (Q[T]) required for a particular application is the sum of a number of variables. The basic total energy equation is:
Q[T] = Q[M] + Q[L] + Safety Factor
• Q[T] = The total energy required in kilowatts
• Q[M] = The total energy in kilowatts absorbed by the work product including latent heat, make up materials, containers and equipment
• Q[L] = The total energy in kilowatts lost from the surfaces by conduction, convection, radiation, ventilation and evaporation
• Safety Factor = 10% to 25%
While Q[T] is traditionally expressed in Btu (British Thermal Units), it is more convenient to use watts or kilowatts when applying electric heaters. Equipment selection can then be based directly on rated heater output. Equations and examples in this section are converted to watts.
Basic Heat Energy Equations
The following equations outline the calculations necessary to determine the variables in the above total energy equation. Equations 1 and 2 are used to determine the heat energy absorbed by the work
product and the equipment. The specific heat and the latent heat of various materials are listed in this section in tables of properties of non-metallic solids, metals, liquids, air and gases.
Equations 3 and 4 are used to determine heat energy losses. Heat energy losses from surfaces can be estimated using values from the curves in charts G-114S, G-125S, G-126S or G-128S. Conduction
losses are calculated using the thermal conductivity or “k” factor listed in the tables for properties of materials.
Equation 1 — Heat Energy Required to Raise the Temperature of the Materials (No Change of State)
The heat energy absorbed is determined from the weight of the materials, the specific heat and the change in temperature. Some materials, such as lead, have different specific heats in the different
states. When a change of state occurs, two calculations are required for these materials, one for the solid material and one for the liquid after the solid has melted.
Q[A] = (Lbs x C[p] x ΔT) / 3412 Btu/kWh
• Q[A] = kWh required to raise the temperature
• Lbs = Weight of the material in pounds
• C[p] = Specific heat of the material (Btu/lb/°F)
• ΔT = Change in temperature in °F [T[2 (Final)] - T[1 (Start)]]
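To make the arithmetic concrete, here is a minimal Python sketch of Equation 1. All input values (500 lb of water, Cp = 1.0 Btu/lb/°F, a 100 °F rise) are invented for illustration and are not taken from this section's tables.

```python
# Equation 1: heat energy absorbed raising material temperature (no change of state).
BTU_PER_KWH = 3412  # conversion factor used throughout this section

def heat_energy_kwh(pounds, cp_btu_per_lb_f, delta_t_f):
    """Q[A] in kWh = (Lbs x C[p] x deltaT) / 3412."""
    return pounds * cp_btu_per_lb_f * delta_t_f / BTU_PER_KWH

# Hypothetical example: 500 lb of water (Cp ~ 1.0 Btu/lb/degF) raised 100 degF.
q_a = heat_energy_kwh(500, 1.0, 100)
print(f"Q[A] = {q_a:.2f} kWh")  # about 14.65 kWh
```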
Equation 2 — Heat Energy Required to Change the State of the Materials
The heat energy absorbed is determined from the weight of the materials and the latent heat of fusion or vaporization.
Q[F] or Q[V] = (Lbs x H[fus] or H[vap]) / 3412 Btu/kWh
• Q[F] = kWh required to change the material from a solid to a liquid
• Q[v] = kWh required to change the material from a liquid to a vapor or gas
• Lbs = Weight of the material in pounds
• H[fus] = Heat of fusion (Btu/lb)
• H[vap] = Heat of vaporization (Btu/lb)
Equation 3 — Heat Energy Lost from Surfaces
The heat energy lost from surfaces by radiation, convection and evaporation is determined from the surface area and the loss rate in watts per square foot per hour.
Q[LS] = (A x L[S]) / 1000 W/kW
• Q[LS] = kWh lost from surfaces by radiation, convection and evaporation
• A = Area of the surfaces in square feet
• L[S] = Loss rate in watts per square foot at final temperature (W/ft^2/hr from charts)
Equation 4 — Heat Energy Lost by Conduction through Materials or Insulation
The heat energy lost by conduction is determined by the surface area, the thermal conductivity of the material, the thickness and the temperature difference across the material.
Q[LC] = (A x k x ΔT) / (d x 3412 Btu/kWh)
• Q[LC] = kWh lost by conduction
• A = Area of the surfaces in square feet
• k = Thermal conductivity of the material in Btu/inch/square foot/hour (Btu/in/ft^2/hr)
• ΔT = Temperature difference in °F across the material [T2 - T1]
• d = Thickness of the material in inches
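As an illustration of Equation 4, the short sketch below evaluates the conduction loss through an insulated wall. Every number (area, k value, thickness, temperature difference) is a hypothetical example, not a value from the property tables referenced above.

```python
# Equation 4: hourly conduction loss through a material or insulation layer.
BTU_PER_KWH = 3412

def conduction_loss_kwh(area_ft2, k_btu_in_ft2_hr, delta_t_f, thickness_in):
    """Q[LC] in kWh = (A x k x deltaT) / (d x 3412)."""
    return area_ft2 * k_btu_in_ft2_hr * delta_t_f / (thickness_in * BTU_PER_KWH)

# Hypothetical insulated tank wall: 50 ft^2, k = 0.27, deltaT = 300 degF, d = 2 in.
q_lc = conduction_loss_kwh(50, 0.27, 300, 2.0)
print(f"Q[LC] = {q_lc:.2f} kWh per hour")  # about 0.59 kWh per hour
```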
Summarizing Energy Requirements
Equations 5a and 5b are used to summarize the results of all the other equations described on this page. These two equations determine the total energy requirements for the two process conditions,
start-up and operating.
Equation 5a — Heat Energy Required for Start-Up
Q[T] = [ (Q[A] + Q[F] [or Q[V]]) / t + (Q[LS] + Q[LC]) / 2 ] x (1 + SF)
• Q[T] = The total energy required in kilowatts
• Q[A] = kWh required to raise the temperature
• Q[F] = kWh required to change the material from a solid to a liquid
• Q[V] = kWh required to change the material from a liquid to a vapor or gas
• Q[LS] = kWh lost from surfaces by radiation, convection and evaporation
• Q[LC] = kWh lost by conduction
• SF = Safety Factor (expressed as a decimal fraction, e.g. 0.10 to 0.25)
• t = Start-up time in hours (see Note 2)
Equation 5b — Heat Energy Required to Maintain Operation or Process (see Note 3)
Q[T] = (Q[A] + Q[F] [or Q[V]] + Q[LS] + Q[LC])(1 + SF)
• Q[T] = The total energy required in kilowatts
• Q[A] = kWh required to raise the temperature of added material
• Q[F] = kWh required to change added material from a solid to a liquid
• Q[V] = kWh required to change added material from a liquid to a vapor or gas
• Q[LS] = kWh lost from surfaces by radiation, convection and evaporation
• Q[LC] = kWh lost by conduction
• SF = Safety Factor (expressed as a decimal fraction, e.g. 0.10 to 0.25)
Equipment Sizing & Selection
The size and rating of the installed heating equipment is based on the larger of calculated results of Equation 5a or 5b.
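The sizing rule is easy to express in code. The following Python sketch implements Equations 5a and 5b and reports the governing (larger) requirement; the variable names mirror the equations, and all sample inputs are invented for illustration.

```python
def startup_kw(q_a, q_latent, q_ls, q_lc, t_hours, sf):
    """Equation 5a: total start-up requirement in kW."""
    return ((q_a + q_latent) / t_hours + (q_ls + q_lc) / 2) * (1 + sf)

def operating_kw(q_a, q_latent, q_ls, q_lc, sf):
    """Equation 5b: total operating requirement in kW (one-hour basis)."""
    return (q_a + q_latent + q_ls + q_lc) * (1 + sf)

# Invented example: 14.7 kWh absorbed over a 2 h start-up, no phase change,
# 0.6 kWh/h surface loss, 0.6 kWh/h conduction loss, 20% safety factor.
q_start = startup_kw(14.7, 0.0, 0.6, 0.6, t_hours=2.0, sf=0.20)
q_oper = operating_kw(1.5, 0.0, 0.6, 0.6, sf=0.20)  # 1.5 kWh/h make-up load
print(f"Start-up: {q_start:.2f} kW, operating: {q_oper:.2f} kW")
print(f"Install at least {max(q_start, q_oper):.2f} kW of heater capacity")
```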
Notes —
1. Loss Factors from charts in this section include losses from radiation, convection and evaporation unless otherwise indicated.
2. Time (t) is factored into the start-up equation since the start-up of a process may vary from a period of minutes or hours to days.
3. Operating Requirements are normally based on a standard time period of one hour (t = 1). If cycle times and heat energy requirements do not coincide with hourly intervals, they should be recalculated to an hourly time base. | {"url":"http://durexindustries.com/resources/engineering-tools/determining-heat-energy-requirements","timestamp":"2014-04-20T03:26:04Z","content_type":null,"content_length":"33398","record_id":"<urn:uuid:40a1f582-4358-4c00-8bd5-33bb7110f273>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
ASCII Text
Thomas B. Sebastian, Philip N. Klein, Benjamin B. Kimia, "Recognition of Shapes by Editing Their Shock Graphs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp.
550-571, May, 2004.
BibTeX
@article{ 10.1109/TPAMI.2004.1273924,
author = {Thomas B. Sebastian and Philip N. Klein and Benjamin B. Kimia},
title = {Recognition of Shapes by Editing Their Shock Graphs},
journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {26},
number = {5},
issn = {0162-8828},
year = {2004},
pages = {550-571},
doi = {http://doi.ieeecomputersociety.org/10.1109/TPAMI.2004.1273924},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks/ProCite/RefMan/EndNote
TY - JOUR
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI - Recognition of Shapes by Editing Their Shock Graphs
IS - 5
SN - 0162-8828
EPD - 550-571
A1 - Thomas B. Sebastian,
A1 - Philip N. Klein,
A1 - Benjamin B. Kimia,
PY - 2004
KW - Shape deformation
KW - shock graphs
KW - graph matching
KW - edit distance
KW - shape matching
KW - object recognition
KW - dynamic programming.
VL - 26
JA - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -
Abstract—This paper presents a novel framework for the recognition of objects based on their silhouettes. The main idea is to measure the distance between two shapes as the minimum extent of
deformation necessary for one shape to match the other. Since the space of deformations is very high-dimensional, three steps are taken to make the search practical: 1) define an equivalence class
for shapes based on shock-graph topology, 2) define an equivalence class for deformation paths based on shock-graph transitions, and 3) avoid complexity-increasing deformation paths by moving toward
shock-graph degeneracy. Despite these steps, which tremendously reduce the search requirement, there still remain numerous deformation paths to consider. To that end, we employ an edit-distance
algorithm for shock graphs that finds the optimal deformation path in polynomial time. The proposed approach gives intuitive correspondences for a variety of shapes and is robust in the presence of a
wide range of visual transformations. The recognition rates on two distinct databases of 99 and 216 shapes each indicate highly successful within category matches (100 percent in top three matches),
which render the framework potentially usable in a range of shape-based recognition applications.
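For readers unfamiliar with edit distance, the classic string version due to Wagner and Fischer [49] conveys the core dynamic-programming idea; the sketch below is that standard algorithm, shown only for intuition, and is not the shock-graph algorithm of this paper.

```python
# Classic Wagner-Fischer string edit distance [49] -- shown only for intuition;
# the paper's algorithm operates on shock graphs, not strings.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # deletions
    for j in range(n + 1):
        d[0][j] = j          # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + cost) # substitute
    return d[m][n]

print(edit_distance("shock", "stock"))  # 1
```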
[1] N. Ayache and O. Faugeras, HYPER: A New Approach for the Recognition and Positioning of Two-Dimensional Objects IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 1, pp. 44-54,
[2] R. Basri, L. Costa, D. Geiger, and D. Jacobs, Determining the Similarity of Deformable Shapes Vision Research, vol. 38, pp. 2365-2385, 1998.
[3] S. Belongie, J. Puzhicha, and J. Malik, Shape Matching and Object Recognition Using Shape Contexts IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509-522, Apr. 2002.
[4] S. Belongie and J. Malik, Matching with Shape Contexts Proc. IEEE Workshop Content-based Access of Image and Video Libraries, 2000.
[5] H. Blum, Biological Shape and Visual Science J. Theoretical Biology, vol. 38, pp. 205-287, 1973.
[6] J. Bruce, P. Giblin, and C. Gibson, Symmetry Sets Proc. Royal Soc. Edinburgh, vol. 101A, pp. 163-186, 1985.
[7] H. Bunke, On a Relation between Graph Edit Distance and Maximum Common Subgraph Pattern Recognition Letters, vol. 18, no. 8, pp. 689-694, Aug. 1997.
[8] H. Bunke and K. Shearer, A Graph Distance Metric Based on the Maximal Common Subgraph Pattern Recognition Letters, vol. 19, nos. 3-4, pp. 255-259, 1998.
[9] H. Chui and A. Rangarajan, “A New Algorithm for Non-Rigid Point Matching,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 44-51, June 2000.
[10] I. Cohen, N. Ayache, and P. Sulger, Tracking Points on Deformable Objects Using Curvature Information Proc. European Conf. Computer Vision, pp. 458-466, 1992.
[11] C.M. Cyr and B.B. Kimia, 3D Object Recognition Using Shape Similarity-Based Aspect Graph Proc. Int'l Conf. Computer Vision, pp. 254-261, 2001.
[12] W. Dewaard, An Optimized Minimal Edit Distance for Hand-Written Word Recognition Pattern Recognition Letters, vol. 16, no. 10, pp. 1091-1096, 1995.
[13] Y. Gdalyahu and D. Weinshall, “Flexible Syntactic Matching of Curves and its Application to Automatic Hierarchical Classification of Silhouettes,” IEEE Trans. Pattern Analysis and Machine
Intelligence, vol. 21, no. 12, pp. 1312-1328, Dec. 1999.
[14] P.J. Giblin and B.B. Kimia, On the Intrinsic Reconstruction of Shape from Its Symmetries Proc. IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition, pp. 79-84, June 1999. IEEE Trans.
Pattern Analysis and Machine Intelligence, to appear.
[15] P.J. Giblin and B.B. Kimia, On the Local Form and Transitions of Symmetry Sets, and Medial Axes, and Shocks in 2D Proc. Int'l Conf. Computer Vision, pp. 385-391, 1999.
[16] P.J. Giblin and B.B. Kimia, Transitions of the 3D Medial Axis under a One-Parameter Family of Deformations Proc. European Conf. Computer Vision, pp. 718-724, 2002.
[17] S. Gold and A. Rangarajan, “A Graduated Assignment Algorithm for Graph Matching,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 4, pp. 377-388, Apr. 1996.
[18] D.G. Kendall, D. Barden, T.K. Carne, and H. Le, Shape and Shape Theory. Chichester, UK: John Wiley and Sons, Inc., 1999.
[19] B.B. Kimia, Conservation Laws and a Theory of Shape PhD dissertation, McGill Center for Intelligent Machines, McGill Univ., Montreal, Canada, 1990.
[20] B.B. Kimia, J. Chan, D. Bertrand, S. Coe, Z. Roadhouse, and H. Tek, A Shock-Based Approach for Indexing of Image Databases Using Shape SPIE: Multimedia Storage and Archiving Systems II, vol.
3229, pp. 288-302, 1997.
[21] P. Klein, Computing the Edit Distance between Unrooted Ordered Trees Proc. European Symp. Algorithms, pp. 91-102, 1998.
[22] P. Klein, T. Sebastian, and B. Kimia, Shape Matching Using Edit-Distance: An Implementation ACM-SIAM Symp. Discrete Algorithms, pp. 781-790, 2001.
[23] P. Klein, S. Tirthapura, D. Sharvit, and B. Kimia, A Tree-Edit Distance Algorithm for Comparing Simple, Closed Shapes Proc. ACM-SIAM Symp. Discrete Algorithms, pp. 696-704, 2000.
[24] L.J. Latecki, R. Lakämper, and U. Eckhardt, “Shape Descriptors for Non-Rigid Shapes with a Single Closed Contour,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 424-429, 2000.
[25] T. Liu and D. Geiger, Approximate Tree Matching and Shape Similarity Proc. Int'l Conf. Computer Vision, pp. 456-462, 1999.
[26] E. Milios and E.G.M. Petrakis, “Shape Retrieval Based on Dynamic Programming,” IEEE Trans. Image Processing, vol. 9, no. 1, pp. 141-146, 2000.
[27] R. Myers, R. Wilson, and E. Hancock, Bayesian Graph Edit Distance IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 628-635, June 2000.
[28] R.L. Ogniewicz, Discrete Voronoi Skeletons. Hartung-Gorre, 1993.
[29] P.J. Giblin and B.B. Kimia, On the Intrinsic Reconstruction of Shape from Its Symmetries IEEE Trans. Pattern Analysis and Machine Intelligence, 2003.
[30] M. Pelillo, Matching Free Trees, Maximal Cliques, and Monotone Game Dynamics IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 11, pp. 1535-1541, Nov. 2002.
[31] M. Pelillo, K. Siddiqi, and S.W. Zucker, “Matching Hierarchical Structures Using Association Graphs,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 11, pp. 1105-1120, 1999.
[32] M. Pelillo, K. Siddiqi, and S.W. Zucker, Many-to-Many Matching of Attributed Trees Using Association Graphs and Game Dynamics Proc. Int'l Workshop Visual Form, pp. 583-593, 2001.
[33] T.B. Sebastian, P.N. Klein, and B.B. Kimia, On Aligning Curves IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 116-125, Jan. 2003.
[34] T.B. Sebastian, J.J. Crisco, P.N. Klein, and B.B. Kimia, Construction of 2D Curve Atlases Proc. IEEE Workshop Math. Methods in Biomedical Image Analysis, pp. 70-77, 2000.
[35] T.B. Sebastian and B.B. Kimia, Metric-Based Shape Retrieval in Large Databases Proc. Int'l Conf. Pattern Recognition, vol. 3, pp. 291-296, 2002.
[36] T.B. Sebastian, P.N. Klein, and B.B. Kimia, Alignment-Based Recognition of Shape Outlines Proc. Int'l Workshop Visual Form, pp. 606-618, 2001.
[37] T.B. Sebastian, P.N. Klein, and B.B. Kimia, Recognition of Shapes by Editing Shock Graphs Proc. Eighth Int'l Conf. Computer Vision, pp. 755-762, July 2001.
[38] T.B. Sebastian, P.N. Klein, and B.B. Kimia, Shock-Based Indexing into Large Shape Databases Proc. European Conf. Computer Vision, pp. 731-746, 2002.
[39] D. Sharvit, J. Chan, H. Tek, and B.B. Kimia, Symmetry-Based Indexing of Image Databases J. Visual Comm. and Image Representation, vol. 9, no. 4, pp. 366-380, 1998.
[40] K. Siddiqi, B. Kimia, A. Tannenbaum, and S. Zucker, Shocks, Shapes, and Wiggles Image and Vision Computing, vol. 17, nos. 5-6, pp. 365-373, 1999.
[41] K. Siddiqi and B.B. Kimia, A Shock Grammar for Recognition Proc. Conf. Computer Vision and Pattern Recognition, pp. 507-513, 1996.
[42] K. Siddiqi, A. Shokoufandeh, S. Dickinson, and S. Zucker, Shock Graphs and Shape Matching Int'l J. Computer Vision, vol. 35, no. 1, pp. 13-32, 1999.
[43] H. Tagare, Shape-Based Nonrigid Correspondence with Application to Heart Motion Analysis IEEE Trans. Medical Imaging, vol. 18, no. 7, pp. 570-579, 1999.
[44] K.-C. Tai, The Tree-to-Tree Correction Problem J. Assoc. Computing Machinery, vol. 26, pp. 422-433, 1979.
[45] H. Tek, The Role of Symmetry Maps in Representating Objects in Images PhD dissertation, Division of Eng., Brown Univ., Providence, R.I., July 1999.
[46] H. Tek and B.B. Kimia, Symmetry Maps of Free-Form Curve Segments via Wave Propagation Proc. Int'l Conf. Computer Vision, pp. 362-369, 1999.
[47] A. Torsello and E.R. Hancock, Computing Approximate Tree Edit Distance Using Relaxation Labeling Pattern Recognition Letters, pp. 1089-1097, 2003.
[48] A. Torsello and E.R. Hancock, A Skeletal Measure of 2D Shape Similarity Proc. Int'l Workshop Visual Form, pp. 260-271, 2001.
[49] R. Wagner and M. Fischer, The String-to-String Correction Problem J. Assoc. Computing Machinery, vol. 21, pp. 168-173, 1974.
[50] W. Wallis, P. Shoubridge, M. Kraetz, and D. Ray, Graph Distances Using Graph Union Pattern Recognition Letters, vol. 22, no. 6-7, pp. 701-704, 2001.
[51] K. Zhang and D. Sasha, Simple Fast Algorithms for the Editing Distance between Trees and Related Problems SIAM J. Computing, vol. 18, pp. 1245-1262, 1989.
[52] S.C. Zhu and A.L. Yuille, FORMS: A Flexible Object Recognition and Modeling System Int'l J. Computer Vision, vol. 20, no. 3, pp. 187-212, 1996.
[53] P.J. Giblin and B.B. Kimia, On the Local Form and Transitions of Symmetry Sets, Medial Axes, and Shocks Int'l J. Computer Vision, vol. 54, no. 1-3, pp. 143-157, 2003.
[54] H. Tek and B.B. Kimia, Symmetry Maps of Free-Form Curve Segments Via Wave Propagation Int'l J. Computer Vision, vol. 54, no. 1-3, pp. 35-81, 2003.
Index Terms:
Shape deformation, shock graphs, graph matching, edit distance, shape matching, object recognition, dynamic programming.
Thomas B. Sebastian, Philip N. Klein, Benjamin B. Kimia, "Recognition of Shapes by Editing Their Shock Graphs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp.
550-571, May 2004, doi:10.1109/TPAMI.2004.1273924
{"url":"http://www.computer.org/csdl/trans/tp/2004/05/i0550-abs.html","timestamp":"2014-04-18T00:43:21Z","content_type":null,"content_length":"66296","record_id":"<urn:uuid:b6b53b4b-387c-4ffd-9c42-e91e474ce6e5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Degrees of Unsolvability
Results 1 - 10 of 40
Cited by 57 (31 self)
Solovay showed that there are noncomputable reals α such that H(α ↾ n) ≤ H(1^n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivial real. We also analyze various computability-theoretic properties of the H-trivial reals, showing for example that no H-trivial real can compute the halting problem. Therefore, our construction of an H-trivial computably enumerable set is an easy, injury-free construction of an incomplete computably enumerable set. Finally, we relate the H-trivials to other classes of "highly nonrandom" reals that have been previously studied.
, 1991
Cited by 46 (4 self)
An explicit recursion-theoretic definition of a random sequence or random set of natural numbers was given by Martin-Löf in 1966. Other approaches leading to the notions of n-randomness and weak
n-randomness have been presented by Solovay, Chaitin, and Kurtz. We investigate the properties of n-random and weakly n-random sequences with an emphasis on the structure of their Turing degrees.
After an introduction and summary, in Chapter II we present several equivalent definitions of n-randomness and weak n-randomness including a new definition in terms of a forcing relation analogous to
the characterization of n-generic sequences in terms of Cohen forcing. We also prove that, as conjectured by Kurtz, weak nrandomness is indeed strictly weaker than n-randomness. Chapter III is
concerned with intrinsic properties of n-random sequences. The main results are that an (n + 1)-random sequence A satisfies the condition A (n) ≡T A⊕0 (n) (strengthening a result due originally to
Sacks) and that n-random sequences satisfy a number of strong independence properties, e.g., if A ⊕ B is n-random then A is n-random relative to B. It follows that any countable distributive lattice
can be embedded
- Proc. London Math. Soc , 1966
Cited by 42 (0 self)
The degrees of unsolvability have been extensively studied by Sacks in (4). This paper studies problems concerned with lower bounds of pairs of recursively enumerable (r.e.) degrees. It grew out of
an unpublished paper written in June 1964 which presented a proof of the following conjecture of Sacks ((4) 170): there exist two r.e. degrees a, b whose greatest lower bound is 0. This result was
first announced by Yates (6); the present author's proof is superficially at least quite different from that of Yates. The author is grateful to Yates for pointing out two errors in the original
proof of Lemma 3, and for his careful reading of the whole of the earlier paper. The result already mentioned is Theorem 1 of this paper. As a by-product of the construction we can obtain a ' = b ' =
0'; Yates has made a similar observation regarding his construction. In Theorem 2 it is proved that there are r.e. degrees a, b whose greatest lower bound is 0 such that a, b are the degrees of
maximal r.e. sets. Next, we show
Cited by 34 (13 self)
The biinterpretability conjecture for the r.e. degrees asks whether, for each sufficiently large k, the # k relations on the r.e. degrees are uniformly definable from parameters. We solve a weaker
version: for each k >= 7, the k relations bounded from below by a nonzero degree are uniformly definable. As applications, we show that...
Cited by 34 (15 self)
Let R be a notion of algorithmic randomness for individual subsets of N. We say B is a base for R randomness if there is a Z ≥T B such that Z is R random relative to B. We show that the bases for 1-randomness are exactly the K-trivial sets and discuss several consequences of this result. We also show that the bases for computable randomness include every Δ^0_2 set that is not diagonally noncomputable, but no set of PA-degree. As a consequence, we conclude that an n-c.e. set is a base for computable randomness iff it is Turing incomplete.
- Trans. Amer. Math. Soc
Cited by 32 (6 self)
Abstract. One approach to understanding the fine structure of initial segment complexity was introduced by Downey, Hirschfeldt and LaForte. They define X ≤K Y to mean that (∀n) K(X ↾ n) ≤ K(Y ↾ n) +O
(1). The equivalence classes under this relation are the K-degrees. We prove that if X ⊕ Y is 1-random, then X and Y have no upper bound in the K-degrees (hence, no join). We also prove that
n-randomness is closed upward in the K-degrees. Our main tool is another structure intended to measure the degree of randomness of real numbers: the vL-degrees. Unlike the K-degrees, many basic
properties of the vL-degrees are easy to prove. We show that X ≤K Y implies X ≤vL Y, so some results can be transferred. The reverse implication is proved to fail. The same analysis is also done for
≤C, the analogue of ≤K for plain Kolmogorov complexity. Two other interesting results are included. First, we prove that for any Z ∈ 2ω, a 1-random real computable from a 1-Z-random real is
automatically 1-Z-random. Second, we give a plain Kolmogorov complexity characterization of 1-randomness. This characterization is related to our proof that X ≤C Y implies X ≤vL Y.
- In Proceedings of IMS workshop on Computational Prospects of Infinity , 2006
Cited by 23 (2 self)
ABSTRACT. For any rational number r, we show that there exists a set A (weak truthtable reducible to the halting problem) such that any set B weak truth-table reducible to it has effective Hausdorff
dimension at most r, where A itself has dimension at least r. This implies, for any rational r, the existence of a wtt-lower cone of effective dimension r. 1.
- Journal of the London Mathematical Society , 2006
Cited by 22 (16 self)
Consider the countable semilattice RT consisting of the recursively enumerable Turing degrees. Although RT is known to be structurally rich, a major source of frustration is that no specific, natural degrees in RT have been discovered, except the bottom and top degrees, 0 and 0′. In order to overcome this difficulty, we embed RT into a larger degree structure which is better behaved. Namely, consider the countable distributive lattice Pw consisting of the weak degrees (also known as Muchnik degrees) of mass problems associated with non-empty Π^0_1 subsets of 2^ω. It is known that Pw contains a bottom degree 0 and a top degree 1 and is structurally rich. Moreover, Pw contains many specific, natural degrees other than 0 and 1. In particular, we show that in Pw one has 0 < d < r_1 < inf(r_2, 1) < 1. Here, d is the weak degree of the diagonally non-recursive functions, and r_n is the weak degree of the n-random reals. It is known that r_1 can be characterized as the maximum weak degree of a Π^0_1 subset of 2^ω of positive measure. We now show that inf(r_2, 1) can be characterized as the maximum weak degree of a Π^0_1 subset of 2^ω, the Turing upward closure of which is of positive measure. We exhibit a natural embedding of RT into Pw which is one-to-one, preserves the semilattice structure of RT, carries 0 to 0, and carries 0′ to 1. Identifying RT with its image in Pw, we show that all of the degrees in RT except 0 and 1 are incomparable with the specific degrees d, r_1, and inf(r_2, 1) in Pw.
, 1991
Cited by 21 (4 self)
: A set A of nonnegative integers is recursively enumerable (r.e.) if A can be computably listed. It is shown that there is a first order property, Q(X), definable in E, the lattice of r.e. sets
under inclusion, such that: (1) if A is any r.e. set satisfying Q(A) then A is nonrecursive and Turing incomplete; and (2) there exists an r.e. set A satisfying Q(A). This resolves a long open
question stemming from Post's Program of 1944, and it sheds new light on the fundamental problem of the relationship between the algebraic structure of an r.e. set A and the (Turing) degree of
information which A encodes. Recursively enumerable (r.e.) sets have been a central topic in mathematical logic, in recursion theory (i.e. computability theory), and in undecidable problems. They are
the next most effective type of set beyond recursive (i.e. computable) sets, and they occur naturally in many branches of mathematics. This together with the existence of nonrecursive r.e. sets has
enabled them to pl... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=265755","timestamp":"2014-04-21T08:23:46Z","content_type":null,"content_length":"36630","record_id":"<urn:uuid:79111d46-56bd-432b-8751-f63d107d2898>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
topological group
A topological group is a topological space with a continuous group structure: a group object in the category Top.
More explicitly, it is a group equipped with a topology such that the multiplication and inversion maps are continuous.
Uniform structure
A topological group $G$ carries two canonical uniformities: a right and left uniformity. The left uniformity consists of entourages $\sim_{l, U}$ where $x \sim_{l, U} y$ if $x y^{-1} \in U$; here $U$
ranges over neighborhoods of the identity that are symmetric: $g \in U \Leftrightarrow g^{-1} \in U$. The right uniformity similarly consists of entourages $\sim_{r, U}$ where $x \sim_{r, U} y$ if $x
^{-1} y \in U$. The uniform topology for either coincides with the topology of $G$.
Obviously when $G$ is commutative, the left and right uniformities coincide. They also coincide if $G$ is compact Hausdorff, since in that case there is only one uniformity whose uniform topology
reproduces the given topology.
Let $G$, $H$ be topological groups, and equip each with their left uniformities. Let $f: G \to H$ be a group homomorphism.
The following are equivalent:
• The map $f$ is continuous at a point of $G$;
• The map $f$ is continuous;
• The map $f$ is uniformly continuous.
Suppose $f$ is continuous at $g \in G$. Since neighborhoods of a point $x$ are $x$-translates of neighborhoods of the identity $e$, continuity at $g$ means that for all neighborhoods $V$ of $e \in H$
, there exists a neighborhood $U$ of $e \in G$ such that
$f(g U) \subseteq f(g) V$
Since $f$ is a homomorphism, it follows immediately from cancellation that $f(U) \subseteq V$. Therefore, for every neighborhood $V$ of $e \in H$, there exists a neighborhood $U$ of $e \in G$ such that
$x y^{-1} \in U \Rightarrow f(x) f(y)^{-1} = f(x y^{-1}) \in V$
in other words such that $x \sim_U y \Rightarrow f(x) \sim_V f(y)$. Hence $f$ is uniformly continuous with respect to the left uniformities. By similar reasoning, $f$ is uniformly continuous with
respect to the right uniformities.
Unitary representation on Hilbert spaces
A unitary representation $R$ of a topological group $G$ in a Hilbert space $\mathcal{H}$ is a continuous group homomorphism
$R \colon G \to \mathcal{U}(\mathcal{H})$
where $\mathcal{U}(\mathcal{H})$ is the group of unitary operators on $\mathcal{H}$ with respect to the strong topology.
Why the strong topology is used
The reason that in the definition of a unitary representation, the strong operator topology on $\mathcal{U}(\mathcal{H})$ is used and not the norm topology, is that only a few homomorphisms turn out to be continuous in the norm topology.
Example: let $G$ be a compact Lie group and $L^2(G)$ be the Hilbert space of square integrable measurable functions with respect to its Haar measure. The right regular representation of $G$ on $L^2
(G)$ is defined as
$R: G \to \mathcal{U}(L^2(G))$
$g \mapsto (R_g: f(x) \mapsto f(x g))$
and this will generally not be continuous in the norm topology, but is always continuous in the strong topology.
Which topological groups admit Lie group structure?
A proof is spelled out by Todd Trimble in an answer on MathOverflow.
The following monograph is not particulary about group representations, but some content of this page is based on it:
• Martin Schottenloher: A mathematical introduction to conformal field theory. Springer, 2nd edition 2008 (ZMATH entry) | {"url":"http://www.ncatlab.org/nlab/show/topological+group","timestamp":"2014-04-18T23:57:38Z","content_type":null,"content_length":"65688","record_id":"<urn:uuid:ac4c9e8f-0f7c-48b9-aee4-b3bc49b43d2b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: Pitman's Test of difference in variance...
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Pitman's Test of difference in variance...
Date Tue, 25 Mar 2008 10:20:13 -0000
First off, many medical statisticians refer to a plot of difference versus mean for two variables as a Bland-Altman plot because (Stata users) Bland and Altman popularised that very useful idea in many articles (and their textbooks). The idea is naturally much older.
I guess you refer to the test of Pitman (1939) which is based on calculating the correlation between difference and mean. In one sense this is a test statistic for a null hypothesis of equal variances given bivariate normality. See also Snedecor and Cochran (1989).
Without independent confirmation of such normality there must be a question over its robustness, although the point could be explored by simulation. Personally, I prefer to regard it as an exploratory diagnostic. A value near zero implies concordance.
Pitman, E. J. G. 1939. A note on normal correlation. Biometrika 31:
Snedecor, G. W., and W. G. Cochran. 1989. Statistical Methods. Ames, IA: Iowa State University Press.
These and many other references are given in the help file for -concord- (-search concord- for locations).
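For concreteness, a minimal Python sketch of the Pitman idea follows: correlate the paired differences with the sums (equivalent, up to a constant factor, to the means) and test that correlation against zero. The data are simulated, and the snippet is only an illustration, not the -concord- implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50)          # simulated paired measurements
y = x + rng.normal(0.0, 0.5, 50)

d, s = x - y, x + y                   # difference and sum (sum = 2 * mean)
r, p = stats.pearsonr(d, s)           # r near zero suggests equal variances
print(f"r = {r:.3f}, p = {p:.3f}")
```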
Amr Al Sayed
I have a problem understanding the interpretation of Pitman's Test of difference in variance when doing a Bland-Altman plot. I did search for the particular meaning of the r and p values, but could not find an exact meaning in this particular situation of a Bland-Altman plot (other than that it is a permutation non-parametric test...).
{"url":"http://www.stata.com/statalist/archive/2008-03/msg00926.html","timestamp":"2014-04-19T20:54:04Z","content_type":null,"content_length":"6658","record_id":"<urn:uuid:5d5eaa7a-a3f7-4692-97f1-d0eb66daf075>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Iselin SAT Math Tutor
...I truly have a good time doing SAT Math questions; when I was studying for the SAT during my junior year of high school, I would do SAT Math in my spare time because I had fun with it. Scoring
a 770 in this section, I have developed some tricks to approaching problems and solving them as quickly as possible. The students I have previously tutored said that I really helped them.
12 Subjects: including SAT math, algebra 1, algebra 2, trigonometry
...If the student’s algebraic, geometrical, or logical foundation is not up to par, I can help build the student’s mathematical toolbox. I learned the value of an excellent tutor firsthand for SAT
Math. She was able to offer keen insights and tricks to solve problems on the test where time is so valuable.
9 Subjects: including SAT math, chemistry, physics, calculus
...I can help make your science class better. I tutor students in elementary science. I work with planets and space, chemistry, earth science, ecology, geology and more.
56 Subjects: including SAT math, reading, English, writing
...Actuaries use probability and statistics to help keep insurance companies in business by charging the correct rates for insurance. I have passed four of the Casualty Actuarial Society exams
which rely heavily on probability and statistics and are extremely difficult. I have tutored in both probability and statistics before and have also taught these topics at a community college many
28 Subjects: including SAT math, physics, GRE, calculus
...I finished my degree at Princeton in 2006 majoring in Politics specializing in Political Theory and American Politics so I'm very well equipped to tutor social studies and history along with
related fields. I also took many classes in English at college so I can work with students in that area t...
40 Subjects: including SAT math, chemistry, reading, physics | {"url":"http://www.purplemath.com/Iselin_SAT_math_tutors.php","timestamp":"2014-04-16T10:47:23Z","content_type":null,"content_length":"23805","record_id":"<urn:uuid:60e72412-474a-4da9-9d61-89bd543d4d8d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
MAPP Online Low Frequency Polar Data Acquisition
TECHNICAL REPORT
By Perrin Meyer
August 2003
The high-resolution polar data used by Meyer Sound MAPP Online is acquired at 1 degree spatial resolution using an automated positioning device in an anechoic chamber. The measurement computer uses
a SIM-type algorithm to acquire 1/24th octave complex (magnitude and phase) data from 0 Hz to 20 kHz.
However, the wedges used in our anechoic chamber are only rated to 100 Hz. Below 100 Hz, the data in our anechoic chamber, while pretty good, is not accurate enough to give good results in MAPP
Online. For this reason, until now, we have not allowed any predictions in MAPP Online below 100 Hz.
Building an anechoic chamber that would give a good measurement of the free-field (anechoic) response of a subwoofer would be very difficult and expensive. The wavelength of sound at 30 Hz is 11.4
meters (38 feet), and therefore the anechoic wedges would need to be on the order of 10 to 20 feet long in order to absorb, and not reflect, low-frequency energy.
However, measuring outdoors has its problems as well. Cars and trucks create low-frequency rumble that can contaminate measurements. Wind noise can also be a big problem. But if done carefully,
outdoor measurements correlate well with indoor anechoic chamber measurements.
Figure 1 shows an on-axis frequency response measurement of a single Meyer Sound M2D loudspeaker at 4 meters, measured on a flat surface with the microphone placed directly on the ground. The
ground is known as a half plane, and on axis to a single loudspeaker it creates a very accurate measurement of the frequency response compared with a single loudspeaker measured in an anechoic
chamber. The half plane causes the magnitude of the response to increase by 6 dB compared with the anechoic chamber measurement.
The blue trace in Figure 1 shows the outdoor SIM half plane measurement of a single M2D, minus 6 dB. The red trace shows the anechoic chamber measurement of a single M2D. Note the excellent
correlation between the two measurements between 100 Hz and 10 kHz. Above 10 kHz, ground plane measurements are inaccurate due to the size of the microphone compared with the small wavelengths of
the sound. We have found through extensive testing that our anechoic chamber is very accurate above 10 kHz.
Below 100 Hz, as expected, the two traces differ. Since our chamber wedges are only rated to 100 Hz, we have found that ground plane measurements are more accurate. Especially visible in the red
trace is a 10 dB hole centered at around 80 Hz. This acoustic hole in our anechoic chamber below 100 Hz is the reason that we have not allowed predictions in MAPP Online below 100 Hz.
Figure 1. The on-axis frequency response measurement of a single M2D loudspeaker at 4 meters; the blue trace was measured outdoors and the red trace was measured in Meyer Sound s anechoic chamber
Boundary Element Modeling
In order to correct our anechoic chamber measurements below 100 Hz, we could measure outdoors at 4 meters every 1 degree. However, this is tedious and prone to errors.
Instead, we have chosen to simulate the polar response of our loudspeakers below 100 Hz using a mathematical technique called Boundary Elements.
Boundary Element Methods in computational acoustics are similar to Finite Element methods in mechanics. A mesh is created of the object to model, and the physics of wave phenomena are solved for
each element in order to provide a solution of the wave equation. Figure 2 shows the model mesh of an M2D loudspeaker. Even though this mesh might look boxy, it is more than sufficient for
accuracy at low frequencies. A general rule of thumb used in Boundary Element Modeling is to use a mesh grid that corresponds to 1/10th of a wavelength of the highest frequency predicted. At 100
Hz, the wavelength of sound is approximately 3 meters (about 12 feet), and so 1/10th of a wavelength is about 1 foot, which is larger than the model mesh used for the M2D. This mesh would not yield
accurate results for mid or high frequencies, but for these frequency ranges we use the very accurate data from our anechoic chamber.
Figure 2. Model mesh of an M2D loudspeaker
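The one-tenth-wavelength rule quoted above is easy to check numerically. A small sketch, assuming a speed of sound of 343 m/s:

```python
C_SOUND = 343.0  # speed of sound in air, m/s (assumed value)

def max_element_size_m(f_max_hz, fraction=10):
    """Rule of thumb: BEM mesh elements should be at most wavelength/10."""
    return C_SOUND / f_max_hz / fraction

print(f"{max_element_size_m(100):.3f} m")  # ~0.34 m (about 1.1 ft) at 100 Hz
```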
By specifying the velocity boundary conditions of the model loudspeaker mesh, we can inform the model of where the vibrating structure corresponds to the dual ten-inch woofers of an M2D. The other
mesh elements are then assumed to be rigid (reflecting).
Once we specify the velocity boundary conditions, the software solves the Kirchhoff-Helmholtz Integral Equation in order to find the surface pressure on the loudspeaker mesh. Figure 3 shows a
colormap of the surface pressure (at 90 Hz).
Figure 3. Color map showing surface pressure at 90 Hz
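For reference, the exterior Kirchhoff-Helmholtz integral equation mentioned above has the standard textbook form below, with the free-space Green's function; the report itself does not spell it out, and the sign of the velocity term depends on the assumed time convention.

```latex
p(\mathbf{x}) = \int_S \left[\, p(\mathbf{y})\,\frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}}
 - G(\mathbf{x},\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n_{\mathbf{y}}} \right] \mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{\,\mathrm{i}k\lvert\mathbf{x}-\mathbf{y}\rvert}}{4\pi\lvert\mathbf{x}-\mathbf{y}\rvert},
\qquad
\frac{\partial p}{\partial n} = \pm\,\mathrm{i}\rho\omega\, v_n .
```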
Once we know both the velocity boundary conditions and the surface pressure on the loudspeaker mesh, the software can solve the integral equation to find the pressure at any point in free space. In
our case, we have asked the software to solve for the pressure on a 4-meter circular radius, which corresponds to the 4-meter radius in our anechoic chamber.
Figures 4 and 5 show how we calculate the complex pressure at 4 meters every 1 degree. In our software package (Sysnoise), this is called Directivity.
Figure 4. (Top) and Figure 5. (Bottom) show how the complex pressure is calculated at 4 meters every 1 degree
By combining the on-axis 4-meter outdoor half plane measurement and the Boundary Element 1-degree Directivity calculations, we can supply the accurate complex polar data that MAPP Online requires
below 100 Hz. Figures 6, 7 and 8 show a MAPP Online prediction of a single M2D with a microphone at 1 meter. Notice how the Frequency Response and the Spectrum calculations now extend (accurately)
below 100 Hz.
Figure 6. (Top), Figure 7. (Center) and Figure 8. (Bottom) show a MAPP Online prediction of a single M2D with a microphone at 1 meter
Not all of the loudspeakers in MAPP Online have been corrected at this time. MAPP Online will automatically display the spectrum and frequency response below 100 Hz if the polar data file has been
corrected. If information below 100 Hz does not display, the data have not yet been corrected.
More information about boundary element methods in acoustics can be found in the book, Inverse Acoustic and Electromagnetic Scattering Theory, Second Edition, David Colton and Rainer Kress,
Springer-Verlag, Berlin, 1998. | {"url":"http://meyersound.com/support/papers/LF_polar_data/","timestamp":"2014-04-16T16:51:21Z","content_type":null,"content_length":"18439","record_id":"<urn:uuid:4c222233-ed9e-4907-afdc-6ec6b8c1d454>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about tidbits on that's almost right
The Avrocar located in Dayton, Ohio, US | Atlas Obscura | Curious and Wondrous Travel Destinations
In flight-testing, the Avrocar proved to have unresolved thrust and stability problems. The saucer proved immensely difficult to fly with very sensitive controls, and one pilot likened flying it
to “balancing on a beach ball.” Though the Avrocar was made to fly up to 190 km/h and it was believed with some modifications the project was salvageable, funding ran out and the project was
canceled in September 1961.
Nowadays, when any geek who can follow directions can build a four-rotor helicopter with cheap microcontrollers that adjust the attitude 30 times a second, it seems that all those designs for flying
stuff that was dangerously unstable would now be a piece of cake. Of course, I could be wrong. | {"url":"http://alltoosimple.wordpress.com/category/tidbits/","timestamp":"2014-04-18T19:07:42Z","content_type":null,"content_length":"43466","record_id":"<urn:uuid:bb1fb92b-a973-4961-8959-b1ebf428996e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
March 19th 2009, 10:13 PM #1
Super Member
Feb 2008
Show that to find Rem(n,4) one just needs to replace $n \in \mathbb{N}$ with the last two digits of n.
Why is it that only the last three digits of $n \in \mathbb{N}$ need to be considered when calculating Rem(n,8)?
Hi jezllt,
I am not sure if you are familiar with modular arithmetic. The solution relies on it.
Show that for Rem(n,4) you will just need to observe the last two digits of an n-digit number. (I'm assuming Rem stands for remainder.)
Part 1:
First note that any n-digit number can be rewritten as a linear combination of its place digits (i.e. ones, tens, etc.) multiplied by powers of 10 (note the power of 10 is one less than the place of the digit).
Ex: $439 = 4(10^2)+3(10^1)+9(10^0)$
Let x be your n-digit number, written $x=a_{n-1}a_{n-2}...a_1a_0, \; n\geq3$, where the $a_i$'s are its digits.
Then noting the earlier observation
$x= a_{n-1}(10^{n-1}) +a_{n-2}(10^{n-2})+...+a_2(10^2)+a_1(10)+a_0$
Factoring $10^2$ from every term except the last two (this is possible since $n \geq 3$):
$x= 10^2[ a_{n-1}(10^{n-3}) +a_{n-2}(10^{n-4})+...+a_2] +a_1(10)+a_0$
Note $10^2 = (2\cdot 5)^2=2^2\cdot 5^2=4\cdot 5^2$
$x \equiv 4\cdot 5^2[ a_{n-1}(10^{n-3}) +a_{n-2}(10^{n-4})+...+a_2] +a_1(10)+a_0 \pmod 4$
$\equiv 0 + [a_1(10)+a_0] \pmod 4$
so $x \equiv a_1(10)+a_0 \pmod 4$, i.e. $x$ and its last two digits $a_1a_0$ leave the same remainder on division by 4.
Thus evaluating the remainder of an n-digit number divided by 4 is equivalent to evaluating the remainder of its last two digits divided by 4.
For Rem(n,8) try the same technique, but factor $10^3$ from all digits except the last 3 and work mod 8 (note $10^3 = 8\cdot 5^3$).
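As a quick sanity check (not part of the proof), a few lines of Python confirm both claims on random inputs:

```python
import random

for _ in range(1000):
    n = random.randint(100, 10**9)
    assert n % 4 == (n % 100) % 4   # last two digits determine Rem(n, 4)
    assert n % 8 == (n % 1000) % 8  # last three digits determine Rem(n, 8)
print("All checks passed.")
```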
Hope this helps.
Last edited by Nokio720; March 19th 2009 at 11:30 PM.
March 19th 2009, 11:07 PM #2
Feb 2009
Loma Linda, CA | {"url":"http://mathhelpforum.com/number-theory/79606-dividing.html","timestamp":"2014-04-18T00:53:59Z","content_type":null,"content_length":"34119","record_id":"<urn:uuid:0aa4bb00-60d5-4cea-ac10-c69231108a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Hyperbolic Triangle
Using three semicircles, this Demonstration creates a hyperbolic triangle, measures and totals its angles, and calculates its area using the Gauss–Bonnet formula. You can vary the radii of the
circles, thus changing all the measurements.
In the upper half-plane model of hyperbolic geometry, the hyperbolic straight lines are the vertical rays and the semicircles touching the axis.
The measure of the angle formed by the intersection of two semicircles is calculated by measuring the angle of the tangents to the circles at that point. The angle sum of a hyperbolic triangle is
less than 180 degrees.
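The tangent-angle computation reduces to elementary circle geometry: two circles with centers $c_1$, $c_2$ on the real axis and radii $r_1$, $r_2$ meet at an angle $\theta$ with $\cos\theta = (r_1^2+r_2^2-d^2)/(2r_1r_2)$, where $d = |c_1-c_2|$ (the triangle's interior angle is $\theta$ or $\pi-\theta$, depending on which sides of the arcs the triangle lies). Below is a minimal Python sketch with arbitrarily chosen semicircles and invented angle values; it also applies the Gauss-Bonnet area formula stated in the next paragraph.

```python
import math

def intersection_angle(c1, r1, c2, r2):
    """Angle between two geodesic semicircles centered on the real axis
    (law of cosines in the triangle c1, c2, intersection point)."""
    d = abs(c1 - c2)
    return math.acos((r1**2 + r2**2 - d**2) / (2 * r1 * r2))

theta = intersection_angle(0.0, 2.0, 3.0, 2.0)   # arbitrary example circles
print(math.degrees(theta))                        # ~97.2 degrees

# Gauss-Bonnet: hyperbolic area from interior angles (invented values, radians).
alpha, beta, gamma = 0.5, 0.6, 0.7
print(math.pi - (alpha + beta + gamma))           # area ~ 1.34
```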
The Gauss–Bonnet formula states that the area of a hyperbolic triangle is the difference of π (or 180°) and the sum of the interior angles of the triangle. | {"url":"http://demonstrations.wolfram.com/HyperbolicTriangle/","timestamp":"2014-04-16T13:11:03Z","content_type":null,"content_length":"43918","record_id":"<urn:uuid:2eeed688-d8a5-475d-ad1d-fca21f1584ef>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinate Graphing Worksheets
What are coordinate graphs used for?
These graphs are used as a visual aid, usually to show a relationship between whatever X is and whatever Y is. When we deal with numbers alone, sometimes we miss patterns. By putting the information
on a coordinate graph, the picture is clearer. Graphs like these appear as descriptions or trends over time, like how an investment has increased or decreased, or the growth of plants with different
Any time two things can be related (so that a change in one means a change in the other) a coordinate graph can draw a helpful picture of what's happening.
A basic problem in coordinate graphing.
A gardener wants to know how much plant food to use on her tomatoes. She tries different amounts and measures how many pounds of tomatoes she gets with each amount of fertilizer. After gathering all
these numbers, she puts the information on a coordinate graph to see if there is a pattern.
Her graph looks like this:
She puts a point on the graph for each time she changes the amount of fertilizer (X) and how many pounds of tomatoes she gets (Y). Finally, in the second coordinate graph, she draws lines connecting
the points. This final picture makes it obvious that she gets more tomatoes when she uses more fertilizer, but only up to a point - where the line is highest. Adding more fertilizer after that
actually lowers the amount of tomatoes.
Who first used this form of math?
René Descartes (pronounced day-kart) is credited with developing the Cartesian coordinate system, another name for coordinate graphing. Descartes was an amazing genius who lived in the 17th century. Besides being a mathematician, he was also a philosopher, a scientist and a writer. You may already know one of his quotes: "I think, therefore I am."
An interesting fact about coordinate graphing:
The GPS (Global Positioning System) that we find in cars is based on coordinate graphing. If an imaginary grid is drawn around the entire earth, each point (or location) on the earth can then be
expressed as an X and Y coordinate. With GPS, these are called latitude and longitude, but the idea is exactly the same.
By knowing where you are on this global grid, the distance and direction to anywhere else can be calculated. This is how GPS figures out what to tell you. | {"url":"http://www.mathworksheetsworld.com/bytopic/coordinategraphing.html","timestamp":"2014-04-18T15:39:20Z","content_type":null,"content_length":"10978","record_id":"<urn:uuid:89662a35-3f04-43fb-956b-72bf10df0135>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Highbridge, NY Precalculus Tutor
Find a Highbridge, NY Precalculus Tutor
...I have a Ph.D. in chemical engineering from the California Institute of Technology, and a minor concentration in applied mathematics. I have worked over 20 years in research in the oil,
aerospace, and investment management industries. I also have extensive teaching experience -- both as a mathematics tutor and an adjunct professor.
11 Subjects: including precalculus, calculus, algebra 1, SAT math
...Keeping a good attitude can be a key part of mastering physics. As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I
bring all the tools you'll need to succeed! Of course, a big part of physics is math, and I am experienced ...
18 Subjects: including precalculus, reading, GRE, physics
...I love words, and I think it helps students that I'm able to define the words we encounter in a fun and relatable way, without the aid of a dictionary. I also teach a number of memory
techniques to assist students in building their vocabularies and to aid in the memorization they do for other su...
36 Subjects: including precalculus, English, chemistry, calculus
...I am now a certified Grade 7-12 Mathematics Classroom Teacher in New York State since 2013. I spent Spring 2013 as a student teacher at East Bronx Academy for the Future. I also worked as a
part-time Math and Statistics Tutor at Bronx Community College.
8 Subjects: including precalculus, algebra 1, algebra 2, SAT math
...I taught high school students and privately tutored all levels, various subjects. I was brought up in Paris (France) and Toronto (Canada), thus I am perfectly bilingual. My main challenging
tutoring position was to tutor an 11 year old English speaking boy who joined the French Lycee in Toronto, Canada.
18 Subjects: including precalculus, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Highbridge_NY_precalculus_tutors.php","timestamp":"2014-04-17T16:10:38Z","content_type":null,"content_length":"24459","record_id":"<urn:uuid:12350270-3dd2-4b32-8970-8a0da735cac9>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum solid angle and aspect ratio of an $n$-simplex
In computational geometry and other fields, it is of interest to have degeneracy measures for shapes of simplices, which quantitatively separate the regular simplex from degenerate simplices.
In two dimensions, two such shape measures are the minimum angle of a triangle and its aspect ratio, i.e. the quotient of the radii of insphere and circumsphere.
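For the two-dimensional case, both measures are elementary to compute; here is a quick Python sketch (added for illustration) using the standard formulas $r = A/s$ for the inradius and $R = abc/(4A)$ for the circumradius:

```python
import math

def aspect_ratio(a, b, c):
    """Inradius / circumradius of a triangle with side lengths a, b, c."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    r = area / s                 # inradius
    R = a * b * c / (4 * area)   # circumradius
    return r / R

print(aspect_ratio(1, 1, 1))     # equilateral: 0.5 (the maximum)
print(aspect_ratio(1, 1, 1.99))  # near-degenerate: close to 0
```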
While many of these shape measures naturally generalize to higher dimensions, and are documented in the literature for arbitrary dimension, I haven't found any source which relates the minimum solid angle of a simplex with any such shape measure in arbitrary dimensions. It is "obvious" that simplices with small solid angles at the corner vertices are degenerate, but I haven't found any source on this in the literature.
Question or reference request: Can you relate the minimum solid angle of a $d$-dimensional simplex with its aspect ratio for arbitrary $d$?
A possible answer would generalize Theorem 6.1 of "A. Liu and B. Joe. Relationship between tetrahedron shape measures, BIT, 34 (1994)" which states:
For any tetrahedron $T$ we have $\sqrt{3}/24 \rho^2 \leq \sigma_{\min} \leq (2/(3^{1/4})) \sqrt{\rho}$, where $\sigma_{\min}$ is the minimum solid angle of $T$ and $\rho$ denotes the aspect ratio of $T$.
euclidean-geometry geometry convex-polytopes
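For concreteness, both quantities in the theorem are straightforward to compute numerically when $d = 3$. The sketch below is only an illustration, assuming the conventions $\rho$ = inradius/circumradius and solid angles measured in steradians, and using the Van Oosterom-Strackee formula for the vertex solid angles; for the regular tetrahedron it reports $\sigma_{\min} \approx 0.5513$ and $\rho = 1/3$, consistent with the stated bounds.

import numpy as np

def solid_angle(apex, p1, p2, p3):
    # Van Oosterom-Strackee formula: solid angle (steradians) subtended
    # at `apex` by the triangle (p1, p2, p3)
    a, b, c = p1 - apex, p2 - apex, p3 - apex
    la, lb, lc = (np.linalg.norm(v) for v in (a, b, c))
    num = abs(np.dot(a, np.cross(b, c)))
    den = la*lb*lc + np.dot(a, b)*lc + np.dot(a, c)*lb + np.dot(b, c)*la
    return 2.0 * np.arctan2(num, den)

def aspect_ratio(v0, v1, v2, v3):
    # rho = inradius / circumradius of the tetrahedron
    vol = abs(np.linalg.det(np.array([v1 - v0, v2 - v0, v3 - v0]))) / 6.0
    faces = [(v1, v2, v3), (v0, v2, v3), (v0, v1, v3), (v0, v1, v2)]
    area = sum(0.5 * np.linalg.norm(np.cross(q - p, r - p))
               for p, q, r in faces)
    r_in = 3.0 * vol / area
    # circumcenter x solves 2 (v_i - v0) . x = |v_i|^2 - |v0|^2, i = 1..3
    A = 2.0 * np.array([v1 - v0, v2 - v0, v3 - v0])
    rhs = np.array([v.dot(v) - v0.dot(v0) for v in (v1, v2, v3)])
    return r_in / np.linalg.norm(np.linalg.solve(A, rhs) - v0)

verts = [np.array(p) for p in
         [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
          (0.5, np.sqrt(3)/2, 0.0), (0.5, np.sqrt(3)/6, np.sqrt(6)/3)]]
sigma_min = min(solid_angle(verts[i], *(verts[j] for j in range(4) if j != i))
                for i in range(4))
print(sigma_min, aspect_ratio(*verts))   # ~0.5513 steradians and ~0.3333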
Of course you cannot have an equality relating $\sigma_{\min}$ to $\rho$ because you could fix $\sigma_{\min}$ (say, in a "needle" tetrahedron) while varying $\rho$. So are you asking for tighter
bounds than provided by Liu-Joe? Do they say the bounds are not tight? – Joseph O'Rourke Feb 29 '12 at 1:40
I conjecture he is asking for this in arbitrary dimension... – Igor Rivin Feb 29 '12 at 3:35
Well, I have corrected an error. Of course, the quotient is insphere to circumsphere, not the other way around. - Any falsification of this, which I don't believe to be true, would pose some
conceptual problems in computational geometry for higher dimensions, so this question is indeed motivated. And yes, I am asking for this in arbitrary dimension. I made it explicit in the text body.
– Martin Feb 29 '12 at 10:40
@Martin & @Igor: OK, I see the question now. Thanks. – Joseph O'Rourke Feb 29 '12 at 11:13
| {"url":"http://mathoverflow.net/questions/89807/minimum-solid-angle-and-aspect-ratio-of-an-n-simplex","timestamp":"2014-04-16T10:58:12Z","content_type":null,"content_length":"51451","record_id":"<urn:uuid:9ecef255-6902-4825-8966-3a57a8c91371>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Acceleration of Gravity
November 26th 2007, 10:27 PM #1
Aug 2007
Acceleration of Gravity
The acceleration of gravity can be measured by projecting a body upward and measuring the time that it takes to pass two given points in both directions.
Show that if the time the body takes to pass a horizontal line $A$ in both directions is $T_A$, and the time to go by a second line $B$ in both directions is $T_B$, then, assuming that the
acceleration is constant, its magnitude is $g = \frac{8h}{T_{a}^{2} - T_{b}^{2}}$ where $h$ is the height of line $B$ above line $A$.
So this is a constant acceleration problem. Also the path is parabolic. How would I deduce the above equality?
The height of a projectile as a function of time is:
$x = -\frac{gt^2}{2} + v_0 t + h_0$
So the projectile passes a height $h_a$ at the roots of:
$-\frac{gt^2}{2} + v_0 t + (h_0 - h_a) = 0$,
which are:
$t = \frac{-v_0 \pm \sqrt{v_0^2 + 2g(h_0 - h_a)}}{-g}$.
The difference between these roots is:
$T_a = \frac{2}{g}\sqrt{v_0^2 + 2g(h_0 - h_a)}$.
Squaring both sides gives
$\frac{g^2 T_a^2}{4} = v_0^2 + 2g(h_0 - h_a)$.
Now if we have two such data points we may subtract them:
$\frac{g^2 T_a^2}{4} - \frac{g^2 T_b^2}{4} = 2g(h_b - h_a)$
$\frac{g}{4}(T_a^2 - T_b^2) = 2(h_b - h_a) = 2h$
and the rest is algebra.
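As a quick numerical sanity check, here is a minimal Python sketch with assumed values for g, v_0 and the two line heights; it computes the crossing-time differences as above and recovers g from the final formula:

import math

g_true = 9.81             # assumed gravitational acceleration (m/s^2)
v0, h0 = 30.0, 0.0        # assumed launch speed (m/s) and launch height
h_a, h_b = 5.0, 12.0      # assumed heights of lines A and B (B above A)

def crossing_time_diff(h):
    # difference between the two roots of -g t^2/2 + v0 t + (h0 - h) = 0
    return 2.0 * math.sqrt(v0**2 + 2.0 * g_true * (h0 - h)) / g_true

T_a, T_b = crossing_time_diff(h_a), crossing_time_diff(h_b)
g_est = 8.0 * (h_b - h_a) / (T_a**2 - T_b**2)
print(g_est)              # 9.81, matching g_true up to rounding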
November 26th 2007, 10:52 PM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/advanced-applied-math/23581-acceleration-gravity.html","timestamp":"2014-04-18T21:26:49Z","content_type":null,"content_length":"38256","record_id":"<urn:uuid:9a67e751-3e14-468f-ad06-f298bdf7f595>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
ECE 531: Detection and Estimation
University of Illinois at Chicago, ECE
Spring 2011
Instructor: Natasha Devroye, devroye@ece.uic.edu
Course coordinates: Tuesday, Thursday from 2-3:15pm in Lincoln Hall 103.
Office hours: Tuesdays from 3:30-4:30pm and Thursdays 5-6pm in SEO 1039, or by appointment
Welcome to ECE 531! This course is a graduate-level introduction to detection and estimation theory, whose goal is to extract information from signals in noise. A solid background in probability and
some knowledge of signal processing is needed.
Course Textbook: Fundamentals of Statistical Signal Processing, Volume 1: Estimation Theory, by Steven M. Kay, Prentice Hall, 1993 and (possibly) Fundamentals of Statistical Signal Processing, Volume
2: Detection Theory, by Steven M. Kay, Prentice Hall 1998.
Other useful references:
Harry L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, II, III, IV
H. Vincent Poor, Introduction to Signal Detection and Estimation
Louis L. Scharf and Cedric Demeure, Statistical Signal Processing: Detection, Estimation, and Time Series Analysis
Carl Helstrom, Elements of Signal Detection and Estimation. It's out of print, so here's my pdf copy.
A nice survey on the EM algorithm
Notes: I will follow the course textbooks fairly closely, using a mixture of slides (highlighting the main points and with nice illustrations) and more in-depth blackboard derivations/proofs in
class. I will post a pdf version of the slides as they become ready here, but the derivations will be given in class only.
Topics: Estimation Theory:
General Minimum Variance Unbiased Estimation, Ch.2+Ch.3, and Chapter 5 notes
Cramer-Rao Lower Bound, Ch.3
Linear Models+Unbiased Estimators, Ch.4 and Ch. 6 notes
Maximum Likelihood Estimation, Ch.7 notes
Least squares estimation, Ch.8 notes
Bayesian Estimation, select Ch.10-12 notes
Kalman filtering, select Ch.12-13 notes
Detection Theory:
Statistical Detection Theory, Ch.3 notes
Deterministic Signals, Ch.4 notes
Random Signals, Ch.5 notes
Statistical Detection Theory 2, Ch.6 notes
Grading: Weekly homeworks (15%), Exam 1 = max(Exam1, Exam 2, Final) (20%), Exam 2 = max(Exam 2, Final) (20%), Project (20%), Final exam (25%).
Homework: Will be handed out each Thursday, due the next Thursday (1 week). All assignments from HW2 onwards MUST be submitted electronically as a latex file, and a printed pdf copy handed in during
class. I will create the solutions from the best solutions I receive (with credit to the authors!). Submit the latex file via the appropriate assignment # on the Blackboard site. Please make sure the
"Title" you enter is of the form "HW#_Student_Name" (e.g. HW4_Natasha_Devroye). You can find latex resources for Windows and for MAC by googling around. Finally, here is a template for you to use for
homework submissions.
HW1: out 1/13 due 1/20 -- Book 1, problems 2.1, 2.3, 2.8, 2.9, Solutions
Hw2: out 1/20 due 1/27 -- Book 1, problems 3.3, 3.11, 3.15 Solutions
HW3: out 1/27 due 2/3 -- Book 1, problem 4.6, 4.13, 5.3, 5.9 Solutions
HW4: out 2/3, due 2/10 -- Book 1, problem 6.7, 6.9, 7.3, 7.14 Solutions
HW5: out 2/17, due 2/24 -- Book 1, problem 7.18, 7.20, 8.5, 8.10 Solutions
HW6: out 2/24, due 3/3 -- Book 1, problem 8.20, 8.27, 10.3 Solutions
HW7: out 3/3, due 3/10 -- Book 1, problem 11.3, 12.1, 12.11 Solutions
HW8: out 3/10, due 3/17 -- Book 1, problem 13.4, 13.12, 13.15 Solutions
HW9: out 3/31, due 4/7 -- Book 2, problem 3.4, 3.6, 3.12, 3.18 Solutions
HW10: out 4/7, due 4/14 -- Book 2, problem 4.6, 4.10, 4.19, 4.24 Solutions
HW11: out 4/14, due 4/21 -- Book 2, problem 5.14, 5.17, 6.2 Solutions
Exams: For midterm 1, you may have one 8.5x11 double-sided sheet which you can fill with anything you like. No other books, notes or calculators. For midterm 2, you may have 2 of these crib sheets
and for the final exam you may have 3 such crib sheets.
Practice midterm1 2009, Midterm 1 2009, Pratice midterm2 2009, Midterm 2 2009, Practice final 2009
Project: The project, to be done individually, will consist of an in-depth study of an implementation of detection and estimation principles. The goal is to explore contemporary research topics in
the area of detection, estimation and generally statistical signal processing that are not covered in class. Pick (or suggest) a topic of interest to you and provide a comprehensive treatment of it:
introduce the problem/topic, survey what has been done by whom on the topic (we expect many citations to relevant journal and conference papers), implement (in Matlab, C, whatever works for you) the
detection/estimation technique for various scenarios and make a demo, to be performed and explained live in class during your presentation, that illustrates its performance. You may compare various
methods (e.g. different tracking algorithms), or may look at your implementation under a variety of conditions. The goal is to show a deep understanding of the subtleties of that detection/estimation
scheme, where is it useful, its limitations and strengths, and some of the nitty-gritty details which you did not expect to encounter. As the applications of detection and estimation theory span
several fields, there is no single journal in which to look for articles. One useful resource is IEEE Xplore where you can search many of the relevant IEEE journals such as IEEE Transactions on
Signal Processing, IEEE Transactions on Wireless Communications, IEEE Transactions on Antennas and Propagation, IEEE Transactions on Information Theory, and all sorts of other ones. More recent
developments are often found in conference proceedings, which are sometimes also found on IEEE Xplore, or sometimes by navigating conference websites. Again, as detection and estimation spans a
variety of fields, there is no unique relevant conference but some relevant ones include the IEEE Workshop on Statistical Signal Processing, IEEE ICASSP, Asilomar Conference on Signals, Systems and
Computers, ISIT, Globecom, IEEE Radar Conference, and many more! Plagiarism -- or copying someone else's work or code -- is not permitted and will be dealt with according to UIC policy, see UIC's
policy. Some help with bibtex and more. Here's a template which you can use if you like. It's in the standard IEEE format for journal articles. You'll need the IEEEtran document class (IEEEtrans.cls
needs to be in the directory in which your file is), which you can download and read more about here.
The project will consist of 3 parts: 1) a 5-8 pages, single spaced latex 11-12pt report, 2) a live demo, of your implementation, or comparison between different implementations/techniques, and 3) a
presentation in front of the class which will introduce the class to your selected topic through slides and the live demo, which should be 15 minutes long. More details will be given during the term.
Project grading: You will be graded on the quality of your written report, your live demo, and your presentation. Possible project topics: the EM algorithm and its applications, the Kalman filter and
(one or more of) its applications, spectral estimation, white-spaces detection, distributed detection and estimation, sensor fusion, sequential detection and estimation, applications in your domain
of interest (biology, image processing, optics, etc.), Markov Chain Monte Carlo, particle filters, all aspects of radar signal processing (detection, tracking, SAR imaging). Your project must have an
implementable component illustrating the key ideas and performance.
Project timeline: Choose topic (2/23), make a list of relevant papers, skeleton of the paper, outline of the demo (3/29), hand-in final paper to entire class -- will be posted so other students can
read before presentation (4/19), in-class presentations and demos (4/21, 4/26).
Important dates (subject to change):
1/11: First class, introduction. Introduction slides
1/13: Minimum variance unbiased estimation and the Cramer-Rao Lower Bound.
1/18: Cramer-Rao Lower Bound.
1/20: Cramer-Rao Lower Bound, Linear Model.
1/25: Linear Model.
1/27: General MVUE.
1/28: MAKEUP lecture for 2/8, to be held in LH 103 from 4-5pm.
2/1: General MVUE.
2/3: Best Linear Unbiased Estimator (BLUE).
2/8: Devroye out of town, class cancelled.
2/10: Midterm 1.
2/15: Maximum likelihood estimation (MLE).
2/17: MLE.
2/22: MLE
2/24: MLE, Least Squares Estimation. Deadline to select project topic
3/3: Selected topics Ch.10-12, Bayesian estimation.
3/8: Bayesian estimation.
3/10: Kalman filtering.
3/15: Midterm 2.
3/17: Detection theory - intro to statistical detection theory.
3/22: Spring Break, no class
3/24: Spring Break, no class
3/29: Detection theory, chapter 3 continued. Project outline due.
3/31: Detection theory - Deterministic Signals Ch 4.
4/5: Ch.4 on Detection of Deterministic Signals in Gaussian noise continued.
4/7: Ch.5 on Detecion of Random Signals in Gaussian noise.
4/12: Ch 5 + Ch.6 on Statistical Decision Theory II.
4/14: Ch.6 on Statistical Decision Theory II.
4/19: Buffer. Written project report due.
4/21: Last class Review notes.
4/26: Project presentations.
4/28: Project presentations.
Final exam week 5/2 - 5/6 | {"url":"http://www.ece.uic.edu/~devroye/courses/ECE531/","timestamp":"2014-04-16T18:56:36Z","content_type":null,"content_length":"13115","record_id":"<urn:uuid:c32aff1e-878d-4c2c-be38-0a59302cad34>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
KL divergence(s) comparison,
$P_1$, $P_2$, $P_3$ are probability distributions defined on the same support.
Knowing that $H(P_1) < H(P_2) < H(P_3)$, can we compare $D_{KL}(P_2,P_1)$ and $D_{KL}(P_3,P_1)$ ?
(H is the Shannon Entropy and $D_{KL}$ is the Kullback–Leibler divergence)
Thank you.
it.information-theory pr.probability
In general there is no relation between the two divergences. In fact, both of the divergences may be either finite or infinite, independent of the values of the entropies.
To be precise, if $P_1$ is not absolutely continuous w.r.t. $P_2$, then $D_{KL}(P_2,P_1)=\infty$. Similarly, if $P_1$ is not absolutely continuous w.r.t. $P_3$, then $D_{KL}(P_3,P_1)=\infty$. This fact is independent of the entropies of $P_1$, $P_2$ and $P_3$. Hence, by continuity, the ratio $D_{KL}(P_2,P_1)/D_{KL}(P_3,P_1)$ can be arbitrary.
Thank you. If we specify that KL is continuous at $(S_2, S_1)$ (respectively $(S_3, S_1)$) and that the distributions $S_1$, $S_2$, $S_3$ are strictly positive over all the support
elements. Is it possible to characterize $D_{KL}(P_2,P_1)/D_{KL}(P_3,P_1)$ ? – Raskol Apr 1 '13 at 15:54
Consider the following distributions on a state space of cardinality $n+2$: $P_1=((1-2\epsilon)Q_1,\epsilon,\epsilon)$, $P_2=((1-2\epsilon)Q_2,2\lambda\epsilon,2(1-\lambda)\epsilon)$, where $Q_1,Q_2$ are arbitrary distributions on $n$ states and $0<\lambda<1$. All three have full support, and for small $\epsilon$, $H(P_i)\approx H(Q_i)$. However, $D(P_2,P_1)=(1-4k\epsilon)D(Q_2,Q_1) + \epsilon\log(1/2\lambda) + \epsilon\log(1/2(1-\lambda))$. By choosing $\lambda$ conveniently, $D(P_2,P_1)$ can be tuned to any value between $(1-4k\epsilon)D(Q_2,Q_1)$ and $\infty$. – jarauh Apr 3 '13 at 12:33
| {"url":"http://mathoverflow.net/questions/125884/kl-divergences-comparison","timestamp":"2014-04-17T01:59:18Z","content_type":null,"content_length":"52778","record_id":"<urn:uuid:fedf24e4-0000-0000-0000-000000000000>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
A proof about differentiable functions
May 8th 2011, 10:02 AM #1
Junior Member
Mar 2011
A proof about differentiable functions
I'm trying to prove that if two functions f and g are differentiable on R
and they satisfy for all x
f'(x) * g(x) != f(x) * g'(x)
then between every two vanishing points of f there is a vanishing point of g.
I've tried to assume contrarily that there are two vanishing points of f without a vanishing point of g between them or on them (call them a and b), define a function f/g, and use Rolle's theorem
to show that there exists a c in [a,b] such that f'(c)/g'(c) = 0. But that hasn't really helped...
I think that you have made a good start there. If there are two vanishing points of f without a vanishing point of g between them, then (g(x))^2 will be strictly positive throughout that
interval. Also, $0 \neq \frac{f'(x)g(x) - f(x)g'(x)}{(g(x))^2} = \frac d{dx}\Bigl(\frac{f(x)}{g(x)}\Bigr).$ Now show that that contradicts Rolle's theorem for the function f/g.
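For a concrete check, take f = sin and g = cos: then f'(x)g(x) - f(x)g'(x) = cos^2 x + sin^2 x = 1, which is never zero, and indeed between the consecutive zeros 0 and pi of f there is a zero of g at pi/2. A minimal numerical sketch:

import numpy as np

f, g = np.sin, np.cos          # f'g - fg' = cos^2 + sin^2 = 1, never zero
a, b = 0.0, np.pi              # two consecutive vanishing points of f
xs = np.linspace(a, b, 1001)
flips = np.where(np.diff(np.sign(g(xs))) != 0)[0]
print(xs[flips])               # about pi/2: a vanishing point of g in (a, b)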
Oh. It seems so obvious now. Thanks.
May 8th 2011, 10:20 AM #2
May 8th 2011, 11:07 PM #3
Junior Member
Mar 2011 | {"url":"http://mathhelpforum.com/calculus/179900-proof-about-differentiable-functions.html","timestamp":"2014-04-17T19:20:37Z","content_type":null,"content_length":"37248","record_id":"<urn:uuid:6ef54e31-006f-4996-b968-f935db161d27>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
cvs commit: ports/devel Makefile ports/devel/pear-Math_Fraction Makefile distinfo pkg-descr
Martin Wilke miwi at FreeBSD.org
Sun Aug 5 14:03:15 PDT 2007
miwi 2007-08-05 21:03:15 UTC
FreeBSD ports repository
Modified files:
devel Makefile
Added files:
devel/pear-Math_Fraction Makefile distinfo pkg-descr
Classes that represent and manipulate fractions (x = a/b).
The Math_FractionOp static class contains definitions for:
- basic arithmetic operations
- comparing fractions
- greatest common divisor (gcd) and least common multiple (lcm)
of two integers
- simplifying (reducing) and getting the reciprocal of a fraction
- converting a float to fraction.
WWW: http://pear.php.net/package/Math_Fraction/
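For comparison only (this is not the PEAR package itself), the same operations expressed with Python's standard library look like:

from fractions import Fraction
from math import gcd

a, b = Fraction(3, 4), Fraction(5, 6)
print(a + b, a * b, a < b)        # basic arithmetic and comparison
print(gcd(12, 18))                # greatest common divisor -> 6
print(12 * 18 // gcd(12, 18))     # least common multiple -> 36
print(Fraction(9, 12))            # simplifying (reducing) -> 3/4
print(1 / a)                      # reciprocal -> 4/3
print(Fraction(0.375))            # converting a float to a fraction -> 3/8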
Revision Changes Path
1.2837 +1 -0 ports/devel/Makefile
1.1 +26 -0 ports/devel/pear-Math_Fraction/Makefile (new)
1.1 +3 -0 ports/devel/pear-Math_Fraction/distinfo (new)
1.1 +11 -0 ports/devel/pear-Math_Fraction/pkg-descr (new)
More information about the cvs-all mailing list | {"url":"http://lists.freebsd.org/pipermail/cvs-all/2007-August/229163.html","timestamp":"2014-04-21T07:24:03Z","content_type":null,"content_length":"3742","record_id":"<urn:uuid:e800b94b-0a76-4f0c-98c3-2d4a7ee21e90>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00597-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Formula for Intelligence: The Recursive Paradigm
An explanation of the recursive approach to artificial intelligence, written for “The Futurecast,” a monthly column in the Library Journal.
Originally published September 1992. Published on KurzweilAI.net August 6, 2001.
There is a profound satisfaction in simple explanations that can truly account for complicated phenomena. The search for unifying formulas has been a goal of science since its inception with the
Ionian Greeks 25 centuries ago.
Is there a formula that describes, explains, or underlies intelligence? At first, the answer might appear to be an obvious “No.” We have not been entirely successful in even defining intelligence,
much less in expressing it in a formula, a set of laws, or models. Intelligence would seem to be too complex for such reduction.
We should not be too hasty, however, in rejecting the idea of simple paradigms underlying the undeniably complex phenomenon of intelligence. Let us examine the playing of chess as an example of an
intelligent activity. Chess is often used as a prototype for examining issues in intelligence. Scientist Raj Reddy cites studies of chess as playing the same role in artificial intelligence (AI) as
studies of the bacterium E. Coli play in biology: an ideal laboratory for studying fundamental questions. I will describe a strikingly simple method for playing an outstanding game of chess. We will
then see that the method expands to a remarkably broad array of intelligent tasks.
Chess is a game of two opponents in which the players take turns adjusting the positions of their pieces on a playing board according to prescribed rules. Of course, many other games can be described
in the same way. Many activities in real life, such as war and business, have similar characteristics. Indeed, chess was originally formulated as a model of war.
To win, or provide the highest probability of winning, one selects the best possible move every time it is one’s turn to move. The question, then, is what is the best move? Our “recursive” method
provides the following rule, which, if you follow it carefully, will enable you to play an exceptionally good game of chess:
Every time it is your move, select your best move on the assumption that your opponent will do the same.
As I believe will become clear, this rule is all that is needed to play an excellent game of chess. If this is so, then we might conclude either that the recursive formula is a powerful and
deceptively simple formula for the algorithm of at least some forms of intelligence, or, alternatively, that chess is not an intelligent game.
What you gonna call?
Before delving further into the implications of the recursive formula, let us examine how it works. We fashion a program called “Pick Best Move.” When it is called, its job is to pick the best move.
It is what we call a recursive program because it is capable of calling itself. This capability, called recursion, is perfectly feasible in modern programming languages and is commonly used in AI.
In order for Pick Best Move to select the best move, it must obviously consider each move, and thus it needs to generate a list of all possible moves. It does this by simply applying the rules of
chess. It thus includes a mechanism programmed with the rules of chess to generate its options. To play a different game, we simply replace this module with another one programmed with the rules of
that particular game.
We now have a list of possible moves. We examine each in turn and ask the question: Does this move enable me to win or lose? If the answer is lose, we do not select that move. If the answer is win,
we take that move. If more than one move enables us to win, it does not matter which one we take.
The problem now reduces to answering the question: Does this move enable me to win or lose? At this point we note that our winning or losing is affected by what our opponent might do. A reasonable
assumption is that our opponent will also choose his or her (or its) best move. We need to anticipate what that move might be, so we use our own ability to select the best move to determine what our
opponent is likely to do. In this we are following the part of the recursive rules that states, “Select the best move on the assumption that your opponent will do the same.”
Pick Best Move
Our program is now structured as follows. We generate a list of all possible moves allowed by the rules. We examine each possible move in turn. For each move, we generate a hypothetical board
representing what the placement of the pieces would be if we were in fact to make this move.
How are we to do this? It turns out we have a program that is designed to do exactly that. It is called “Pick Best Move.” Pick Best Move is, of course, the program we are already in, so this is where
the recursion comes in. Pick Best Move calls itself to determine what our opponent will do. When called to determine the best move for our opponent, Pick Best Move begins to determine all of the
moves that our opponent could make at this point. For each one, it wants to know how its opponent (which is now us) would respond and thus again calls Pick Best Move for each possible move of our
opponent to determine what our response to that move would (or should) be.
The program thus keeps calling itself, continuing to expand possible moves and countermoves in an ever expanding tree of possibilities. The next question is: Where does this all end? Let us start
with an attempt to play perfect chess. We continue to expand the tree of possible moves and countermoves until each branch results in an end game. Each end game provides the answer win, tie, or lose.
Thus, at the furthest point of expansion of moves and countermoves, some moves finally finish the game. These final moves are the terminal leaves of our expanding tree of moves. Now, instead of
continuing to call Pick Best Move, we can now begin returning from these calls. As we begin to return from all the nested Pick Best Move calls, we have determined the best move at each point
(including the best move for our opponent), and so we can finally select the correct move for the current actual board situation.
The above procedure is guaranteed to play a perfect game of chess. This is because chess is a finite game. Interestingly, it is finite only because of the tie rule, which states that repetition of a
move results in a tie. If it were not for the tie rule, then chess would be an infinite game (the tree of possible moves and countermoves could expand forever), and we could not be sure of
determining the best move within a finite amount of time. Thus, this very simple recursive formula plays not just an excellent, but a perfect game of chess. The most complicated part of actually
implementing the recursive formula is generating the allowable moves at each point. Doing so, however, requires only a straightforward codification of the rules. Playing a perfect game of chess is
thus no more complicated than understanding the rules.
Trimming the tree
When shopping for services like car repair, the smart shopper is well advised to ask: How long will it take? The same question is quite appropriate in applying the recursive formula. Unfortunately,
with respect to playing a perfect game of chess, I have computed the answer to be approximately 40 billion years. And that is just to make the first move!
Playing perfect chess might be considered intelligent behavior, but not at 40 billion years per move. Before we throw out the recursive formula, however, let us attempt to modify it to take into
account our human patience (and mortality). Clearly, we need to put limits on how deep we allow the recursion to take place. How large we allow the move/countermove tree to grow should depend on how
much computation we have available. In this way, we can use the recursive formula on any equipment, from a pocket computer to the largest supercomputer.
If we cannot expand each sequence of moves and countermoves to an end game, then we need some way of evaluating the desirability of a board short of winning, losing, or tying. As it turns out, simple
methods such as counting the values of each piece (ten for the queen, five for a rook, etc.) do quite nicely. We thus end up with an algorithm that does indeed reduce to a simple formula and is
capable of playing a very good game of chess, with reasonable (and in tournament play, acceptable) response time.
The next question is: How good a game? In order to make linear progress with the recursive formula, we need to increase the available computation exponentially (for each additional move or
countermove that we want to look ahead, we need to multiply the available computation by approximately eight).
However, as I have pointed out several times in The Futurecast”, we are indeed making exponential progress in the power of our computers with the linear passing of time. And indeed, computers have
been gaining in their championship ratings at a remarkably consistent 45 points each year. They are now rated at about 2600, exceeded by only a few dozen humans. The world champion has a rating of
2800, so it is only a matter of a few years before computers overtake all human play.
The recursive formula can be (and has been) applied to many other types of tasks. In addition to board games, the recursive paradigm has been successfully applied to solving a wide range of problems
in mathematics, evaluating strategies in finance and business, and many other domains.
The three levels of intelligence
Is there a limit to the ability of this simple algorithm to emulate intelligence? In practical applications of the recursive paradigm to a variety of tasks, intelligent problems appear to divide into
three levels or classes. At one extreme are problems that require little enough computation that they can be completely analyzed. Examples are tic-tac-toe and such word problems as cannibals and
missionaries. In the middle are problems for which we cannot afford fun analysis but that appear to be successfully dealt with using a simple implementation of recursion. By “successfully dealt with”
I mean that it is possible to match the performance of most (and in some cases all) humans. Chess is in this second class. Finally, there is a third class of problems that are not amenable to
recursive solutions, at least not using simple procedures at the terminal leaf. In chess, when we stop expanding the tree of moves and countermoves, we can use a simple procedure of just counting up
piece values (or some other simple method). If instead we need to perform a “deep” analysis of the quality of the board, then we would not be relying just on recursion, but on this deep analysis.
Chess is a “level 2″ problem, so recursion alone does appear to be sufficient But level 3 problems require something more.
Interestingly, the Japanese board game of Go appears to require just this sort of deep analysis. The sequence of moves and countermoves to make meaningful progress in a game of Go is so long that the
recursive method alone does not play a very good game. Certain realms of mathematics, as well as many practical problems, appear to require a means of making judgements beyond the recursive expansion
of possibilities. So our next question becomes: Is there a simple formula to make these deep judgments for level 3 intelligent problems? As it turns out, there may be. These deep judgments can be
considered pattern recognition judgments, the ability to recognize patterns in new situations based on what we have learned from exposure to previous patterns. It involves learning and the ability to
apply learning to new problems. A paradigm called “neural nets” is another simple yet powerful organizing principle that appears to be capable of making intelligent judgements when given the
opportunity to learn from many previous examples of problem solving. It is based on a model of how neurons, the basic computational engine of the human brain, perform computation and has shown great
promise in making just the sort of deep pattern recognition-based judgments required for level 3 problems.
Interestingly, when humans play chess, we do not use the recursive paradigm to any great extent, but rely instead on our ability to recognize patterns learned from previous experience. Humans thus use
a level 3 technique to solve a level 2 problem.
Reprinted with permission from Library Journal, September 1992. Copyright © 1992, Reed Elsevier, USA | {"url":"http://www.kurzweilai.net/a-formula-for-intelligence-the-recursive-paradigm","timestamp":"2014-04-21T04:33:23Z","content_type":null,"content_length":"34045","record_id":"<urn:uuid:63aa9a42-dcf0-49e9-8b0e-049abebd8c88>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Nested Regular Polygons
This Demonstration shows an interesting nesting of successive regular polygons, starting with an equilateral triangle up to a chosen n-gon. If the side length of the triangle is 1, can you show that
the side of the square is equal to (approximately 0.966) and that the side of the pentagon equals (approximately 0.905)? | {"url":"http://demonstrations.wolfram.com/NestedRegularPolygons/","timestamp":"2014-04-21T12:23:54Z","content_type":null,"content_length":"42235","record_id":"<urn:uuid:76a6f827-313b-4f5c-9510-5de719739c43>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
STSPAC : Charlie Reeve Subroutines
Introduction

Charlie Reeve wrote the subroutines available here while he was a staff member in the Statistical Engineering Division (SED). Although these routines were intended for his personal use rather than as a publicly distributable library, we are providing them because they contain some routines not readily available elsewhere. In particular, the library contains routines for computing the
cumulative distribution function and generating random numbers for the doubly non-central F and doubly non-central t distributions (these routines are used by the Dataplot program).
This software is not formally supported and is not being further developed. It is provided on an "as is" basis. There is no formal documentation for the subroutines. However, most of the subroutines
contain usage instructions in the comments in the source code.
Publications

Although there is no formal documentation for these subroutines, Charlie Reeve wrote a number of SED Notes that document some of the algorithms coded in this library. The NIST library
has graciously scanned these documents and converted them to PDF documents. These documents were scanned from personal copies, so some of them contain external markings. We apologize for this.
Contents

The following subroutines and functions are available:
• FUNCTION ANORMI - Infinity norm of NxN matrix
• SUBROUTINE BARTLT - Bartlett's test for homogeneity of variances
• SUBROUTINE CDFBET - Compute the beta cumulative distribution function
• SUBROUTINE CDFDNF - Compute the doubly non-central F cumulative distribution function
• SUBROUTINE CDFDNT - Compute the doubly non-central t cumulative distribution function
• SUBROUTINE CDFF - Compute the F cumulative distribution function
• SUBROUTINE CDFGAM - Compute the gamma cumulative distribution function
• SUBROUTINE CDFNOR - Compute the normal cumulative distribution function
• SUBROUTINE CDFT - Compute the t cumulative distribution function
• SUBROUTINE CENSCL - Compute estimates of location and scale
• SUBROUTINE CIELIP - Compute inverse prediction for a linear fit (Eisenhart method)
• FUNCTION DGAMLN - Compute the log gamma function
• FUNCTION DNCMLN - Compute natural logarithm of "N choose M", N!/[M!(N-M)!]
• FUNCTION DPR1LN - Compute natural logarithm of:
N1!/[M1!(N1-M1)!] N2!/[M2!(N2-M2)!]
• SUBROUTINE DWESD - Compute the expected value and variance of the Durbin-Watson statistic
• SUBROUTINE ELLPTS - Compute N regularly spaced points on the perimeter of an ellipse
• SUBROUTINE FACTOR - Find the prime factors of the absolute value of N (N an integer of 10 digits or less)
• SUBROUTINE FIBMIN - Compute the minimum of a univariate function within an interval using the Fibonacci search algorithm
• FUNCTION IGCD - Compute the greatest common divisor of two integers
• FUNCTION IGCDM - Compute the greatest common divisor of the integers in a matrix
• FUNCTION IGCDV - Compute the greatest common divisor of the integers in a vector
• SUBROUTINE LINSYS - Solve an NxN system of linear equations (LU decomposition)
• SUBROUTINE LISYPD - Solve an NxN system of linear equations where the matrix is symmetric and positive definite (Cholesky factorization)
• SUBROUTINE LSQSVD - Solve linear least squares equations using singular value factorization
• SUBROUTINE LSTSQR - Solve linear unweighted least squares problem
• SUBROUTINE MAD - Compute median absolute deviation
• SUBROUTINE MATCNO - Compute the condition number of an NxN matrix
• SUBROUTINE MATHDI - Compute the diagonal of the hat matrix
• SUBROUTINE MATINV - Compute the inverse of a matrix
• SUBROUTINE MATIPD - Compute the inverse of a symmetric positive definite matrix
• SUBROUTINE MATMPI - Compute the Moore-Penrose psuedo-inverse of an NxM matrix (N > M)
• SUBROUTINE MATXXI - Compute the inverse and determinant of X'X where X is an NxM (N > M) matrix
• SUBROUTINE MEANSD - Compute the mean and standard deviation
• SUBROUTINE MEDIAN - Compute the median
• SUBROUTINE MINMAX - Compute the minimum and maximum
• SUBROUTINE NEXPER - Compute the next permutation of the integers 1, 2, ..., N
• SUBROUTINE PERMAN - Compute the permanent of an NxN matrix
• SUBROUTINE PLOTCR - Generate a line printer plot of Y vs X
• SUBROUTINE PPFBET - Compute the inverse cumulative distribution function of the beta distribution
• SUBROUTINE QDASIM - Numerical integration using adaptive Simpson's rule
• SUBROUTINE QDTANH - Numerical integration using the TANH rule
• FUNCTION QDTRAP - Numerical integration using the trapezoidal rule
• SUBROUTINE QSMNMX - Find the minimum and maximum values of a quadratic surface
• FUNCTION RDBETA - Generate a beta random number
• FUNCTION RDCHI2 - Generate a chi-square random number
• SUBROUTINE RDCONS - Generate uniformly-spaced pseudo-random points on the surface of a cone
• SUBROUTINE RDCONV - Generate uniformly-spaced pseudo-random points within a cone
• SUBROUTINE RDCYLS - Generate uniformly-spaced pseudo-random points on the surface of a cylinder
• SUBROUTINE RDCYLV - Generate uniformly-spaced pseudo-random points within a cylinder
• SUBROUTINE RDELLS - Generate uniformly-spaced pseudo-random points on the surface of an ellipse
• SUBROUTINE RDELLV - Generate uniformly-spaced pseudo-random points within an ellipse
• FUNCTION RDF - Generate an F random number
• FUNCTION RDGAMM - Generate a gamma random number
• SUBROUTINE RDHLX - Generate uniformly-spaced pseudo-random points on a helix in three-dimensional space
• SUBROUTINE RDMNOR - Generate multivariate random numbers
• FUNCTION RDNF - Generate doubly non-central F random numbers
• FUNCTION RDNOR - Generate a normal random number
• FUNCTION RDNOR3 - Generate a normal random number (polar method)
• SUBROUTINE RDRECS - Generate uniformly-spaced pseudo-random points on the surface of a three-dimensional rectangle
• SUBROUTINE RDRECV - Generate uniformly-spaced pseudo-random points within a three-dimensional rectangle
• SUBROUTINE RDSPSH - Generate uniformly-spaced pseudo-random points in a region between the surfaces of two spheres
• FUNCTION RDT - Generate a t random number
• SUBROUTINE RDTORS - Generate uniformly-spaced pseudo-random points on the surface of a torus
• SUBROUTINE RDTORV - Generate uniformly-spaced pseudo-random points within a torus
• FUNCTION RDUNI - Generate a uniform random number (lagged Fibonacci generator)
• FUNCTION RDUNLL - Generate a uniform random number (congruential generator)
• FUNCTION RDUNWH - Generate a uniform random number (Wichman-Hill generator)
• SUBROUTINE REJ1 - Compute mean and standard deviation of normal data that may be "contaminated"
• SUBROUTINE SKEKUR - Compute skewness and kurtosis
• SUBROUTINE SORT1 - Sort a data vector
• SUBROUTINE SORT2 - Sort a data vector and "carry along" a second vector
• SUBROUTINE TRISYS - Solve a tridiagonal system of equations
• SUBROUTINE UPDATE - Compute mean and standard deviation using an "update" algorithm
• SUBROUTINE ZEROBR - Find the zero of a univariate function using Brent's method
Download the Fortran Source Code

You can download the Fortran source code.
The source code is written in standard Fortran 77. It should be portable to systems that support 32-bit (or higher) Fortran 77 compilers. These routines were originally written for the 60-bit CDC
Fortran compiler, so you may need to convert some routines to double precision to maintain sufficient accuracy.
A few of these routines make calls to the LINPACK library. The LINPACK routines are not included (LINPACK is freely downloadable).
Date created: 8/11/2004
Last updated: 10/30/2013 | {"url":"http://www.itl.nist.gov/div898/software/reeves/homepage.htm","timestamp":"2014-04-18T23:15:07Z","content_type":null,"content_length":"16183","record_id":"<urn:uuid:413ce698-e969-4a73-87f2-15c8a130365a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamental Algorithms. Vol. 1. The Art of Computer Programming
Results 1 - 10 of 18
- Journal of the ACM , 1996
"... Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small
fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds th ..."
Cited by 95 (8 self)
Add to MetaCart
Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds the minimum cut in an arbitrarily weighted undirected graph with high probability. The algorithm runs in O(n^2 log^3 n) time, a significant improvement over the previous Õ(mn) time bounds based on maximum flows. It is simple and intuitive and uses no complex data structures. Our algorithm can be parallelized to run in RNC with n^2 processors; this gives the first proof that the minimum cut problem can be solved in RNC. The algorithm does more than find a single minimum cut; it finds all of them. With minor modifications, our algorithm solves two other problems of interest. Our algorithm finds all cuts with value within a multiplicative factor of α of the minimum cut’s in expected Õ(n^{2α}) time, or in RNC with n^{2α} processors. The problem of finding a minimum multiway cut of a graph into r pieces is solved in expected Õ(n^{2(r-1)}) time, or in RNC with n^{2(r-1)} processors. The “trace” of the algorithm’s execution on these two problems forms a new compact data structure for representing all small cuts and all multiway cuts in a graph. This data structure can be efficiently transformed into the
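For illustration only, the contraction principle behind this work fits in a few lines of Python. This sketch implements the basic random-contraction algorithm (repeatedly merge the endpoints of a uniformly random edge until two super-vertices remain), not the paper's faster recursive version; repeating independent trials finds the minimum cut with high probability.

import random

def contract_once(edges, n):
    # merge endpoints of edges, taken in uniformly random order, until
    # only two super-vertices remain (union-find bookkeeping)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components = n
    for u, v in random.sample(edges, len(edges)):
        if components == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    # the cut value is the number of original edges crossing the split
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, trials=200):
    return min(contract_once(edges, n) for _ in range(trials))

# two 4-cliques joined by a single bridge edge: the minimum cut is 1
K4a = [(i, j) for i in range(4) for j in range(i + 1, 4)]
K4b = [(i + 4, j + 4) for i, j in K4a]
print(min_cut(K4a + K4b + [(0, 4)], 8))   # almost surely prints 1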
- Software Practice and Experience , 1993
"... this paper, I evaluate the costs of different dynamic storage management algorithms, including domain-specific allocators, widelyused general-purpose allocators, and a publicly available
conservative garbage collection algorithm. Surprisingly, I find that programmer enhancements often have little ef ..."
Cited by 79 (6 self)
Add to MetaCart
In this paper, I evaluate the costs of different dynamic storage management algorithms, including domain-specific allocators, widely-used general-purpose allocators, and a publicly available
garbage collection algorithm. Surprisingly, I find that programmer enhancements often have little effect on program performance. I also find that the true cost of conservative garbage collection is
not the CPU overhead, but the memory system overhead of the algorithm. I conclude that conservative garbage collection is a promising alternative to explicit storage management and that the
performance of conservative collection is likely to improve in the future. C programmers should now seriously consider using conservative garbage collection instead of explicitly calling free in
programs they write
, 1990
"... this paper the author was partially supported by an NSF grant. ..."
- AND COMPUTING , 2003
"... Dissemination of information derived from large contingency tables formed from confidential data is a major responsibility of statistical agencies. In this paper we present solutions to several
computational and algorithmic problems that arise in the dissemination of cross-tabulations (marginal sub- ..."
Cited by 14 (10 self)
Add to MetaCart
Dissemination of information derived from large contingency tables formed from confidential data is a major responsibility of statistical agencies. In this paper we present solutions to several
computational and algorithmic problems that arise in the dissemination of cross-tabulations (marginal sub-tables) from a single underlying table. These include data structures that exploit sparsity
to support efficient computation of marginals and algorithms such as iterative proportional fitting, as well as a generalized form of the shuttle algorithm that computes sharp bounds on (small,
confidentiality threatening) cells in the full table from arbitrary sets of released marginals. We give examples illustrating the techniques.
- J. ACM , 1989
"... We present optimal algorithms for sorting on parallel CREW and EREW versions of the pointer machine model. Intuitively, one can view our methods as being based on a parallel mergesort using
linked lists rather than arrays (the usual parallel data structure). We also show how to exploit the "locality ..."
Cited by 14 (5 self)
Add to MetaCart
We present optimal algorithms for sorting on parallel CREW and EREW versions of the pointer machine model. Intuitively, one can view our methods as being based on a parallel mergesort using linked
lists rather than arrays (the usual parallel data structure). We also show how to exploit the "locality" of our approach to solve the set expression evaluation problem, a problem with applications to
database querying and logic-programming, in O(log n) time using O(n) processors. Interestingly, this is an asymptotic improvement over what seems possible using previous techniques. Categories and
Subject Descriptors: E.1 [Data Structures]: arrays, lists; F.2.2. [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems---sorting and searching General Terms:
Algorithms, Theory, Verification Additional Key Words and Phrases: parallel algorithms, PRAM, pointer machine, linking automaton, expression evaluation, mergesort, cascade merging 1 Introduction One
of the primar...
- Real-Time Systems , 2000
"... The purpose of this paper is to introduce frameworks based on data-flow equations which provide for estimating the worst-case execution time (WCET) of (real-time) programs. These frameworks
allow several different WCET analysis techniques, which range from nave approaches to exact analysis, provided ..."
Cited by 12 (8 self)
Add to MetaCart
The purpose of this paper is to introduce frameworks based on data-flow equations which provide for estimating the worst-case execution time (WCET) of (real-time) programs. These frameworks allow
several different WCET analysis techniques, which range from nave approaches to exact analysis, provided exact knowledge on the program behaviour is available. However, data-flow frameworks can also
be used for symbolic analysis based on information derived automatically from the source code of the program. As a byproduct we show that slightly modified elimination methods can be employed for
solving WCET data-flow equations, while iteration algorithms cannot be used for this purpose.
"... Parallel computers are ideally suited to the Monte Carlo simulation of spin models using the standard Metropolis algorithm, since it is regular and local. However local algorithms have the major
drawback that near a phase transition the number of sweeps needed to generate a statistically independent ..."
Cited by 6 (2 self)
Add to MetaCart
Parallel computers are ideally suited to the Monte Carlo simulation of spin models using the standard Metropolis algorithm, since it is regular and local. However local algorithms have the major
drawback that near a phase transition the number of sweeps needed to generate a statistically independent configuration increases as the square of the lattice size. New algorithms have recently been
developed which dramatically reduce this ‘critical slowing down ’ by updating clusters of spins at a time. The highly irregular and non-local nature of these algorithms means that they are much more
difficult to parallelize efficiently. Here we introduce the new cluster algorithms, explain some sequential algorithms for identifying and labelling connected clusters of spins, and then outline some
parallel algorithms which have been implemented on MIMD machines.
, 2006
"... DEVS is a sound formal modeling and simulation (M&S) framework based on generic dynamic system concepts. Cell-DEVS is a DEVS-based formalism intended to model complex physical systems as cell
spaces. Time Warp is the most well-known optimistic synchronization protocol for parallel and distributed si ..."
Cited by 4 (1 self)
Add to MetaCart
DEVS is a sound formal modeling and simulation (M&S) framework based on generic dynamic system concepts. Cell-DEVS is a DEVS-based formalism intended to model complex physical systems as cell spaces.
Time Warp is the most well-known optimistic synchronization protocol for parallel and distributed simulations. This work is devoted to developing new techniques for executing DEVS and Cell-DEVS
models in parallel and distributed environments based on the WARPED kernel, an implementation of the Time Warp protocol. The resultant optimistic simulator, called as PCD++, is built as a new
simulation engine for CD++, an M&S toolkit that implements the DEVS and Cell-DEVS formalisms. Algorithms in CD++ and the WARPED kernel are redesigned to carry out optimistic simulations using a
non-hierarchical approach that reduces the communication overhead. The message-passing paradigm is analyzed using a high-level abstraction called wall clock time slice. A two-level user-controlled
state-saving mechanism is proposed to achieve efficient and flexible state saving at runtime. This mechanism is integrated with both the copy state-saving and periodic state-saving strategies to
realize a hybrid technique that gives simulator developers the full power to dynamically choose the best possible | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1050091","timestamp":"2014-04-20T19:39:49Z","content_type":null,"content_length":"35765","record_id":"<urn:uuid:33febf27-a67e-4350-ac17-a7b70c2de119>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rounded Corners in ggplot2 Graphics
February 11, 2011
By erik
Last night, while working on something else that I want to finish, I ended up indulging in a bit of "yak shaving". I wondered how easy it would be to generate graphics in ggplot2 with rounded
corners. I don't think that there is any native support for this in the theming system. Therefore, I had to descend into the depths of R's grid graphics system. Luckily, I have read Paul Murrell's
beyond excellent R Graphics book, so I was prepared for the venture.
Making my task even easier was the fact that there now exists the functionality in grid to generate rectangles with rounded corners. This is accomplished with the grid.roundrect and corresponding
roundrectGrob functions.
So, here is my first attempt.
library(ggplot2)
library(grid)

p <- qplot(carat, price, data = diamonds[sample(1:nrow(diamonds), 100), ])
grob <- ggplotGrob(p)

# a rounded rectangle to stand in for the plot background
plot.rrg <- roundrectGrob(gp = gpar(fill = "skyblue1", col = NA),
                          r = unit(0.06, "npc"))

# drop the original rectangular background grob
grob <- removeGrob(grob, "plot.background.rect", grep = TRUE)

# draw the rounded background first, then the rest of the plot on top
grid.draw(gList(plot.rrg, grob))
As you see, the objective was achieved, but not nearly as cleanly as I'd like. Essentially, I'm relying on a trick by removing the plot rectangle grob and creating a new one in a gList. This happens
to work since the new round rectangle is drawn first. Ideally, I'd like to add the new rounded rectangle grob into the gTree object in the 'proper' place, i.e., where the original
plot.background.rect rectangle was. I assume that is possible if I investigate further. It was not obvious to me how to do that though. Also, if I wanted the panel rectangle (grey colored in the
figure) to be rounded, I believe I would have to find a way to replace that grob with a rounded rectangle at its original location.
So, that was a quick first attempt. The plot was saved using the png function with a transparent background. I will update this post if I find a cleaner solution. Please let me know if you are aware
of how to add grobs in specific locations in a gTree, since that is what I am assuming I need to do.
Queens Village Prealgebra Tutor
Find a Queens Village Prealgebra Tutor
...From Algebraic expressions, Functions and Relations, Composition and Inverses of Functions, Exponential and Logarithmic Functions, Trigonometric Functions and their graphs and more. I provide
all or most of the practice materials for each and every subject. I can assist with Global History, Earth Science, Physics and English, for which I have a 100% pass rate.
47 Subjects: including prealgebra, reading, chemistry, writing
...I took this course somewhat recently, 2011. I also use anatomy on a daily basis at my job as a physician assistant. I participated in 19 years of dance at Rosita Lee Dance Center in Hudson, NH.
11 Subjects: including prealgebra, algebra 1, algebra 2, precalculus
...I have worked with 1 on 1 Academic Tutors, who provide Mathematics and English courses from K through 12th grade. So if you are unable to understand or have difficulty finding solutions, I am
ready to help you. I had an "A+" in Algebra 1 while I was in school...
7 Subjects: including prealgebra, reading, English, algebra 1
...I will begin my studies in medical school in August 2014. As a previous tutor and teaching assistant for various courses at Cornell, I witnessed firsthand the positive effects of collaboration
on student exam performance and retention on course material. The best route to improvement is being able to objectively identify which academic areas to strengthen.
17 Subjects: including prealgebra, chemistry, algebra 1, MCAT
...That's why I often ask for feedback, and I never bill for a lesson in which the student or parent is not completely satisfied with my tutoring. I try to be
have a 24-h-cancellation policy, I offer makeup classes.
55 Subjects: including prealgebra, English, reading, German | {"url":"http://www.purplemath.com/queens_village_prealgebra_tutors.php","timestamp":"2014-04-18T04:27:11Z","content_type":null,"content_length":"24299","record_id":"<urn:uuid:3f1bbcf7-498a-47ed-b548-a9267b365e39>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: K-60 down and fixed
remember running it in forward rotation will help it unscrew from the pipe. reverse will help feed into the line.
150' of roots is time for a jetter. the right jetter does the work for you. and the right nozzle will cut 100% of all roots.
phoebe it is
Re: K-60 down and fixed
Thanks for all the advice Rick but in these cases a jetter would just flood the basement.
The Model C or drill and Eel will be my friend on these type of jobs.
Right tool for the right job.
Re: K-60 down and fixed
Originally posted by fixitright View Post
Thanks for all the advice Rick but in these cases a jetter would just flood the basement.
The Model C or drill and Eel will be my friend on these type of jobs.
Right tool for the right job.
No need for a jetter on residential lines. Expansion blades do a wonderful job of cleaning up.
Re: K-60 down and fixed
basements are not too common here. but when I do have to snake from them, it's not fun since you really can't use the water to help flush and test without taking steps to contain the water.
what height are these cleanouts located? floor or wall?
I guess no chance of installing an outside c/o.
phoebe it is
Re: K-60 down and fixed
Originally posted by BuffaloPlumber View Post
In my opinion pulling back giant root balls is far easier than wrestling a heavy and bulky machine up several staircases, basements, and sometimes long hauls in the field.
A little extra grunt getting the machine in place saves a lot of extra grunt getting the job done. Also cuts down on extra trips.
Re: K-60 down and fixed
Originally posted by PLUMBER RICK View Post
basements are not too common here. but when I do have to snake from them, it's not fun since you really can't use the water to help flush and test without taking steps to contain the water.
what height are these cleanouts located? floor or wall?
I guess no chance of installing an outside c/o.
85% of my cleanouts are on the basement floor, the others are coming through the wall a little above grade or on a stack, with an occasional chest-high one a few times per year.
Re: K-60 down and fixed
Originally posted by AssTyme View Post
85% of my cleanouts are on the basement floor, the others are coming through the wall a little above grade or on a stack, with an occasional chest-high one a few times per year.
Same here. Seldom see outside clean outs and only in commercial work.
Remember, we have cold weather that keeps our lines deep.
Six foot or more is common out of the house and the street main
is often 20' or more under the street.
Remember too Rick that it gets 20 degrees below zero here in the winter.
Hoses freeze fast and metal breaks. You can circulate all the water you want
but it takes only a minute to have a frozen line or a fitting break.
There are companies that do it but they pay the price.
Last edited by fixitright; 10-16-2013, 09:58 AM.
Re: K-60 down and fixed
You can put me on the list now as a K60 fan. I've been using it exclusively now for about 6 months. I got it used from another user here, and it has been problem free. I'm sectional almost 100% of the time
now. I still have the jobs inside the home where I used the Model N on secondary lines, and I'm considering getting a K45 pistol rodder, but I love the sectional machines. I'd rather carry around the
K60 vs a heavy drum machine. I don't mind the smaller drum machines, but I have no need for a large drum machine. Sectionals are so much safer and are so much cheaper to own.
Re: K-60 down and fixed
will, who was the non believer?
I can always perform an exorcism and get the devil out of them. wouldn't be the first time I needed to do that. but once the devil is gone, they're a believer.
phoebe it is
Re: K-60 down and fixed
Originally posted by PLUMBER RICK View Post
will, who was the non believer?
I can always perform an exorcism and get the devil out of them. wouldn't be the first time I needed to do that. but once the devil is gone, they're a believer.
Looks like the guy who sold it to Will and went back to a drum
Re: K-60 down and fixed
no I believe he went out of business.
just sold another k60 yesterday to one of my plumbing contractor friends who would hire me for his tricky stuff.
now he has all the tricks. well not all of them, but at least the rabbit.
phoebe it is
Re: K-60 down and fixed
I traded him more or less a K60 for the K7500; there was other stuff involved too, but those were the main players in the trade. Think we both got what we wanted out of the trade, but I'm glad I don't
have to lug that heavy K7500 around anymore. My GO68HD is the next machine I need to move, any takers?
Re: K-60 down and fixed
I need some advice on a mainline machine
I have a lot of hillside houses here in Los Angeles
I guess you don't like sectional machines, OK, what's a good lightweight drum machine
I am tired of lugging around a Gorlitz HD G68 HD drum machine
Does the job but it's breaking my back
Any help appreciated
Dave Doyle
Rapid Rhino Plumbing
Los Angeles
cellular 626 524 4255
Re: K-60 down and fixed
I can relate with hillsides in los angeles. Remember that for a line to back up with a large drop, it has to be a mean stoppage. I use a jetter when the distance or drop gets to be much. Much easier and much safer than fighting gravity.
Not uncommon for me to go out 200+' on a hillside.
phoebe it is
Re: K-60 down and fixed
Originally posted by mistergato View Post
I need some advice on a mainline machine
I have a lot of hillside houses here in Los Angeles
I guess you don't like sectional machines, OK, what's a good lightweight drum machine
I am tired of lugging around a Gorlitz HD G68 HD drum machine
Does the job but it's breaking my back
Any help appreciated
Dave Doyle
Rapid Rhino Plumbing
Los Angeles
cellular 626 524 4255
I know nothing about the LA area. I prefer sectional machines for mainline stoppages as I feel they are safer, last longer, and are more effective than a large drum machine. Inside I prefer a drum machine like the Electric Eel Model N or Spartan 100 (unless I can clear the blockage from a roof with a K60). For a good small drum machine that can handle most mainline blockages I'd recommend a Spartan
What is 30 KG IN POUNDS? | {"url":"http://mrwhatis.net/30-kg-in-pounds.html","timestamp":"2014-04-16T20:03:24Z","content_type":null,"content_length":"36386","record_id":"<urn:uuid:5764b2c8-46b5-4647-a3fc-23a4e278f322>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Bijection
For a, take a step back. Look at what it is saying. Don't see 2n and don't see -2n+1. See odd and even. Now it should become clear that the map is a bijection, since all evens and all odds are hit, and only positives go to evens and only negatives go to odds.
For b, let's do something almost exactly like a. Let's have the function f where f(x) = {x/2 if x is even and -(x-1)/2 if x is odd}.
For c, probably the easiest one of all: f(x) = -x.
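A quick computational check of these maps (a Python sketch, not part of the original answer; it assumes the domain splits at zero, with n > 0 going to the evens and n <= 0 going to the odds, which matches the inverse given in part b):

```python
# f: integers -> positive integers, g: positive integers -> integers.
def f(n):
    return 2 * n if n > 0 else -2 * n + 1

def g(x):
    return x // 2 if x % 2 == 0 else -(x - 1) // 2

# f hits every positive integer exactly once on a symmetric window...
assert sorted(f(n) for n in range(-50, 51)) == list(range(1, 102))
# ...and g inverts it, so f is a bijection on the sampled range.
assert all(g(f(n)) == n for n in range(-1000, 1000))
print("bijection verified on the sample range")
```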
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=33497","timestamp":"2014-04-16T04:36:45Z","content_type":null,"content_length":"10094","record_id":"<urn:uuid:9a10bf0b-4a2b-4985-97df-4fa2b52ae4ed>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gaithersburg Precalculus Tutor
...I have been tutoring for 18 years in both math and science. I have helped children as young as 6 with both reading comprehension and mathematical concepts. Young learners are more eager to
learn new concepts but often have not learned logic or working with multiple steps to answer problems.
31 Subjects: including precalculus, chemistry, reading, physics
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including precalculus, calculus, physics, GRE
...Post-undergraduate, I began as a volunteer group tutor before also becoming a private one-on-one tutor as well. I enjoy every minute of it, and it's been one of the most rewarding experiences
of my life so far, one that has inspired me to become a secondary math teacher. I value a student's desire to learn and commitment to having a good educational relationship.
15 Subjects: including precalculus, chemistry, calculus, geometry
...We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as a citizen of Spain. I spent my last year of undergraduate as an ESL (English as a Second
Language) tutor for a pregnancy center that targeted Spanish-speaking women. There we tailored a curriculum t...
17 Subjects: including precalculus, Spanish, writing, physics
...I have a Bachelor's degree in Computer Science and a PhD in Applied Mathematics. I have published a number of research papers on computer science, mathematics, and the teaching of children in highly regarded international journals, such as Discrete Math, Applied Math, etc. I was selected as one of the top 200 tutors in the entire country in 2011.
12 Subjects: including precalculus, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Gaithersburg_precalculus_tutors.php","timestamp":"2014-04-21T02:34:17Z","content_type":null,"content_length":"24482","record_id":"<urn:uuid:3ee34616-6cce-4017-9cd7-367fd32980d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time and Energy, Inertia and Gravity
The Relationship Between Time, Acceleration, and Velocity and its Affect on Energy, and the Relationship Between Inertia and Gravity
© 2002 Joseph A. Rybczyk
Presented is a theory in fundamental theoretical physics that establishes the relationship between time and energy, and also the relationship between inertia and gravity. This theory abandons the
concept that mass increases as a result of relativistic motion and shows instead that the extra energy related to an object undergoing such motion is a direct result of the effect that the slowing of
time has on velocity. In support of this premise, new formulas are introduced for acceleration and used in conjunction with a relativistic time transformation factor to develop new equations for
both kinetic and total energy that replace those of special relativity. These and subsequently derived equations for momentum, distance, acceleration, inertia, gravity, and others, establish a
direct relationship between the presented theory and the principles of an earlier theory, the millennium theory of relativity. A final consequence of this theoretical analysis is the discovery of
two new laws of physics involving acceleration, and the realization that a unified theory of physics is now possible.
Time and Energy, Inertia and Gravity
The Relationship Between Time, Acceleration, and Velocity and its Affect on Energy, and the Relationship Between Inertia and Gravity
© 2001 Joseph A. Rybczyk
1. Introduction
This paper introduces a new physical science that bridges the gap between classical and relativistic physics. It provides the final piece of the puzzle started by Newton with his Principia [1] in 1687, and expanded by Einstein with his Special Theory of Relativity [2] in 1905.
What will be shown is the true relationship between acceleration, time, and velocity, and the effect this relationship has on energy and thus our perception of other physical phenomena. We will see that time expansion is real and its effects on velocity and energy are real, but the effects on mass (see appendix A) and distance are only perceptual. We will accomplish this by evaluating and correcting the classical formula for acceleration, and employ the new formula together with a relativistic transformation factor to directly derive a correct equation for kinetic energy. This new equation replaces both the Newtonian and Einsteinian equations for kinetic energy and is subsequently used in the formulation of a new total energy equation that replaces Einstein's famous E = Mc^2
equation. In the process, the limitations of the former formulas and equations will be clearly demonstrated and their replacements validated. The final results of this work are new equations for
acceleration, momentum, kinetic and total energy, and transformation factors not directly dependent on velocity. With these new equations it is no longer necessary to distinguish between classical
physics and relativistic physics. In their place is the beginning of a new physics from which future discoveries may be anticipated.
2. The Millennium Transformation Factor and Time Transformation
In relativistic physics it is widely accepted and supported by evidence that time slows down in a moving frame of reference. The relationship of moving frame time relative to stationary frame time
can be expressed by the time transformation formulas,

t` = t √(c^2 − v^2) / c    (1)        t = t` c / √(c^2 − v^2)    (2)

where v is the relative velocity, c is the speed of light, t` is moving frame time relative to stationary frame time, and t is time in the stationary frame. In the given equations t and t` can be used to represent a unit of time, or an interval of time. Also of note is the fact that the more familiar Lorentz transformation factors,

√(1 − v^2/c^2)    (3)        1 / √(1 − v^2/c^2)    (4)

have been replaced by their respective millennium theory of relativity equivalents [3] (see appendix B),

√(c^2 − v^2) / c    (5)        c / √(c^2 − v^2)    (6)
In the past, these factors were normally associated with time and distance transformation. In the present work their use will be expanded to include modification of velocity and a numerical constant
associated with acceleration.
3. Constant Time and Relative Time Defined
There is often confusion about time transformation and the terms used to define stationary frame time vs. moving frame time. It should be understood that time in the moving frame is identical to
time in the stationary frame. That is, time in both frames may be represented by the variable t. Thus time t is actually constant time, or time that increases at a constant rate. Time t` then is
actually relative time, or time whose rate of increase is affected by relative motion. It is used to show that time t in the moving frame is slower than time t in the stationary frame. Or stated
another way, that constant time in either frame has a reduced rate relative to constant time in the other frame as defined by the transformation formulas. From this point forward, it should be
understood that when we say stationary frame time, we actually mean constant time t, and when we say moving frame time, we actually mean relative time t`.
4. The New Kinetic Energy Equation
Although the classical equation for kinetic energy seems to be supported by the evidence for low values of velocity, it is refuted by the evidence for high values of velocity in the area that
represents a significant fraction of the speed of light. At the other end of the spectrum is the relativistic equation that seems to be supported by the evidence for velocities that are a
significant fraction of light speed, but as will be shown later, is not representative of very low velocities in the area of classical physics. In fact, in both the real and theoretical sense,
neither equation is truly correct at any velocity. The causes by which this happened are as follows:
1. Relativistic effects were unknown during the development period of classical physics. The problem is then compounded by the manner in which the classical kinetic energy equation is normally
derived that in turn obscures its true meaning.
2. The relativistic effects, when discovered, were then used in an indirect derivation for the new kinetic energy equation that in turn obscured the true meaning of that equation.
To avoid such problems we must derive the classical equation in a manner that makes its true meaning unambiguous. Then with our knowledge of relativity it becomes possible to correct the equation in
such a manner as to make it properly represent the physical laws of nature along the entire range of valid velocities.
In classical physics the following formulas are given for momentum and constant acceleration:
Where p is momentum, m is mass, and v is velocity, we are given,

p = mv    (7)

Where a is the rate of constant acceleration, and t is the time interval, we are given,

v = at    (8)

and therefore,

a = v / t    (9)

And lastly, where d is the distance traveled by an object under constant acceleration we are given,

d = ½ at^2    (10)
Although these equations provide satisfactory results for Newtonian levels of velocity, none of them are truly correct at any velocity. For the moment we will defer treatment of the momentum
equation and proceed with the equations for constant acceleration. That is, if relativistic physics is a correct model for physical phenomena, then all of these equations and others to follow,
including the classical equation for kinetic energy will have to be modified. Until now, however, no one has been successful in figuring out how it could be accomplished. The problems are many, and
involve a very difficult analysis due to circular relationships that involve many interdependent variables. To proceed we must first decide the order in which the variables will be used.
Additionally, we will need to make certain assumptions that appear to be reasonable and in agreement with the evidence. And finally, we must assure that the resulting circular arguments are
self-enforcing throughout the analysis and lead to correct final results.
The first assumptions to be made involve the classical formulas for constant acceleration. The evidence shows that relative velocity increases asymptotically and therefore equations 8 and 9 cannot be correct for constant acceleration. If the rate of acceleration, a, is constant, and time t increases at a constant rate, then the instantaneous velocity v, given by equation 8, will also increase at a constant rate. Such a result is of course inconsistent with the evidence and therefore in disagreement with the principles of relativistic physics. On the other hand, where v is the instantaneous velocity, a[c] is the rate of acceleration, and t` is the time interval, our analysis will show that,

v = a[c] t`    (11)        a[c] = v / t`    (12)

are the correct formulas for constant acceleration. That is, since t` increases asymptotically at the same rate as v, the rate of acceleration will remain constant as the velocity increases toward c. To verify this, in a computer math program for example, we will first need to derive different forms of equations 11 and 12, those that do not depend on the variable t`. This is because, as seen in equation 1, time interval t` is itself dependent on the value of v. Such treatment, however, must be deferred to a later point in the analysis. The foregoing is an example of the circular dependency commented on earlier.
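One way to carry out the deferred verification (a sketch, not taken from the paper itself): combining v = a[c] t` with equation 1 and solving for v gives the t`-free closed form v = a[c] t c / √(c^2 + (a[c] t)^2). The snippet below checks numerically that this v rises asymptotically toward c while the ratio v/t` stays fixed at a[c]:

```python
import math

c = 299792458.0   # speed of light, m/s
a = 9.81          # assumed constant rate of acceleration a[c], m/s^2

for t in (1e6, 1e7, 1e8):                          # stationary-frame time, s
    v = a * t * c / math.sqrt(c**2 + (a * t)**2)   # closed form for eq. 11
    t_prime = t * math.sqrt(c**2 - v**2) / c       # equation 1
    print(f"t = {t:.0e} s   v/c = {v/c:.6f}   v/t` = {v/t_prime:.4f} m/s^2")
# v/t` prints 9.8100 at every t, confirming that a[c] remains constant
# even as v itself levels off toward c.
```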
The subscript c in the preceding equations is needed to distinguish between constant acceleration and relative acceleration, which occurs simultaneously and is defined by,

v = a[r] t    (13)        a[r] = v / t    (14)

where a[r] is the rate of relative acceleration. These are nothing more than correct versions of equations 8 and 9 respectively, rewritten to show their true meanings. Thus, if t increases at a constant rate, and if v is asymptotic, a[r], the rate of acceleration relative to the stationary frame, will decrease as the velocity increases toward c.
For consistency with the modifications used in equations 11 and 12, we must also modify the associated distance formula given in equation 10. This gives us,

d = ½ a[c] t`^2    (15)

where the travel distance is denoted by the variable d. This change, however, is insufficient for arriving at a correct formula for the distance traveled by an object or particle under constant
acceleration. One of the problems involves the constant, ½. This constant is also affected by the asymptotical nature of velocity. That is, as c is approached, further increases in velocity
approach 0. Thus, the distance traveled approaches vt`, and not ½ vt`. If this were the only consideration, we would modify the constant ½ so that its value approaches 1 at a rate that is
consistent with the rate that v approaches c. But there is another factor to consider. One involving the energy required for the acceleration, and this too affects the required modification. In
the interest of clarity, this modification is best deferred to a later part of our analysis where such modification can be more easily understood.
Now, by using the right sides of equation 7 (p = mv) and equation 15 we can formulate a new formula where the temporary variable k[a] = momentum over distance traveled by an object under constant acceleration. Thus we have,

k[a] = mv / (½ a[c] t`^2)    (16)

By substituting v/t` from equation 12 (a[c] = v/t`) for a[c] in the above equation, we obtain,

k[a] = mv / (½ v t`) = 2m / t`    (17)

Now, since the classical formula for kinetic energy is k = ½ mv^2,
we can state,
and by substitution get,
The reason for deriving the kinetic energy equation in this manner is to make it clear what the various variables and constant represent. This last version of the equation can now be evaluated from
a relativistic perspective and corrected to properly represent all values of velocity, v. At this point it is necessary to call upon the millennium transformation factor, previously introduced as
expression 6.
Referring now to figure 1, we can compare the relativistic motion of a particle to the Newtonian motion when a constant force is applied. Whereas in Newtonian motion the velocity increases without
limit, in relativistic motion the velocity increases asymptotically as the object approaches the speed of light c, and of course the speed of light is never exceeded. From what we understand about
relativistic effects, the transformation factors 3 through 6, are mathematical definitions of the behavior. If time slows down in the moving frame of reference, it is not unreasonable to assume that
this slowing of time directly affects the velocity of the moving object. Thus, as velocity increases, time slows down causing further increases in velocity to require increasingly greater amounts of
energy. With respect to kinetic energy this is a paradoxical contradiction of Newton’s second and third Laws of motion. Since kinetic energy is a direct function of velocity it would increase at a
slower rate along with the velocity increases when at the same time it must increase at an increasingly greater rate along with the energy causing the acceleration. Obviously it cannot do both. Of
the two choices, reason and experience support the view that the kinetic energy must always equal the input energy.
If we now refer to the classical kinetic energy equation 19, we can see there are only two variables to choose from should we wish to modify the formula to bring it into conformance with the
evidence. The choices are mass and velocity. Einstein made what appeared to be a reasonable choice at the time and selected mass. If mass increases with velocity, it would explain the observed behavior. This, it will be shown, appears to have been the wrong choice and at the very least results in an anomaly that is well hidden in the E = Mc^2 equation, but is very apparent in the resulting
relativistic kinetic energy equation, K = Mc^2 - M[o]c^2. But even more than this, as the analysis continues the evidence builds in favor of the new theory thus challenging the very premises upon
which Einstein’s equations are founded. It will be shown now, that neither variable should be modified. This is not to say that the kinetic energy equation itself should not be modified.
Referring back to figure 1, and also to equations 12, 15, and 21, (repeated below) it can be seen that equation 21 must be modified in such a manner as to offset the relativistic effect that the
motion has on velocity.
That is, if equation 21 is to produce the correct result for kinetic energy, the experienced relativistic effect on velocity must, in the mathematical sense, be reversed. The obvious conclusion is that we must factor the velocity by the reciprocal millennium factor, expression 6. If we are right, however, we must also factor the fractional constant, ½, in equations 15 and 21. This conclusion is supported as follows: We can assume from equation 15 that the distance, d, traveled by an object under constant acceleration should = ½ a[c]t`^2. Referring to equation 12, we can see that this is the same as saying distance = ½ vt`. In other words, the distance traveled by an object under constant acceleration should = ½ times the velocity, v, achieved for the interval, t`, during which the
acceleration takes place. However, when we study the relativistic motion curve in figure 1, we can see that as velocity increases toward c, there is less and less change in velocity over an interval
of time. Stated another way, as velocity increases toward c, the distance traveled by the object approaches vt`, and not ½ vt`. This implies that the constant ½ should increase toward a value of 1.
But since at this point, v, is not only near the value of c but, with our first change, its value is being dramatically increased by the millennium factor, the constant ½ must actually be modified
to decrease rather than increase in order to compensate and bring the final result into agreement with the evidence that supports relativity. When these changes are properly implemented, the
distance will progress toward vt` as correctly assumed while at the same time the resulting kinetic energy equation is a correct equation for all values of v.
Proceeding now with the necessary modifications to equation 21, we derive,

k = mc^2 v^2 / ( c √(c^2 − v^2) + c^2 − v^2 )    (22)
| {"url":"http://www.mrelativity.net/TimeEnergyIG/TimeEnergyIG1.htm","timestamp":"2014-04-21T09:36:53Z","content_type":null,"content_length":"103286","record_id":"<urn:uuid:9d64d84b-5fde-4d25-8972-f9eff427c7c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel of surface of revolution
Show that the parallels (circles) of a surface of revolution are lines of curvature! Thank you very much in advance!
By symmetry, at a fixed point the direction of the generatrix (meridian) is a critical point of the normal curvature function, so it is one of the principal directions. The other principal direction is therefore the one orthogonal to it, that is, the direction of the parallel circles. | {"url":"http://mathhelpforum.com/differential-geometry/194162-parallel-surface-revolution.html","timestamp":"2014-04-18T05:51:12Z","content_type":null,"content_length":"30991","record_id":"<urn:uuid:a4318c0a-5289-4145-8971-29f4972bd389>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Tewksbury Precalculus Tutor
...I've also tutored for SAT math as a subject where not only are these subjects tested, but also under test is your speed and your choice of questions. I have a PhD in physics and was a
mathematics olympiad winner in high school. I've always been extremely good at math.
47 Subjects: including precalculus, chemistry, calculus, reading
...My schedule is extremely flexible and am willing to meet you wherever is most convenient for you.I graduated from the University of Connecticut with a B.S. in Physics and minor in Mathematics
before attending graduate school at Brandeis University and Northeastern University, where I received a M...
9 Subjects: including precalculus, calculus, physics, geometry
...I am very personable and always try to build a trusting relationship with my students. I want all of my students to think of me as their tutor, mentor and friend. Once we create a positive and
respectful relationship, learning about roots of a quadratic equation or why apples fall from trees will become natural and easy.
10 Subjects: including precalculus, calculus, geometry, algebra 1
...I have worked mostly with college level introductory courses and high school students. I have worked with many students of different academic levels from elementary to college students. Whether
you want to solidify your knowledge and get ahead or get a fresh perspective if you are struggling, I am confident I can help you.
19 Subjects: including precalculus, Spanish, chemistry, calculus
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have
tutored a wide range of students - from middle school to college level.
14 Subjects: including precalculus, geometry, statistics, SAT math | {"url":"http://www.purplemath.com/Tewksbury_Precalculus_tutors.php","timestamp":"2014-04-20T23:33:01Z","content_type":null,"content_length":"24119","record_id":"<urn:uuid:8dd50b31-c27f-4c05-946a-52b0fafdea14>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
No burnouts allowed
This program does electrical cable sizing, based on the most basic electrical laws.
If you are planning to do cable sizing for a real installation, check the results and make sure that you comply with all electrical regulations.
From Ohm's law, the voltage drop in an electrical conductor is
V = I R
I = Current, amps
R = Resistance, Ohms
The electrical resistance can be derived from the properties of the conductor material.
R = ρ L / A
ρ = Resistivity, Ω mm^2 /m
L = Length, m
A = Cross sectional area, mm^2
Watch the units in the above equation.
The resistance can also be expressed in terms of the material conductivity (ψ) which is just the reciprocal of the resistivity.
R = L / (ψ A)
Table 1. Typical material
electrical conductivity
Material Conductivity, ψ (m / Ω mm^2)
Copper 58
Aluminium 36
Mild Steel 7.7
Combining Ohm's law with the resistance expression, we get:
V = (I L) / (ψ A)
Now, we can define an acceptable voltage drop in a conductor. I have calculated the voltage drop that was used in several published tables and get around 5.5 Volts.
We could therefore re-arrange this equation to give the cable size. But cables come in standard sizes measured in mm^2. In addition, you also need to consider the maximum current for each cable size.
Table 2. Conductor sizes and maximum currents
Cable Size, mm^2 1.5 2.5 4 6 10 16 25 35 50 70 95 120 150 185 240 300
Maximum current, A 13 21 28 36 46 61 81 99 125 160 195 220 250 285 340 395
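The selection logic described above can be sketched in a few lines of Python. The 230 V single-phase and 400 V three-phase supply voltages and the 0.8 power factor for inductive loads are illustrative assumptions, not values taken from the program:

```python
import math

SIZES    = [1.5, 2.5, 4, 6, 10, 16, 25, 35, 50, 70, 95, 120, 150, 185, 240, 300]
MAX_AMPS = [13, 21, 28, 36, 46, 61, 81, 99, 125, 160, 195, 220, 250, 285, 340, 395]
CONDUCTIVITY = {"copper": 58.0, "aluminium": 36.0, "mild steel": 7.7}  # m/(ohm mm^2)

def line_current(power_w, three_phase=False, inductive=False):
    """Line current in amps, assuming 230 V / 400 V supplies and pf = 0.8 for motors."""
    pf = 0.8 if inductive else 1.0
    volts = math.sqrt(3) * 400.0 if three_phase else 230.0
    return power_w / (volts * pf)

def cable_size(current_a, length_m, material="copper", volt_drop=5.5):
    """Smallest standard size meeting both the voltage-drop and max-current limits."""
    psi = CONDUCTIVITY[material]
    needed_area = current_a * length_m / (psi * volt_drop)  # from V = (I L) / (psi A)
    for size, amps in zip(SIZES, MAX_AMPS):
        if size >= needed_area and amps >= current_a:
            return size
    raise ValueError("load exceeds the largest tabulated cable size")

# Example: a 15 kW three-phase motor at the end of a 60 m copper run.
i = line_current(15000, three_phase=True, inductive=True)
print(f"{i:.1f} A -> {cable_size(i, 60)} mm^2")   # 27.1 A -> 6 mm^2
```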
So, the theory is relatively simple, let us now see how to use the program.
Enter the following and the program calculates line current and cable area.
1. Conductor material. Select from list.
2. Load Type. Resistive for electrical heating or inductive for motors.
3. Phase. Single or Three phase
4. Power. Use the motor button for a quick lookup.
5. Cable Length in meters.
The normal voltage drop is based on 5.5 Volts. You can edit this value if necessary by pressing the [Settings] button.
Right-Click anywhere on the form to activate the context menu.
This will allow you to activate any of the program functions and run other TechniSolve program tools that you may have installed.
Electrical heating loads up to 84 kW
Single phase motors up to 5.5 kW
Three phase motors up to 110 kW | {"url":"http://www.coolit.co.za/cablesize/index.htm","timestamp":"2014-04-18T19:01:51Z","content_type":null,"content_length":"7604","record_id":"<urn:uuid:0ef1628c-fffc-40bb-a0ac-e76b76eb0a6e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
absolute value continuity
It is very easy to prove that abs(x) is continuous everywhere using epsilon-delta techniques. Yet, let U = (0,1). This is open in [0, infinity). Yet, f inverse (U) can be said to equal (0,1) U
{-1/2} right? This set, call it V is certainly not open in (- infinity, infinity). | {"url":"http://mathhelpforum.com/differential-geometry/194978-absolute-value-continuity.html","timestamp":"2014-04-18T22:21:14Z","content_type":null,"content_length":"39691","record_id":"<urn:uuid:ca835e08-9e50-4c50-aa5b-cfd7893d5278>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Five Figures - Answer
Answer:
The only set of three numbers, of two, three, and five figures respectively, that will fulfil the required conditions is 27 × 594 = 16,038.
These three numbers contain all the nine digits and 0, without repetition; the first two numbers multiplied together make the third, and the second is exactly twenty-two times the first.
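A short exhaustive search (a Python sketch, not part of the original solution) confirms this, treating the twenty-two-times relation as one of the required conditions:

```python
# Find all a (two figures), b = 22a (three figures), c = ab (five figures)
# whose combined digits use 0-9 exactly once.
found = []
for a in range(10, 100):
    b = 22 * a
    c = a * b
    if 100 <= b <= 999 and 10000 <= c <= 99999:
        digits = f"{a}{b}{c}"          # ten digits in total
        if len(set(digits)) == 10:     # all different, so 0-9 each used once
            found.append((a, b, c))
print(found)   # [(27, 594, 16038)]
```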
If the numbers might contain one, four, and five figures respectively, there would be many correct answers, such as 3 × 5,694 = 17,082; but it is a curious fact that there is only one answer to the
problem as propounded, though it is no easy matter to prove that this is the case. | {"url":"http://www.pedagonet.com/puzzles/three1.htm","timestamp":"2014-04-20T13:19:59Z","content_type":null,"content_length":"10609","record_id":"<urn:uuid:f37ffea0-d0fd-4550-abe1-c0e2843a9d39>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buena Park Math Tutor
Find a Buena Park Math Tutor
...I just love tutoring Algebra 2. I am an excellent math tutor and very patient and caring. I make sure your student understands the material, not only the current lesson, but the prior lessons
as well.
20 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...I plan on becoming a teacher. I like putting my mathematical skills to use in a way that will benefit others. I've been tutoring math for the past seven years.
6 Subjects: including geometry, algebra 1, SAT math, prealgebra
...At the college, he was responsible for tutoring undergraduate and post-baccalaureate students in chemistry, calculus, physics, and MCAT physics preparation. At UCI, Jordan teaches general
chemistry and scientific computational skills to undergraduate students. Miscellaneous: As the seventh born of nine children, Jordan has developed a unique capability to connect with almost
5 Subjects: including algebra 2, calculus, trigonometry, chemistry
I am a recent college graduate and received my BA in political science/pre-law. I am very strong in the fields of government, history, mathematics, and writing. Much of my time in college was
devoted to critical thinking and writing skills which are things I believe many students are struggling with today.
4 Subjects: including algebra 1, ACT Math, geometry, government & politics
...I develop flashcards with the students to aid in the learning process. I am most capable of tutoring in geometry. Students seem to have the most difficulty in math as compared to other
31 Subjects: including SAT math, English, writing, trigonometry | {"url":"http://www.purplemath.com/buena_park_math_tutors.php","timestamp":"2014-04-16T07:32:32Z","content_type":null,"content_length":"23598","record_id":"<urn:uuid:f696f505-102c-4f5a-a7b6-51f5f6f259ed>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Performance Measures
Morningstar's Performance Measures
Sharpe Ratios
Morningstar versus Return Sharpe Ratios
As shown earlier, Morningstar computes its version of the Sharpe Ratio using substantially different procedures from those typically used in academic studies. Here, we contrast the Morningstar
version (msSR) with ERSR, the annualized version of the traditional measure.
Morningstar and excess return Sharpe ratios
Clearly, the two measures are highly correlated across funds. While some curvature appears in the relationship, within the range in which most fund ratios lie (0 to 2.0 in this period), the points
fall very close to a 45-degree line, since the results are very similar in magnitude.
To compare rankings based on two measures we cross-plot the corresponding percentiles for the funds. Percentiles are computed as follows. First the funds are ranked on the basis of the value in
question (for example, the Morningstar Sharpe ratio). The fund with the highest value is assigned rank 1286, the fund with the smallest value is assigned rank 1, and all other funds are assigned
ranks between 1 and 1286, in order. Then the ranks are converted to percentiles, with rank 1 assigned percentile 1/1286, rank 2 assigned percentile 2/1286, and so on up to rank 1286, which is
assigned percentile 1.0 (100%).
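A minimal sketch of this rank-to-percentile conversion (illustrative values rather than the actual fund data; ties are ignored for brevity):

```python
def percentiles(values):
    # Rank 1 goes to the smallest value, rank N to the largest,
    # and each fund's percentile is rank / N, as described above.
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    pct = [0.0] * n
    for rank, i in enumerate(order, start=1):
        pct[i] = rank / n
    return pct

sharpe_ratios = [0.8, 1.4, -0.2, 0.5]
print(percentiles(sharpe_ratios))   # [0.75, 1.0, 0.25, 0.5]
```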
Below we plot the percentiles based on the two Sharpe ratios. Not surprisingly, they are very similar.
Morningstar and excess return Sharpe ratio percentiles
The relationships we have shown graphically can also be summarized numerically in terms of correlation coefficients. Broadly, a correlation coefficient of 0 indicates no (linear) relationship between
the variables, while a coefficient of 1.0 indicates a perfectly positive linear relationship. In this case, the correlation coefficient for the Sharpe ratios themselves was 0.995, while that for the
percentiles was 0.997. Whatever the merits or demerits of one of these measures vis-a-vis the other, the choice between them seems to make little difference in practice.
The Economic Meaning of the Excess Return Sharpe Ratio
As indicated earlier, it is useful to consider any performance measure as a statistic designed to help answer a specific investment question (or questions). To evaluate the relevance of a measure for a given task one must know the question that it is designed to help answer. What, then, is the question for which the Excess Return Sharpe Ratio may be at least a partial answer?
As described in William F. Sharpe, The Sharpe Ratio, (The Journal of Portfolio Management, Fall 1994), a Sharpe Ratio is a measure of the expected return per unit of standard deviation of return for
a zero-investment strategy. Such a strategy involves taking a short position in one asset or set of assets and an equal and offsetting long position in another asset or set of assets. As such it can,
in principle, be undertaken at any desired scale. While the expected return and standard deviation of such a strategy will depend on the chosen scale, their ratio will not. Hence, the Sharpe ratio is
unaffected by scale. More importantly, for any given desired level of risk, a strategy based on, say, fund X will provide higher expected return than one based on fund Y if and only if the Sharpe
Ratio of X exceeds that of Y.
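For concreteness, a minimal sketch of the traditional excess-return Sharpe ratio computed from periodic returns (the monthly frequency, the square-root-of-12 annualization, and the return figures are illustrative assumptions):

```python
import statistics

def excess_return_sharpe(fund_returns, bill_returns, periods_per_year=12):
    excess = [f - b for f, b in zip(fund_returns, bill_returns)]
    # Mean excess return per unit of its standard deviation, annualized.
    return statistics.mean(excess) / statistics.stdev(excess) * periods_per_year ** 0.5

fund  = [0.021, -0.004, 0.013, 0.008, -0.011, 0.017]   # made-up monthly returns
bills = [0.004] * 6
print(round(excess_return_sharpe(fund, bills), 2))     # about 0.93
```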
When the Excess Return Sharpe Ratio is used, the strategy being considered involves borrowing at a short-term interest rate and using the proceeds to purchase a risky investment such as a mutual
fund. In the present context, the Sharpe ratio of any strategy involving a combination of treasury bills and a given mutual fund will be the same. This is illustrated in the figure below, in which X
and Y are two mutual funds.
Excess Return Sharpe Ratios for Two Funds
Consider an investor who plans to put all her money in either fund X or fund Y. Moreover, assume that the graph plots the best possible predictions of future expected return and future risk, measured
by the standard deviation of return. She might choose X, based on its higher expected return, despite its greater risk. Or, she might choose Y, based on its lower risk, despite its lower expected
return. Her choice should depend on her tolerance for accepting risk in pursuit of higher expected return. Absent some knowledge of her preferences, an outside analyst cannot argue that X is better
than Y or the converse.
But what if the investor can choose to put some money in one of these funds and the rest in treasury bills which offer the certain return shown at point B? Say that she has decided that she would
prefer a risk (standard deviation) of 10%. She could get this by putting all her money in fund Y, thereby obtaining an expected return or 11%. Alternatively, she could put 2/3 of her money in fund X
and 1/3 in Treasury Bills. This would give her the prospects plotted at point X' -- the same risk (10%) and a higher expected return (12%). Thus a Fund/Bill strategy using fund X would dominate a
Fund/Bill strategy using fund Y. This would also be true for an investor who desired, say, a risk of 5%. And, if it were possible to borrow at the same rate of interest, it would be true for an
investor who desired, say, a risk of 15%. In the latter case, fund X (by itself) would dominate a strategy in which fund Y is levered up to obtain the same level of overall risk.
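The arithmetic behind point X' is simple linear mixing with a riskless asset; a sketch (the 15% expected return and risk for fund X and the 6% bill rate are inferred from the example, not stated explicitly in the text):

```python
def mix(w, fund_mean, fund_sd, bill_rate):
    # With riskless bills, a weight w in the fund scales the portfolio's
    # risk by w and blends the expected returns linearly.
    return w * fund_mean + (1 - w) * bill_rate, w * fund_sd

er, sd = mix(2 / 3, fund_mean=0.15, fund_sd=0.15, bill_rate=0.06)
print(f"expected return {er:.0%}, risk {sd:.0%}")   # expected return 12%, risk 10%
```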
Note that in this comparison it is assumed that only one risky investment is to be undertaken. The reason that the Excess Return Sharpe Ratio is, in principle, designed to deal with this situation is
not difficult to see -- the measure of risk is the total risk of the fund in question. But in a multi-fund portfolio, both the total risk of a fund and its correlation with movements in other funds
are relevant. Since the Excess Return Sharpe Ratio deals only with the former, it is best suited to investors who wish to choose only one risky mutual fund.
Prospectively, the Excess Return Sharpe Ratio is best suited to an investor who wishes to answer the question:
If I can invest in only one fund and engage in borrowing or lending, if desired, which is the single best fund?
Retrospectively, an historic Excess Return Sharpe Ratio can provide an answer for an investor with the question:
If I had invested in only one fund and engaged in borrowing or lending, as desired, which would have been the single best fund?
Of course, Excess Return Sharpe Ratio may prove to be useful for answering other questions to the extent that it can serve as an adequate proxy for a measure that is, in principle, more applicable.
Such possibilities will be explored subsequently.
Negative Sharpe Ratios
In practice, there are situations in which funds underperform treasury bills on average and hence have negative average excess returns. In such cases it is often considered paradoxical that a fund
with greater standard deviation and worse average performance may nonetheless have a higher (less negative) Excess Return Sharpe Ratio and thus be considered to have been "better". Given the basis
for the use of the measure, however, this is not a paradox. Consider the case shown below.
Excess Return Sharpe Ratios for Two Funds
Here X by itself was clearly inferior to Y (and both were inferior to Treasury Bills). But, for an investor who had planned for a standard deviation of 10%, the combination of 2/3 X and 1/3 Bills
would have broken even, while investment in fund Y would have lost money. Thus a Fund/Bills strategy using the fund with the higher (or less negative) Excess Return Sharpe Ratio would have been
better. Of course, one would never invest in funds such as X or Y if their prospects involved risk with negative expected excess returns. But, after the fact, the Sharpe Ratio comparison remains
valid, even in this case, if the preconditions for its use were in effect.
Costs, Assets and Sharpe Ratios
Whatever the relevance of the Excess Return Sharpe Ratio may be, it is useful to investigate the relationship between fund performance, so measured, and fund characteristics. We investigate three
such characteristics: expense ratios, turnover ratios and total assets. Since all our measures of performance are based on net returns, higher costs would lead to lower net returns and hence poorer
performance unless more than offset by higher gross performance. Moreover, since larger funds tend to have lower proportional expenses, they would provide better net performance unless their gross
performance were commensurately lower.
We choose the traditional annualized excess return Sharpe ratio (ERSR) for this analysis, since it is more familiar and has somewhat better statistical and economic properties. However, results using
the Morningstar measure would have differed little from those shown here.
Here (and later) we first consider each of the three variables in isolation, following the method used earlier. We group the funds into deciles of 129 funds each (except that there are 125 funds in
the tenth decile) based on the variable of interest (for example, expense ratio). We then compute the average Sharpe ratio for the funds in each decile and graph the average values for each of the
ten deciles. If the variable has explanatory power, the deciles will vary in average performance by economically meaningful amounts.
We begin with expense ratios, the results for which are shown below:
Average Sharpe ratios for ten deciles based on expense ratios
While by no means uniform, the bars become considerably shorter as one goes from left to right. The average Sharpe ratio for the funds with the smallest expense ratios was over 75% greater than that
of the funds with the greatest expense ratios. This is evidence in support of the thesis that higher expenses add far more to cost than they add to performance.
A similar result is obtained when turnover ratios are considered:
Average Sharpe ratios for ten deciles based on turnover ratios
While the magnitude of the difference between the largest and smallest decile is somewhat smaller than that obtained when funds were grouped based on expense ratios, the greater uniformity of the
decline in bar height from left to right is impressive. This is evidence that the greater costs incurred by funds with high turnover are not offset by commensurate performance gains.
Since large funds tend to have lower expenses and somewhat lower turnover, we would expect performance to increase with asset size, given the two previous results. As the next graph shows, such is
the case.
Average Sharpe ratios for ten deciles based on total assets
While bigger funds tend to have had better performance, this may be due entirely to their tendency to have lower expenses and turnover. To try to separate out the influences of these three fund
characteristics, we perform a multiple regression analysis with all three characteristics as independent variables and the Sharpe ratios as the dependent variable. For each variable, two statistics
are reported below -- one measuring the variable's statistical significance, the other its economic significance.
Multiple regression, dependent variable: Sharpe ratio
│ │t-value│Effect of one SD change │
│Expense Ratio │-14.25 │ -0.1336 │
│Turnover Ratio │-10.11 │ -0.0936 │
│Assets │+ 1.49 │ +0.0137 │
From a statistical standpoint, both expense ratios and turnover ratios are highly significantly related to Sharpe ratios. A standard rule of thumb considers a variable statistically significant if
the t-value from a multiple regression has an absolute value greater than 2.0. In this sense, the two cost measures are highly significant while the size of the fund, per se, is not.
Statistical significance is important, but economic significance measures the effect of a variable on an investor's overall wealth. To capture the latter we compute the impact of a change in each
variable equal to one cross-sectional standard deviation of that variable for the funds in the analysis. For example, let the average expense ratio for the funds be aE and the standard deviation of
expense ratios for the funds be sdE. In this case, aE=1.3047 and sdE = 0.6552. In the multiple regression equation the coefficient for the expense ratio was -0.2039. This indicates that moving from a
fund with an expense ratio equal to aE to one with an expense ratio of aE+sdE would, on average, reduce the fund's Sharpe ratio by 0.2039*0.6552, or 0.1336. Roughly, going from a typical fund to one
in the 84'th percentile in terms of expense ratios would, on average, lower performance measured by the Sharpe ratio by 0.1336.
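The "effect of one SD change" column is simply the regression coefficient multiplied by the cross-sectional standard deviation of the variable; reproducing the expense-ratio entry:

```python
coef = -0.2039        # regression coefficient for the expense ratio (from the text)
sd_expense = 0.6552   # cross-sectional SD of expense ratios (from the text)
print(round(coef * sd_expense, 4))   # -0.1336
```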
As the figures in the final column of the table indicate, expense ratio was the most economically important determinant of performance in this analysis, with turnover ratio a fairly close second, and
assets, per se, a distant third.
In-sample and Out-of-sample Analyses of the Impact of Expenses
Evidence that fund net performance tends to be lower when expenses are high than when they are low is not new. Nor is evidence that higher turnover tends to lower net performance. However, our
results may overstate the importance of each of these factors. Consider, for example, two funds with equal dollar expenses, each of which is fixed and unaffected by assets under management. If one
fund does well while the other does poorly, the expense ratios at the end of the measurement period will differ, with the ratio of dollar costs to asset value lower for the fund that provided the
better performance, even though the performance was unrelated to its expenses.
In practice, fund expense ratios do not decline as rapidly with size as our example would suggest. Nonetheless, it is likely that our results overstate the relationship between expenses (and possibly
turnover) measured before the fact and subsequent performance. To provide at least some measure of the latter, we examine the relationship between expense ratios at the end of 1993 and Sharpe Ratios
for the 1994-1996 period for the 540 funds in our 6-year sample.
The figures below show average Sharpe Ratios for fund deciles based on prior measures of, respectively, expense ratios, turnover ratios, and fund size.
Note that the differences in performance are somewhat smaller than in the prior analyses, but they are still substantial and in the same directions.
The table below shows the results of a multiple regression in which the Sharpe Ratio for 1994-1996 was the dependent variable and the three measures determined in 1993 were the independent variables.
Multiple regression, dependent variable: Sharpe ratio
│ │t-value│Effect of one SD change │
│Prior Expense Ratio │- 8.40 │ - 0.1309 │
│Prior Turnover Ratio │- 4.88 │ - 0.0743 │
│Prior Assets │+ 0.48 │ + 0.0073 │
All the numbers in the table are smaller in absolute value than their counterparts in the prior analysis, again suggesting that the earlier analysis was in fact biased as expected. This being said,
the coefficients for expenses and turnover are both statistically and economically highly significant.
One final comment about these results is in order. Note that the sample does not include all the funds for which expense ratios and turnover data were available in late 1993. Most of the missing funds are likely to have performed poorly between 1994 and 1996. If they tended to have had high expenses and/or turnovers, our results may well understate the negative impact of such characteristics. This seems more likely than the alternative hypothesis that our results overstate the impact of expenses and turnover. However, lacking complete data on the missing ("dead") funds,
no definitive statement regarding the sign or size of the bias can be made. | {"url":"http://www.stanford.edu/~wfsharpe/art/stars/stars6.htm","timestamp":"2014-04-18T18:50:52Z","content_type":null,"content_length":"20929","record_id":"<urn:uuid:bcf78d29-d0b5-4d28-bb5a-4269090cb9d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
This Quantum World/Serious illnesses/Bohr
In 1913 Niels Bohr postulated that the angular momentum $L$ of an orbiting atomic electron was quantized: its "allowed" values are integral multiples of $\hbar$:
$L=n\hbar$ where $n=1,2,3,\dots$
Why quantize angular momentum, rather than any other quantity?
• Radiation energy of a given frequency is quantized in multiples of Planck's constant.
• Planck's constant is measured in the same units as angular momentum.
Bohr's postulate explained not only the stability of atoms but also why the emission and absorption of electromagnetic radiation by atoms is discrete. In addition it enabled him to calculate with
remarkable accuracy the spectrum of atomic hydrogen — the frequencies at which it is able to emit and absorb light (visible as well as infrared and ultraviolet). The following image shows the visible
emission spectrum of atomic hydrogen, which contains four lines of the Balmer series.
Apart from his quantization postulate, Bohr's reasoning at this point remained completely classical. Let's assume with Bohr that the electron's orbit is a circle of radius $r.$ The speed of the
electron is then given by $v=r\,d\beta/dt,$ and, since the speed is constant while the direction of the velocity turns at the rate $d\beta/dt,$ the magnitude of its acceleration is $a=v\,d\beta/dt.$ Eliminating $d\beta/dt$ yields $a=v^2/r.$ In the cgs system of units, the magnitude of the
Coulomb force is simply $F=e^2/r^2,$ where $e$ is the magnitude of the charge of both the electron and the proton. Via Newton's $F=ma$ the last two equations yield $m_ev^2=e^2/r,$ where $m_e$ is the
electron's mass. If we take the proton to be at rest, we obtain $T=m_ev^2/2=e^2/2r$ for the electron's kinetic energy.
If the electron's potential energy at infinity is set to 0, then its potential energy $V$ at a distance $r$ from the proton is minus the work required to move it from $r$ to infinity,
$V=-\int_r^\infty F(r')\,dr'=-\int_r^\infty\!{e^2\over(r')^2}\,dr'= +\left[{e^2\over r'}\right]_r^\infty=0-{e^2\over r}.$
The total energy of the electron thus is
$E=T+V=e^2/2r-e^2/r= -e^2/2r.$
We want to express this in terms of the electron's angular momentum $L=m_evr.$ Remembering that $m_ev^2=e^2/r,$ and hence $rm_e^2v^2=m_ee^2,$ and multiplying the numerator $e^2\,$ by $m_ee^2$ and the
denominator $2r$ by $rm_e^2v^2,$ we obtain

$E=-{m_ee^4\over2L^2}.$
Now comes Bohr's break with classical physics: he simply replaced $L$ by $n\hbar$. The "allowed" values for the angular momentum define a series of allowed values for the atom's energy:
$E_n=-{1\over n^2}\left({m_ee^4\over2\hbar^2}\right),\quad n=1,2,3,\dots$
As a result, the atom can emit or absorb energy only by amounts equal to the absolute values of the differences
$\Delta E_{nm}=E_n-E_m=\left({1\over n^2}-{1\over m^2}\right)\,\hbox{Ry},$
one Rydberg (Ry) being equal to $m_e e^4/2\hbar^2 = 13.6056923(12)\,\hbox{eV.}$ This is also the ionization energy $\Delta E_{1\infty}$ of atomic hydrogen — the energy needed to completely remove the
electron from the proton. Bohr's predicted value was found to be in excellent agreement with the measured value.
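A quick numerical check of the four visible Balmer lines mentioned earlier (a sketch; the Rydberg energy is the value quoted above, while the h·c conversion constant is a standard value not taken from the text):

```python
RY_EV = 13.6056923      # Rydberg energy, eV
HC_EV_NM = 1239.842     # h*c in eV*nm, for converting photon energy to wavelength

for n in range(3, 7):   # transitions n -> 2 give the visible Balmer series
    delta_e = RY_EV * (1 / 4 - 1 / n**2)
    print(f"n = {n} -> 2: {HC_EV_NM / delta_e:.1f} nm")
# 656.1, 486.0, 433.9, 410.1 nm: the red, blue-green, and two violet hydrogen lines
```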
Using two of the above expressions for the atom's energy and solving for $r,$ we obtain $r = n^2\hbar^2/m_ee^2.$ For the ground state $(n=1)$ this is the Bohr radius of the hydrogen atom, which
equals $\hbar^2/m_ee^2 = 5.291772108(18)\times10^{-11} m.$ The mature theory yields the same figure but interprets it as the most likely distance from the proton at which the electron would be found
if its distance from the proton were measured. | {"url":"https://en.wikibooks.org/wiki/This_Quantum_World/Serious_illnesses/Bohr","timestamp":"2014-04-18T08:19:14Z","content_type":null,"content_length":"34292","record_id":"<urn:uuid:b828126c-2622-43b3-9eb0-977b11bc3a89>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
'Simple' statics
Again, you have 4 unknowns but only 3 static equilibrium equations, not 4. If you take the sum of M_A = 0 as one of your static equilibrium equations, then taking the sum of M_B = 0 or the sum of M_C = 0 buys you nothing. It's sort of like saying sum of F_x = 0 and sum of F_y = 0, both correct and necessary, but then trying to use the sum of forces along an axis 45 degrees to the x axis = 0; also correct, but it doesn't give you the extra equation you need. You need the extra equation from other info, such as B_x or B_y = 0 (roller support). Otherwise, you'd have to remove one of the components of the pin support, say B_x, then solve the reactions A_x, A_y, and B_y with the 3 equilibrium equations, then add back the B_x term and note the reactions under B_x only, with the stipulation that the total horizontal deflection of the support at B under all loads is 0. You say you have a software program to run this problem; does it allow you to enter that the support deflections at A and B in both the horizontal and vertical directions are 0? I would think that the program needs the end conditions anyway.
As an approximate solution, with the load applied at C, you might want to consider this as a truss, the difference being that you have a continuous, not a pinned 'link' joint, at C, but certain tests have shown that this does not appreciably affect member forces and support reactions (many trusses are built with multiple bolted joint connections with gusset plates, tending to fix the joint). Thus, you may wish to consider AC and BC as taking axial forces only, in which case A_x and A_y are related by A_y/A_x = tan theta, and likewise for B_x and B_y. (A quick sketch of this shortcut follows.)
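A sketch of that two-force-member shortcut in Python (the node coordinates and the load are made-up example values, not numbers from the thread):

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0)   # assumed support and joint positions
P = (0.0, -10.0)                               # assumed load at C, kN

def unit(p, q):
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    return dx / length, dy / length

ua, ub = unit(C, A), unit(C, B)   # member directions from joint C toward the supports
# Joint equilibrium at C: Fa*ua + Fb*ub + P = 0, solved by Cramer's rule.
# Positive force = tension (member pulls C toward its support).
det = ua[0] * ub[1] - ua[1] * ub[0]
Fa = (-P[0] * ub[1] + P[1] * ub[0]) / det
Fb = (-ua[0] * P[1] + ua[1] * P[0]) / det
print(f"axial force in AC = {Fa:.2f} kN, in BC = {Fb:.2f} kN")  # both -6.01 (compression)
```

| {"url":"http://www.physicsforums.com/showthread.php?p=2430447","timestamp":"2014-04-16T19:09:46Z","content_type":null,"content_length":"51254","record_id":"<urn:uuid:e05aa69d-5ab5-43a9-899f-2f24725caa70>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00336-ip-10-147-4-33.ec2.internal.warc.gz"}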
Harmonic number
The $n$-th harmonic number is defined as $H_n=\sum_{k=1}^{n}\frac{1}{k}=1+\frac{1}{2}+\cdots+\frac{1}{n}$. The first ten harmonic numbers are:
$1,\ \frac{3}{2},\ \frac{11}{6},\ \frac{25}{12},\ \frac{137}{60},\ \frac{49}{20},\ \frac{363}{140},\ \frac{761}{280},\ \frac{7129}{2520},\ \frac{7381}{2520}.$
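A minimal sketch (not from the page) computing these values exactly with rational arithmetic:

from fractions import Fraction

def harmonic(n):
    # exact n-th harmonic number as a fraction
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 11):
    print(n, harmonic(n))   # reproduces the list above, e.g. 10 -> 7381/2520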
L. Theisinger [1] proved that the harmonic number $H_n$ is never an integer for $n>1$.
To see the first statement note that the greatest power of 2 not exceeding $n$ divides exactly one of the numbers $1,2,\dots,n$; hence, when $H_n$ is written over the least common denominator, the numerator is odd while the denominator is even.
L. Kürschák [2] proved more: the difference between two distinct harmonic numbers is never an integer. He proved this in the form that the sum $\frac{1}{m}+\frac{1}{m+1}+\cdots+\frac{1}{n}$ is never an integer for $1\le m<n$.
T. Nagell [3] proved that even the sum of reciprocals of a finite arithmetic progression, $\frac{1}{a}+\frac{1}{a+d}+\cdots+\frac{1}{a+nd}$, is never an integer, and P. Erdős [4] generalized Kürschák's theorem further.
The Bertrand postulate says that for every positive integer $n>1$ there is a prime $p$ with $n<p<2n$.
A combinatorial interpretation of harmonic numbers says that $\left[{n+1\atop 2}\right]=n!\,H_n$, where $\left[{n\atop k}\right]$ denotes the number of permutations of $n$ elements with exactly $k$ cycles ( [5] , p.275 ).
Euler found the following integral representation: $H_n=\int_0^1\frac{1-x^n}{1-x}\,dx.$
It can be shown using the integral comparison criterion applied to the function $f(x)=\frac{1}{x}$ that $\ln(n+1)\le H_n\le 1+\ln n$.
This implies that the sequence $H_n-\ln n$ is bounded, and it can be shown that there exists a non-vanishing finite limit $\gamma=\lim_{n\to\infty}\left(H_n-\ln n\right).$
The constant $\gamma=0.5772156649\dots$ is the Euler-Mascheroni constant. Using Euler's summation formula one obtains the asymptotic expansion
$H_n=\ln n+\gamma+\frac{1}{2n}-\frac{1}{12n^2}+O\!\left(\frac{1}{n^4}\right),$
which can be used
• to get asymptotic approximations for $H_n$ (for instance, if $n=10^6$, the first three terms already determine $H_n$ to more than ten decimal places),
• to discover and prove various asymptotic relations, for instance that the sequence $H_n-\ln(n+1)$ is monotone increasing and converges to $\gamma$.
Graphical representation of the values (red points) of the harmonic numbers $H_n$.
A generating function for harmonic numbers is $\sum_{n=1}^{\infty}H_n x^n=\frac{-\ln(1-x)}{1-x},\qquad |x|<1.$
In the following graph the harmonic numbers are represented by red points and the values of the right hand side by blue points.
J. C. Lagarias [6] proved an interesting link to the Riemann hypothesis. He proved that the problem of showing that $\sigma(n)\le H_n+e^{H_n}\ln H_n$ for all $n\ge 1$, where $\sigma(n)$ denotes the sum of divisors of $n$, is equivalent to the Riemann hypothesis. This is an elementary reformulation of inequalities of G. Robin [7]. One of the inequalities says that the inequality $\sigma(n)<e^{\gamma}n\ln\ln n$ for all $n>5040$ holds if and only if the Riemann hypothesis is true.
In the graph the blue points represent the sum-of-divisors function and the red ones the values of the right hand side of Lagarias' inequality.
It is possible to extend the above definition of a harmonic number also to non-integral values of the argument: the series $H_x=\sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{k+x}\right)$ converges for all complex values of $x$ other than the negative integers.
Wolstenholme's Theorem [8]: If $p\ge 5$ is a prime, then
• the numerator of the harmonic number $H_{p-1}$ is divisible by $p^2$,
• the numerator of the generalized harmonic number $H_{p-1}^{(2)}$ is divisible by $p$.
An interesting problem in whose solution harmonic numbers unexpectedly occur was discovered by R. T. Sharp [9] [10] : determine how far an overhang we can achieve by stacking dominoes over the table edge, accounting for the force of gravity.
The center of gravity is a geometric property of any object and it is the point where all the weight of the object can be considered to be concentrated. In other words, it is defined as the average location of the weight of an object. Thus the center of gravity of a pile of pieces is the weighted average of the centers of gravity of the individual pieces.
Since we shall suppose that the edges of the cards are parallel to the edge of the table, we shall only work with the coordinate measured perpendicular to that edge.
For the sake of simplicity suppose that the length of each domino piece is 2 units, the weight is 1 unit with uniformly distributed mass along the length, and that the pile contains $n$ pieces.
The stability criterion is that the center of gravity of the top $k$ dominoes must lie above the $(k+1)$-st domino, and the center of gravity of the whole pile above the table. This implies that the $k$-th domino from the top can overhang the one below by at most $\frac{1}{k}$, so the maximal overhang achievable with $n$ pieces is $\frac{1}{1}+\frac{1}{2}+\cdots+\frac{1}{n}=H_n$.
A bit surprising consequence of the solution is that, given a sufficiently large number of dominoes, arbitrarily large overhangs can be achieved.
In the above way of stacking dominoes we achieved an overhang of $H_n$, which grows only logarithmically with $n$. M. Paterson and U. Zwick [11] showed that it is exponentially far from optimal and gave explicit constructions with an overhang of order $n^{1/3}$.
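A minimal sketch of the stacking rule just derived (illustration only; lengths are in the same units as the length-2 dominoes above):

from fractions import Fraction

def overhang(n):
    # maximal overhang of the harmonic stack of n dominoes, H_n
    return sum(Fraction(1, k) for k in range(1, n + 1))

def dominoes_needed(target):
    # smallest n with H_n > target
    n, h = 0, Fraction(0)
    while h <= target:
        n += 1
        h += Fraction(1, n)
    return n

print(overhang(4))          # 25/12, already more than one full domino length
print(dominoes_needed(10))  # 12367 dominoes for an overhang of 10 units (5 domino lengths)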
Generalized harmonic number
The generalized harmonic number of order $m$ is defined by $H_n^{(m)}=\sum_{k=1}^{n}\frac{1}{k^m}$; for $m=1$ it reduces to the ordinary harmonic number $H_n=H_n^{(1)}$.
The following identity goes back to Euler ( [5] , p.278 ): $\sum_{k=1}^{n}\frac{H_k}{k}=\frac{1}{2}\left(H_n^2+H_n^{(2)}\right).$
Similarly to the series defining $H_x$ above, we can take the series $H_x^{(m)}=\sum_{k=1}^{\infty}\left(\frac{1}{k^m}-\frac{1}{(k+x)^m}\right)$ to get a generalization to non-integral values of indices.
The generalized harmonic number converges to the Riemann zeta function in the following sense: $\lim_{n\to\infty}H_n^{(m)}=\zeta(m)$ for every $m>1$.
Hyperharmonic number
J. H. Conway and R. K. Guy [12] defined the hyperharmonic numbers: the harmonic number of order $r$ is $h_n^{(r)}=\sum_{k=1}^{n}h_k^{(r-1)}$, where $h_n^{(1)}=H_n$.
The first ten harmonic numbers of order 2 are: $1,\ \frac{5}{2},\ \frac{13}{3},\ \frac{77}{12},\ \frac{87}{10},\ \frac{223}{20},\ \frac{481}{35},\ \frac{4609}{280},\ \frac{4861}{252},\ \frac{55991}{2520}.$
The first ten harmonic numbers of order 3 are: $1,\ \frac{7}{2},\ \frac{47}{6},\ \frac{57}{4},\ \frac{459}{20},\ \frac{341}{10},\ \frac{3349}{70},\ \frac{3601}{56},\ \frac{42131}{504},\ \frac{44441}{420}.$
We have [13]
$h_n^{(r)}=\binom{n+r-1}{r-1}\left(H_{n+r-1}-H_{r-1}\right),$
where $H_{n+r-1}$ and $H_{r-1}$ are ordinary harmonic numbers; hyperharmonic numbers can also be expressed through the $r$-Stirling numbers of A. Z. Broder [14], generalizing the combinatorial interpretation given above [12].
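A minimal recursive sketch (not from the page), cross-checked against the closed form quoted above:

from fractions import Fraction
from math import comb

def hyper(n, r):
    # Conway-Guy recursion: h_n^(r) = h_1^(r-1) + ... + h_n^(r-1), h_n^(1) = H_n
    if r == 1:
        return sum(Fraction(1, k) for k in range(1, n + 1))
    return sum(hyper(k, r - 1) for k in range(1, n + 1))

def closed_form(n, r):
    H = lambda m: sum(Fraction(1, k) for k in range(1, m + 1))
    return comb(n + r - 1, r - 1) * (H(n + r - 1) - H(r - 1))

print(hyper(5, 3), closed_form(5, 3))   # both print 459/20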
[1] Theisinger, L. (1915). Bemerkung über die harmonische Reihe. (German). Monatsh. f. math., 26, 132-134.
[2] Kürschák, J. (1918). Über die harmonische Reihe. (Hungarian). Math. és phys. lapok, 27, 299-300.
[3] Nagell, T. (1923 (1924)). Eine Eigenschaft gewisser Summen. Skr. Norske Vid. Akad. Kristiania, I(13), 10-15.
[4] Erdős, P. (1932). Generalization of an elementary number-theoretic theorem of Kürschák (Hungarian). Mat. Fiz. Lapok, 39, 17-24.
[5] Graham, R. L., Knuth, D. E., & Patashnik, O. (1994). Concrete mathematics: a foundation for computer science (2nd ed.). Amsterdam: Addison-Wesley Publishing Group.
[6] Lagarias, J. C. (2002). An elementary problem equivalent to the Riemann hypothesis. Amer. Math. Montly, 109(6), 534-543.
[7] Robin, G. (1984). Grandes valeurs de la fonction somme des diviseurs et hypothèse de Riemann. J. Math. Pures Appl., IX. Sér. 63, 187-213.
[8] Wolstenholme, J. (1862). On certain properties of prime numbers. Quarterly Journal of Mathematics, 5, 35-39.
[9] Sharp, R. T. (1954). Problem 52: Overhanging dominoes. Pi Mu Epsilon Journal, 1(10), 411-412.
[10] Trigg, C. W. (1954). Problem 52: Overhanging dominoes, (proposer: R. T. Sharp). Pi Mu Epsilon Journal, 1(10), 411-412.
[11] Paterson, M., & Zwick, U. (2006). Overhang. In: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm (pp. 231-240). New York: ACM.
[12] Conway, J. H., & Guy, R. K. (1996). The Book of Numbers. Berlin: Springer Verlag.
[13] Benjamin, A. T., Gaebler, D., & Gaebler, R. (2003). A combinatorial approach to hyperharmonic numbers. Integers, 3, Paper A15.
[14] Broder, A. Z. (1984). The r-Stirling numbers. Discrete Math., 49, 241-259.
Cite this web-page as:
Štefan Porubský: Harmonic Number. | {"url":"http://www.cs.cas.cz/portal/AlgoMath/MathematicalAnalysis/MathematicalConstants/HarmonicNumber.htm","timestamp":"2014-04-19T07:08:09Z","content_type":null,"content_length":"45997","record_id":"<urn:uuid:ff11f98a-97cf-457b-a11d-ed65e5a38491>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Square Root of i
Date: 03/08/99 at 21:21:00
From: Nadine Schiavo
Subject: Square Root of i.
In my Honors Algebra II/Trigonometry class, we just completed a section
on complex numbers. One of my students asked me the following:
The square root of -1 is i, but what is the square root of i?
Can you help?
Date: 03/09/99 at 08:31:37
From: Doctor Rick
Subject: Re: Square Root of i.
There are two complex numbers which, when squared, equal i. (The same
holds true for any number. But for real numbers, we arbitrarily say
that the positive root is THE square root. When the roots have
imaginary parts, any such choice would be even more arbitrary, and we
do not bother to choose one.)
The square roots of i are
(1 + i) * sqrt(2)/2 and
-(1 + i) * sqrt(2)/2
You can prove this just by squaring each number. To find them in the
first place, you can use Euler's formula
e^(i*t) = Cos(t)+i*Sin(t)
If t = pi/2, you get
e^(i*pi/2) = cos(pi/2)+i*sin(pi/2) = 0 + i*1 = i
The square root of this is
(e^(i*pi/2))^(1/2) = e^(i*pi/4)
and using Euler's formula again, we have
e^(i*pi/4) = sqrt(2)/2 + i*sqrt(2)/2
If you set t = 5*pi/2, you again get e^(i*5pi/2) = i. Taking the square
root of this and using Euler's formula, you get the other root of i.
Euler's formula makes it easy to find powers and roots by working in
polar coordinates in the complex plane. Any number x + iy can be
written in terms of a radius r and angle theta (counterclockwise from
the x axis):
x + iy = re^(i*theta)
r = sqrt(x^2 + y^2)
theta = arctan(y/x)
Then, using Euler's formula,
(x+iy)^k = r^k e^(i*k*theta)
= r^k(cos(k*theta) + i*sin(k*theta))
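As a quick numerical check (not part of the original answer), Python's complex arithmetic confirms both computations:

import cmath

r1 = (1 + 1j) * cmath.sqrt(2) / 2        # the first square root given above
print(r1**2)                             # (0+1j) up to rounding: r1 squared is i
print(cmath.exp(1j * cmath.pi / 4))      # the same number, via Euler's formula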
Here are some related pages from our archives:
Square Roots in Complex Numbers
Proof of e^(iz) = cos(x) + isin(x)
Roots of Unity
- Doctor Rick, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/53863.html","timestamp":"2014-04-17T01:09:46Z","content_type":null,"content_length":"7066","record_id":"<urn:uuid:7e92ceda-dbc0-437d-ac08-250b35fa9815>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Remember that putting a negative sign in front of a function means the same thing as multiplying that function by -1.
Sample Question
Let g(x) = -x^2. Then we could think of this function as
g(x) = (-1)(x^2),
so that
g'(x) = (-1)(x^2)' = (-1)(2x) = -2x.
The moral of the story is that the derivative of the negative of f is the negative of the derivative of f:
(-f(x))' = -f'(x).
Sample Problem
Let f(x) = -sin x. Then
f'(x) = -(sin x)' = -cos(x).
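A one-line check with sympy (an illustration, not part of the original page):

import sympy as sp
x = sp.symbols('x')
print(sp.diff(-sp.sin(x), x))   # -cos(x), confirming (-f)' = -f'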
Find the derivative of the function. | {"url":"http://www.shmoop.com/computing-derivatives/derivatives-negative-one-help.html","timestamp":"2014-04-18T15:49:44Z","content_type":null,"content_length":"40637","record_id":"<urn:uuid:ac151fa9-e8e5-4f34-8eec-0e124774faf7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question 1 - Read this statement: “If a triangle is isosceles, then it is equilateral.” What is the converse of the statement? Answer: If a triangle is equilateral, then it is isosceles. Reason:
Question 3 - Name the property that justifies this statement: If AB = BA, then segment AB is congruent to segment BA. Answer: Reflexive Property of Congruence. Reason:
Question 4 - Name the property that justifies this statement: If 3x+4=35, then 3x=31. Answer: Subtraction Property of Equality. Reason:
I need someone to explain why these answers are correct; I am really bad at explaining. | {"url":"http://openstudy.com/updates/50f5c68fe4b0246f1fe3b3d6","timestamp":"2014-04-16T04:43:23Z","content_type":null,"content_length":"28146","record_id":"<urn:uuid:5ca3c417-429c-49b8-9d46-7cc6531f66d0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Time and size limitations
Typical MD simulations can be performed on systems containing thousands--or, perhaps, millions--of atoms, and for simulation times ranging from a few picoseconds to hundreds of nanoseconds. While
these numbers are certainly respectable, it may happen to run into conditions where time and/or size limitations become important.
A simulation is "safe" from the point of view of its duration when the simulation time is much longer than the relaxation time of the quantities we are interested in. However, different properties
have different relaxation times. In particular, systems tend to become slow and sluggish in the proximity of phase transitions, and it is not uncommon to find cases where the relaxation time of a
physical property is orders of magnitude larger than times achievable by simulation.
A limited system size can also constitute a problem. In this case one has to compare the size of the MD cell with the correlation lengths of the spatial correlation functions of interest. Again,
correlation lengths may increase or even diverge in proximity of phase transitions, and the results are no longer reliable when they become comparable with the box length.
This problem can be partially alleviated by a method known as finite size scaling. This consists of computing a physical property $A$ using several boxes with different sizes $L$, and then fitting the results to a relation of the form $A(L)=A_0+\frac{c}{L^n}$; the fitted constant $A_0$ is then taken as an estimate of the value of $A$ in the infinite-size limit.
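A minimal sketch of such a fit (synthetic numbers; the values of A_0, c and n here are invented for illustration, not taken from the text):

import numpy as np
from scipy.optimize import curve_fit

def model(L, A0, c, n):
    return A0 + c / L**n

L = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
A = model(L, 1.0, 0.5, 2.0) + np.random.normal(0.0, 1e-4, L.size)  # fake "measurements"

params, _ = curve_fit(model, L, A, p0=(A[-1], 1.0, 1.0))
print("extrapolated A_0 (L -> infinity):", params[0])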
Furio Ercolessi | {"url":"http://www.fisica.uniud.it/~ercolessi/md/md/node12.html","timestamp":"2014-04-21T04:50:55Z","content_type":null,"content_length":"4967","record_id":"<urn:uuid:e6f269d9-dd5e-49b9-8a9f-32ee184a5a94>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Earth Avoids Collisions With Pair of Asteroids
from the stand-down-Mr.-Willis dept.
Hugh Pickens writes:
"According to NASA, a pair of asteroids — one just over three miles wide — passed Earth Tuesday and early Wednesday, avoiding a potentially cataclysmic impact with our home planet. 2012 XE5,
estimated at 50-165 feet across, was discovered just days earlier, missing our planet by only 139,500 miles, or slightly more than half the distance to the moon. 4179 Toutatis, just over three miles
wide, put on an amazing show for astronomers early Wednesday, missing Earth by 18 lunar lengths, while allowing scientists to observe the massive asteroid in detail. Asteroid Toutatis is well known
to astronomers. It passes by Earth's orbit every four years and astronomers say its unique orbit means it is unlikely to impact Earth for at least 600 years. It is one of the largest known
potentially hazardous asteroids, and its orbit is inclined less than half-a-degree from Earth's. 'We already know that Toutatis will not hit Earth for hundreds of years,' says Lance Benner of NASA's
Near Earth Object Program. 'These new observations will allow us to predict the asteroid's trajectory even farther into the future.' Toutatis would inflict devastating damage if it slammed into Earth
, perhaps extinguishing human civilization. The asteroid thought to have killed off the dinosaurs 65 million years ago was about 6 miles wide, researchers say. The fact that 2012 XE5 was discovered
only a few days before the encounter prompted Minnesota Public Radio to poll its listeners with the following question: If an asteroid were to strike Earth within an hour, would you want to know?"
• Fearmongering much? (Score:5, Informative)
by LordLucless (582312) on Thursday December 13, 2012 @02:02AM (#42269751)
I guess "Asteroid Misses Earth, Just Like It's Done Every 4 Years For Millennia" just wasn't catchy enough
• Re:Surprising number (Score:3, Informative)
by Anonymous Coward on Thursday December 13, 2012 @04:55AM (#42270395)
Just looking at the numbers I'd place the odds at high of an impact. We're coming up on a hundred year anniversary of Tunguska so I'd say we're due for a similar impact any day now. It could be
tomorrow or a hundred years from now but statistically we're due now.
We're not 'due' for anything. The fact that a devastating impact didn't happen yesterday does not increase the odds that it will happen today, it's not as if somebody decides to send an astroid
in our direction because he looks on his impact calendar and decides it's been quiet for too long. If every day has an equal likelyhood of a devastating impact happening the average outcome will
reflect that likelyhood without days or impacts infuencing each other.
• Satellite fly-by to asteroid 4179 Toutatis (Score:5, Informative)
by Taco Cowboy (5327) on Thursday December 13, 2012 @05:58AM (#42270621) Journal
There will be a human-made satellite that will engage in a fly-by to asteroid 4179 Toutatis
The satellite is China's Chang'e 2 and it will rendezvous with 4179 Toutatis.
There are two conflicting reports of the rendezvous date -
According to wikipedia the rendezvous date will be 13th December 2012 - https://en.wikipedia.org/wiki/4179_Toutatis [wikipedia.org]
According to another source - http://www.planetary.org/blogs/emily-lakdawalla/2012/20120614-change-2-toutatis.html [planetary.org] - the rendezvous date will fall on 6th, January, 2013.
• Re:Surprising number (Score:2, Informative)
by Anonymous Coward on Thursday December 13, 2012 @07:55AM (#42271155)
Seriously dude, learn how statistics and probability work.
Consider this a jar problem with black and white marbles.
This isn't a jar problem where you're taking out a white marble everyday (no asteroid impact) and tossing it away which increases the chance you'll eventually pull the one black marble in the jar
This is a replacement problem. That white marble you pulled goes right back in and the probability you draw the one black one is equally likely on any given day.
• Re:Surprising number (Score:5, Informative)
by BasilBrush (643681) on Thursday December 13, 2012 @10:13AM (#42272201)
No he's not wrong. You don't understand statistics and probability.
An ordinary coin has a 50% chance of landing heads.
If I toss it, and it lands tails. The next time it is no more likely to land heads. It's still 50%.
If I toss it 3 times and it lands on tails each time, the next time it's still 50% chance it'll land on heads.
If I toss it 100 times and every single time it lands on tails, guess what the probability of it landing on heads the next time is? Yup, it's still 50%.
They are independent events. The coin has no memory.
Likewise if there is an X% chance of a asteroid hitting the earth on and particular day, the fact that one has not hit the earth today does not in any way affect the chances of it hitting
They are independent events. One asteroid doesn't know what another asteroid did or did not do yesterday.
Likewise similar myths about choosing lottery numbers based on previous numbers are all wrong. Despite this, the mathematically ignorant nearly all think they are right.
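A minimal simulation of this point (an illustration, not part of the comment): condition on a run of three tails and look at the next toss.

import random

trials, heads_after_streak = 0, 0
for _ in range(10**6):
    tosses = [random.random() < 0.5 for _ in range(4)]
    if not any(tosses[:3]):           # first three tosses were all tails
        trials += 1
        heads_after_streak += tosses[3]
print(heads_after_streak / trials)    # approximately 0.5; the coin has no memory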
This bears on your pro-gun arguments. You don't understand statistics. You just google and copy from pro-gun sites, anything you think sounds like it supports guns, ignoring the ones that don't
sound like they support guns. You have no basis on which to judge their veracity.
You honestly think copying and pasting data for which you don't understand the stats will somehow progress your particular passion. It doesn't.
And you don't even have the manners to attribute the source of your copy and pasting. Which lowers the point of engaging with you even more. | {"url":"http://science.slashdot.org/story/12/12/13/0111241/earth-avoids-collisions-with-pair-of-asteroids/informative-comments","timestamp":"2014-04-20T01:38:44Z","content_type":null,"content_length":"85442","record_id":"<urn:uuid:9b3db07a-dd3f-49ee-9e1c-4e9d99b8f137>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Fermat-like equation $c^n=a^{2n}+a^n b^n + b^{2n}$
Is there a solution to the following equation:
$c^n=a^{2n}+a^n b^n + b^{2n}$ where $a,b,c \in \mathbb{N}^*$, and $n$ is an integer $\geq 2$.
The problem is due to Antoine Balan.
Thanks in advance.
Writing the equation as $$ c^n+(ab)^n = (a^n+b^n)^2, $$ one has, via results of Darmon and Merel, and Poonen, that there are no solutions for $n \geq 4$, provided the three terms are coprime. If $n=2$, I think Pythagorean triple stuff shows that there are again no solutions. In the case $n=3$, I guess one could again look at the parametrizations for $X^3+Y^3=Z^2$, say in Cohen's GTM. I'm not sure how hard this case is....
The condition on coprimality might be a problem, but it's too early in the morning for me to be sure!
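A quick computational sanity check (an illustration, not part of the thread): a brute force search for small solutions, using sympy's integer_nthroot to test for exact $n$-th powers. No nontrivial $(a,b,c)$ turns up, consistent with the answers in this thread.

from sympy import integer_nthroot

for n in range(2, 6):
    for a in range(1, 60):
        for b in range(1, a + 1):
            rhs = a**(2*n) + a**n * b**n + b**(2*n)
            c, exact = integer_nthroot(rhs, n)
            if exact:
                print("solution:", n, a, b, c)
print("search finished")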
Yes, it's a nice question. It does seem that there are no solutions in positive integers to A. Balan's equation, and indeed no integer solutions at all other than those with $a=0$, $b=0$, or (when
$n$ is odd) $a+b=0$. As usual the problem is equivalent to finding all rational points on an algebraic curve $B_n$; here $B_n$ is the "superelliptic" curve with affine equation $y^n = x^{2n} + x^n +
1$. We want to show that the only rational points are those with $x=\infty$, $x=0$, and (when $n$ is odd) $x=-1$. Herewith:
1) Verification that $ab$ and $c$ may be assumed coprime; by M.Bennett's clever rewriting and the results he quotes from DMP = Darmon, Merel, and Poonen, this reduces the problem to $n=2$ and $n=3$.
2) $n=2$, worked out in more detail than in my comment to the original question.
3) $n=3$; fortunately a genus-2 quotient $C_3$ of $B_3$ has torsion Jacobian.
4) A bit more on the analogy JSE suggested: Balan is to $\lbrace0,\infty,\rho,\rho^2 \rbrace$ as Fermat is to $\lbrace 0, 1, \infty \rbrace$ (where $\rho = e^{2\pi i/3}$), and why the proof suggested by this analogy seems to barely fail.
5) Where Bennett's approach fits in this picture.
(1) Mike finished his answer with the sentence "The condition on coprimality might be a problem, but it's too early in the morning for me to be sure!"; by now he's probably had the chance to check
this, but I don't see it yet as an edit or a comment, so here goes. To apply DMP to $c^n + (ab)^n = (a^n+b^n)^2$ we need $c$ coprime to $ab$. If $p$ divides $c$ and (say) $a$, then taking the
equation mod $p$ yields $p|b^n$, and then $p|b$, so $p^{2n} | a^{2n} + a^n b^n + b^{2n} = c^n$, whence $p^2 | c$. At this point we may replace $(a,b,c)$ by the equivalent solution $(a/p, b/p, c/p^2)
$. Unless $(a,b,c)=(0,0,0)$, in finitely many steps we reach an equivalent solution with $\gcd(ab,c)=1$ as desired. If $n\geq 4$, it follows from DMP that at least one of $ab$, $a^n+b^n$, and $c$
vanishes, and we're done.
(2) $B_2: y^2 = x^4 + x^2 + 1$ is an elliptic curve. It has rational points at infinity, so we put it in extended Weierstrass form by the usual centuries-old technique. Choose a point $P$ at
infinity, and expand $y$ about $P$ in a Laurent series, truncating before the constant term to get a quadratic polynomial; here it's just $x^2$. Hence $t := y+x^2$ has a double pole at $P$ and
nowhere else on the curve, so it can be used as the abscissa. Writing $y=t-x^2$ in $y^2 = x^4 + x^2 + 1$ gives a quadratic in $x$, namely $(2t+1)x^2 = t^2-1$, which has solutions iff $4(2t+1)(t^2-1)=
u^2$ for some $u$; dividing $t$ by $2$ yields $(t+1)(t-2)(t+2) = u^2$, which is the curve of conductor $48$ with coefficient vector $[0,1,0,-4,-4]$. Thanks to rational $2$-torsion (and trivial Sha[2]
obstruction), Fermat's descent technique, now automated in Cremona's mwrank, suffices to prove the curve has rank zero and no rational points other than the four trivial ones we already knew of.
(3) $B_3: y^3 = x^6 + x^3 + 1$ has genus $4$. There's an obvious quotient curve of genus $1$, namely $E: Y^{\phantom.3} = X^{\phantom.2} + X + 1$, which is already in the usual extended Weierstrass
form (with the roles of $X$ and $Y$ reversed). Unfortunately the obvious points with $Y=1$ are of infinite order; indeed plugging the coefficient vector $[0,0,1,0,-1]$ into mwrank we find that $(X,Y\
phantom.) = (0,1)$ (or equivalently $(-1,1)$) generates the group of rational points. (We could have known in advance that the rank cannot exceed $1$ because its conductor is $3^5 = 243 < 389$...) So
we cannot use this quotient to prove that $B_3$ has no more rational points. Fortunately there is a genus-2 quotient $C_3: Y^{\phantom.3} = X^4 + X^3 + X^2$ that does work. [To get this quotient,
first write $B_3$ as $(x^2 y)^3 = x^{12} + x^9 + x^6$, then take $(X,Y\phantom.) = (x^2 y, x^3)$.] To put this curve in hyperelliptic form, divide through by $X^2$ to get $z^3 X = X^2 + X + 1$ where
$z = Y/X$, then find the discriminant of this quadratic in $X$ to get $z^6 - 2z^3 - 3 = w^2$. The three known points correspond to the Weierstrass point $(z,w) = (-1,0)$ and the two points at
infinity, call them $P_\pm$. The subgroup of the Jacobian $J(C_3)$ represented by degree-zero divisors supported on these three points is isomorphic with ${\bf Z} / 6 {\bf Z}$: twice the Weierstrass
point is equivalent with $P_+ + P_-$, and $3P_+ \sim 3 P_-$ because the difference is the divisor of $z^3 - 1 - w$. [An analogous result holds for all odd $n$, but the next step might not.] It turns
out that this torsion group is all of $J(C_3)({\bf Q})$: I asked magma
_<z> := PolynomialRing(Rationals());
C1 := HyperellipticCurve(z^6 - 2*z^3 - 3);
J := Jacobian(C1);
TorsionSubgroup(J);
RankBound(J);
(adapting one of the examples in http://magma.maths.usyd.edu.au/magma/handbook/text/1375), and found that the Jacobian $J(C_3)$ has $6$ torsion points and rank zero. It follows that our list of
rational points on $C_3$, and thus also on $B_3$, is complete.
[The Jacobian of the genus-$4$ curve $B_3$ is isogenous with $E \times E \times J(C_3)$: in general the Jacobian of a superelliptic curve of the form $y^m = P(x^m)$ is isogenous with the product of
the Jacobians of the $n$ curves $Y^{\phantom.m} = X^k P(X)$ with $k=0,1,2,\ldots,m$, and here $k=0$ and $k=1$ both yield $E$, while $k=2$ yields $C_3$.]
(4) Recall the following geometrical description of the Frey curves associated with a putative point on the Fermat curve $F_p: x^p + y^p = z^p$, with $p>3$ prime. The rational function $(x/z)^p$ on
$F_p$ realizes $F_p$ as a $\mu_p^2$ cover of ${\bf P}^1$, branched only above $\lbrace 0, 1, \infty \rbrace$. Use the modular function $\lambda$ to identify ${\bf P}^1$ with the modular curve ${\rm
X}(2)$, whose three cusps are at $\lambda=0,1,\infty$. The image of a putative Fermat counterexample yields a Frey curve $E: Y^{\phantom.^2} = X (X \pm x^p) (X \mp y^p)$, with the sign chosen to make
the curve semistable even at $2$ (this is where $p>3$ is needed), and thus modular. But the cover ${\rm X}(2p) \rightarrow {\rm X}(2)$ of modular curves has the same ramification points and orders as
our map from $F_p$. Once $E$, and thus the Galois representation on $E[p]$, has been proved modular, it then follows that this mod-$p$ representation comes from a cuspform of level $2$, which is
impossible because fortunately ${\rm X}_0(2)$ has genus zero.
Now the Balan curve $B_p$ is likewise a $\mu_p^2$ cover of ${\bf P}^1$, but with four branch points, at $(a/b)^p = 0$, $\infty$, $\rho$, and $\rho^2$. These don't seem to be the cusps of any modular
curve. But for our purpose it's enough for some subset to be the cusps, and JSE suggested the only possibility: twist $\Gamma(2)$ by ${\bf Q}(\sqrt{-3})$ to put two of its cusps at $\rho$ and $\rho^
2$ and the third at $\infty$ (or equivalently at $0$). Then the Frey curve becomes $$ E': Y^{\phantom.2} = X \bigl(X^2 + (a^p+2b^p) X + (a^{2p}+a^pb^p+b^{2p}) \bigr), $$ with discriminant $-48 a^{2p}
(a^{2p}+a^pb^p+b^{2p})^p = -48 a^{2p} c^p$. So we must deal with reduction at $3$ as well as at $2$. Now the good news is that $E'$ is semistable at $3$: while there are coprime integers $A,B$ such
that $Y^{\phantom.2} = X \bigl(X^2 + (A+2B) X + (A^2+AB+B^2)\bigr)$ has additive reduction at $3$, this can only happen if $A\equiv B \mod 3$, but then $A^2 + AB + B^2 \equiv 3 \bmod 9$ so it cannot
be a $p$-th power. The bad news is that $E'$ is not in general semistable at $2$, and even after applying the available symmetries $(a,b) \leftrightarrow (b,a)$ and $(a,b) \leftrightarrow (-a,-b)$
the power of $2$ in the conductor might be as large as $2^3$ $-$ this happens exactly when $a \equiv b \bmod 4$. In this case, all that we can say is that the Galois representation on $E'[p]$ is isomorphic to the $p$-torsion of an elliptic curve of conductor 24, such as $y^2 = x^3 - x^2 + x$ (it is not entirely coincidental that this curve is a quadratic twist of the one we encountered
above for exponent $2$). At this point I'm stuck: there might be a way to push this argument further or modify it to finish off the proof, but all I can obtain this way is the unsatisfying conclusion
that in any primitive solution $a \equiv b \bmod 4$ and $E'[p]$ has the same Galois structure as the $p$-torsion of $y^2 = x^3 - x^2 + x$. The first condition follows from the computation that in all
other cases $E'[p]$ has conductor dividing $12$ for at least one of the four equivalent choices of $(a,b)$, and then there's no possible cuspform because ${\rm X}_0(12)$ is rational. This condition
is peculiar because the curve $B_n$ has good reduction at 2 (it is the construction of the auxiliary curve $E'$ that introduces problems at this prime). The condition on $E'[p]$ seems rather hard to
(5) Fortunately there's another way, which comes down to what Bennett found: compose the map, call it $t = (a/b)^p: B_p \rightarrow {\bf P}^1$, with the map $s: t \mapsto t + 1/t$, giving the
quotient map ${\bf P}^1 \rightarrow {\bf P}^1$ under the involution $t \leftrightarrow 1/t$ that switches $0$ with $\infty$ and $\rho$ with $\rho^2$. This map has double points at $s = \pm 2$ (images
of the fixed points $t = \pm 1$ of the involution), and takes $t=0,\infty$ to $s=\infty$ and $t = \rho,\rho^2$ to $s = -1$. So now, in place of ${\rm X}(2)$ we need a modular curve with cusps at $s=
-1$ and $s=\infty$ and an elliptic point of order $2$ at $s=2$ or $s=-2$ (or possibly both). The modular curve ${\rm X}_0(2)$, with two cusps and one elliptic point of order $2$, does the trick, and
if we choose to put the elliptic point at $s=-2$ then there's no problem with reduction at $3$ (putting it at $s=+2$ would cause such a problem because $2 \equiv -1 \bmod 3$). This yields the
elliptic curve $E'': Y^2 = X \bigl(X^2 \pm 2(a^p+b^p) X + c^p \bigr)$. That's still not the end of the game: the smallest valuation at $2$ of any quadratic twist of $E''$ is $5$, and the modular
curve ${\rm X}_0(2^5)$ has genus $1$, so there is a modular form of level $2^5$ that must be dealt with. Fortunately this form, unlike the one of level $24$, is CM. Thus the corresponding Galois
representation mod $p$ is well understood, and with some further cleverness and much effort Darmon and Merel were able to dispose of this final case.
I'd like to add another line of proving this (and similar problems); the proof is not yet formally complete but the method as such seems helpful for many such problems, so I show my
reasoning here:
[update] After rereading my own answer I think now, that that line of attack might be more incomplete for this problem than initially thought; that the holes cannot be simply filled with
one or two little more thoughts; so possibly I'll retract the whole approach later. Thanks for the upvotings anyway and apologies for distractions ... [/update]
The original answer went as below:
Let us define the following notations
a) {n,p} the exponent to which a prime p occurs in the primefactorization of some number n
b) the iverson-bracket [n:p] which evaluates to 1 if p divides n and to 0 if not
c) $ \small f(a,b,n)=a^n - b^n $ , the short form for expressions where a,b are thought to be constant and n is seen as varying
d) $ \small \lambda_p $ the smallest k>0 such that p divides $ \small f(a,b,k) $ ,
e) $ \small w_p $ the exponent, to which p occurs in $ \small f(a,b,\lambda_p) $ , formally $ \small w_p=\{ f(a,b,\lambda_p),p \} $
We use these definitions to rewrite the analysis of the problem in terms of the Euler totient theorem and the concept of the order of cyclic subgroups modulo some primes p.
Here the idea is to compare the odd primefactors in the canonical primefactorizations of the lhs and rhs in the conveniently rewritten problem
$$ \small c^n-(ab)^n =^? a^{2n}+b^{2n} = {a^{4n}- b^{4n} \over a^{2n}- b^{2n} } . $$ It will be sufficient to compare the odd primefactors $ \small p, q \in \text{odd primes} $ only; so we
refer to possible exponents of 2 with some anonymous exponents s and t only. Then the lhs is with $ \small q \in \text{odd primes} $
$$ \small f(c,ab,n)=2^s \prod q^{ [n:\lambda_q] (w_q+\{n,q\})} $$ and the rhs is with some exponent t at the primefactor 2 : $$ \small a^{2n}+b^{2n}={ a^{4n}-b^{4n} \over a^{2n}-b^{2n} }={f
(a,b,4n)\over f(a,b,2n)} $$ and $$ \small {f(a,b,4n)\over f(a,b,2n)} = 2^t { \prod p^{ [4n:\lambda_p] (w_p+\{4n,p\})} \over \prod p^{ [2n:\lambda_p] (w_p+\{ 2n , p \} ) } } $$ Here for all
odd primes p we have $ \small \{ 4n , p \} = \{ 2n, p \} = \{ n,p \} $, $$ \small { f(a,b,4n)\over f(a,b,2n)} = 2^t \prod p^{ ([4n:\lambda_p]- [2n:\lambda_p]) (w_p+\{ n, p \} ) } $$
Conclusion: (updated)
In the rhs we get only those primefactors p whose order divides 4n but not 2n and thus - writing $ \small n = 2^m \cdot o $ with $o$ odd - must be exactly $ \small 4 \cdot 2^m r $ where r is any odd divisor of n, while on the lhs we get all primes in the factorization whose order equals some divisor of n. But the sets of primes must be equal to allow a solution for the
original problem. | {"url":"http://mathoverflow.net/questions/75744/fermat-like-equation-cn-a2nan-bn-b2n/75828","timestamp":"2014-04-16T07:59:07Z","content_type":null,"content_length":"80666","record_id":"<urn:uuid:32e5b0e9-f399-4b1e-a38d-3c7f84febb4f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
January 30th 2009, 05:45 AM #1
Oct 2008
Diffy Q
A function y = g(x) is described by some geometric property of its graph. Write a differential equation of the form dy/dx = f(x, y) having the function g as its solution (or as one of its solutions).
"The line tangent to the graph of g at the point (x, y) intersects the x-axis at the point (x/2, 0)."
The equation of the line tangent to the graph of g at the point whose abscissa is $x_0$ is $y = g'(x_0)(x-x_0)+g(x_0)$
This line intersects the x-axis at the abscissa given by the equation $0 = g'(x_0)(x-x_0)+g(x_0)$ or $x = x_0 - \frac{g(x_0)}{g'(x_0)}$
We know that this abscissa is $\frac{x_0}{2}$ therefore for every $x_0$ inside g domain
$\frac{x_0}{2} = x_0 - \frac{g(x_0)}{g'(x_0)}$
$g'(x_0) = \frac{2\: g(x_0)}{x_0}$
g satisfies the differential equation $y' = \frac{2\: y}{x}$
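As a quick check (not part of the original post), sympy confirms both the general solution and the tangent property:

import sympy as sp

x, x0, C = sp.symbols('x x0 C', positive=True)
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(x).diff(x), 2*y(x)/x)))      # Eq(y(x), C1*x**2)

g = C*x**2
tangent = sp.diff(g, x).subs(x, x0)*(x - x0) + g.subs(x, x0)
print(sp.solve(sp.Eq(tangent, 0), x))                # [x0/2], as required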
thank you much
January 30th 2009, 07:24 AM #2
MHF Contributor
Nov 2008
January 30th 2009, 07:45 AM #3
Oct 2008 | {"url":"http://mathhelpforum.com/differential-equations/70811-diffy-q.html","timestamp":"2014-04-20T03:40:31Z","content_type":null,"content_length":"35691","record_id":"<urn:uuid:5fc1b1de-ae90-43ef-9b54-5c499fdcedaa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 26
- ACTA NUMERICA , 2002
"... this paper. More should be said about these constraints in order to define an IEP. First we recall one condition under which two geometric entities intersect transversally. Loosely speaking, we
may assume that the structural constraint and the spectral constraint define, respectively, smooth manifo ..."
Cited by 42 (14 self)
this paper. More should be said about these constraints in order to define an IEP. First we recall one condition under which two geometric entities intersect transversally. Loosely speaking, we may
assume that the structural constraint and the spectral constraint define, respectively, smooth manifolds in the space of matrices of a fixed size. If the sum of the dimensions of these two manifolds
exceeds the dimension of the ambient space, then under some mild conditions one can argue that the two manifolds must intersect and the IEP must have a solution. A more challenging situation is when
the sum of dimensions emerging from both structural and spectral constraints does not add up to the transversal property. In that case, it is much harder to tell whether or not an IEP is solvable.
Secondly we note that in a complicated physical system it is not always possible to know the entire spectrum. On the other hand, especially in structural design, it is often demanded that certain
eigenvectors should also satisfy some specific conditions. The spectral constraints involved in an IEP, therefore, may consist of complete or only partial information on eigenvalues or eigenvectors.
We further observe that in practice it may occur that one of the two constraints in an IEP should be enforced more critically than the other due, say, to the physical realizability. Without the
realizability, the physical system simply cannot be built. There are also situations when one constraint could be more relaxed than the other due, say, to the physical uncertainty. The uncertainty
arises when there is simply no accurate way to measure the spectrum or there is no reasonable means to obtain the entire information. When the two constraints cannot be satisfied simultaneously, the
IEP could be formulat...
- SIAM Rev , 1998
"... Abstract. A collection of inverse eigenvalue problems are identi ed and classi ed according to their characteristics. Current developments in both the theoretic and the algorithmic aspects are
summarized and reviewed in this paper. This exposition also reveals many open questions that deserves furth ..."
Cited by 41 (6 self)
Abstract. A collection of inverse eigenvalue problems are identified and classified according to their characteristics. Current developments in both the theoretic and the algorithmic aspects are summarized and reviewed in this paper. This exposition also reveals many open questions that deserves further study. An extensive bibliography of pertinent literature is attached.
, 2002
"... In this paper a Generalised Singular Value Decomposition (GSVD) based algorithm is proposed for enhancing multi-microphone speech signals degraded by additive coloured noise. This GSVD-based
multi-microphone speech signal... ..."
Cited by 18 (7 self)
In this paper a Generalised Singular Value Decomposition (GSVD) based algorithm is proposed for enhancing multi-microphone speech signals degraded by additive coloured noise. This GSVD-based
multi-microphone speech signal...
, 2004
"... Time-domain equalization is crucial in reducing channel state dimension in maximum likelihood sequence estimation, and inter-carrier and inter-symbol interference in multicarrier systems. A
time-domain equalizer (TEQ) placed in cascade with the channel produces an effective impulse response that is ..."
Cited by 14 (8 self)
Time-domain equalization is crucial in reducing channel state dimension in maximum likelihood sequence estimation, and inter-carrier and inter-symbol interference in multicarrier systems. A
time-domain equalizer (TEQ) placed in cascade with the channel produces an effective impulse response that is shorter than the channel impulse response. This paper analyzes two TEQ design methods
amenable to cost-effective real-time implementation: minimum mean squared error (MMSE) and maximum shortening SNR (MSSNR) methods. We reduce the complexity of computing the matrices in the MSSNR and
MMSE designs by a factor of 140 and a factor of 16 (respectively) relative to existing approaches, without degrading performance. We prove that an infinite length MSSNR TEQ with unit norm TEQ
constraint is symmetric. A symmetric TEQ halves FIR implementation complexity, enables parallel training of the frequency-domain equalizer and TEQ, reduces TEQ training complexity by a factor of 4
and doubles the length of the TEQ that can be designed using fixed-point arithmetic, with only a small loss in bit rate. Simulations are presented for designs with a symmetric TEQ or target impulse
- Math. Comp , 2000
"... Abstract. We exploit the even and odd spectrum of real symmetric Toeplitz matrices for the computation of their extreme eigenvalues, which are obtained as the solutions of spectral, or secular,
equations. We also present a concise convergence analysis for a method to solve these spectral equations, ..."
Cited by 7 (1 self)
Abstract. We exploit the even and odd spectrum of real symmetric Toeplitz matrices for the computation of their extreme eigenvalues, which are obtained as the solutions of spectral, or secular,
equations. We also present a concise convergence analysis for a method to solve these spectral equations, along with an efficient stopping rule, an error analysis, and extensive numerical results. 1.
- in Proc. of the 1999 IEEE International Workshop on Acoustic Echo and Noise Control (IWAENC'99), Pocono , 1999
"... This paper discusses an SVD-based signal enhancement procedure, applied to noise reduction in multi-microphone speech signals. The SVD-based signal enhancement procedure amounts to a specific
optimal filtering problem when the so-called `desired response' signal cannot be observed. The optimal filte ..."
Cited by 6 (6 self)
This paper discusses an SVD-based signal enhancement procedure, applied to noise reduction in multi-microphone speech signals. The SVD-based signal enhancement procedure amounts to a specific optimal
filtering problem when the so-called `desired response' signal cannot be observed. The optimal filter can then be written as a function of the generalized singular vectors and singular values of a
speech and noise data matrix. It is shown that the SNR improvement provided by the SVD-based optimal filtering technique is better than the improvement obtained with standard beamforming techniques.
Moreover most beamforming techniques assume the position of the speech source and the microphone array configuration to be known. Therefore the performance of these techniques is rather sensitive to
deviations from these assumptions, i.e. incorrect estimation of the position of the speech source and uncalibrated microphone arrays. It is shown that the SVD-based optimal filtering technique is
more robu...
- Numerical Algorithms, 25:377 – 385 , 1999
"... Several methods for computing the smallest eigenvalues of a symmetric and positive definite Toeplitz matrix T have been studied in the literature. The most of them share the disadvantage that
they do not reflect symmetry properties of the corresponding eigenvector. In this note we present a Lanczos ..."
Cited by 6 (3 self)
Several methods for computing the smallest eigenvalues of a symmetric and positive definite Toeplitz matrix T have been studied in the literature. The most of them share the disadvantage that they do
not reflect symmetry properties of the corresponding eigenvector. In this note we present a Lanczos method which approximates simultaneously the odd and the even spectrum of T at the same cost as the
classical Lanczos approach. Keywords: Toeplitz matrix, eigenvalue problem, Lanczos method, symmetry AMS-classification: 65F15 1 Introduction Several approaches have been reported in the literature
for computing the smallest eigenvalue of a real symmetric, positive definite Toeplitz matrix. This problem is of considerable interest in signal processing. Given the covariance sequence of the
observed data, Pisarenko [14] suggested a method which determines the sinusoidal frequencies from the eigenvector of the covariance matrix associated with its minimum eigenvalue. Cybenko and Van Loan
[3] pre...
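A quick numerical cross-check of the problem being solved (an illustration only; this is plain dense diagonalization, not the Lanczos scheme of the paper):

import numpy as np
from scipy.linalg import eigvalsh, toeplitz

c = np.array([4.0, 1.0, 0.5, 0.2])   # first column of a hypothetical Toeplitz matrix
T = toeplitz(c)                      # symmetric and diagonally dominant, hence positive definite
print(eigvalsh(T)[0])                # eigvalsh returns eigenvalues in ascending order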
- In Proc. of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA'99), New Paltz , 1999
"... In this report, a compact review is given of a class of SVD-based signal enhancement procedures, which amount to a specific optimal filtering technique for the case where the so-called `desired
response' signal cannot be observed. A number of simple properties (e.g. symmetry properties) of the obtai ..."
Cited by 6 (6 self)
In this report, a compact review is given of a class of SVD-based signal enhancement procedures, which amount to a specific optimal filtering technique for the case where the so-called `desired
response' signal cannot be observed. A number of simple properties (e.g. symmetry properties) of the obtained estimators are derived, which to our knowledge have not been published before and which
are valid for the white noise case as well as for the coloured noise case. Also a standard procedure based on averaging is investigated, leading to serious doubts about the necessity of the averaging
step. When applying this technique to multi-microphone noise reduction, the optimal filter exhibits a kind of beamforming behaviour for highly correlated noise sources. When comparing this technique
to standard beamforming algorithms, its performance is equally good for highly correlated noise sources. For less correlated noise sources -- a situation where standard beamforming typically fails --
it is sho...
- LINEAR ALGEBRA APPL, 2002
"... This paper concerns the construction of a structured low rank matrix that is nearest to a given matrix. The notion of structured low rank approximation arises in various applications, ranging
from signal enhancement to protein folding to computer algebra, where the empirical data collected in a matr ..."
Cited by 6 (1 self)
This paper concerns the construction of a structured low rank matrix that is nearest to a given matrix. The notion of structured low rank approximation arises in various applications, ranging from
signal enhancement to protein folding to computer algebra, where the empirical data collected in a matrix do not maintain either the specified structure or the desirable rank as is expected in the
original system. The task to retrieve useful information while maintaining the underlying physical feasibility often necessitates the search for a good structured lower rank approximation of the data
matrix. This paper addresses some of the theoretical and numerical issues involved in the problem. Two procedures for constructing the nearest structured low rank matrix are proposed. The procedures
are flexible enough that they can be applied to any lower rank, any linear structure, and any matrix norm in the measurement of nearness. The techniques can also be easily implemented by utilizing
available optimization packages. The special case of symmetric Toeplitz structure using the Frobenius matrix norm is used to exemplify the ideas throughout the discussion. The concept, rather than
the implementation details, is the main emphasis of the paper.
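The paper proposes its own optimization-based procedures; for the symmetric Toeplitz / Frobenius special case it uses as a running example, a simple alternating-projection (Cadzow-style) baseline, sketched below, conveys the idea of trading structure against rank:

import numpy as np

def toeplitz_project(A):
    # Nearest Toeplitz matrix in the Frobenius norm: average each diagonal.
    n = A.shape[0]
    B = np.zeros_like(A)
    for k in range(-n + 1, n):
        idx = np.arange(max(0, -k), min(n, n - k))
        B[idx, idx + k] = np.diagonal(A, k).mean()
    return B

def structured_low_rank(A, rank, iters=100):
    # Alternate between the closest rank-<=rank matrix (SVD truncation)
    # and the closest Toeplitz matrix; not the paper's method, just a
    # well-known baseline for the same problem.
    X = A.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = toeplitz_project(X)
    return X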
- IEEE Trans. on Signal Processing, 2004
"... We show that maximum shortening SNR TEQs are often nearly symmetric. Constraining the TEQ to be symmetric causes only a 3% loss in bit rate (averaged over 8 standard ADSL channels). Symmetric
TEQs have greatly reduced design and implementation complexity. We also show that for infinite length TEQs, ..."
Cited by 5 (5 self)
We show that maximum shortening SNR TEQs are often nearly symmetric. Constraining the TEQ to be symmetric causes only a 3% loss in bit rate (averaged over 8 standard ADSL channels). Symmetric TEQs
have greatly reduced design and implementation complexity. We also show that for infinite length TEQs, minimum mean squared error target impulse responses have all zeros on the unit circle, which can
lead to poor bit rate performance. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=925889","timestamp":"2014-04-20T08:01:12Z","content_type":null,"content_length":"39962","record_id":"<urn:uuid:599360f2-9608-4391-874e-bd9c2219d1b7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find all solutions in the interval $[0, 2\pi)$: $(\sin x)(\cos x) = 0$
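For reference, since a product vanishes exactly when one of its factors does, $(\sin x)(\cos x) = 0$ holds iff $\sin x = 0$ or $\cos x = 0$, giving the solutions $x = 0,\ \pi/2,\ \pi,\ 3\pi/2$ in $[0, 2\pi)$.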
| {"url":"http://openstudy.com/updates/52d55a68e4b0274b88c4cf85","timestamp":"2014-04-18T00:17:26Z","content_type":null,"content_length":"74105","record_id":"<urn:uuid:feed57d4-4f33-41a0-a71a-e34fda9820c0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Glendale Heights Math Tutor
Find a Glendale Heights Math Tutor
...I will be available for my students at any time by any means of communication. I have a PhD in physics, and I have been teaching it at the college level for 24 years. I have also taught at the middle and high school levels, tutoring my sons and the children of friends.
8 Subjects: including algebra 1, algebra 2, physics, prealgebra
...The primary challenge in philosophy is ensuring that all your arguments are indisputably logical according to the mathematical laws of logic, so that, while remaining meaningfully testable, no professor can falsify your statements in the Popperian sense. Then, in an ideal world of forms, you have to express ...
57 Subjects: including trigonometry, differential equations, linear algebra, SAT math
...I have worked on both short and long term projects with physicians, engineers, undergraduates, and graduate students. If you are in need of assistance with the mathematical/computational
aspects of your project, feel free to contact me. I enjoy consulting on math projects tremendously and would...
18 Subjects: including discrete math, differential equations, linear algebra, MATLAB
...I have also taught probability in lower division courses such as College Math, College Algebra, and Problem Solving. I have also written a chapter on probability in several books. I can
definitely help you learn this subject.
25 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am a math-fanatic and am able to effectively help people understand how to apply their knowledge. In addition, I can show students how to improve their study habits through the use of
flashcards, mnemonic devices, etc. I can't wait to start working with your student (note: I am very open to n...
29 Subjects: including calculus, chemistry, ACT Math, SAT math | {"url":"http://www.purplemath.com/Glendale_Heights_Math_tutors.php","timestamp":"2014-04-16T19:23:11Z","content_type":null,"content_length":"23997","record_id":"<urn:uuid:5f746989-7f38-4fdb-b723-9efac8711aea>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
dihedral angle
The angle defined by two planes or two faces of a polyhedron meeting at an edge; for example, all the dihedral angles of a cube are 90°. An almost-spherical polyhedron (with many faces) has dihedral angles close to 180°, since adjacent faces are nearly coplanar.
A dihedral angle is described by a point on one of the planes or faces, the line of intersection (the edge), and a point on the other plane. If a point P on one of the planes is positioned so that the line through it perpendicular to the edge E intersects E at the same point as a line drawn perpendicular to E through a point P' on the other plane, then the angle PEP' is the plane angle of the dihedral angle. All plane angles of a dihedral angle are equal.
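Computationally, the angle can be recovered from four points with the standard construction (two vectors spanning each half-plane about the shared edge); a small numpy sketch, with a cube example checking the 90° case above:

import numpy as np

def dihedral_angle(p1, p2, p3, p4):
    # Dihedral angle about the edge p2-p3, between the half-plane through p1
    # and the half-plane through p4; returned unsigned, in degrees.
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)   # normals of the two planes
    m = np.cross(n1, b2 / np.linalg.norm(b2))
    return abs(np.degrees(np.arctan2(m @ n2, n1 @ n2)))

# Adjacent faces of a unit cube meeting along the z-axis edge:
pts = [np.array(v, float) for v in [(1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 1, 0)]]
print(dihedral_angle(*pts))   # 90.0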
| {"url":"http://www.daviddarling.info/encyclopedia/D/dihedral_angle.html","timestamp":"2014-04-17T10:16:07Z","content_type":null,"content_length":"6059","record_id":"<urn:uuid:94ac22c4-af1f-4d48-b163-89201ad7708c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2009 [00733]
Re: Re: Simplifying and Rearranging Expressions
• To: mathgroup at smc.vnet.net
• Subject: [mg95979] Re: [mg95956] Re: Simplifying and Rearranging Expressions
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sat, 31 Jan 2009 01:14:48 -0500 (EST)
• References: <gls1u8$hjl$1@smc.vnet.net> <200901301047.FAA06653@smc.vnet.net>
To my amazement, I have found something here that I agree with. I do agree
that it is largely a pointless waste of time to use computer algebra,
which relies on pretty complex algorithms (like Groebner bases), to
make expressions **look** the way you want them to look, for reasons that
have no particular relation to these algorithms. It's a waste of
computing resources, of the effort of mathematicians and programmers, and
most of all of the user's time; the user could much more easily achieve this
effect in other ways, one of which is TeX (even better, in my opinion,
was David Bailey's clever idea to use color to manipulate Mathematica
expressions almost as one does by hand - unfortunately this appears to
have been abandoned due to lack of interest).
I don't think however there is any chance whatever of WRI
incorporating TeX into Mathematica, for two reasons. One is that it
would be going against their principal idea of having all Mathematica
expressions fully controllable by means of the Mathematica programming
language. Clearly this would not be true of TeX strings, if they were
meant to be interpreted for display. Secondly, because other CAS
systems have essentially tried to do this sort of thing with very
little to show for it in terms of market success. You seem to be
completely unaware of how tiny the TeX users community is compared
with the community of users of programs like Mathematica.
This reminds me also that a lot of suggestions which you have made
about the way Mathematica ought to be (simple, cheap, computation
engine, no fancy staff) has already been tried by WRI and clearly
failed. It was called something like The Computation Center, and
limited version of Mathematica, that Wolfram once sold for a fraction
of the price of the full thing. The only problem was that hardly
anyone bought it (I suspect you did not either).
Not surprisingly WRI is likely to be pretty skeptical of bright ideas
that remind them of things that they or others have already tired and
have been shown not to work.
Andrzej Kozlowski
On 30 Jan 2009, at 11:47, AES wrote:
> In article <gls1u8$hjl$1 at smc.vnet.net>,
> "David Park" <djmpark at comcast.net> wrote:
>> I want to start a thread on this because I believe many MathGroup
>> people
>> will have some useful things to say.
> I'll bite, because I've done a bit of thinking on this.
>> A common task for Mathematica users is to obtain an expression that
>> is in a
>> particular form. For students and teachers this may often be a
>> textbook
>> form, or there may be other reasons that a particular form is
>> desired.
> Just to add a bit of specificity to this, let's consider expressions
> that arise in optics and e-m theory, which generally involve a set of
> physical quantities (velocity of light, propagation constant,
> permeabilities, index of refraction, frequency, wavelength,
> characteristics impedance, critical angle of refraction, and multiple
> others) that are conventionally written as
> c, k (or beta), mu, epsilon, n, f (or omega), lambda,
> eta or z_0, thetaCrit, and so on
> **each of which is directly linked or coupled to (that is, can be
> calculated from) several others in the same set**.
> [You have to put up with a side story at this point. The
> distinguished
> physicist W. K. H. Panofsky, who just recently died, early in his
> career
> co-authored with Melba Phillips a small but excellent text on
> classical
> e-m, colloquially known as "Panofsky and Phillips", from which I and
> many others studied. This was long enough ago that cgs and mks units
> were still fighting it out in the physics community.]
> [P and P beautifully sidestepped this issue by, as they noted in the
> Preface to their book, writing every equation in their book using an
> appropriate (but generally different) subset of the above symbols,
> such
> that every equation was valid in mks units as it stood, **and could be
> instantly converted to be exactly valid in cgs units as well, simply
> by
> replacing any factor of epsilon that appeared in any of these
> equations
> by 1/(4 pi) **.]
>> It might be thought that this should be an easy task but quite
>> often it can
>> be a very difficult task, even involving mathematical derivation
>> and many of
>> the capabilities of Mathematica. Not obtaining a specific form may
>> be a
>> matter of not knowing how to solve the problem in the first place.
> It may not be just a difficult task; in fact, **it may be an
> impossible
> task** -- not to mention **an unnecessary and undesirable task**.
> 1) As already noted above, you may want to write expressions that
> contain some subset of a linked set of variables in different ways at
> different points in an exposition, because these different ways are
> conventional in the field, and/or make the physical meaning clearer.
> For example, you may want to write the space-time variation of a
> phasor
> wave amplitude as Exp[ I k z - I omega t] because that's neat, simple,
> and conventional.
> But then, in discussing a waveguide mode where a factor k d (d =
> waveguide width) appears, you may want to write that factor instead in
> the form 2 pi d/lambda to emphasize that it's the width in wavelengths
> that's important.
> But if at some point in your notebook you're going to insert any of the
> dependences within this set -- e.g., k := 2 pi / lambda -- then you're
> stuck with this from then on.
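For contrast, the lock-in described here is avoidable when the dependence is applied as a local rewrite rather than a global definition; a sympy analogue, with hypothetical symbol names:

import sympy as sp

k, lam, d = sp.symbols('k lambda d', positive=True)
expr = sp.exp(sp.I * k * d)                  # phasor written in terms of k

# Substitute k = 2*pi/lambda only in this one copy; 'expr' itself is
# untouched, so both representations stay available later on.
in_wavelengths = expr.subs(k, 2*sp.pi/lam)   # exp(2*I*pi*d/lambda)

Mathematica's replacement rules (expr /. k -> 2 Pi/lambda) are the local analogue; the complaint here is specifically about global assignments like k := 2 Pi/lambda.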
> 2) A second point: My experience has been that useful identities that
> often arise in analyses -- for example, with suitable qualifications the
> infinite integral of Exp[-a x^2 + b x] == Sqrt[Pi/a] Exp[b^2/(4 a)] --
> sometimes just won't fall out (i.e., won't be explicitly evaluated by
> Mathematica) if a and b are actually more complicated expressions.
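As a side check, this identity does evaluate symbolically once the positivity assumption is supplied; a sympy sketch, with the expected output shown as a comment:

import sympy as sp

a, b, x = sp.symbols('a b x', positive=True)
print(sp.integrate(sp.exp(-a*x**2 + b*x), (x, -sp.oo, sp.oo)))
# expected: sqrt(pi)*exp(b**2/(4*a))/sqrt(a)

Whether any given CAS still evaluates it when a and b are replaced by complicated expressions is exactly the point being made above.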
> 3) More generally, one very often wants to do the eventual numerical
> calculations using only one or another form of dimensionless or
> normalized variables, because that's numerically efficient as well as
> physically and practically useful in expressing the results.
> And, if at some point in an exposition you're going to convert your
> analytical and expositional formulas into dimensionless formulas for
> numerical calculation purposes -- **at that point you really don't
> care
> how Mathematica arranges the resulting expression**.
>> Nevertheless, even simple rearrangement can be difficult. I
>> sometimes think
>> of it as doing surgery on expressions. I believe it is generally
>> desirable
>> to use Mathematica to rearrange an expression and not retype the
>> expression.
>> Retyping is too error prone.
> Last sentence is true; immediately preceding sentence may be true as
> phrased -- but is a mistaken belief. Comments on this below.
>> Simplify and FullSimplify are amazingly useful but it is difficult to
>> control them and obtain a precise result. One will often have to do
>> additional piecemeal operations. One downside of Simplify and
>> FullSimplify
>> is that they can return different forms with different Mathematica
>> versions.
>> Then any additional operations in an old notebook may no longer
>> work. It
>> would be nice if there was a method of using these commands that
>> would be
>> more version independent.
>> Various routines such as Together, Apart, Factor, TrigReduce,
>> TrigFactor,
>> TrigExpand, TrigToExp, GroebnerBasis etc., can be useful in getting a
>> specific form. MapAt is very useful for doing surgery on specific
>> parts of
>> an expression. Mathematica often gets two factors that have extra
>> minus
>> signs. You can correct that by mapping Minus onto the two factors.
>> For
>> integrals in the wrong form you could cheat by trying to find the
>> constant
>> by which they differ by subtracting and simplifying, and then use
>> that in
>> the derivation.
> Let's say it like it is: It's not just "difficult" for ordinary users
> to use and control many of these advanced tools: It's basically
> **impossible** for the average user to learn what some of these tools
> do, because they're so complex and the results can depend so
> critically
> on what you put into them; all you end up doing is thrashing around
> endlessly, trying to get them to produce the results you want.
> The more powerful they get, the less they're worth trying to learn.
>> It is very useful to get Mathematica generated expressions into the
>> form
>> that one wants. I believe that this is probably a sticking point
>> with many
>> users. In general it is not a trivial topic. Others may have some
>> good
>> general ideas that I don't know about.
> My bottom lines are instead:
> 1) Accept that "Retyping is...error prone" -- and more generally that
> "To err is human..." -- and to the extent that you have to do any form
> of retyping, do a _lot_ of checking, rechecking, testing with simple
> cases, and looking to see that results are physically meaningful.
> 2) Nonetheless, in general, "It is GENERALLY NOT very useful to get
> Mathematica generated expressions into the form that one wants" -- at
> least, not very often, and not if it involves any significant amount
> of
> effort. It's wasted energy, and can add its own errors, or divert one
> from seeing one's own errors.
> 3) Instead, if what you're doing is a complex analysis and/or
> exposition, tackle the analysis portion initially with paper, pencil,
> and a good soft eraser, the way God intended analysis to be done.
> 4) When and if certain calculations (series expansions, etc.) get
> messy, run separate Mathematica symbolic calculations in auxiliary
> notebooks to carry them out.
> 5) When it comes time for exposition, do the exposition using a tool
> that's designed for exposition (e.g., TeX), while doing the numerical
> calculations and graphing using a tool that's good at those things --
> and while doing this repeat item 1) multiple times.
>> Someday someone may even write a good tutorial on it.
> How about instead someone **imbedding real TeX in Mathematica**, as
> part
> of Mathematica's basic capabilities?
> That is:
> * TeX is (I believe) totally open source, free, highly stable, and
> widely known and studied -- and it's full source code is very compact.
> * So how about building the TeX source code into Mathematica's
> already
> immense repertoire of rules and stuff, and allowing one to include at
> any point in a "text portion" (I.e., a "non-evaluation portion") of a
> Mathematica notebook cell the syntax
> TeX[ ---any valid TeX syntax---]
> such as TeX[$\alpha = \beta / \gamma^2$] to get that bit of inline
> math
> into a Text or Header cell, or TeX[$$\alpha = \left( \beta \over
> \gamma^2 \right)$$] to insert a display equation,
> and just having Mathematica display the typeset box produced by TeX
> using that syntax into the Mathematica notebook at that point, with
> the
> stuff inside the [ ] brackets having no other evaluational function or
> effect in Mathematica itself, except to be displayed?
> Is there some reason this would be conceptually impossible? Would
> it be
> that difficult to accomplish? Could it at least be implemented with
> some
> reasonable subset of TeX syntax and capabilities?
> If your goal is to have Mathematica notebooks serve simultaneously as
> "exposition documents" and "calculation performing documents", might
> this be a lot easier than endless fighting with option-laden and
> temporally unstable Mathematica expressions like "Together, Apart,
> Factor, TrigReduce, TrigFactor, TrigExpand, TrigToExp, GroebnerBasis"
> and all their even more arcane extensions?
| {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jan/msg00733.html","timestamp":"2014-04-18T05:33:38Z","content_type":null,"content_length":"38268","record_id":"<urn:uuid:6a50b42f-3e15-4785-9e01-a6a50b42f-3e15>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: evaluable predicates, general definition
From: Axel Polleres <axel.polleres@deri.org> Date: Thu, 08 Nov 2007 22:00:12 +0000 Message-ID: <473386EC.30705@deri.org> To: Michael Kifer <kifer@cs.sunysb.edu>, "Public-Rif-Wg (E-mail)" <
Michael Kifer wrote:
> Model theory of builtin predicates is not a problem. Modes (binding
> patterns) are extra-logical. We have to decide what to do about them in terms
> of our recommendation (e.g., issue an error and abort).
Do you think the definition of binding patterns below works?
BTW: One thing which is non-standard in the Eiter et al. definition
is that the extension of a predicate can be an input.
> Builtin functions present a bigger challenge. They can also have fixed
> interpretation as functions, but builtin functions are partial, so they
> require special treatment in the model theory, and I am not sure if this
> complication is worth the trouble.
Would an extra "error" constant value solve that problem?
> --michael
>> Evaluable predicates:
>> The most general definition of external predicates (built-ins) I know
>> of (in an attempt to write down the definition of Eiter et al. [1] in a
>> RIF suitable way):
>> An evaluable predicate &pred(X_1,...,X_n) is assigned one or more
>> binding patterns, where a binding pattern is a vector in {in,out}^n.
>> Intuitively, an evaluable atom provides a way of deciding the truth
>> value of an output tuple depending on the extension of a set of input
>> predicates and terms. Note that this means that evaluable predicates,
>> unlike usual definitions of built-ins in logic programming, can take
>> not only constant parameters but also (extensions of) predicates as input.
>> Inputs can be not only terms but also predicate names (in which case
>> the *extension* of the respective predicate is the input). External
>> predicates have a fixed interpretation assigned. The distinction
>> between input and output terms is made in order to guarantee that
>> whenever all input values of one of the given binding patterns are bound
>> to concrete values, the fixed interpretation only allows a finite number
>> of bindings for the output values, which can be computed by an external
>> evaluation oracle.
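A toy rendering of this contract (all names hypothetical): a binding pattern is a tuple over {'in', 'out'}; once every 'in' position is bound to a concrete value, the oracle must return finitely many admissible tuples, and an unsupported pattern is an error, as discussed above.

def sum_oracle(pattern, args):
    # &sum(X, Y, Z), meaning X + Y = Z, with two supported binding patterns.
    if pattern == ('in', 'in', 'out'):        # X, Y bound: compute Z
        x, y, _ = args
        return [(x, y, x + y)]
    if pattern == ('in', 'out', 'in'):        # X, Z bound: solve for Y
        x, _, z = args
        return [(x, z - x, z)]
    raise ValueError("unsupported binding pattern")   # "issue an error and abort"

print(sum_oracle(('in', 'in', 'out'), (2, 3, None)))  # [(2, 3, 5)]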
>> 1. T. Eiter, G. Ianni, R. Schindlauer, H. Tompits. A Uniform Integration
>> of Higher-Order Reasoning and External Evaluations in Answer Set
>> Programming. In International Joint Conference on Artificial Intelligence
>> (IJCAI) 2005, pp. 90–96, Edinburgh, UK, Aug. 2005.
>> --
>> Dr. Axel Polleres
>> email: axel@polleres.net url: http://www.polleres.net/
Dr. Axel Polleres
email: axel@polleres.net url: http://www.polleres.net/
Received on Thursday, 8 November 2007 22:00:29 GMT
| {"url":"http://lists.w3.org/Archives/Public/public-rif-wg/2007Nov/0043.html","timestamp":"2014-04-19T15:27:13Z","content_type":null,"content_length":"10596","record_id":"<urn:uuid:2d0cb109-e0c6-48fc-9b1f-cea1e10ae705>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration by Parts
November 8th 2008, 01:13 PM
Integration by Parts
So my professor was going over integration by parts using the tabular method. He used the problem:
$\int (3t+9)^3 (2t+e^{2t})\, dt$
Now, to find the derivative, I believe he used the chain rule, perhaps? Only thing is, I am not certain how I would use the chain rule on this one? Somehow, I do recall him showing on the outside
he obtained
3x3(something) for the first derivative
3x2x9(something) for the second derivative
But I do not know how.
Any help would be appreciated!
November 8th 2008, 01:17 PM
The derivative of which? Which did you select to be the "u" term?
November 8th 2008, 01:22 PM
I'm almost positive he set (3t+9)^3 = u
November 8th 2008, 01:27 PM
November 8th 2008, 01:35 PM
That was it!
Alright, my question is, how did you get the 9, the 54, and eventually, the 162?
November 8th 2008, 01:37 PM
November 8th 2008, 01:37 PM
This is all correct. You want $\frac{d}{dt}(3t+9)^3 = 3(3t+9)^2 \cdot 3 = 9(3t+9)^2$ by the chain rule.
Do this in a similar manner until you get down to 162. Do you understand about using the tabular method and integrating $2t+e^{2t}$?
November 8th 2008, 01:37 PM
Oh! Okay! I get it now. I was doing it wrong at first and wasn't getting anything near those. Thank you!
November 8th 2008, 01:39 PM
November 8th 2008, 01:50 PM
Ok, what helped me learn it was making a chart with one column labeled D (for taking the derivative) and the other I (for integrating).
So, I'll start you out:
D: $(3t+9)^3$, $9(3t+9)^2$, $54(3t+9)$, $162$, $0$
I: $2t+e^{2t}$, $t^2+\frac{1}{2}e^{2t}$, $\frac{t^3}{3}+\frac{1}{4}e^{2t}$, ...
You understand that once you get all the terms, you start by multiplying $(3t+9)^3$ by $t^2+\frac{1}{2}e^{2t}$?
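For anyone who wants to check the final answer from the table, a quick sympy verification of the thread's integral (differentiating the antiderivative must give back the integrand):

import sympy as sp

t = sp.symbols('t')
integrand = (3*t + 9)**3 * (2*t + sp.exp(2*t))
F = sp.integrate(integrand, t)
print(sp.simplify(sp.diff(F, t) - integrand))   # 0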
November 8th 2008, 01:51 PM
Yeah, the 2 column thing is what a friend taught me to do and it definitely helps a lot. | {"url":"http://mathhelpforum.com/calculus/58391-integration-part-print.html","timestamp":"2014-04-18T01:37:28Z","content_type":null,"content_length":"14945","record_id":"<urn:uuid:0234bb47-1806-442b-aaab-d81195ba87e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |