Homework Help
Posted by Shane on Thursday, October 14, 2010 at 2:21pm.
I have a series of questions that I did. They lead up to the last question I can't solve. Could you check my math and help me with the last question? Thanks!
a) Let A be the point (2,3). Compute the distance from the origin O to A.
answer: a^2 + b^2 = c^2
3^2 + 2^2 = c^2, so c^2 = 13 and the distance OA = sqrt(13)
b) find the equation of circle C passing through point A
x^2 + y^2 = 13 -> since the center is (0,0), right?
c) find the equation of line D tangent to the circle C at point A
since the equation for line OA is y=3/2 x + 0 , I can use the negative inverse of the slope to get the slope of the tangent, right?
so I used y=-2/3 x +b and input the coordinates (2,3) to get b
3 = (-2/3)(2) + b
b = 4 , therefore the equation of the tangent is y=-2/3 x + 4 right?
d) Line D meets Ox at point B. Find the coordinates of B.
This is where I'm a little confused. Does "Ox" mean the x-axis? That's what I went on so I just used the previous line equation and set y to zero:
0 = -2/3 x + 4
x = 6, therefore line D meets Ox when x = 6, at coordinates (6,0)
e) Compute the distance AB.
A(2,3), B(6,0): I used the Pythagorean theorem and got a distance of 5
f) find the equation of the circle C' with center B and passing through A
since the circle equation is (x-h)^2 + (y-k)^2 = R^2 I just input everything I knew so far and got: (x-6)^2 + y^2 = 25
is all that correct? I know it's a lot but I appreciate the help!
The last question is:
g) find the coordinates of the intersection points of C and C'
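For part g), subtracting the equation of C from the equation of C' eliminates the squared terms: (x-6)^2 - x^2 = 25 - 13 gives -12x + 36 = 12, so x = 2 and then y = ±3. The following Python sketch (mine, not from the thread; the function name is invented) checks this numerically using the standard two-circle intersection formula:

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles given their centers and radii."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    # distance along the line of centers from c1 to the common chord
    a = (d*d + r1*r1 - r2*r2) / (2*d)
    h = math.sqrt(r1*r1 - a*a)          # half the chord length
    mx, my = x1 + a*(x2 - x1)/d, y1 + a*(y2 - y1)/d
    return [(mx + h*(y2 - y1)/d, my - h*(x2 - x1)/d),
            (mx - h*(y2 - y1)/d, my + h*(x2 - x1)/d)]

# C has center (0,0) and radius sqrt(13); C' has center B(6,0) and radius 5
print(circle_intersections((0, 0), math.sqrt(13), (6, 0), 5))  # (2,-3) and (2,3)
```

Both intersection points lie on each circle, and A(2,3) itself is one of them, as it must be since both circles pass through A.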
Waverley Statistics Tutor
Find a Waverley Statistics Tutor
...I'm proficient with and tutor all of Excel's functionality including charts, pivot tables, complex formulas, array functions, and lookup functions. I don't tutor VBA programming or Macros.

Many students struggle with geometric proofs because the skills involved are unique to geometry and require a different level of conceptual thinking than other high school math courses.
23 Subjects: including statistics, chemistry, calculus, writing
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including statistics, English, chemistry, GED
...I explain the material so the student can learn through understanding. No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at
Rutgers University for 10 years.
24 Subjects: including statistics, chemistry, calculus, physics
...In business, I have developed a great career in crowdfunding, and guide others in developing their careers in it as well. I have successfully guided two students, my own children, to successful
entrance into a tier one college (with a full scholarship) and a tier 2 college with a learning disabl...
90 Subjects: including statistics, English, reading, writing
...I am familiar with a few Java IDEs as well, so I am able to tutor from a versatile standpoint. I received excellent scores in all areas on my first and only attempt at the SAT. I have excellent
reading and communication skills, and a background in Latin to help with vocabulary.
38 Subjects: including statistics, chemistry, English, reading
st: Interrupted time series with variable dispersion
From: "Dupont, William" <william.dupont@Vanderbilt.Edu>
To: <statalist@hsphsun2.harvard.edu>, "Arbogast, Patrick" <patrick.arbogast@Vanderbilt.Edu>, "LaFleur, Bonnie" <bonnie.lafleur@Vanderbilt.Edu>
Subject: st: Interrupted time series with variable dispersion
Date: Mon, 23 Dec 2002 13:38:15 -0600
I am doing an interrupted time-series analysis. A simplified
description of my model is as follows:
An intervention occurs on a specific date. The data fits two simple
linear models of response vs. date both before and after the
intervention, (with the slope and intercept coefficients before the
intervention being different from these coefficients after the
intervention). A first order autoregressive-moving average
(arima(1,0,1)) model fits the data quite well.
My questions have to do with dispersion. There is an abrupt, large and
persistent drop in the response variable that starts at the time of the
intervention. The dispersion after the intervention is substantially
less than in the baseline interval. My colleagues are interested in
assessing both the drop in the response variable as well as the
reduction in dispersion. They are used to using the coefficient of
variation to assess such changes.
Simple coefficients of variation are not ideal in this example because
there is some systematic variation in response with time in both the
baseline and post intervention intervals. What I have done so far is
the following:
1. Fit separate time series models to the baseline and post
intervention data.
2. Calculate the mean squared error (MSE) for both models using the
structural option of the predict command.
3. Divide the square root of the MSE by the mean response in either the
baseline or post intervention period. This is a coefficient of
variation like statistic that is adjusted for temporal trends within the
interval of analysis (i.e. trends within either the baseline or post
intervention interval). Doing this gives a 30% reduction in my
"adjusted" coefficient of variation.
4. The next issue is whether this reduction is statistically
significant. I have calculated res2, the squared residual for each day
divided by the squared mean for the interval (either baseline or post
intervention). I then entered res2 into a separate arima model. The
outcome shows a significant drop in res2, which I would like to
interpret as evidence that there is a significant drop in my "adjusted"
coefficient of variation after the intervention.
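As a rough illustration of steps 1-3 (not the original Stata code; this Python sketch substitutes a plain OLS time trend for the arima(1,0,1) fit with structural residuals, and all variable names and the simulated data are invented):

```python
import random
import statistics as stats

def adjusted_cv(y):
    """Root-mean-square residual from a fitted linear time trend, divided
    by the mean level: the 'adjusted coefficient of variation' of steps 1-3,
    with an OLS trend standing in for the arima(1,0,1) fit (a simplification)."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = stats.mean(t), stats.mean(y)
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    intercept = ybar - slope * tbar
    resid = [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]
    mse = sum(r * r for r in resid) / n
    return mse ** 0.5 / ybar

random.seed(0)
# simulated series: a noisy baseline, then a lower and calmer post period
baseline = [50 + 0.10 * t + random.gauss(0, 5.0) for t in range(100)]
post     = [30 + 0.05 * t + random.gauss(0, 1.5) for t in range(100, 200)]
print(adjusted_cv(baseline), adjusted_cv(post))  # post period should be smaller
```

In this simulated setting the post-intervention adjusted CV comes out well below the baseline value, mirroring the reduction described above.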
My fundamental question is does the preceding make sense and is there a
better way of doing things? My specific questions are
1. Is there a better way of fitting my original model? My original
model does a good job at dealing with the temporal correlation of my
response variable and produces an expected response curve that fits the
data well. However, it also assumes a constant white noise error term
that is clearly not correct. I could, of course, transform the data to
obtain more homoscedastic errors but I am concerned that this would
result in a less satisfactory estimate of the expected response. Is
there a way that I can allow for separate residual errors before and
after the intervention without giving up a linear model of the response
as a function of time?
2. A problem with my approach to assessing the significance of the drop
in adjusted coefficient of variation is that my res2 variable is both
highly skewed and temporally correlated. If I use an arima model to
deal with the temporal correlation, I am assuming a white noise term
that is normally distributed. This assumption is clearly incorrect.
However, I do not know of a nonparametric method that can account for
the autoregressive-moving average nature of my data. What do you
recommend? (This problem is made more acute by the fact that my P value
comparing adjusted coefficients of variation is only 0.03, and its
statistical significance might easily vanish if a more appropriate model
was used.)
3. Can anyone suggest references that might be helpful? In particular,
can anyone tell me of a reference for calculating a coefficient of
variation adjusted for within-group systematic trends?
I apologize for writing such a long query. I would be most grateful for
any insights on how to analyze this sort of problem.
Bill Dupont
Posted by Garcia on Thursday, November 1, 2012 at 9:05pm.
Enrico Fermi was a famous physicist who liked to pose what are now known as Fermi problems in which several assumptions are made in order to make a seemingly impossible estimate. One example of a
Fermi problem is "Caesar's last breath" which estimates that you, right now, are breathing some of the molecules exhaled by Julius Caesar just before he died.
1. The gas molecules from Caesar's last breath are now evenly dispersed in the atmosphere.
2. The atmosphere is 50 km thick, has an average temperature of 15 °C, and an average pressure of 0.20 atm.
3. The radius of the Earth is about 6400 km.
4. The volume of a single human breath is roughly 500 mL.
Perform the following calculations, reporting all answers to two significant figures.
Calculate the total volume of the atmosphere.
Calculate the total number of gas molecules in the atmosphere.
Calculate the number of gas molecules in Caesar's last breath (37°C and 1.0 atm).
What fraction of all air molecules came from Caesar's last breath?
About how many molecules from Caesar's last breath do you inhale each time you breathe?
• Chemistry - Ajai, Thursday, November 1, 2012 at 9:09pm
Your question is not correct. I cannot understand your question.
• Chemistry - Garcia, Thursday, November 1, 2012 at 9:15pm
It is not one question. It is a series of questions. First calculate the total volume of the atmosphere. Second calculate the total number of gas molecules in the atmosphere. Third calculate the
number of gas molecules in Caesar's last breath. Fourth calculate what fraction of all air molecules came from Caesar's last breath. Lastly, calculate how many molecules from Caesar's last breath
you inhale each time you breathe.
• Chemistry - DrBob222, Friday, November 2, 2012 at 12:21am
Perhaps I can get you started.
Wouldn't the volume of the atmosphere+earth = (4/3)*pi*r^3. r would be radius of earth + thickness of atmosphere for total volume. Then determine volume of earth and subtract to obtain volume of
the atmosphere.
• Chemistry - Garcia, Friday, November 2, 2012 at 8:10pm
I keep getting the answer to the volume of the atmosphere wrong. Are my calculations correct? Or what is my problem?
I keep getting the answer to the volume of the atmosphere wrong. Are my calculations correct? Or what is my problem?
I calculate that the total volume of Earth + thickness of atmosphere is:
(4/3)*pi*(6450000 m)^3 = 1.124*10^21 m^3
I then calculate that the volume of the Earth is:
(4/3)*pi*(6400000 m)^3 = 1.098*10^21 m^3
Finally I subtract the volume of Earth from the total volume, which gives a volume of the atmosphere of 2.6*10^19 m^3.
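For reference, the whole chain of estimates can be carried out in a short script (my own sketch, not from the thread; the constants simply encode the assumptions stated in the problem):

```python
import math

R = 8.314          # gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number
ATM = 101325.0     # Pa per atm

# assumptions stated in the problem
R_earth = 6.400e6                       # m
h_atm   = 5.0e4                         # m (50 km)
T_atm, P_atm = 288.15, 0.20 * ATM       # 15 C, 0.20 atm
V_breath, T_breath, P_breath = 500e-6, 310.15, 1.0 * ATM  # 500 mL, 37 C, 1 atm

# shell volume between Earth's surface and the top of the atmosphere
V_atm = 4/3 * math.pi * ((R_earth + h_atm)**3 - R_earth**3)
N_atm = P_atm * V_atm / (R * T_atm) * N_A          # ideal gas: N = PV/(RT) * N_A
N_breath = P_breath * V_breath / (R * T_breath) * N_A
fraction = N_breath / N_atm
inhaled = fraction * N_breath

print(f"V_atm    = {V_atm:.2e} m^3")    # ~2.6e19, matching the hand calculation
print(f"N_atm    = {N_atm:.2e}")        # ~1.3e44 molecules
print(f"N_breath = {N_breath:.2e}")     # ~1.2e22 molecules
print(f"fraction = {fraction:.2e}")     # ~9e-23
print(f"inhaled  = {inhaled:.1f}")      # about one molecule per breath
```

The shell-volume step reproduces the 2.6*10^19 m^3 obtained by hand above, and the final answer is the classic Fermi result of roughly one molecule of Caesar's last breath per breath.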
6[1-(-6)]8 its not 15
Divide, then multiply, then add/subtract.
I would be interested in how you got the answer 15, if you are willing to post your work.
6[1-(-6)]+8, 6[1+6]+8, 6[7]+8 ... 15
I get 21.
You added a plus sign that was not in your original question. Which is correct?
Can you show your work? I have about 20 similar problems, so it would be great if you did.
I will work through it with you, but first you must clarify whether the question is \[6 \times (1-(-6)) \times 8\] or \[6 \times (1-(-6)) + 8\]
I'm having a problem because those brackets are not parentheses.
I use the acronym BEDMAS to remember my order of operations: Brackets, Exponents, Division, Multiplication, Addition, Subtraction. I live in Canada and am trying to remember as much as I can from previous schooling; there should be no difference between the square brackets and the round brackets (parentheses).
Tell me what you think 6 times 7 is.
I get 50. That's the answer, so the brackets also mean multiply.
Those brackets were driving me crazy.
Yes, you assume multiplication when a number is written outside of the brackets like that.
Thank you much.
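Evaluated with the multiplication the brackets imply, the expression comes out as the thread concludes:

```python
# The expression from the thread, with the implied multiplication made
# explicit; Python has no implied multiplication, so 6[...] becomes 6 * (...).
result = 6 * (1 - (-6)) + 8   # brackets first: 1 - (-6) = 7, then 6 * 7 = 42, then + 8
print(result)  # 50
```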
9 search hits
Model dependence of lateral distribution functions of high energy cosmic ray air showers (2003)
Hans-Joachim Drescher Marcus Bleicher Sven Soff Horst Stöcker
The influence of high and low energy hadronic models on lateral distribution functions of cosmic ray air showers for Auger energies is explored. A large variety of presently used high and low
energy hadron interaction models are analysed and the resulting lateral distribution functions are compared. We show that the slope depends on both the high and low energy hadronic model used.
The models are confronted with available hadron-nucleus data from accelerator experiments.
Exploring isospin, strangeness and charm distillation in heavy ion collisions (2003)
Manuel Reiter Elena L. Bratkovskaya Marcus Bleicher Wolfgang Bauer Wolfgang Cassing Henning Weber Horst Stöcker
The isospin and strangeness dimensions of the Equation of State are explored. RIA and the SIS200 accelerator at GSI will allow these regions of compressed baryonic matter to be explored. 132Sn + 132Sn and 100Sn + 100Sn collisions as well as the excitation functions of K/pi, Lambda/pi and the centrality dependence of charmonium suppression from the UrQMD and HSD transport models are
presented and compared to data. Unambiguous proof for the creation of a 'novel phase of matter' from strangeness and charm yields is not in sight.
A micro-canonical description of hadron production in proton-proton collisions (2003)
Fu-Ming Liu Klaus Werner Jörg Aichelin Marcus Bleicher Horst Stöcker
A micro-canonical treatment is used to study particle production in pp collisions. First this micro-canonical treatment is compared to some canonical ones. Then proton, antiproton and pion 4 pi
multiplicities from proton-proton collisions at various center of mass energies are used to fix the micro-canonical parameters (E) and (V). The dependences of the micro-canonical parameters on
the collision energy are parameterised for the further study of pp reactions with this micro-canonical treatment.
Probing the minimal length scale by precision tests of the muon g-2 (2003)
Ulrich Harbach Sabine Hossenfelder Marcus Bleicher Horst Stöcker
Modifications of the gyromagnetic moment of electrons and muons due to a minimal length scale combined with a modified fundamental scale M_f are explored. Deviations from the theoretical Standard
Model value for g-2 are derived. Constraints for the fundamental scale M_f are given.
Micro-canonical hadron production in pp collisions (2003)
Fu-ming Liu Jörg Aichelin Marcus Bleicher Klaus Werner
We apply a microcanonical statistical model to investigate hadron production in pp collisions. The parameters of the model are the energy E and the volume V of the system, which we determine via
fitting the average multiplicity of charged pions, protons and antiprotons in pp collisions at different collision energies. We then make predictions of mean multiplicities and mean transverse
momenta of all identified hadrons. Our predictions on nonstrange hadrons are in good agreement with the data, the mean transverse momenta of strange hadron as well. However, the mean
multiplicities of strange hadrons are overpredicted. This agrees with canonical and grandcanonical studies, where a strange suppression factor is needed. We also investigate the influence of
event-by-event fluctuations of the E parameter.
Signatures in the Planck regime (2003)
Sabine Hossenfelder Marcus Bleicher Stefan Hofmann Jörg Ruppert Stefan Scherer Horst Stöcker
String theory suggests the existence of a minimum length scale. An exciting quantum mechanical implication of this feature is a modification of the uncertainty principle. In contrast to the
conventional approach, this generalised uncertainty principle does not allow to resolve space time distances below the Planck length. In models with extra dimensions, which are also motivated by
string theory, the Planck scale can be lowered to values accessible by ultra high energetic cosmic rays (UHECRs) and by future colliders, i.e. M_f approximately equal to 1 TeV. It is demonstrated that in this novel scenario, short distance physics below 1/M_f is completely cloaked by the uncertainty principle. Therefore, Planckian effects could be the final physics discovery at future
colliders and in UHECRs. As an application, we predict the modifications to the e+ e- to f+ f- cross-sections.
Black hole relics in large extra dimensions (2003)
Sabine Hossenfelder Marcus Bleicher Stefan Hofmann Horst Stöcker Ashutosh V. Kotwal
Recent calculations applying statistical mechanics indicate that in a setting with compactified large extra dimensions a black hole might evolve into a (quasi-)stable state with mass close to the
new fundamental scale M_f. Black holes and therefore their relics might be produced at the LHC in the case of extra-dimensional topologies. In this energy regime, Hawking's evaporation scenario
is modified due to energy conservation and quantum effects. We reanalyse the evaporation of small black holes including the quantisation of the emitted radiation due to the finite surface of the
black hole. It is found that observable stable black hole relics with masses sim 1-3 M f would form which could be identified by a delayed single jet with a corresponding hard momentum kick to
the relic and by ionisation, e.g. in a TPC.
Dynamics and freeze-out of hadron resonances at RHIC (2003)
Marcus Bleicher Horst Stöcker
Yields, rapidity and transverse momentum spectra of Delta++(1232), Lambda(1520), Sigma+-(1385) and the meson resonances K0(892), Phi, rho0 and f0(980) are predicted. Hadronic rescattering leads
to a suppression of reconstructable resonances, especially at low p_perp. A mass shift of the rho of 10 MeV is obtained from the microscopic simulation, due to late stage rho formation in the
cooling pion gas.
Strangeness dynamics in relativistic nucleus-nucleus collision (2003)
Elena L. Bratkovskaya Marcus Bleicher Wolfgang Cassing M. van Leeuwen Manuel Reiter Sven Soff Horst Stöcker Henning Weber
We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are
based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental
data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without
involving any phase transition. However, the maximum in the K+/Pi+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si)
reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K+/- m_T spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of the K+/- spectra at SPS (the 'step') is not reproduced either. Thus the pressure
generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from
lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
triple with large LCM
Does there exist $c>0$ such that among any $n$ positive integers one may find $3$ with least common multiple at least $cn^3$?
Let me post here a proof that we may always find two numbers with lcm at least $cn^2$. Note that if $a < b$, $N$=lcm$(a,b)$, then $N(b-a)$ is divisible by $ab$, hence $N\geq ab/(b-a)$. So, it
suffices to find $a$, $b$ such that $ab/(b-a)\geq cn^2$, or $1/a-1/b\leq c^{-1} n^{-2}$. Since at least $n/2$ of our numbers are not less than $n/2$, denote them $n/2\leq a_1 < a_2 < \dots < a_k$, $$2/n
\geq \sum (1/a_i-1/a_{i+1})\geq k \min (1/a_i-1/a_{i+1}),$$ so $\min (1/a-1/b)\leq 2/nk\leq 4/n^2$.
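The key inequality lcm(a,b) >= ab/(b-a) used above can be sanity-checked by brute force (an illustrative Python sketch, not part of the argument):

```python
from math import lcm

# Since lcm(a,b)*(b-a) is divisible by a*b (as lcm(a,b)*b and lcm(a,b)*a both
# are), it is at least a*b whenever it is nonzero; check exhaustively.
for a in range(1, 200):
    for b in range(a + 1, 200):
        assert lcm(a, b) * (b - a) >= a * b
print("inequality holds for all 1 <= a < b < 200")
```

Note that math.lcm with multiple arguments requires Python 3.9 or later.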
For triples we get a lower bound of about $c(n/\log n)^3$ in this way. Again consider only numbers not less than $n/2$. If all lcm's are less than $c(n/\log n)^3$, then the numbers themselves are all less than
some $n^3/2$, so for $2^k < n^3 < 2^{k+1}$ all of them do not exceed $2^k$, hence at least $n/2k$ of them belong to $[2^r,2^{r+1}]$ for the same $r$, then there exist three numbers $a < b < c$ with
$c-a\leq 2^r/(n/4k)\leq 4ka/n$. Then
lcm$(a,b,c)\geq abc/((b-a)(c-a)(c-b))\geq (a/(c-a))^3\geq (n/4k)^3$.
Typo: You have $\ge c^{-1} n^{-2}$ instead of $\le$. – Harry Altman Oct 13 '10 at 7:46
thank you, I fixed it – Fedor Petrov Oct 13 '10 at 8:13
1 I wonder whether this extends onto two potentially different sets. Is it true that for any two sets of positive integers $\{a_1,\ldots,a_n\}$ and $\{b_1,\ldots,b_n\}$ there exist indices $i,j\in
[n]$ such that $[a_i,b_j]\ge c n^2$ with a positive absolute constant $c$? – Seva Oct 29 '10 at 8:12
9 Answers
See my post on AoPS
Edit: OK, reposting here.
The first step toward the solution, as it often happens, is to generalize the problem. Instead of just one set $A$, we shall consider $3$ sets $A,B,C$ of cardinalities $|A|,|B|,|C|$ and
will try to prove that there exist $a\in A,b\in B,c\in C$ such that $[a,b,c]\ge\sigma |A|\cdot|B|\cdot|C|$.
The reason for such generalization is that we are going to employ the usual "minimal counterexample" technique (a.k.a. "infinite descent", etc.) and we have much more freedom if we are
allowed to modify three different sets independently rather than just one of them.
Our first attempt will be to make the reduction modulo $p^k$ where $p$ is a prime and $k\ge 1$ is an integer. Let $A_{p,k}=\{a\in A: v_p(a)=k\}$ where, as usual, $v_p(a)=\max\{v:p^v\mid a\}
$. Let us replace $A$ with $A'=\{a'=p^{-k}a: a\in A_{p,k}\}$. For every $b\in B$, define $b'=\frac{b}{p^{\min(k,v_p(b))}}$. The numbers $b'$ form a set $B'$ of cardinality $|B'|\ge\frac{|B
|}{(k+1)}$ because each $b'$ can be obtained from at most $k+1$ different $b\in B$. Define $C'$ in a similar way. Note that if $a'\in A', b'\in B', c'\in C'$, and $a,b,c$ are the elements
of $A,B,C$ from which $a',b',c'$ were obtained, we have $[a,b,c]=p^k[a',b',c']$. Thus, if we have a minimal counterexample $A,B,C$ to our statement then $A',B',C'$ is not a counterexample,
so we can find $a',b',c'$ with $[a',b',c']\ge \sigma |A'|\cdot |B'|\cdot |C'|\ge \sigma (k+1)^{-2}|A_{p,k}|\cdot|B|\cdot|C|$, which will not give us a triple $a,b,c$ with large least common
multiple only if $|A_{p,k}|\le (k+1)^2p^{-k}|A|$. Thus, in our minimal counterexample, we must have this inequality for all prime $p$ and all $k\ge 1$. Note that it is trivially true with
$k=0$ as well. The same inequality holds for the cardinalities of sets $B_{p,\ell}$ and $C_{p,m}$.
Now we shall try the averaging technique. Since we are dealing with a multiplicative problem, it will be convenient to use geometric means. So, let us consider the identity \[ \prod_{a\in A,b\in B,c\in C}\frac{abc}{[a,b,c]}\prod_{a\in A,b\in B,c\in C}[a,b,c]=\prod_{a\in A,b\in B,c\in C}(abc) \] The products have $|A|\cdot|B|\cdot|C|$ factors in them and the product on the right is at least \[ (|A|!)^{|B|\cdot|C|}(|B|!)^{|A|\cdot|C|}(|C|!)^{|A|\cdot|B|}\ge e^{-3|A|\cdot|B|\cdot|C|}(|A|\cdot|B|\cdot|C|)^{|A|\cdot|B|\cdot|C|} \]
Our main task will be to estimate the first product on the left by $e^{K|A|\cdot|B|\cdot|C|}$ with some absolute $K>0$. If we manage to do that, we will immediately get the desired result
"on average" with $\sigma=e^{-K-3}$. In order to do it, we'll estimate the power at which each prime $p$ can appear in this product. So, fix some $p$ and assume that $a\in A_{p,k},b\in B_
{p,\ell},c\in C_{p,m}$. Then $p$ appears in the factor $\frac{abc}{[a,b,c]}$ at all only if $k+\ell+m\ge 2$ and its power in this case does not exceed $k+\ell+m+1$ (I know, this is an
idiotic bound, but it holds and will allow me to have all factors of the same form). Thus, the total power in which $p$ appears in the first product is at most \[ \begin{aligned} &\sum_{k,\
ell,m:k+\ell+m\ge 2}(k+\ell+m+1)|A_{p,k}|\cdot|B_{p,\ell}|\cdot|C_{p,m}| \cr &\le |A|\cdot|B|\cdot|C|\cdot\sum_{k,\ell,m:k+\ell+m\ge 2}(k+\ell+m+1)^7p^{-(k+\ell+m)} \end{aligned} \] Since
there are at most $(M+1)^2$ ways to represent a positive integer $M$ as a sum of three non-negative integers, the last sum is at most $\sum_{M\ge 2}(M+1)^9p^{-M}$.
Now it is time to put all $p$ together. We get $e^{K|A|\cdot|B|\cdot|C|}$ with \[ K=\sum_{M\ge 2,p\text{ prime}}(M+1)^9p^{-M}\log p \] and our only task is to show that this double series
converges. We can forget that $p$ is prime, just remember that $p\ge 2$. Also for any $\delta>0$, we can estimate $(M+1)^9\le C_\delta p^{\delta M}$, $\log p\le C_\delta p^{\delta}$ with
some finite $C_\delta>0$. Thus, our series is dominated by \[ \sum_{M,p\ge 2}p^{\delta-(1-\delta)M}=\sum_{p\ge 2} \frac{p^{3\delta-2}}{1-p^{\delta-1}}\le \frac 1{1-2^{\delta-1}}\sum_{p\ge 2}p^{3\delta-2}<+\infty \] if $\delta<\frac 13$.
This proof can be easily generalized to any number of sets but the constant it gives is rather terrible. It would be nice to get some better bound even for the case of 2 sets. As usual,
questions and comments are welcome.
1 Is there a chance you could post a copy of your proof here, as well? Most people would have to create a new account in order to comment there. Moreover, it makes sense to have the
question and proof in the same thread, in case someone wants to refer to both in the future. – Gjergji Zaimi Oct 8 '11 at 22:25
Done............. – fedja Oct 8 '11 at 22:41
1 $ $ – Steven Gubkin Oct 9 '11 at 0:29
@ fedja You can post comments "under the character limit" by following your post with dollar signs enclosing empty spaces. – Steven Gubkin Oct 9 '11 at 0:30
Thanks! $ $ – fedja Oct 9 '11 at 0:45
With n = 48, any three members of {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 21, 24, 28, 30, 35, 36, 40, 42, 45, 56, 60, 63, 70, 72, 84, 90, 105, 120, 126, 140, 168, 180, 210, 252, 280, 315, 360, 420, 504, 630, 840, 1260, 2520} have lcm at most 2520, so $c\le 35/1536=0.0227\ldots$.
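This example can be verified by brute force; the following sketch (mine, not from the answer) enumerates the 48 divisors of 2520 and maximizes the lcm over all triples:

```python
from itertools import combinations
from math import lcm

# Every listed number divides 2520, so the lcm of any triple divides 2520;
# the maximum is attained (e.g. by any triple containing 2520 itself).
divisors = [d for d in range(1, 2521) if 2520 % d == 0]
assert len(divisors) == 48
best = max(lcm(a, b, c) for a, b, c in combinations(divisors, 3))
print(best, best / len(divisors) ** 3)  # 2520 and 2520/48^3 = 35/1536 = 0.0227...
```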
1 yes, but in general, for large values of $n$, if we take all $n$ numbers to be divisors of the same common multiple $N$, then $N\gg n^3$ (since the number of divisors grows more slowly than $N^{\epsilon}$ for any $\epsilon>0$) – Fedor Petrov Oct 10 '10 at 19:08
Yes, that's why I stopped at 2520. I have more ideas for lowering the upper bound, but so far no ideas for a lower bound. – Charles Oct 10 '10 at 19:20
I can prove something like $C(n/\log n)^3$ from below. – Fedor Petrov Oct 10 '10 at 19:31
If you take all numbers of the form $2^{t_1}3^{t_2}5^{t_3}7^{t_4}$ where $t_1\in [0,4], t_2\in [0,3], t_3\in [0,2], t_4\in [0,1]$, then the max. LCM is $M=75600$, the cardinality $n=
5!$, the quotient is $M/n^3=.043...$ – Mark Sapir Oct 10 '10 at 19:42
If you reduce the range of t1, t2, and t3 by 1 you get 0.022786458333..., but further improvements don't seem possible with this method (unless you use primes > 20 or prime powers >
1000000). – Charles Oct 10 '10 at 20:43
If you take all numbers of the form $2^{t_1}3^{t_2}5^{t_3}$, where $t_1\in [0,3], t_2\in [0,2], t_3\in [0,1]$ then the maximal LCM is $8*9*5=360$, the cardinality $n=4!=24$, the quotient $360/
24^3=.026...$. I cannot find a smaller number yet.
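These ratios are easy to recompute (a hypothetical helper of mine, using the fact that for a set of all products $p_i^{t_i}$ with bounded exponents, the largest element equals the maximal triple LCM):

```python
from math import prod

def ratio(exps, primes=(2, 3, 5, 7)):
    """M / n^3 for the set of all products p_i^{t_i}, 0 <= t_i <= exps[i].

    The largest element N = prod(p_i^{exps[i]}) lies in the set, so the
    maximal 3-element lcm is N itself; the set has prod(e_i + 1) elements.
    """
    N = prod(p ** e for p, e in zip(primes, exps))
    n = prod(e + 1 for e in exps)
    return N / n ** 3

print(ratio((3, 2, 1)))      # 360 / 24^3   ≈ 0.0260
print(ratio((4, 3, 2, 1)))   # 75600 / 120^3 = 0.04375
print(ratio((3, 2, 1, 1)))   # 2520 / 48^3  ≈ 0.0228  (Charles's reduction)
```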
Update: Let $S$ be the set. Let $X$ be the set of prime divisors of $S$. Without loss of generality we can assume that $X=\{2,3,...,p_k\}$ (all primes up to $p_k$). Indeed, we can always replace a
bigger prime from $X$ by a smaller prime that is not in $X$ without increasing the constant $C$. Now every element $2^{l_1}...p_k^{l_k}$ in $S$ corresponds to a vector $(l_1,...,l_k)$ in ${\mathbb
Z}^k$. Let $\bar S$ be the set of all these vectors corresponding to numbers from $S$. Consider the partial component-wise order on ${\mathbb Z}^k$ (this makes the grid ${\mathbb Z}^k$ into a
lattice, with intersection and join). Let $u_1,...,u_s$ be all the maximal vectors in $\bar S$ with respect to this partial order. We can assume that with every $x\in S$, $S$ contains all
divisors of $x$. Therefore for every $u_i$, $\bar S$ contains all $v\le u_i$. These $v$'s form a parallelepiped $U_i$. The number of points in the union of all the parallelepipeds $U_i$ is $n$,
the number of elements in $S$. Now we need to take the maximal LCM of three elements and compare it with $n^3$. All the examples so far are such that there is only one maximal $u_i$ in $\bar S$. I think
the only hope to prove that $C$ vanishes is to consider the case when there are many maximal vectors in $\bar S$. This is also the way to show that $C$ has a non-trivial lower bound.
By adding in 7 you can get to 0.02278645833..., but I can't immediately improve on that. – Charles Oct 10 '10 at 20:38
For the question as asked, 2520 may exhibit the minimum. Perhaps the question is really about what happens as $n$ increases.
For any set $S$ of positive integers consider $B_S$, the LCM of the entire set, and also $C_S$, the greatest LCM among all the 3 element subsets of $S$. Let $B_n$ and $C_n$ be the smallest
values of $B_S$ and $C_S$ over $n$ element sets. Finally, let $b_n=\frac{B_n}{n^3}$ and $c_n=\frac{C_n}{n^3}$. Certainly $C_n \le B_n$ and also $C_n \le n\cdot (n-1) \cdot (n-2)$. A set
achieving a minimum of $B_n$ should be all divisors of a certain number (the LCM). More precisely, we can enlarge to such a set: $B_6=C_6=12$ coming from $\{1,2,3,4,6,12\}$. Also, $B_5=C_5=
12$ coming from any 5 element subset. Hence $b_5=c_5=\frac{12}{5^3}=0.096...$ and $b_6=c_6=\frac{12}{6^3}=\frac{1}{18}=0.0555...$. As noted, $B_{48}=2520$ so $c_{48} \le b_{48}=
0.02278645833...$. I suspect that $b_n$ and $c_n$ are never again that small for $n>48$. It also would seem that, while $c_n<1$, $b_n$ grows, probably without bound.
Numbers having more divisors than any smaller number are called highly composite numbers and supply minima of $b_n$. The sequence A002182 in the OEIS is given up to 2162160, the smallest integer
with 320 divisors. This shows $b_{320}=0.0659...$.
Following a link it would appear that the 100th HCN is $2^6\cdot 3^3 \cdot 5^2 \cdot 7^2 \cdot 11 \cdot 13 \cdot 17 \cdot 19 \cdot 23$ showing $b_{8064}=4.288...$
So this may be a case where the small numbers are misleading (as discussed in another recent question.)
Certainly a set achieving a minimum of $c_n$ (for n>N) may as well contain all divisors of its members. Past that I am still thinking.
If the question was about the lcm of the set, then HCN would be the right approach and it would suffice to show that the minimum is achieved at 2520. But {2, 3, 5, 7}, for example, has
maximal 3-lcm of 105, not 210, so more is needed to show that c > 0. – Charles Oct 11 '10 at 1:30
It occurs to me that since we seem to be having no luck proving divisors of 2520 give the solution, we should maybe look at what might be a simpler problem: looking at ${\it pairs}$ with
large LCM. We'd want to compare the LCM to $n^2$. The divisors of 12 give $c=12/6^2=1/3$. Can we prove that 1/3 can't be bettered for pairs?
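An exhaustive check of the small case (my Python sketch, searching all 6-element subsets of {1,...,15}) confirms that the divisors of 12 minimize the maximal pair-lcm:

```python
from math import gcd
from itertools import combinations

def lcm(a, b):
    return a * b // gcd(a, b)

def max_pair_lcm(s):
    # Largest lcm over all 2-element subsets of s.
    return max(lcm(a, b) for a, b in combinations(s, 2))

# Over all 6-element subsets of {1,...,15}, the divisors of 12 minimize
# the maximal pair-lcm, giving 12 and hence the ratio 12/6^2 = 1/3.
best = min(combinations(range(1, 16), 6), key=max_pair_lcm)
print(best, max_pair_lcm(best))  # (1, 2, 3, 4, 6, 12) 12
```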
For pairs I may prove $cn^2$ lower bound, see update. – Fedor Petrov Oct 13 '10 at 6:24
For two numbers, this also follows easily from Graham's conjecture. Suppose, to simplify life, that $n=2k$, and let $a_1>\dotsb>a_k$ be the $k$ largest elements of the set under
consideration. By Graham's conjecture, there exist $i,j\in[k]$ with $i\ne j$ and $a_i/(a_i,a_j)\ge k$. Now $[a_i,a_j]=a_ia_j/(a_i,a_j)\ge ka_j\ge ka_k\ge k^2=n^2/4$.
yes, and a slightly worse estimate may be gotten since Graham's conjecture is proved for prime values of $k$ :) but is 1/4 the best constant? My impression is that the right asymptotics is
$n^2+o(n^2)$ – Fedor Petrov Oct 28 '10 at 21:31
Graham's conjecture is actually proved for all $k$ in a 1996 paper by Balasubramanian and Soundararajan. So, we get a clean coefficient of $1/4$ in this way. Indeed, this argument shows
that if $a_k$ is the $k$th largest element of our set, then $\max [a_i,a_j]\ge ka_k$ -- which is potentially useful if you think of improving the coefficient. – Seva Oct 29 '10 at 8:17
Here is a direction to explore. I describe it for quadruples, but, perhaps, a similar game can be played with triples.
Suppose that $A$ is a set of $n$ integers, all larger than $n$, such that for any $a,b,c,d\in A$ we have $[a,b,c,d]<cn^4$, with a sufficiently small constant $c$. Consider the set $S$ of all
fractions (not necessarily irreducible) of the form $u/a$ with $a\in A$ and $0\le u\le a$. Although this is not immediate, it is likely to be true and possible to prove that $|S|\gg n^2$.
Can we prove, in addition, that $|S+S|\gg |S|^2$? If so, the assertion will follow from the observation that any two different elements of $S+S$ are at least $1/(cn^4)$ away from each other,
whereas we have $\Omega(n^4)$ different elements, all lying in $[0,2]$.
Here is a rewriting of the proof of the ${(\frac{n}{\log n})}^3$ lower bound which seems to allow some leeway to play with. Let $p$ be a prime around and less than $\frac{n}{3\log n}$. Then
there must be $3\log n$ numbers in the same residue class mod $p$. For three of these, say $ap+r < bp+r < cp+r$, we have $c/a < 2$. The LCM is at least $\frac{a}{c}p^3$ (the case where a
fraction of the numbers have r=0 is easy).
The main advantage seems to be the choice of many primes.
Two possibilities:
1. We want our prime to be around $\frac{n}{10}$ (say) rather than $\frac{n}{3\log n}$, but the problem is we cannot claim $\gg \log n$ residues for such a prime. Is it feasible to
prove that for one of the many primes we have in the range, there must be a residue which appears many times? Looks hard to me though.
2. Instead of looking for the same residues, for a pair $ap+r,bp+s$ we can look to minimize $as-br$, since the gcd of this pair must divide it. So we can look at numbers $a/r$. Also we
can set up relations: for example, there must be three (many?) numbers $ap+r,bp+s,cp+t$ such that $\frac{a+1}{r}=\frac{b+1}{s}=\frac{c+1}{t}$ (mod $p$).
EDIT: This answers the wrong question. > cn^3 is wanted, not > cp^3. END EDIT
To even hope for such a c, you will need to avoid the following kind of set: fix t with at least n factors, then for primes p larger than t take the set S_p = {sp: s divides t}. For
large enough p, LCM of S_p = tp < p^2 < cp^3.
Gerhard "Ask Me About System Design" Paseman, 2010.10.10
MathGroup Archive: October 2005 [00710]
Re: Re: Language vs. Library why it matters / Turing
• To: mathgroup at smc.vnet.net
• Subject: [mg61577] Re: [mg61502] Re: Language vs. Library why it matters / Turing
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sat, 22 Oct 2005 05:11:22 -0400 (EDT)
• References: <dipr57$hfl$1@smc.vnet.net> <200510180645.CAA11285@smc.vnet.net> <dj4p5f$gpf$1@smc.vnet.net> <200510200456.AAA16940@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On 20 Oct 2005, at 13:56, Richard Fateman wrote:
> Andrzej Kozlowski wrote:
>> On 18 Oct 2005, at 15:45, Richard Fateman wrote:
>>> Mathematica is unusual in being
>>> complex, secret, and non-deterministic. These are each, in my view,
>>> significant detractions from the language and why I would not
>>> suggest that it be used for (say) a first course in programming.
>> The problem of the alleged non-deterministic nature of the
>> Mathematica programming language could easily be solved by means of a
>> device used by creators of most other programming languages. The
>> device consists simply of declaring anything that is not specified in
>> the documentation as "legal" as "illegal", whether it works or not.
> I doubt that this is done by most other programming languages. Most
> leave
> an area of "result is not specified by the standard". I also
> doubt that it could be applied to Mathematica in any interesting
> sense because the specification would be huge, and full of bugs.
> If the system were deterministic and there were only one
> implementation
> you could solve the problem by saying, if you want to know what that
> command means, it means "whatever it computes". That
> also eliminates all bugs, because they then become features.
>> This would deal with all the cases of "non-deterministic" behaviour
>> in Mathematica known to me (not counting obvious and acknowledged
>> bugs).
> No. How about Maxim's "weird" bug where the name of the variable
> changes
> the computation? Or problems which become strangely different when a
> secret boundary is crossed and an algorithm changes.
> In fact, most bugs are quite deterministic.
Most of these are bugs. The issues arising from a change that takes
place due to a change in algorithm should be solved by better
documentation. There are also issues of different behaviour on
different platforms due to the use of TimeConstrained. These are
harder to deal with (probably one would need to somehow estimate the
speed of the CPU at the startup of Mathematica). But in any case, all of
this concerns the "mathematical" aspects of Mathematica and I thought
you supported the notion of "core language" (your brilliant idea of
Atica that is supposed to make you rich at SW's expense), and that
the math functions do not belong to that. So which one are you
objecting to teaching in a first programming course, Mathematica the
CAS or Atica the programming language?
>> Actually, I remember from the days when I tried programming in
>> other languages (including C) that they would also produce
>> unpredictable results if you violated the official syntax (one would
>> sometimes get correct and sometimes incorrect output).
> Unpredictable by you because you don't know enough about the language
> is different from
> Unpredictable by anyone because the language implementation varies
> according to (say) where in memory the pages are loaded.
Fail to see much difference as far as the use is concerned.
> I know of no first-programming-language courses in American
> universities
> that use Mathematica. I think it would be fine to teach Mathematica
> in an engineering problem-solving symbolic/numeric course to students
> who already know how to program. Nancy Blachman taught such a course
> at Stanford. I believe it was not open to Computer Science majors.
> (I even lectured once in it!)
> But most computer scientists would, I think, object to teaching
> Mathematica as a programming language as such.
This is merely an unsupported assertion. I can also make such
assertions but what is the point? Obviously Mathematica is primarily
what is somewhat misleadingly called a CAS. Most computer scientists
do not know its programming language and it is my impression most
programmers tend to object to teaching languages they are not
comfortable with.
But obviously I meant a first programming course for mathematicians
and scientists.
Andrzej Kozlowski
Tokyo, Japan
Re: subject reduction fails in Java -- how it CAN be proven
• To: types@cs.indiana.edu
• Subject: Re: subject reduction fails in Java -- how it CAN be proven
• From: Sophia Drossopoulou <scd@doc.ic.ac.uk>
• Date: Fri, 19 Jun 1998 18:00:18 +0100
• CC: se@doc.ic.ac.uk, Donald Syme <Donald.Syme@cl.cam.ac.uk>
• Delivery-Date: Fri, 19 Jun 1998 12:02:40 -0500
We at IC were quite surprised and intrigued by the counterexample
to the subject reduction property in Java. Indeed, we did not expect
the interaction of conditional expressions and method calls
to produce that effect.
On the other hand,
a) we feel that this problem is due to the type rule for conditionals
being too weak (it should say that the type of the conditional is
the most specific supertype of the two branches)
b) subject reduction can be proven, namely
> ... it seems, their proof methods cannot easily
> be extended to deal with the full Java language including conditional
> expressions.
our proof can easily be extended to deal with
conditional expressions. Namely, we can apply the same trick
which we developed for type checking assignments (suggested by Don
Syme), whereby we have different type rules for the compiled
language (Java_SE) than for the run-time language (Java_R).
In more detail:
1. In Java_SE require for
( b ? e1 : e2 )
that b is boolean; e1, e2 have such types that one can
be widened to the other.
That is:
b : boolean
e_1 : T_1
e_2 : T_2
(T_1 widens to T_2 and T_2=T)
or (T_2 widens to T_1 and T_1=T)
( b ? e_1 : e_2 ) : T
2. In Java_R we only require for
( b ? e1 : e2 )
that b is boolean; e_1 and e_2 are well-typed and the whole
expression has the minimal type to which those of
e_1 and e_2 can be widened.
That is:
b : boolean
e_1 : T_1
e_2 : T_2
T = min { T' |
T_1 widens to T' and T_2 widens to T' }
( b ? e_1 : e_2 ) : T
3. We prove that for a Java_SE term t
with \Gamma |-_{se} t : T using the Java_SE type rules,
it also holds that
\Gamma |-_{r} t : T, this time using the Java_R type rules.
4. We can now prove the subject reduction theorem stating, that
for a well typed Java_R term t, and a well-typed Java_SE
program p, rewriting t (which might invoke methods from p)
preserves types up to widening.
Furthermore, I believe that the type rule suggested for Java_R is the
rule that _should_ be given for the type of conditional expressions,
and that the actual rule (as described for Java_SE) is an easy
approximation to it, which allows an easier implementation of the type checker.
The suggestion given by Haruo Hosoya, Benjamin Pierce and David Turner,
> Integer i = new Integer();
> HashTable h = new HashTable();
> Object o = (b ? (Object)i : (Object)h);
> carrying the parameter types along explicitly during the substitution
is more concise, but, I feel, it is less faithful to what
really happens at run-time, since a conditional expression does
not require run-time checks, whereas the conditional as
transformed above does require run-time checks (which are guaranteed to
Also, I feel that this feature of conditional expressions is another
indication that the approach of distinguishing the compiled language
from the run-time language (due to Don Syme, again), where typing
in the first reflects the compile - time checks whereas
typing in the latter serves the requirements of formulating and
proving soundness properties, is a useful vehicle when
considering full, "real-world" languages.
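For illustration only, the difference between the two conditional rules can be sketched over a toy class hierarchy. This is a hypothetical Python model of mine, not the authors' formalization; `widens` stands in for Java's widening relation:

```python
# Toy single-inheritance hierarchy: each class maps to its superclass.
SUPER = {"Integer": "Object", "HashTable": "Object", "Object": None}

def widens(s, t):
    # Reflexive-transitive "widens to" relation along the hierarchy.
    while s is not None:
        if s == t:
            return True
        s = SUPER[s]
    return False

def type_cond_se(t1, t2):
    # Java_SE rule: one branch type must widen to the other.
    if widens(t1, t2):
        return t2
    if widens(t2, t1):
        return t1
    raise TypeError("ill-typed in Java_SE")

def type_cond_r(t1, t2):
    # Java_R rule: minimal type to which both branch types widen
    # (well defined here because the toy hierarchy is a tree).
    t = t1
    while not (widens(t1, t) and widens(t2, t)):
        t = SUPER[t]
    return t

# (b ? (Object)i : (Object)h) type-checks in Java_SE only because of the
# casts; without them Java_SE rejects it, while Java_R yields Object.
print(type_cond_r("Integer", "HashTable"))  # Object
```

The `type_cond_se` failure on unrelated branch types mirrors why the explicit `(Object)` casts were needed in the Hosoya–Pierce–Turner example.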
Sophia Drossopoulou
> ---------
> [2] Drossopolou and Eisenbach, "Java is type safe -- probably," ECOOP
> '97.
please, consider our more recent work:
Drossopoulou and Eisenbach:
"Towards an Operations Semantics and Proof of Type Soundness for Java"
April 1998, the most recent version of the previous work, in which we
clarify and simplify many issues. This paper will appear as a chapter
in a book. Also available at:
Dr. Sophia Drossopoulou tel: +44 171 594 8368
Department of Computing fax: +44 171 581 8024
Imperial College of Science, Technology and Medicine
LONDON SW7 2BZ, England email: sd@doc.ic.ac.uk
Markovian and Autoregressive Clutter-Noise Models for a Pattern-Recognition Wiener Filter
Most modern pattern recognition filters used in target detection require a clutter-noise estimate to perform efficiently in realistic situations. Markovian and autoregressive models are proposed as
an alternative to the white-noise model that has so far been the most widely used. Simulations by use of the Wiener filter and involving real clutter scenes show that both the Markovian and the
autoregressive models perform considerably better than the white-noise model. The results also show that both models are general enough to yield similar results with different types of real scenes.
© 2002 Optical Society of America
OCIS Codes
(070.0070) Fourier optics and signal processing : Fourier optics and signal processing
(070.2580) Fourier optics and signal processing : Paraxial wave optics
(070.4550) Fourier optics and signal processing : Correlators
(070.5010) Fourier optics and signal processing : Pattern recognition
Sovira Tan, Rupert C. D. Young, and Chris R. Chatwin, "Markovian and Autoregressive Clutter-Noise Models for a Pattern-Recognition Wiener Filter," Appl. Opt. 41, 6858-6866 (2002)
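A rough one-dimensional sketch of the idea, under my own assumptions rather than the paper's implementation: a frequency-domain Wiener-type filter H(f) = conj(T(f)) / (|T(f)|^2 + S_n(f)), with a first-order Markov (AR(1)) clutter power spectral density in place of a flat white-noise one:

```python
import numpy as np

def ar1_psd(n, rho, sigma2=1.0):
    # Power spectral density of a first-order Markov (AR(1)) process
    # with correlation rho and variance sigma2, at the FFT frequencies.
    f = np.fft.fftfreq(n)
    return sigma2 * (1 - rho**2) / (1 - 2 * rho * np.cos(2 * np.pi * f) + rho**2)

def wiener_filter(target, noise_psd):
    # Frequency-domain Wiener-type pattern-recognition filter.
    T = np.fft.fft(target)
    return np.conj(T) / (np.abs(T)**2 + noise_psd)

n = 256
target = np.zeros(n)
target[100:110] = 1.0  # a crude rectangular "target"

H_markov = wiener_filter(target, ar1_psd(n, rho=0.9))  # Markov clutter model
H_white = wiener_filter(target, np.ones(n))            # white-noise model
```

The only difference between the two filters is the assumed clutter spectrum; for correlated clutter the AR(1) density concentrates noise power at low frequencies, which is the effect the white-noise model misses.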
1. B. V. K. Vijaya Kumar, “Minimum variance synthetic discriminant functions,” J. Opt. Soc. Am. A 3, 1579–1585 (1986).
2. P. Réfrégier, “Optimal trade-off filters for noise robustness, sharpness of the correlation peak, and Horner efficiency,” Opt. Lett. 16, 829–831 (1991).
3. B. V. K. Vijaya Kumar, D. W. Carlson, A. Mahalanobis, “Optimal trade-off synthetic discriminant function filters for arbitrary devices,” Opt. Lett. 19, 1556–1558 (1994).
4. P. Réfrégier, “Filter design for optical pattern recognition: Multicriteria optimization approach,” Opt. Lett. 15, 854–856 (1990).
5. A. Mahalanobis, B. V. K. Vijaya Kumar, S. Song, and S. R. F. Sims, “Unconstrained correlation filters,” Appl. Opt. 33, 3751–3759 (1994).
6. H. Zhou, T. S. Chao, “MACH filter synthesizing for detecting targets in cluttered environment for gray-scale optical correlator,” in Optical Pattern Recognition X, D. P. Casasent and T.-H. Chao,
eds. SPIE 3715, 394–398 (1999).
7. V. Laude, A. Grunnet-Jepsen, and S. Tonda, “Input image spectral density estimation for real-time adaption of correlation filters,” Opt. Eng. 38, 672–676 (1999).
8. S. Tan, R. C. D. Young, J. D. Richardson, and C. R. Chatwin, “A pattern recognition Wiener filter for realistic clutter backgrounds,” Opt. Commun. 172, 193–202 (1999).
9. A. K. Jain, “Advances in mathematical models for image processing,” Proc. IEEE 69, 502–528 (1981).
10. M. Hassner and J. Slansky, “The use of Markov random fields as models of texture,” Comput. Graph. Image Process. 12, 357–370 (1980).
11. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984).
12. B. Gidas, “A renormalization group approach to image processing problems,” IEEE Trans. Pattern Anal. Mach. Intell. 11, 164–180 (1989).
13. H. Derin and P. A. Kelly, “Discrete-index Markov-type random fields,” Proc. IEEE 77, 1485–1510 (1989).
14. S. Z. Li, Markov Random Fields Modeling in Computer Vision (Springer-Verlag, Berlin, 1995).
15. R. Paget and D. Longstaff, “A nonparametric multiscale Markov random field model for synthesising natural textures,” in Fourth International Symposium on Signal Processing and its Applications, 2, 744–747 (1996).
16. M. Haindl, “Texture synthesis,” CWI Quaterly 4, 305–331 (1991).
17. J. Mao and A. K. Jain, “Texture classification and segmentation using multiresolution simultaneous autoregressive models,” Pattern Recogn. 25, 173–188 (1992).
18. P. Birch, S. Tan, R. Young, T. Koukoulas, F. Claret-Tournier, D. Budgett, and C. Chatwin, “Experimental implementation of a Wiener filter in a hybrid digital-optical correlator,” Opt. Lett. 26,
494–496 (2001).
19. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, (Wiley, New York, 1949).
20. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Addison-Wesley, Reading, Mass., 1993).
21. J. M. Blackledge, Quantitative Coherent Imaging (Academic, London, 1989).
22. H. Inbar and E. Marom, “A priori and adaptive Wiener filtering with joint transform correlators,” Opt. Lett. 20, 1050–1052 (1995).
23. E. Marom and H. Inbar, “New interpretations of Wiener filters for image recognition,” J. Opt. Soc. Am. A 13, 1325–1330 (1996).
24. F. R. Hansen and H. Elliott, “Image segmentation using simple Markov random field models,” Comput. Graph. Image Process. 20, 101–132 (1982).
25. J. W. Modestino and J. Zhang, “A Markov random field model-based approach to image interpretation, IEEE Trans. Pattern Anal. Mach. Intell. 14, 606–615 (1992).
26. C. S. Won and H. Derin, “Unsupervised segmentation of noisy and textured images using Markov random fields,” CVGIP: Graph. Models Image Process. 54, 308–328 (1992).
27. R. Chellappa and A. Jain, Markov Random Field-Theory and Applications (Academic, San Diego, Calif., 1993).
28. P. Réfrégier, F. Goudail, and T. Gaidon, “Optimal location of random targets in random background: random Markov fields modelization,” Opt. Commun. 128, 211–215 (1996).
29. G. R. Cross and A. K. Jain, “Markov random field texture models,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-5, 25–39 (1983).
30. B. V. K. Vijaya Kumar and L. Hassebrook, “Performance measures for correlation filters,” Appl. Opt. 29, 2997–3006 (1990).
31. J. Proakis and D. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications (Prentice-Hall, Englewood Cliffs, N.J., 1996).
32. J. H. McClellan, “Multidimensional spectral estimation,” Proc. IEEE 70, 1029–1039 (1982).
33. S. R. DeGraaf, “SAR imaging via Modern 2-D spectral estimation methods,” IEEE Trans. Image Process. 7, 729–761 (1998).
34. S. L. Marple, Digital Spectral Analysis with Applications Prentice-Hall, Englewood Cliffs, N.J., 1987).
35. J. W. Woods, “Two-dimensional discrete Markovian fields,” IEEE Trans. Inf. Theory IT-18, 232–240 (1972).
36. R. L. Kashyap and R. Chellappa, “Estimation and choice of neighbors in spatial-interaction models of images,” IEEE Trans. Inf. Theory IT-29, 60–72 (1983).
Dimension of module
Does dimension of a module (say, dimension of its support) have anything to do with the supremum of lengths of chains of prime submodules, as for rings? Let's restrict to finitely generated modules over a Noetherian ring. Prime submodules are defined analogously to primary submodules: a submodule $P$ in $M$ is prime if $P\neq M$ and $M/P$ has no zero divisors, i.e. $am\in P$ implies $m\in P$ or $a \in \mathrm{Ann}(M/P)$.
dimension-theory modules ac.commutative-algebra
meta: try to indicate which field of mathematics you're talking about as you begin using terms. For me, a module is more likely to be over a von Neumann algebra or over a tensor category than a
Noetherian ring. No one has a monopoly on modules anymore! You did explain which sense you meant, of course, but it took until the second sentence, and most of the terminology of the first sentence
doesn't even make sense until you've done so. – Scott Morrison♦ May 9 '10 at 19:16
So you are asking for the relation between the Krull dimension and the prime dimension of a module. I think the two dimensions are equal for multiplication modules. Would that be of interest? –
Gjergji Zaimi May 10 '10 at 0:25
1 Answer
Let $R$ be an integral domain; then for the module $R^n$ the maximal length of chains of prime submodules is much larger than its dimension (for $n\gg 0$).
Mathematics Tutors
Chandler, AZ 85226
Expert at Test Prep, Math, and Languages
...I help students improve their performance on all sections of the TOEFL. I tutor several sections of the ASVAB, including Word Knowledge, Paragraph Comprehension, Arithmetic Reasoning, and
Mathematics Knowledge. I have worked with many ASVAB students to help improve...
Offering 10+ subjects including algebra 1, algebra 2 and geometry
Central and local limit theorems applied to asymptotic enumeration II: multivariate generating functions
Results 1 - 10 of 111
- ICM 2002 VOL. III 1-3 , 2002
"... Combinatorial enumeration leads to counting generating functions presenting a wide variety of analytic types. Properties of generating functions at singularities encode valuable information
regarding asymptotic counting and limit probability distributions present in large random structures. "Sing ..."
Cited by 387 (11 self)
Combinatorial enumeration leads to counting generating functions presenting a wide variety of analytic types. Properties of generating functions at singularities encode valuable information regarding
asymptotic counting and limit probability distributions present in large random structures. "Singularity analysis" reviewed here provides constructive estimates that are applicable in several areas
of combinatorics. It constitutes a complex-analytic Tauberian procedure by which combinatorial constructions and asymptotic-probabilistic laws can be systematically related.
, 1992
"... . The average case analysis of algorithms can avail itself of the development of synthetic methods in combinatorial enumerations and in asymptotic analysis. Symbolic methods in combinatorial
analysis permit to express directly the counting generating functions of wide classes of combinatorial struct ..."
Cited by 271 (11 self)
. The average case analysis of algorithms can avail itself of the development of synthetic methods in combinatorial enumerations and in asymptotic analysis. Symbolic methods in combinatorial analysis
permit to express directly the counting generating functions of wide classes of combinatorial structures. Asymptotic methods based on complex analysis permit to extract directly coefficients of
structurally complicated generating functions without a need for explicit coefficient expansions. Three major groups of problems relative to algebraic equations, differential equations, and iteration
are presented. The range of applications includes formal languages, tree enumerations, comparison--based searching and sorting, digital structures, hashing and occupancy problems. These analytic
approaches allow an abstract discussion of asymptotic properties of combinatorial structures and schemas while opening the way for automatic analysis of whole classes of combinatorial algorithms.
- Handbook of Combinatorics , 1995
"... ..."
, 1998
"... Flajolet and Soria established several central limit theorems for the parameter "number of components" in a wide class of combinatorial structures. In this paper, we shall prove a simple theorem
which applies to characterize the convergence rates in their central limit theorems. This theorem is a ..."
Cited by 67 (8 self)
Flajolet and Soria established several central limit theorems for the parameter "number of components" in a wide class of combinatorial structures. In this paper, we shall prove a simple theorem
which applies to characterize the convergence rates in their central limit theorems. This theorem is also applicable to arithmetical functions. Moreover, asymptotic expressions are derived for
moments of integral order. Many examples from different applications are discussed.
- Algorithmica , 1997
"... Consider a given pattern H and a random text T generated by a Markovian source. We study the frequency of pattern occurrences in a random text when overlapping copies of the pattern are counted
separately. We present exact and asymptotic formulae for all moments (including the variance), and probabi ..."
Cited by 63 (24 self)
Consider a given pattern H and a random text T generated by a Markovian source. We study the frequency of pattern occurrences in a random text when overlapping copies of the pattern are counted
separately. We present exact and asymptotic formulae for all moments (including the variance), and probability of r pattern occurrences for three different regions of r, namely: (i) r = O(1), (ii)
central limit regime, and (iii) large deviations regime. In order to derive these results, we first construct some language expressions that characterize pattern occurrences which are later
translated into generating functions. Finally, we use analytical methods to extract asymptotic behaviors of the pattern frequency. Applications of these results include molecular biology, source
coding, synchronization, wireless communications, approximate pattern matching, game theory, and stock market analysis. These findings are of particular interest to information theory (e.g.,
second-order properties of the re...
, 1992
Cited by 55 (7 self)
An increasing tree is a labelled rooted tree in which labels along any branch from the root go in increasing order. Under various guises, such trees have surfaced as tree representations of
permutations, as data structures in computer science, and as probabilistic models in diverse applications. We present a unified generating function approach to the enumeration of parameters on such
trees. The counting generating functions for several basic parameters are shown to be related to a simple ordinary differential equation which is non linear and autonomous. Singularity analysis
applied to the intervening generating functions then permits to analyze asymptotically a number of parameters of the trees, like: root degree, number of leaves, path length, and level of nodes. In
this way it is found that various models share common features: path length is O(n log n), the distributions of node levels and number of leaves are asymptotically normal, etc.
, 1997
Cited by 55 (8 self)
This paper describes a systematic approach to the enumeration of "noncrossing" geometric configurations built on vertices of a convex n-gon in the plane. It relies on generating functions, symbolic
methods, singularity analysis, and singularity perturbation. A consequence is exact and asymptotic counting results for trees, forests, graphs, connected graphs, dissections, and partitions. Limit
laws of the Gaussian type are also established in this framework; they concern a variety of parameters like number of leaves in trees, number of components or edges in graphs, etc.
, 1999
Cited by 48 (4 self)
We present a complete analysis of the statistics of number of occurrences of a regular expression pattern in a random text. This covers "motifs" widely used in computational biology. Our approach is
based on: (i) a constructive approach to classical results in theoretical computer science (automata and formal language theory), in particular, the rationality of generating functions of regular
languages; (ii) analytic combinatorics that is used for deriving asymptotic properties from generating functions; (iii) computer algebra for determining generating functions explicitly, analysing
generating functions and extracting coefficients efficiently. We provide constructions for overlapping or non-overlapping matches of a regular expression. A companion implementation produces
multivariate generating functions for the statistics under study. A fast computation of Taylor coefficients of the generating functions then yields exact values of the moments with typical
application to random t...
- Random Structures & Algorithms, 2001
Cited by 46 (6 self)
A considerable number of asymptotic distributions arising in random combinatorics and analysis of algorithms are of the exponential-quadratic type, that is, Gaussian. We exhibit a class of
"universal" phenomena that are of the exponential-cubic type, corresponding to distributions that involve the Airy function. In this paper, such Airy phenomena are related to the coalescence of
saddle points and the confluence of singularities of generating functions. For about a dozen types of random planar maps, a common Airy distribution (equivalently, a stable law of exponent 3/2)
describes the sizes of cores and of largest (multi)connected components. Consequences include the analysis and fine optimization of random generation algorithms for multiply connected planar graphs.
Based on an extension of the singularity analysis framework suggested by the Airy case, the paper also presents a general classification of compositional schemas in analytic combinatorics.
- Journal of Combinatorics, 2000
Cited by 36 (2 self)
We derive the asymptotic expression for the number of labeled 2-connected planar graphs with respect to vertices and edges. We also show that almost all such graphs with n vertices contain many
copies of any fixed planar graph, and this implies that almost all such graphs have large automorphism groups.
OCR for page 209
Spectral methods for analyzing and visualizing networks: an introduction
Andrew J. Seary and William D. Richards
School of Communication, Simon Fraser University, Burnaby BC Canada V5A 1S6
seary@sfu.ca richards@sfu.ca

Abstract
Network analysis begins with data that describes the set of relationships among the members of a system. The goal of analysis is to obtain from the low-level relational data a higher-level description of the structure of the system which identifies various kinds of patterns in the set of relationships. These patterns will be based on the way individuals are related to other individuals in the network. Some approaches to network analysis look for clusters of individuals who are tightly connected to one another; some look for sets of individuals who have similar patterns of relations to the rest of the network. Other methods don't "look for" anything in particular; instead, they construct a continuous multidimensional representation of the network in which the coordinates of the individuals can be further analyzed to obtain a variety of kinds of information about them and their relation to the rest of the network. One approach to this is to choose a set of axes in the multidimensional space occupied by the network and rotate them so that the first axis points in the direction of the greatest variability in the data; the second axis, orthogonal to the first, points in the direction of greatest remaining variability, and so on. This set of axes is a coordinate system that can be used to describe the relative positions of the set of points in the data. Most of the variability in the locations of points will be accounted for by the first few dimensions of this coordinate system. The coordinates of the points along each axis will be an eigenvector, and the length of the projection will be an eigenvalue. The set of all eigenvalues is the spectrum of the network. Spectral methods (eigendecomposition) have been a part of graph theory for over a century. Network researchers have used spectral methods either implicitly or explicitly since the late 1960's, when computers became generally accessible in most universities. The eigenvalues of a network are intimately connected to important topological features such as maximum distance across the network (diameter), presence of cohesive clusters, long paths and bottlenecks, and how random the network is. The associated eigenvectors can be used as a natural coordinate system for graph visualization; they also provide methods for discovering clusters and other local features. When combined with other, easily obtained network statistics (e.g., node degree), they can be used to describe a variety of network properties, such as degree of robustness (i.e., tolerance to removal of selected nodes or links), and other structural properties, and the relationship of these properties to node or link attributes in large, complex, multivariate networks. We introduce three types of spectral analysis for graphs and describe some of their mathematical properties. We discuss the strengths and weaknesses of each type and show how they can be used to understand network structure. These discussions are accompanied by interactive graphical displays of small (n=50) and moderately large (n=5000) networks. Throughout, we give special attention to sparse matrix methods which allow rapid, efficient storage and analysis of large networks. We briefly describe algorithms and analytic strategies that allow spectral analysis and identification of clusters in very large networks (n > 1,000,000).

DYNAMIC SOCIAL NETWORK MODELING AND ANALYSIS 209
Introduction
A standard method in statistics for handling multivariate data is to find the directions of maximum variability, usually of variance-covariance or correlation matrices. These directions are called Principal Coordinates or eigenvectors, while the relative importance of each direction is represented by numbers called eigenvalues (Collide, 1986). Finding this coordinate system may be accomplished by a series of rotations (although this is not the most efficient method) that end up pointing along the direction of maximum variability, with the second-largest variability at right angles, and so on. As a result, the data matrix is reduced to a diagonal matrix, with diagonal entries corresponding to the importance (eigenvalue) of each direction (eigenvector). The collection of all eigenvalues is called the spectrum. One goal is to reduce the problem so that only the most important dimensions (those with the largest eigenvalues) contain most of the variability. Implicit in these methods (variance-covariance or correlation) is that some kind of "expected" or "background" signal has been subtracted: in the case of variances, these would be the means of each variable in the original data matrix.
To find these eigenvectors and eigenvalues we need to solve the eigenvalue equation:

Ee = εe

(we will derive this equation below), which states that along the direction represented by vector e, multiplication by data matrix E does not change the direction, but only the length (where ε may be any number, including 0). The related pair (ε, e) is called an eigenpair of matrix E.
A network or graph G(V,E) is a set of nodes V (points, vertices) connected by a set of links E (lines, edges). We will consider networks that are binary (edges have logical value 1 if an edge exists, 0 if not), symmetric (an edge from node i to j implies an edge from node j to i), connected (there is a set of edges connecting any two nodes, consequently only one component), and without self-loops (no edges between i and i). We may represent such a network as the adjacency matrix A = A(G), with a 1 in row i, column j if i is connected to j, and 0 otherwise. We will not directly discuss weighted networks, where the entries for an edge may be a number other than 1, although most of the results that follow generalize to such networks. For many "real world" networks, A consists mostly of 0's: it is sparse. We will discuss efficient ways of storing and manipulating A using sparse methods. Associated with A is the degree distribution D, a diagonal matrix with the row-sums of A along the diagonal and 0's elsewhere. D describes how many connections each node has. We call the number of nodes, m, the order of G; it is equal to the number of rows or columns of A. We represent the number of edges by |E|. We will also introduce two other matrices related to A:
· the Laplacian: L = D − A
· the Normal: N = D^{-1}A
and will discuss the properties of the spectrum and associated eigenvectors of A, L, and N.¹

¹ We have introduced some notation which will be followed throughout:
· matrices are represented by bold capitals: D
· (column-)vectors are represented by bold lower case: e
· inner products of vectors are represented as e^T e = n (a scalar), where e^T is the transpose of e; outer products of vectors are represented as e e^T = M (a matrix)
· eigenvalues are represented by Greek letters, usually with some relationship to the latin letters representing a matrix and an eigenvector. E.g., (α_1, a_1) is an eigenpair of adjacency matrix A.
Distances and diameter:
One important property of a network is the set of distances between any pair of nodes i and j; that is, the least number of links between i and j. One way of calculating this is to take powers of the matrix A as follows:
1st power: A = A by definition gives a matrix of all pairs of nodes linked to each other.
2nd power: AA has a non-zero entry in row i, column j if j is two steps away from i. Since i is 2 steps away from itself, the diagonal (i,i) entry counts the number of these 2-steps.
3rd power: AAA has a non-zero entry in row i, column j if j is 3 steps away from i.
Eventually, some power of A, say A^N, will consist of entirely non-zero entries, meaning every node has been reached from every other node. We call N the diameter of the graph: the longest possible path between any pair of nodes. This is a very inefficient way of calculating the diameter of a graph for two reasons: 1) calculating each power of A requires m³ calculations; 2) as more nodes are reached, the powers of A become less sparse until eventually no 0's remain: the amount of storage required approaches m².
If we continue taking powers of A, an interesting thing happens: all the columns become multiples of each other. Taking higher powers of A corresponds to taking longer "walks" along the edges, and we can interpret this as a "loss of memory" about where we started from (Lovász, 1995). We will see why this happens soon, as well as other examples of this phenomenon.
We can approach this problem another way through the spectral decomposition of A (Parlett, 1980). Let α_i be the eigenvalues of A and a_i the corresponding eigenvectors, with α_0 ≥ α_1 ≥ α_2 ≥ ... ≥ α_{m-1} and |a_i| = 1 (the eigenvectors are normalized to length 1). Then the spectral decomposition of A is:

(1) A = Σ_i α_i a_i a_i^T

where each a_i a_i^T is an m×m matrix defining a 1-dimensional subspace. These projectors satisfy (a_i a_i^T)(a_j a_j^T) = a_i a_i^T if i = j, and 0 if i ≠ j, therefore

A^N = Σ_i (α_i)^N a_i a_i^T

for any power N, and this gives an easy way of calculating powers of A, assuming we have already calculated all the eigenpairs (α_i, a_i).
Another important property of the spectral decomposition is the approximation property. If we take the first k of the eigenpairs (α_i, a_i), then A_k = Σ_{i=0}^{k-1} α_i a_i a_i^T is the best least-squares approximation to A, meaning that we have captured most of the variability of A in the important eigenpairs.
For example, we can estimate an upper bound for the diameter using the second-largest eigenvalue α_1 (Chung, 1989):

Diam(G) ≤ ⌈ln(m−1) / ln(k/α_1)⌉

Unfortunately, this bound applies only to k-regular networks (all degrees = k = α_0). We will get better bounds for general networks using different spectra. Nevertheless, this bound does show one relationship between the spectrum and an important property like diameter. In particular, when k/α_1 is large (there is a large gap between the first two eigenvalues), the upper bound on diameter is small, so all distances are short.
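Equation (1) and the resulting formula for A^N can be checked numerically; a minimal sketch (the 5-cycle is an arbitrary test graph of my own choosing):

```python
import numpy as np

# Adjacency matrix of a 5-cycle (example graph).
m = 5
A = np.zeros((m, m))
for i in range(m):
    A[i, (i + 1) % m] = A[(i + 1) % m, i] = 1.0

# Spectral decomposition A = sum_i alpha_i a_i a_i^T; eigh is the
# symmetric eigensolver and returns orthonormal eigenvectors.
alpha, a = np.linalg.eigh(A)

# Rebuild A^4 from the eigenpairs: A^N = sum_i alpha_i^N a_i a_i^T.
Npow = 4
A_from_spectrum = sum(alpha[i] ** Npow * np.outer(a[:, i], a[:, i])
                      for i in range(m))

print(np.allclose(A_from_spectrum, np.linalg.matrix_power(A, Npow)))
```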
The Power Method and Sparse methods:
Using (1) and the eigenpair (α_0, a_0), we can see why taking large enough powers of A results in columns that are multiples of one another: in fact, multiples of eigenvector a_0. This is the basis of the Power Method (Hotelling, 1933) for finding eigenpairs. We have mentioned that taking powers of matrix A is not efficient, so we introduce a representation and methods that are far more efficient.
A very simple way of storing and manipulating a sparse matrix A is to use a link list representation, which stores only the non-zero entries of A as a list of pairs (i, j), one for each link in A. We could then calculate the diameter of A by starting at i = 1 and following each link until we have reached every node, repeating for i = 2, and so on, and saving the maximum number of steps. This requires about m|E| operations (Aho, et al., 1987) and a very moderate amount of storage equal to 2|E|.
We can now use (1) to devise a very efficient version of the Power Method for finding the largest eigenpair. Starting with some random vector p normalized to length 1:
Repeat p' ← Ap, q ← p, p ← p' until p is no longer changing in direction.
Then the largest eigenpair of A is (p/q, p). There are some bookkeeping details: Ap uses the link list representation, and the entries of p' must be adjusted in size after each multiplication (for details see Richards & Seary, 2000), but the method will always work for any matrix without repeated eigenvalues, which is generally the case for social networks. If we want more eigenpairs, we can iterate with p' ← Ap − α_0 a_0 a_0^T p to get the second, and with p' ← Ap − α_0 a_0 a_0^T p − α_1 a_1 a_1^T p to get the third, and so on, without destroying sparsity. However, we must store the (α_i, a_i) eigenpairs somewhere; the procedure is subject to loss of precision on a computer; and the iterations may converge slowly if α_1/α_0 is close to 1. There are better methods, such as Lanczos iteration (Parlett et al., 1982), which converge very rapidly and do not have problems with loss of precision.

Some network invariants:
Some properties of A remain unchanged (invariant) under the series of orthogonal rotations that diagonalize A (eigendecomposition). We will relate these to some network invariants of A. The eigenvalues of any symmetric matrix M are the roots of the characteristic polynomial:

x^m + c_1 x^{m-1} + c_2 x^{m-2} + c_3 x^{m-3} + ...

Therefore:
c_1 = α_0 + α_1 + ... + α_{m-1} (the sum over all eigenvalues);
c_2 = α_0 α_1 + α_0 α_2 + ... + α_{m-3} α_{m-1} + α_{m-2} α_{m-1} (the sum over all pairs);
c_3 = α_0 α_1 α_2 + α_0 α_1 α_3 + ... + α_{m-3} α_{m-2} α_{m-1} (the sum over all triples).
The trace of a matrix is the sum of the entries on the diagonal, and this is invariant under orthogonal rotations. Since A has a trace of 0 (no self-loops), c_1 = 0. The sum of product pairs is equal to minus the number of edges, so that c_2 = −|E|. Most important is c_3, which is twice the number of triangles in G. Higher coefficients are related to cycles of length 4, 5, ..., although they also contain contributions from shorter cycles (Biggs, 1993). It appears that the eigenvalues of A encode information about the cycles of a network as well as its diameter. We will see related results for the other two spectra.
A bipartite network is one that can be partitioned so that the nodes in one part have connections only to nodes in the other part, and vice-versa. Such a network cannot have odd cycles (of any length) and hence has no triangles. This means all the odd coefficients c_{2k+1} must be 0. It can also be shown (Biggs, 1993) that, in bipartite networks, the eigenvalues occur in pairs with opposite signs, so that α_0 = −α_{m-1}, and so on. Bipartite networks can be used to represent two-mode networks (Wasserman & Faust, 1994), for example the network relating people and the events they attend. These results scratch the surface of the information contained in the spectrum of A for k-regular graphs. For general graphs, we need to turn to other spectra.²

The Laplacian spectrum:
The Laplacian of a network was originally discovered by Kirchhoff (1847). There are a number of definitions and derivations, perhaps the most revealing due to Hall (1970), who was interested in situating the nodes of any network so that total edge lengths are minimized. He considers the problem of finding the minimum of the weighted sum

(2) z = 1/2 Σ_ij (x_i − x_j)² a_ij

where the a_ij are the elements of the adjacency matrix A. The sum is over all pairs of squared distances between nodes which are connected, so the solution should result in nodes with large numbers of inter-connections being clustered together. Equation (2) can be re-written as:

z = 1/2 Σ_ij (x_i² − 2 x_i x_j + x_j²) a_ij
  = 1/2 Σ_ij x_i² a_ij − Σ_ij x_i x_j a_ij + 1/2 Σ_ij x_j² a_ij
  = X^T D X − X^T A X = X^T L X

where L = D − A is the Laplacian (the first and third sums each give X^T D X / 2, since Σ_j a_ij = deg(i)). In addition to this, Hall supplies the condition that X^T X = 1, i.e., the distances are normalized. Using Lagrange multipliers (a standard method for solving problems with constraints), we have

z = X^T L X − λ X^T X

and to minimize this expression we take derivatives with respect to X, to give

(3) LX − λX = 0, or LX = λX

which is the eigenvalue equation. It is not hard to show that λ_0 = 0 with l_0 = 1, the constant (or trivial) eigenvector, and that 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{m-1}. For L, the most "important" eigenvectors belong to the smallest eigenvalues (Pothen, et al., 1990). It turns out that the discrete network Laplacian shares many important properties with the well-known continuous Laplacian operator ∇² of mathematical physics. This has led to an explosion of research and results, mostly concerned with λ_1 (Been, 1991).
The definition of L shows that there is no loss of sparsity (except for the diagonal), and that the sparse methods mentioned earlier can be applied to find all or some of the eigenpairs. The requirement that we must find the smallest eigenpairs is easily overcome by subtracting a suitably large constant from the diagonal of −L (which shifts the eigenvalues by that constant without changing the eigenvectors). This guarantees that the first eigenpairs returned by the Power Method or Lanczos iteration are associated with the smallest eigenvalues of L.

² Similar results may be obtained from the moments of the eigenvalue distribution (Farkas, et al., 2001; Goh, et al., 2001)
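The diagonal-shift trick just described can be sketched as follows; the shift constant c (twice the maximum degree, via Gershgorin's bound) is my own choice, and any c at least as large as the top Laplacian eigenvalue works:

```python
import numpy as np

# Laplacian of a 4-node path graph 0-1-2-3 (example).
m = 4
A = np.zeros((m, m))
for i in range(m - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# The largest eigenpairs of cI - L are (c - lambda_0, l_0),
# (c - lambda_1, l_1), ...: the shift reverses the ordering, so
# power-method-style solvers return the SMALLEST eigenvalues of L
# first, with the eigenvectors unchanged.
c = 2 * A.sum(axis=1).max()          # >= lambda_max by Gershgorin
M = c * np.eye(m) - L

mu, vecs = np.linalg.eigh(M)         # ascending order
lam = c - mu[::-1]                   # L's spectrum, recovered ascending

print(np.round(lam, 6))              # starts at lambda_0 = 0
```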
Some of the coefficients of the characteristic polynomial of L have an easy interpretation:
c_1 = Trace(L) = 2|E| (i.e., twice the number of edges)
c_m = λ_0 λ_1 ··· λ_{m-1} = 0 (since 0 is an eigenvalue)
c_{m-1} = λ_0 λ_1 ··· λ_{m-2} + λ_0 λ_2 ··· λ_{m-1} + ... + λ_1 λ_2 ··· λ_{m-1} = λ_1 λ_2 ··· λ_{m-1} (every other term contains λ_0 = 0), which is m times the number of spanning trees of G (this is the Matrix-tree theorem of Kirchhoff, 1847).
In general, the eigenvalues of L encode information about the tree-structure of G (Cvetković, et al., 1995). The spectrum of L contains a 0 for every connected component. There is no such direct way to find the number of components of a network from the spectrum of A. There is also a bound on diameter related to λ_{m-1} and λ_1 for general graphs from Chung, et al. (1994):

Diam(G) ≤ ⌈cosh^{-1}(m−1) / cosh^{-1}((λ_{m-1} + λ_1)/(λ_{m-1} − λ_1))⌉

Intuitively, if λ_1 is close to 0, the graph is almost disconnected, while if λ_1 is well above λ_0 = 0 (an eigenvalue gap), the diameter is small.

The Normal spectrum:
We can repeat the same argument as Hall to derive the Normal spectrum, with the normalization constraint that X^T D X = 1 (Seary & Richards, 1995), to give:

LX = μDX, or, assuming that D can be inverted, D^{-1}LX = μX
(D^{-1}(D − A))X = (I − D^{-1}A)X = μX

where I is an identity matrix of the proper size. In fact, we usually take the defining equation to be

(4) D^{-1}AX = NX = νX

with D^{-1}A = N and ν = 1 − μ, since adding an identity matrix shifts the eigenvalues by 1 without changing the eigenvectors. Note that for connected networks D not only has an inverse, it also has an inverse square root D^{-1/2}. The Normal matrix N has a number of interesting properties:
1) It is a generalized Laplacian (with a different definition of orthonormality)
2) It therefore has a trivial eigenvector n_0 with eigenvalue ν_0 = 1
3) The spectrum of N is bounded by 1 = ν_0 ≥ ν_1 ≥ ... ≥ ν_{m-1} ≥ −1
4) The rows of N sum to 1 (it is a stochastic matrix)
5) The spectrum of N contains a 1 for every connected component
6) The eigenvalue −1 only occurs if G is bipartite, in which case all eigenvalues occur in ± pairs
7) N has been rediscovered a number of times: generalized or combinatorial Laplacian (Dodziuk & Kendall, 1985; Chung, 1995); Q-spectrum (Cvetković, et al., 1995)
The descriptive name Normal is suggested by points 2) to 5), although it is not standard terminology. It is easy to see that there is no loss of sparsity in the definition of N: each 1 in row i is simply replaced by 1/deg(i) and the 0's are unchanged, but N is no longer symmetric. However, the matrix D^{-1/2} A D^{-1/2} is similar to N (it has the same eigenvalues), and we can apply the sparse methods described above to solve for the eigenpairs (ν_i, e_i) and then calculate D^{-1/2} e_i = n_i to get the corresponding eigenvectors without losing precision or sparsity. For N, the coefficients of the characteristic equation are harder to interpret except in special cases, but the eigenvalues encode information about both the cycle and tree structure of G
(Cvetković, et al., 1995). Some examples:
c_1 = Trace(N) = 0
c_3 = c_{2k+1} = 0 (no triangles or other odd cycles) if G is bipartite
(Π_i deg(i) / Σ_i deg(i)) · Π_{i>0}(1 − ν_i) = the number of spanning trees of G
In the last example we see how details of the degree distribution are also encoded in the spectrum. Fan Chung uses these eigenvalues to derive two remarkable bounds (see Chung, 1995 for details). The first bounds Diam(G) in terms of ν_1 and the volumes vol X, where vol X is the total number of edges in a subset of nodes X ⊂ V and X̄ is V − X. Chung's first bound applies to any graph (regular or not) and is much tighter than the previous bound (for the Laplacian). Intuitively, if ν_1 is close to 1, the network has a long path or is almost disconnected, and if ν_1 << 1, the diameter is small. Chung's second bound describes the distance between subsets for any number k of subsets, based on the kth eigenvalue. The result suggests that we can use the eigenvalues to estimate how many subsets we should look for in a network without forcing distances that are too short (and hence too many subsets).

Interpreting the Spectra:
Many important properties of the spectrum of A(G), where G is k-regular, are also true of L(G) and N(G) even when G is not regular. Another way of looking at this is that these properties of A hold because the spectrum of A is simply related to those of L and N for regular graphs: α_i = k − λ_i = k ν_i for k-regular graphs, with the corresponding eigenvectors being identical. In other words, both L and N are more natural functions of graphs. This point of view is shared by the authors of recent papers on the Laplacian (Grone, et al., 1990, 1994). Mohar (1991) presents a collection of important results relating to the spectrum and eigenvectors of L. Chung (1995) has written several papers and a book about N.
We return to the goal expressed in the opening paragraph. We would like to find the most important global features of a network, after accounting for what could be considered "expected" for a random network with the same number of nodes and edges. The biggest problem with interpreting the spectrum of A is the lack of an "expected" eigenvector (again, except for k-regular graphs). There is a lot of literature on the so-called "main eigenvectors" of A: those which have a projection on the all-ones vector (e.g., Harary & Schwenk, 1979), but the results remain hard to interpret (Cvetković and Rowlinson, 1990). Both L and N have an "expected" all-ones eigenvector for which the interpretation is clear (though different in each case).
To interpret L, we turn to physical analogy and the relation to ∇², as discussed by Friedman (1993). He considers a graph G as a discrete manifold (surface) subject to "free" boundary conditions.³ For illustration, consider ∇² as the spatial part of the wave equation (Fisher, 1966,

³ No external constraints need to be satisfied.
Chavel, 1984). Think of a fishing net subject to no forces. It just lies there at 0 energy with nothing happening. As we subject it to regular oscillations, the net vibrates with the most highly-connected regions moving together. Friedman shows how the Hilbert Nodal theorem (Courant & Hilbert, 1965) can be applied to a discrete network, which generalizes Fiedler's result (described below): the kth eigenvector divides the network into no more than k+1 disconnected components.⁴
To interpret N, we have a number of choices:
1) N is the Laplacian for a network of nodes, each weighted by its degree
2) N is the transition matrix of a Markov Chain for a simple random walk on the nodes
3) N is similar to the χ² matrix, thus treating A as a contingency table
The first leads to a physical analogy similar to that for L, so we consider 2) and 3).

The Normal spectrum and Random walks:
Specifically, we consider nearest-neighbour random walks on a network (Lawler & Sokal, 1988). Define the probability-transition matrix for such a walk as N = D^{-1}A. Then the probability of moving from vertex i to any vertex adjacent to i is uniform. N is a row-stochastic matrix, and the random walk is a Markov chain. In this case 1 (the trivial all-ones eigenvector) is related to the stationary state of the Markov Chain: the probability is 1 = ν_0 that such a probability distribution is eventually reached.⁵ The vector p_0 satisfying p_0 = N^T p_0 is the stationary state, and it is proportional to the degree distribution. The second eigenpair (ν_1, n_1) has become important in the analysis of rapidly mixing Markov chains, those that reach the stationary state quickly (Sinclair, 1995). From the previous discussion it should not be surprising that these are associated with ν_1 << 1 (a large eigenvalue gap), which means that the walk quickly "forgets" where it started.⁶ Moreover, when ν_1 is close to 1, there must be parts of the network that are hard to reach in a random walk, implying long paths or a nearly disconnected network.

Normal spectrum and χ²:
The χ² matrix is defined in terms of the row and column marginals (sums). A typical element is

(Observed_ij − Expected_ij)² / Expected_ij

which is not sparse. For a sparse network A, consider χ, which has a typical element

(Observed_ij − Expected_ij) / √Expected_ij

where

Expected_ij = deg(i) deg(j) / Σ_i deg(i)

⁴ This interpretation of the eigenvectors may be even more useful when considering ∇² as the spatial part of the Diffusion equation (for example, when considering diffusion of innovation or disease).
⁵ A problem can arise with bipartite graphs: p_0 does not exist, since the chain oscillates between the two sets of vertices (period = 2). Probabilists deal with this by a simple trick: divide N by 2 and add a self-loop of probability 1/2 to every vertex: N' = I/2 + N/2. The eigenvalues of N' are then 1 = ν'_0 ≥ ν'_1 ≥ ... ≥ ν'_{m-1} ≥ 0.
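Both interpretations of N used in this section can be checked numerically; a small sketch (the example graph is my own). It verifies that the degree distribution is stationary for the random walk, and that the symmetric matrix D^{-1/2} A D^{-1/2} has the same eigenvalues as N:

```python
import numpy as np

# Example graph: triangle 0-1-2 plus a pendant node 3 on node 2.
m = 4
A = np.zeros((m, m))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)

N = A / deg[:, None]                 # D^-1 A: row-stochastic

# Stationary state: p0 proportional to the degree distribution
# satisfies p0 = N^T p0.
p0 = deg / deg.sum()
print(np.allclose(N.T @ p0, p0))

# D^-1/2 A D^-1/2 is symmetric and similar to N: same eigenvalues,
# bounded between -1 and nu_0 = 1.
S = A / np.sqrt(np.outer(deg, deg))
nu = np.linalg.eigvalsh(S)
print(np.round(nu, 6))
```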
We can write X as the difference of two terms, so that non-zero elements of A become A_ij/√(Expected_ij) while the 0 terms are unaffected, maintaining sparsity. The second term corresponds to the trivial eigenvector, which can be dealt with separately. In matrix notation the sparse term is X = D^(-1/2) A D^(-1/2), which has eigenpairs (v_i, D^(1/2) n_i). Thus we have

(5)    χ² = T Σ_{i≥1} v_i²    (omitting the trivial v_0 = 1 term; T = Σ_k deg(k) is the grand total)

This equation shows how much each dimension contributes to χ², which is a measure of dependence between rows and columns. In this interpretation, if v_1 is small (v_1 << v_0 = 1), then χ² is also small: there is no relation between rows and columns of A, and so there is no "signal" above the expected "background". If v_1 is close to 1, then χ² will be large and there is a relation between rows and columns of A, with the first eigenvector pointing in the direction of the maximum variability in χ². If v_2, v_3, ..., v_k are also large, we need k+1 eigenvectors to describe the patterns in the χ² matrix. With (5) we can tell how many eigenvectors we need to explain most of the χ² of the network.[7]

Compositions

The Kronecker product of two binary matrices A_1 and A_2 makes a copy of A_1 for each 1 in A_2. It is well-known that for two matrices A_1 and A_2 of order m_1 and m_2 the eigenpairs of the Kronecker product A_1 ⊗ A_2 behave well (West, 1996): if A_1 has eigenpairs (α_i, a_i) and A_2 has eigenpairs (β_j, b_j), then A_1 ⊗ A_2 has eigenpairs ({α_i × β_j}, {a_i ⊗ b_j}). It is also well-known that A_1 and A_2 behave well under Cartesian sum: A_1 ⊕ A_2 has eigenpairs ({α_i + β_j}, {a_i ⊗ b_j}).
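The composition rules can be checked numerically; a small NumPy sketch (the two path graphs are arbitrary choices):

```python
import numpy as np

# Two small symmetric "graphs": paths P2 and P3 (arbitrary examples).
A1 = np.array([[0, 1], [1, 0]], dtype=float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

a = np.linalg.eigvalsh(A1)          # eigenvalues alpha_i (ascending)
b = np.linalg.eigvalsh(A2)          # eigenvalues beta_j  (ascending)

# Kronecker product: eigenvalues are all products alpha_i * beta_j.
kron_eigs = np.linalg.eigvalsh(np.kron(A1, A2))
products = np.sort(np.outer(a, b).ravel())
assert np.allclose(kron_eigs, products)

# Cartesian sum A1 (+) A2 = A1 x I + I x A2: eigenvalues are all sums.
cart = np.kron(A1, np.eye(3)) + np.kron(np.eye(2), A2)
cart_eigs = np.linalg.eigvalsh(cart)
sums = np.sort(np.add.outer(a, b).ravel())
assert np.allclose(cart_eigs, sums)
```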
It appears that the behaviour under Kronecker product explains why both the Adjacency and Normal eigenvectors are good at detecting both on- and off-diagonal blocks (clusters of edges).
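As an illustrative sketch (NumPy; the two-clique toy graph is invented), the signs of the leading non-trivial eigenvector of N pick out diagonal blocks, and the χ² identity of equation (5) can be verified at the same time:

```python
import numpy as np

# Toy graph: two 3-cliques joined by one edge (hypothetical example).
B = np.ones((3, 3)) - np.eye(3)
A = np.zeros((6, 6))
A[:3, :3] = B
A[3:, 3:] = B
A[2, 3] = A[3, 2] = 1.0

deg = A.sum(axis=1)
T = deg.sum()                                  # grand total
Dm12 = np.diag(1.0 / np.sqrt(deg))
X = Dm12 @ A @ Dm12                            # D^(-1/2) A D^(-1/2)
vals, vecs = np.linalg.eigh(X)                 # ascending order

# chi^2 from the contingency-table formula ...
E = np.outer(deg, deg) / T                     # expected values
chi2 = ((A - E) ** 2 / E).sum()

# ... equals T times the sum of squared non-trivial eigenvalues
# (the trivial eigenvalue 1 is the largest, i.e. the last entry).
assert np.isclose(chi2, T * (vals[:-1] ** 2).sum())

# Signs of the leading non-trivial eigenvector of N separate the
# blocks: n1 = D^(-1/2) x1, with x1 the eigenvector of X for the
# second-largest eigenvalue. Cliques {0,1,2} and {3,4,5} get
# opposite signs.
n1 = Dm12 @ vecs[:, -2]
```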
Visualization

The Laplacian L can provide good visual representations of graphs which are Cartesian products (such as grids and hypercubes), while N can provide good visual representations of graphs which are Kronecker products (such as graphs consisting of blocks). The reasons for this are suggested above and have mostly to do with the behaviour of eigenpairs which are sums and products with 0 and 1, respectively. For graphs that are not k-regular, eigenpairs of A do not provide such good representations since, in general, there is no constant (expected, trivial) eigenvector to combine with.

Another way of describing these results is to consider the relationship between the eigenvector components for a node and those it is connected to (Seary & Richards, 1999). It is evident from the definition of eigendecomposition that (where "u~v" means "u is connected to v"):

(6)    a_i(u) = Σ_{u~v} a_i(v) / α_i                 for eigenpair i of A
(7)    l_i(u) = Σ_{u~v} l_i(v) / (deg(u) - λ_i)      for eigenpair i of L
(8)    n_i(u) = Σ_{u~v} n_i(v) / (v_i × deg(u))      for eigenpair i of N

Note that A has no control for node degree. Consider the effect for "important" eigenpairs (α_1 ≈ k ≥ 1, λ_1 ≈ 0 and v_1 ≈ 1): when deg(u) is small, a_1(u) will be folded toward the origin, while l_1(u) and n_1(u) will sit further away from the origin than their neighbours. This effect makes it difficult to interpret visual representations based on A, except for k-regular graphs where all three spectra are essentially the same (Fig. 1). The equation for n_i shows that for v_i near 1, each node is approximately at the centroid of those it is connected to. The exact difference from the centroid for node u of eigenvector n_i is:

    n_i(u) - Σ_{u~v} n_i(v) / deg(u) = (1 - v_i) n_i(u)

For important eigenvalues v_i near 1, this produces very good visualization properties.
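The centroid relation can be verified directly; a NumPy sketch on an arbitrary small graph:

```python
import numpy as np

# Arbitrary small connected graph.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)

# Eigenpairs of N = D^-1 A via the symmetric matrix D^(-1/2) A D^(-1/2).
Dm12 = np.diag(1.0 / np.sqrt(deg))
vals, vecs = np.linalg.eigh(Dm12 @ A @ Dm12)
v_i = vals[-2]                       # second-largest eigenvalue of N
n_i = Dm12 @ vecs[:, -2]             # corresponding eigenvector of N

# Mean of each node's neighbours' coordinates (the centroid):
centroid = (A @ n_i) / deg

# n_i(u) - centroid(u) = (1 - v_i) * n_i(u)  for every node u.
assert np.allclose(n_i - centroid, (1.0 - v_i) * n_i)
```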
In addition, the eigenvector representation may be combined with derived properties such as betweenness (Freeman, 1979) to produce very helpful displays of large networks (Brandes et al., 2001).
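A minimal numerical check of the visualization claims (NumPy; the path graph is chosen for illustration, not taken from the text): the first non-trivial Laplacian eigenvector lays a path out monotonically, which is why Cartesian products such as grids draw well (cf. Figure 1):

```python
import numpy as np

# Path graph P6 and its Laplacian L = D - A.
m = 6
A = np.zeros((m, m))
for i in range(m - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)
# vals[0] ~ 0 with the constant (trivial) eigenvector; l1 = vecs[:, 1]
# is the first non-trivial eigenvector.
l1 = vecs[:, 1]

# For a path, l1 orders the nodes monotonically along the path,
# giving a faithful 1-D "drawing" of the graph (up to overall sign).
diffs = np.diff(l1)
assert np.all(diffs > 0) or np.all(diffs < 0)
```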
Interpreting the eigenvectors

1. Partitions

Powers (1988) and others have shown how eigenvectors of A can be used to find partitions of highly connected subsets (clusters) of nodes, but these methods are not as general or as clear as those derived from L or N. The first non-trivial eigenvector l_1 of L is the subject of an extensive literature (Lubotzky, 1994; Alon & Millman, 1985). Fiedler (1973) first suggested that the eigenvector l_1 associated with the second-smallest eigenvalue λ_1 could be used to solve the min-cut problem: separate the network into two approximately equal sets of nodes with the fewest number of connections between them, based on the signs of the components of l_1.[8] In fact, more recent derivations of L use the min-cut property as a starting point (Walshaw, et al., 1995) and the results are used to partition compute-intensive problems into sub-processes with minimal inter-process communication (Pothen, et al., 1990). This technique is called Recursive Spectral Bisection (Simon, 1991).

[8] Hagen also uses deviations from the median.

218    DYNAMIC SOCIAL NETWORK MODELING AND ANALYSIS

Other researchers have used l_2, l_3
and higher eigenvectors to produce multi-way partitions of networks (Hendrickson & Leland, 1995). The graph bisection problem (Mohar & Poljak, 1991) is to find two nearly equal-sized subsets V_1, V_2 ⊂ V such that cut(V_1, V_2) = Σ A_ij is minimized, where i ∈ V_1, j ∈ V_2 (i.e. nodes in V_1 and V_2 have few connections to each other). This problem is known to be NP-hard (Garey & Johnson, 1979), but a good approximation is given by the signs of l_1 (Walshaw & Berzins, 1995). This gives two sets of nodes of roughly the same size, but has no control for the number of edges in each part, and so any clustering of nodes is a side-effect of the partition. However, we can add an additional constraint that the number of edges in each part also be roughly equal by weighting the node sets by their total degrees. This is exactly what a partition based on n_1 from N gives us, since n_1 points in the direction of maximum variability in χ² (Greenacre, 1984).[9] Similarly, further partitions based on n_2, n_3, ... will also produce sets of nodes with a large number of edges in common (as long as v_2, v_3, ... make significant contributions to χ²). Partitions based on positive eigenvalues will produce blocks on the diagonal of A of edges associated with each set of nodes, while those based on negative eigenvalues produce nearly bipartite off-diagonal blocks (which occur in pairs if the network is symmetric) (Seary & Richards, 1995).

2. Clustering

Ideally, the important eigenvectors should be at least bimodal to induce clustering based on sign-partitions, and often they are multi-modal (Hagen, 1992), suggesting that standard clustering methods can be used on the coordinates of these vectors. Equations (7) and (8) show that L and N place nodes approximately at the centroids of their neighbours. For N, the distances are actually measured in χ² space, meaning that nodes with very similar patterns of connections will be close together (Benzecri, 1992). This clustering happens with either positive or negative eigenvalues (on- or off-diagonal). The latter are important in nearly bipartite networks with few triangles (Fig. 3).

3. Problems

Farkas, et al. (2001) and Goh, et al.,
(2001) report that the important eigenvectors of A are very localized on nodes of high degree, and suggest that this effect may be used to distinguish certain types of networks. This effect does not occur for L or N (Fig. 2), since each includes some control for degree, and so far no similar results for distinguishing network types have been reported for these spectra. The biggest problem for L and N is their sensitivity to "long paths", especially to pendant trees attached to the main body of the network (Seary & Richards, 2000). For N, these may be interpreted as nodes that are hard to reach (distant) in a random walk. For long paths internal to a network, this effect is actually an advantage, since these cycles are detected as "locally bipartite" and emphasized in important eigenvectors. Nodes on such paths can have a large effect on global properties such as diameter (Fig. 3-4).

Two-mode networks

Two-mode networks mix two different kinds of nodes and connections. A
simple example is an affiliation network, such as people and the events they attend. We could be interested in finding sets of people with events in common (or, equivalently, sets of events attended by the same people): this is an example of co-clustering. Affiliation networks can be represented by bipartite graphs, for which A and N

[9] See Dhillon (2001) for a formal derivation and proof.
are most suited, since they have symmetric spectra for these (the eigenvalues occur in pairs with opposite sign). Because of this we don't need the entire bipartite matrix: we can work with the rectangular representation, and infer the missing parts of the eigendecomposition. If we assume m_1 people and m_2 events, the resulting eigenvectors consist of m_1 components for people followed by m_2 components for events. The resulting blocks will be strictly off-diagonal, and once again the eigenvectors of N provide a superior solution by maximizing χ². In fact, this solution is identical to that provided by Correspondence Analysis, a statistical technique for finding patterns in 2-mode data (Benzecri, 1992).

Partial Iteration

For large networks, it is not necessary or desirable to calculate the entire eigendecomposition. For very large networks, it may not be possible in terms of time and space to calculate even a few eigenpairs. Nevertheless, it is possible to get at a large amount of the global and local network structure by partially iterating using the Power Method. A few iterations of N = D^-1 A, with each iteration placing nodes at the means of their neighbours, will produce a mixture of the most important eigenvectors. Consider the spectral representation N = Σ_i v_i n_i D n_i^T. We know that (n_i D n_i^T)^K = n_i D n_i^T for all K, so N^K = Σ_i (v_i)^K n_i D n_i^T and the contributions of eigenvectors with small v quickly drop out as (v_i)^K approaches 0. This means that N^K is dominated by the dimensions with v_i near 1. We start with a random vector, and quickly (6-10 iterations) produce such a mixture.
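A sketch of partial iteration (NumPy; the two-clique toy network and the plain projection step are illustrative choices, not NEGOPY's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: two 5-cliques joined by a single edge (invented example).
k = 5
m = 2 * k
A = np.zeros((m, m))
A[:k, :k] = 1.0 - np.eye(k)
A[k:, k:] = 1.0 - np.eye(k)
A[0, k] = A[k, 0] = 1.0
deg = A.sum(axis=1)

# Partial iteration: each step multiplies by N = D^-1 A (placing every
# node at the mean of its neighbours), removes the trivial constant
# direction (a D-orthogonal projection), and renormalizes.
x = rng.standard_normal(m)
for _ in range(10):
    x = (A @ x) / deg
    x -= (deg @ x) / deg.sum()       # project out the constant eigenvector
    x /= np.linalg.norm(x)

# x is now dominated by the eigenvectors of N with v near 1; here the
# single slow mode separates the two cliques by sign.
assert len(set(np.sign(x[:k]))) == 1
assert np.sign(x[0]) == -np.sign(x[-1])
```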
Moody (2001) describes a procedure in which this process is repeated a number of times, each producing a slightly different mixture of the important eigenvectors (Fig. 5). The results are then passed to a standard cluster analysis routine (such as k-means or Ward's method) to find any clusters of nodes.

Further analysis

The method of partial iteration of N has been used for years in the program NEGOPY (Richards and Rice, 1981; Richards, 1995), as the first step in a more complex analysis. A key concept in NEGOPY is that of liaisons. These are nodes which do not have most of their connections with members of a cohesive cluster of nodes, but rather act as connections between clusters (Fig. 3-4). Often it is the liaisons that provide the connections that hold the whole network together. Finding the liaisons requires detailed knowledge about the members of (potential) clusters and their connections, and is not an immediate result of a partition based on eigenvectors or clustering methods. Nevertheless, eigendecomposition methods (full or partial) are an excellent strategy to begin such analysis.

Future prospects

More work needs to be done on the categorization of networks based on important eigenpairs of L and N. Recent reports (Koren, et al., 2002; Walshaw, 2000) suggest we might not need to resort to partial methods after all; we can find important eigenpairs exactly for enormous networks (m > 10^5) using "small" amounts of time and memory by first reducing the network in some way by sampling, solving the reduced eigenproblem, then interpolating back up with a very good "first guess" for the Power Method. Preliminary tests show that this should work equally well for Lanczos iteration.
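The reduce/solve/interpolate idea can be sketched as follows (a NumPy toy, not the algorithms of Koren et al. or Walshaw; the pairwise merging is an arbitrary coarsening chosen for illustration):

```python
import numpy as np

# Toy network: two 6-cliques joined by one edge (invented example).
k = 6
m = 2 * k
A = np.zeros((m, m))
A[:k, :k] = 1.0 - np.eye(k)
A[k:, k:] = 1.0 - np.eye(k)
A[0, k] = A[k, 0] = 1.0
deg = A.sum(axis=1)

# 1) Reduce: merge consecutive node pairs (a crude coarsening).
P = np.zeros((m, m // 2))
for c in range(m // 2):
    P[2 * c, c] = P[2 * c + 1, c] = 1.0
A_c = P.T @ A @ P                    # coarse weighted adjacency
deg_c = A_c.sum(axis=1)

# 2) Solve the small eigenproblem for N_c = D_c^-1 A_c.
vals, vecs = np.linalg.eig(A_c / deg_c[:, None])
order = np.argsort(-vals.real)

# 3) Interpolate back up: a good "first guess" for the Power Method.
x = P @ vecs[:, order[1]].real

# 4) Refine with a few power-method steps on the full N = D^-1 A.
for _ in range(20):
    x = (A @ x) / deg
    x -= (deg @ x) / deg.sum()       # remove the trivial component
    x /= np.linalg.norm(x)

# The refined eigenvector separates the two cliques by sign.
assert len(set(np.sign(x[:k]))) == 1
assert np.sign(x[0]) != np.sign(x[-1])
```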
References

Aho, A., Hopcroft, J., & Ullman, J. (1997). Data Structures and Algorithms. Addison-Wesley.
Alon, N. and Millman, V. (1985). λ1, Isoperimetric Inequalities for Graphs, and Superconcentrators. J. Comb. Theory B, 38: 73-88.
Barnard, S. and Simon, H. (1994). Fast Implementation of Recursive Spectral Bisection for Partitioning Unstructured Problems. Concurrency: Practice and Experience, 6(2): 101-117.
Benzecri, J-P. (1992). Correspondence Analysis Handbook. Marcel Dekker Inc.
Bien, F. (1989). Constructions of Telephone Networks by Group Representations. Notices of the Am. Math. Soc., 36: 5-22.
Biggs, N. (1993). Algebraic Graph Theory. Cambridge University Press.
Brandes, U., Cornelsen, S. (2001). Visual Ranking of Link Structures. Lecture Notes in Computer Science, 2125, Springer-Verlag.
Chavel, I. (1984). Eigenvalues in Riemannian Geometry. Academic Press.
Chow, T.Y. (1997). The Q-spectrum and spanning trees of tensor products of bipartite graphs. Proc. Am. Math. Soc., 125(11): 3155-3161.
Courant, R. and Hilbert, D. (1966). Methods of Mathematical Physics. Interscience Publishers.
Cvetkovic, D., Doob, M., and Sachs, H. (1995). Spectra of Graphs. Academic Press.
Cvetkovic, D., Rowlinson, P. (1990). The largest eigenvalue of a graph: a survey. Linear and Multilinear Algebra, 28: 3-33.
Cvetkovic, D., Doob, M., Gutman, I., Torgasev, A. (1988). Recent Results in the Theory of Graph Spectra. North Holland.
Chung, F.R.K. (1988). Diameters and Eigenvalues. J. Am. Math. Soc., 2(2): 187-196.
Chung, F.R.K., Faber, V. & Manteuffel, T.A. (1994). An upper bound on the diameter of a graph from eigenvalues associated with its Laplacian. SIAM J. Disc. Math., 7(3): 443-457.
Chung, F.R.K. (1995). Spectral Graph Theory. CBMS Lecture Notes, AMS Publication.
Dhillon, I.S. (2001). Co-clustering documents and words using bipartite spectral graph partitioning. UT CS Technical Report TR 2001-05.
Diaconis, P. and Stroock, D. (1991). Geometric Bounds for Eigenvalues of Markov Chains. Ann. Appl. Prob., 1: 36-61.
Dodziuk, J. and Kendall, W.S. (1985). Combinatorial Laplacians and Isoperimetric Inequality. In K.D. Ellworthy (ed.), From Local Times to Global Geometry, Pitman Research Notes in Mathematics Series, 150: 68-74.
Farkas, I., Derenyi, I., Barabasi, A-L., Vicsek, T. (2001). Spectra of "real-world" graphs: beyond the semi-circle law. cond-mat/0102335.
Fiedler, M. (1973). Algebraic Connectivity of Graphs. Czech. Math. J., 23: 298-305.
Fisher, M. (1966). On hearing the shape of a drum. J. Combin. Theory, 1: 105-125.
Freeman, L. (1979). Centrality in social networks: Conceptual clarification. Social Networks, 1: 215-239.
Friedman, J. (1993). Some Geometrical Aspects of Graphs and their Eigenfunctions. Duke Mathematical Journal, 69: 487-525.
Garey, M., Johnson, D. (1979). Computers and Intractability. W.H. Freeman.
Goh, K-I., Kahng, B., Kim, D. (2001). Spectra and eigenvectors of scale-free networks. cond-mat/0103337.
Greenacre, M. (1984). Theory and Applications of Correspondence Analysis. Academic Press.
Grone, R., Merris, R. and Sunder, V. (1990). The Laplacian Spectrum of a Graph. SIAM J. Matrix Anal. App., 11(2): 218-238.
Grone, R. and Merris, R. (1994). The Laplacian Spectrum of a Graph II. SIAM J. Discrete Math., 7(2): 221-229.
Hagen, L. (1992). New Spectral Methods for Ratio Cut Partitioning and Clustering. IEEE Trans. CAD, 11(9): 1074-1085.
Hall, K. (1970). An r-dimensional Quadratic Placement Algorithm. Management Sci., 17: 219-229.
Harary, F. & Schwenk, A. (1979). The spectral approach to determining the number of walks in a graph. Pacific J. Math., 80: 443-449.
Hendrickson, B. and Leland, R. (1995). An improved spectral graph partitioning algorithm for mapping parallel computations. SIAM J. Sci. Comput., 16(2): 452-469.
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. J. Educ. Psychol., 24: 417-441, 498-520.
Joliffe, I.T. (1986). Principal Components Analysis. Springer-Verlag, New York.
Kirchoff, G. (1847). Über die Auflösung der Gleichungen... Ann. Phys. Chem., 72: 497-508.
Koren, Y., Carmel, L., Harel, D. (2002). ACE: a fast multiscale eigenvector computation for huge graphs. IEEE Symposium on Information Visualization.
Lawler, G.F. and Sokal, A.D. (1988). Bounds on the L2 Spectrum for Markov Chains and Markov Processes: a Generalization of Cheeger's Inequality. Trans. Amer. Math. Soc., 309: 557-580.
Lovasz, L. (1995). Random Walks, Eigenvalues and Resistance. Handbook of Combinatorics, Elsevier, 1740-1745.
Lubotzky, A. (1994). Discrete Groups, Expanding Graphs, and Invariant Measures. Birkhauser.
Mohar, B. (1991). The Laplacian Spectrum of Graphs. In Alavi, Chartrand, Ollermann and Schwenk (eds.), Graph Theory, Combinatorics and Applications, Wiley, 871-898.
Mohar, B., Poljak, S. (1991). Eigenvalues and the Max-cut Problem. Czech. Math. J., 40: 343-352.
Moody, J. (2001). Peer influence groups: identifying dense clusters in large networks. Social Networks, 23: 261-283.
Parlett, B. (1980). The Symmetric Eigenvalue Problem. Prentice-Hall.
Parlett, B., Simon, H. and Stringer, L. (1982). On Estimating the Largest Eigenvalue with the Lanczos Algorithm. Mathematics of Computation, 38(157): 153-165.
Pothen, A., Simon, H. and Liou, K-P. (1990). Partitioning Sparse Matrices with Eigenvectors of Graphs. SIAM J. Matrix Anal. App., 11(3): 430-452.
Powers, D. (1988). Graph partitioning by eigenvectors. Linear Algebra Appl., 101: 121-133.
Richards, W.D. & Seary, A.J. (2000). Eigen Analysis of Networks. J. Social Structure. http://www.heinz.cmu.edu/project/INSNA/joss/index1.html
Richards, W.D. and Rice, R. (1981). The NEGOPY Network Analysis Program. Social Networks, 3(3): 215-224.
Richards, W.D. (1995). NEGOPY 4.30 Manual and User's Guide. School of Communication, SFU. http://www.sfu.ca/~richards/Pdf-ZipFiles/negman98.pdf
Seary, A.J. & Richards, W.D. (2000). Negative eigenvectors, long paths and p*. Paper presented at INSNA XX, Vancouver, BC. http://www.sfu.ca/~richards/Pages/longpaths.pdf
Seary, A.J. and Richards, W.D. (1998). Some Spectral Facts. Presented at INSNA XIX, Charleston. http://www.sfu.ca/~richards/Pages/specfact.pdf
Seary, A.J. and Richards, W.D. (1995). Partitioning Networks by Eigenvectors. Presented to European Network Conference, London. Published in Everett, M.G. and Rennolls, K. (eds.) (1996). Proceedings of the International Conference on Social Networks, Volume 1: Methodology, 47-58.
Simon, H. (1991). Partitioning of Unstructured Problems for Parallel Processing. Computing Systems in Eng., 2(2/3): 135-148.
Sinclair, A. (1993). Algorithms for Random Generation and Counting. Birkhauser.
Walshaw, C. (2000). A multilevel algorithm for force-directed graph drawing. Proc. Graph Drawing 2000, Lecture Notes in Computer Science, 1984: 171-182.
Walshaw, C. and Berzins, M. (1995). Dynamic Load-balancing for PDE Solvers on Adaptive Unstructured Meshes. Concurrency: Practice and Experience, 7(1): 17-28.
Wasserman, S. & Faust, K. (1994). Social Network Analysis. Cambridge.
West, D.B. (1996). Introduction to Graph Theory. Prentice Hall.
[Figure 1. Six views of a 6 x 8 x 10 grid. Left: as the Cartesian sum of three paths, producing a three-dimensional grid. Right: as the Kronecker product, producing a bipartite graph. (a) Using the eigenvectors of A: with no "trivial" vector to combine with, the displays are distorted. (b) Eigenvectors of L behave well under Cartesian sum (coordinates of a 1-D path are replicated as straight lines), but do not behave well under Kronecker product. (c) Using eigenvectors of N: lines are slightly curved, so the cube is slightly distorted; the largest negative eigenvector of N captures bipartiteness perfectly.]
[Figure 2. Two views of a small social network with 145 nodes (data source: L. Koehly). (a) The first 3 eigenvectors of N produce clusters of nodes, labelled with the resulting partition into 4 blocks, along with the liaisons; the central liaison is of high degree and holds the network together (the figure is slightly rotated to show the clusters). (b) Adjacency matrix permuted by partition numbers: the blocks have no interconnections, and the network is held together by the liaisons (right and bottom).]

[Figure 3. Two views of a small social network of drug users with 114 nodes (data source: Scott Clair). (a) The first 3 eigenvectors of N; the 3rd eigenvalue is negative, since there are very few triangles in the rightmost cluster. The labelling is by ethnic group, which shows a close relation to structure; there are only two connections between the two main ethnic groups. (b) Adjacency matrix permuted by partition numbers; in this case the liaisons are of low degree.]
[Figure 4. Two views of a needle exchange network (data source: T. Valente and R. Foreman). This network is moderately large (N = 2736) and roughly scale-free (k ~ 1.7); the eigenvectors of A are dominated by nodes of high degree (> 100). (a) The close relation between the first 3 eigenvectors of N and four needle exchange sites, which are used to label the nodes (the figure is rotated to make this clear). (b) Like an adjacency matrix, except links are located by the coordinates of the first eigenvector of N; the network is dominated by exchanges within sites E and W.]

[Figure 5. A moderately large (N = 20,000) artificial network (data source: J. Moody) constructed for testing purposes. The network was constructed from tightly connected groups of 50 nodes, each group then loosely connected in sets of 8, with 400 of these even more loosely connected into a single component. (a) Close-up of clusters formed by the first 3 eigenvectors of N, labelled by construction. (b) Clusters formed by placing each node at the centroid of its neighbours, iterating 8 times with 3 random starts (multiple partial iteration), labelled by construction.]
APPENDIX (Glossary, examples, facts and definitions)

adjacency matrix: A network (graph) may be represented by a matrix of zeros and ones, with a one indicating that two nodes are connected (adjacent), and a zero otherwise. In a weighted graph, the ones may be replaced by other positive numbers (e.g., a distance or cost). A sample adjacency matrix is shown below. See link list.

      a b c d e f g h
    a 0 1 1 1 0 0 0 0
    b 1 0 1 1 0 0 0 0
    c 1 1 0 1 0 0 0 0
    d 1 1 1 0 0 0 0 0
    e 0 0 0 0 0 0 1 1
    f 0 0 0 0 0 0 1 1
    g 0 0 0 0 1 1 0 0
    h 0 0 0 0 1 1 0 0

    a has connections to b, c, d
    b has connections to a, c, d
    ...etc...

Adjacency spectrum: The adjacency matrix of a graph, like any matrix, may be subject to an eigendecomposition. In graph theory, the resulting set of eigenvalues is referred to as the graph spectrum, in analogy to the continuous spectrum from continuous spectral analysis methods such as Fourier analysis. In Fourier analysis, the spectrum is understood to refer to the weighting of sines and cosines, whereas the discrete graph spectrum (eigenvalues) gives the weights of eigenvectors with unknown functional form. We sometimes use the term eigenpair to refer to both eigenvalues and eigenvectors. Since there are other spectra associated with graphs, we refer to this one as the Adjacency or Standard spectrum.

block: A block may be contrasted with a clique in the sense that the former are defined as sets of nodes that have similar patterns of links to nodes in other sets, while the latter is a set of nodes that have most of their links to other nodes in their set. All
cliques are blocks, but some blocks are not cliques. One of the aims of blockmodelling is to identify roles by clustering the nodes so that those with similar patterns of connections are next to one another in the matrix. The members of each block perform similar roles in the network.

block model: a higher-level description of a network, where roles (or blocks) are represented by a simplified graph. For the matrix above, a block model would be:

    1 0 0
    0 0 1
    0 1 0

Cartesian sum: A form of graph composition, which forms more complex graphs from simpler ones. The Cartesian sum may be expressed in terms of the Kronecker product as: A_1 ⊕ A_2 = A_1 ⊗ I_2 + I_1 ⊗ A_2 (where I_1 and I_2 are identity matrices of appropriate size). As an example, the Cartesian sum of two paths is a rectangular grid.

clique: In graph theory, a clique is a sub-graph in which all nodes are connected to each other. In social networks, a clique is a set of nodes with most of their connections with other members of the clique. This would generally correspond to an informal role (e.g. friendship). In the above matrix, {a,b,c,d} form a clique.

cluster: A collection of points that are "close" to each other in some sense. Many definitions (and related techniques) are available. For networks, we should also insist that the points share connections, either within the cluster (clique) or with another cluster (see block model).

component: If a graph is connected, it consists of a single component. A disconnected graph may consist of several components.

connected: If there is a path between every pair of nodes in a graph, the graph is said to be connected. A disconnected graph does not have a path between every pair of nodes, and so distances (and diameters) cannot be defined, except within each component.
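A breadth-first search makes the component and geodesic-distance definitions concrete (a standard Python sketch, using the link list of the sample matrix above, with a..h mapped to 0..7):

```python
from collections import deque

# Link list for the sample adjacency matrix (a..h -> 0..7).
links = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 6), (4, 7), (5, 6), (5, 7)]
m = 8
adj = [[] for _ in range(m)]
for u, v in links:
    adj[u].append(v)
    adj[v].append(u)

def bfs_distances(start):
    """Geodesic distances from start; unreachable nodes get None."""
    dist = [None] * m
    dist[start] = 0
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

d = bfs_distances(0)
# {a,b,c,d} form one component; {e,f,g,h} are unreachable from a,
# so distances to them are undefined (None).
assert d[:4] == [0, 1, 1, 1] and d[4:] == [None] * 4
```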
distance: For graphs, the distance between nodes is defined as the smallest number of links connecting them. Also called geodesic distance.

diameter: the largest geodesic distance between any pair of nodes in a graph.

gap: The term gap or spectral gap refers to large distances in the spectrum of eigenvalues, particularly between 0 and the second-smallest (Laplacian) or between 1 and the second-largest in absolute value (Normal). A small gap means that a graph can be disconnected with few edge-cuts; a large gap means there are many paths between sets of nodes.

global vs local methods: In graph theory, a local method is one that examines only a few neighbours of a node. A global method is one which examines the entire graph, such as an eigendecomposition.

Kronecker product: A form of graph composition, which forms more complex graphs from simpler ones. An example of the Kronecker product is:

    [1 1]     [0 1]       [0 1 0 1]
    [1 0]  ⊗  [1 0]   =   [1 0 1 0]
                          [0 1 0 0]
                          [1 0 0 0]

where every 1 in the first matrix has been replaced by a complete copy of the second matrix (and every 0 by a block of zeros). In this example the first matrix is a block model, not a graph.

Lanczos iteration: a generalization of the power method which allows calculation of a specified number of eigenpairs without loss of precision or orthogonality. Currently one of the best methods for eigendecomposition of large systems.

Laplacian spectrum: The eigenvalues (and eigenvectors) of a matrix formed by subtracting the adjacency matrix from a diagonal matrix of node degrees. The eigenvalues are non-negative, with a "trivial" (constant) eigenvector of eigenvalue 0. This discrete analogue of the continuous Laplacian shares a great many of its important properties. For this reason, it has become the focus of much research in the last decade.
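A minimal sketch of the construction (NumPy; using the 4-clique {a,b,c,d} from the sample matrix above):

```python
import numpy as np

# Laplacian of the clique {a,b,c,d}: L = D - A.
A = np.ones((4, 4)) - np.eye(4)
L = np.diag(A.sum(axis=1)) - A

vals = np.linalg.eigvalsh(L)
# Eigenvalues are non-negative, and the constant vector is "trivial":
assert np.all(vals > -1e-9)
assert np.allclose(L @ np.ones(4), 0.0)   # eigenvalue 0, eigenvector 1
```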
liaisons: according to NEGOPY, these come in two types. Direct liaisons are individuals who have most of their interaction with members of groups, but not with members of any one group. They provide direct connections between the groups they are connected to. Indirect liaisons are individuals who do not have most of their interaction with members of groups. They provide indirect or 'multi-step' connections between groups by connecting Direct Liaisons, who have direct connections with members of groups (Richards, 1995).

link: A pair of nodes with some connection between them. In graph theory, links are also called edges or lines. In social networks, links are often called ties.

link list: A sparse format for storing information in a network. Only the pairs of nodes that are connected are in the link list. For symmetric graphs, only one pair is needed for each link. For weighted graphs, a third column may be used to hold the weights. For the symmetric adjacency matrix shown above, the link list is:

    1 2
    1 3
    1 4
    2 3
    ...and so on...

localized: As applied to an eigenvector, means that most of the coordinates are near zero, and only a few have large values. Coordinates may be either positive or negative, and the eigenvectors are normalized to
228 -20- make the sum of squares of components 1, so the sum of 4th powers is generally used as a measure of localization. If this sum is near 1 only a small number of coordinates are important. If
it is near 1/m, then all nodes contribute to the eigenvector. neigbourhooti: all the nodes which are connected to a given node. May be extended to all nodes connected to a set of nodes, but not
including the original set. RECOPY: (NEGative entrOPY) (Richards and Rice, 1981, Richards, 1995) is a computer program desired to find clique structures. It uses a random starting vector, and
multiplies it by the row-normalised adjacency matrix, subtracting off row means. Usually 6-8 such iterations are performed, resulting in a vector which is a mixture of the the important Normal
eigenvectors (Richards and Seary, 1997~. This vector is then scanned for potential clique structures, which are tested against the original network and for some statistical properties (e.g., variance
in the node degrees). Sparse matrix methods are used throughout, allowing large networks to be analysed rapidly. node: An object that may have some kind of connection (link) to another object. In
some cases, nodes are people, organizations, companies, countries, etc. In graph theory nodes are also called vertices and points. In social networks, nodes are often called actors. normal spectrum:
The eigenvalues (and eigenvectors) of a row-normalised adjacency matnx. This matrix is row-stochastic, and similar to a symmetric matrix, so its eigenvalues are real and less than or equal to 1 in
absolute value. It is closely related to the Laplacian (indeed, it may be defined to be the Laplacian in the %2 metric defined by the node degrees). partition: A partition of a graph is a division of
the nodes into a collection of non-empty mutually exclusive sets. A partition of the adjacency matrix shown above could be: {a,b,c,d), (e,f,g,h), so that there are no links between the nodes in each
part of the partition. sparse matrix techniques: In analysis of networks with more than 50 or 60 members, it is usually the case that each node is connected to only a fraction of the others. The
adjacency matrix for such networks contain mostly zeroes, which indicates the absence of links. In these situations, it far is easier to work with a list of the links (link list) that are present,
rather than the whole matrix which contains many times more numbers. Any array (such as an adjacency matrix) which consists mostly of some default number (usually zero) may be treated as a sparse
matrix. Since this value is known, it does not need to be stored as part of the array. This allows the array to be stored in a much more efficient manner, e.g., for an adjacency matrix, we only need
to store the links (pairs of nodes) when they exist. For a weighted adjacency matrix, we also need to store the values of the weights, one for each link Many matrix operations (e.g., multiplying a
matrix by a vector) can utilize this more efficient storage to run much faster as well. Sparse matrix techniques are those which avoid any manipulation of the matrix that would affect the sparseness
property (e.g., taking the inverse will generally do this, as will correlating each row or column with all the others). It is quite possible to find eigenvalues and eigenvectors using sparse
techniques. spectral analysis or methods: Loosely speaking, another term for eigendecomposition. Mathematically speaking, a general term referring to any re-statement of some function in terms of a set
of basis functions (e.g. sines and cosines for Fourier analysis). The spectrum is the weights of these basis functions. The Fourier transform is especially useful in mathematical physics since the
sines and cosines (or e^z for complex z) are eigenfunctions of the ubiquitous derivative and integral operators. The terms function, operator and eigenfunction have the discrete analogues of vector,
matrix and eigenvector. Standard spectrum: see Adjacency spectrum. DYNAMIC SOCIAL NETWORK MODELING AND ANALYSIS | {"url":"http://www.nap.edu/openbook.php?record_id=10735&page=209","timestamp":"2014-04-20T23:59:07Z","content_type":null,"content_length":"99431","record_id":"<urn:uuid:b5c3cf1b-c9dd-47df-8cb9-7b699e12d5d1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
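The link-list storage described in the sparse matrix entry can be sketched in a few lines. This is an illustrative example, not from the book; the node indices, weights, and function name are made up.

```python
# An illustrative sketch (names and weights made up): a weighted network
# stored as a link list, and a matrix-vector product computed directly
# from the links -- cost is O(number of links), not O(n^2).

links = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 0.5), (0, 3, 1.0)]  # (i, j, weight)
n = 4  # number of nodes

def sparse_matvec(links, x, n):
    """Compute y = A @ x where A is the implicit weighted adjacency matrix."""
    y = [0.0] * n
    for i, j, w in links:
        y[i] += w * x[j]  # only stored (nonzero) entries contribute
    return y

print(sparse_matvec(links, [1.0, 1.0, 1.0, 1.0], n))  # [2.0, 2.0, 0.5, 0.0]
```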
Archives of the Caml mailing list > Message from Robert Morelli
(Mostly) Functional Design?
From: Robert Morelli <morelli@c...>
Subject: Re: [Caml-list] Some Clarifications
Damien Doligez wrote:
> On Jul 19, 2005, at 22:14, Robert Morelli wrote:
>> One of the areas where I do much of my programming is in mathematical
>> software. I view this as one of the most difficult areas, at the
>> opposite extreme from simple domains like formal methods and language
>> tools.
> Since computer algebra is clearly a subset of formal methods, and a
> pretty good match for OCaml's feature set, I'm rather curious to know
> exactly what kind of mathematical software you are writing, that can
> be so much more complex.
Computer algebra is not all of what mathematical software is about, and
computer algebra is not, in practice, a subset of formal methods. The
communities of researchers who work in the two fields are traditionally
quite distinct, with distinct immediate goals, and the research is
funded and evaluated differently. For instance, the NSF in the US
funds several different kinds of computational mathematics research
through several different programs, and funds formal methods research
through several different programs.
In principle, the two fields should be merged -- at least in part --
and that is very clearly the vision of some people (though not all).
In practice, there have been initiatives to bring the two fields
together, but up until now that has been considered a challenging
interdisciplinary endeavor.
Several years ago I read a paper about a computer algebra system
written in OCaml called FOC. The title of the paper was something
like "Functors, blah, blah, ... Is it Too Much?" I think the
conclusion of the paper was that FOC naturally drew upon all of OCaml's
language facilities, both functional and object oriented. This paper
might speak to the demands of the domain for anyone who is curious
enough to look it up. Of course, we can equally well ask, "Is it | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2005/07/f12d29bc319b99e20929f743ed124ffa.en.html","timestamp":"2014-04-16T11:47:40Z","content_type":null,"content_length":"18604","record_id":"<urn:uuid:4315bfdb-1a3c-41ea-bbb7-47f42a273fca>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Combinations problem
May 21st 2009, 04:16 AM #1
This may involve mods.
4 digit security codes in a particular city have this property:
the 4th digit is always the last digit of the sum of the first 3 digits. E.g. 3452, where 3 + 4 + 5 = 12, and 2 is the last digit of 12. Other examples include 5544.
There cannot be a security code with more than two consecutive digits the same (i.e. 6668 is not allowed, but 5544 is). How many different codes can be used?
Thanks in advance for any help.
Hi BG5965.
First consider codes in which the first three digits are $xyy$ with $x\ne y.$ Then $x+y+y\equiv y\pmod{10}\ \iff\ x+y\equiv0\pmod{10}.$ For each $x$ except 0, there is exactly one $y$ such that
$x+y$ is divisible by 10. Hence there are 9 such codes beginning with 0 allowed $(y$ can be any digit except 0). There are also 9 such codes beginning with 5 allowed $(y$ can be any digit except
5). For each code starting 1, 2, 3, 4, 6, 7, 8 and 9, there are 8 such codes allowed $(y$ can be any digit except $x$ and $10-x).$ So the total number of allowable security codes beginning $xyy$
is $9+9+8\times8=82.$
The number of security codes beginning $xxy$ with $x\ne y$ is 10 × 9 = 90. Similarly for codes beginning $xyx$ with $x\ne y.$ And the number of security codes in which the first three digits are
distinct is 10 × 9 × 8 = 720.
Hence the total number of allowable security codes is 82 + 90 + 90 + 720 = 982.
Last edited by TheAbstractionist; May 21st 2009 at 08:41 AM.
Hi, I also posted this question on
Combinations question? - Yahoo!7 Answers
and got different answers with different working.
I am quite confused on which is the right method!
Hi again.
The second poster who replied to your post over there assumed that your security code cannot start with 0 whereas the first one assumed that it can. Hope this explains why they are different.
Ok, but can you still re-explain the answer briefly - I don't get his explanation fully.
Sorry, I'm even more confused now. So the answer is not 982, like on the other post on the other website?
Hi yet again.
The solution I originally posted was totally wrong. I have re-edited my post now, and I hope it’s better this time. The answer I have is 982. (Note that I am assuming that a security code can
start with 0.)
Thanks a lot
I finally understand it now and I think now the answer is correct.
Thanks for your help again.
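A brute-force enumeration (a sketch, not from the thread) confirms the final answer of 982: generate every choice of the first three digits, append the forced fourth digit, and discard any code containing three equal consecutive digits.

```python
# A brute-force check (not from the thread): the fourth digit is forced
# to be the last digit of the sum of the first three, and any run of
# three identical consecutive digits (e.g. 6668) is disallowed.

def count_codes():
    total = 0
    for d1 in range(10):
        for d2 in range(10):
            for d3 in range(10):
                d4 = (d1 + d2 + d3) % 10   # last digit of the sum
                code = (d1, d2, d3, d4)
                if any(code[i] == code[i + 1] == code[i + 2] for i in range(2)):
                    continue                # three in a row: rejected
                total += 1
    return total

print(count_codes())  # 982, matching the total reached in the thread
```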
You are riding your bicycle at a speed of 12 ft/s. Four seconds later, you come to a complete stop. Find the acceleration of your bicycle. Help! and explain it too please?
\[v_{f}=v_{i}+at\] 0 = 12 + a(4), so a = -3 ft/s^2
Acceleration is the change in speed (or velocity) over time. In this case you decelerated from 12 feet per second to zero feet per second across a period of four seconds. On average over those
four seconds your speed went down by 3 feet per second for each of those seconds. Hence your acceleration was negative three feet per second per second or: \[-3ft/s^{2}\]
yes; ft/s^2!
If your acceleration was perfectly constant over those four seconds, your speed at time 0 was 12ft/s; at time 1 was 9ft/s; at time 2 was 6ft/s; at time 3 was 3ft/s; and at time 4 was 0ft/s.
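The arithmetic above can be checked with a couple of lines (a sketch; the function name is made up):

```python
# A quick check of the arithmetic: solve v_f = v_i + a*t for a.

def acceleration(v_i, v_f, t):
    """Constant acceleration needed to go from v_i to v_f in time t."""
    return (v_f - v_i) / t

a = acceleration(12.0, 0.0, 4.0)
print(a)  # -3.0, i.e. -3 ft/s^2

# Speeds at whole seconds under that constant acceleration:
print([12.0 + a * t for t in range(5)])  # [12.0, 9.0, 6.0, 3.0, 0.0]
```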
Menlo Park Calculus Tutor
Find a Menlo Park Calculus Tutor
...I have 5 years of experience teaching Calculus (including AP AB & BC) up to the college level. This is a new world of Math. It is really fun, though it can be challenging unless concepts like limit,
continuity, derivative, slope, antiderivative, etc. are clear to the student.
17 Subjects: including calculus, geometry, statistics, precalculus
My weekly schedule is basically full, with one, at most two, non-regular openings. Due to traffic, I can only travel in the neighborhood (Dublin, Pleasanton, Livermore and San Ramon). Weekdays
11am-3pm remain mostly open. Thank you very much for your support.
15 Subjects: including calculus, geometry, algebra 1, GRE
...I believe that every child can learn and excel at something, and that it is important to keep learning interesting things so that inquiry and thought becomes a life long enterprise. I have
taught Calculus and PreCalculus for over 15 years at the University level and at high school level. I have...
29 Subjects: including calculus, chemistry, reading, physics
...Over the years, I've tutored students on algebra 2 who were in various bay area (middle or high) school districts, including: PAUSD (Jordan, Palo Alto, Gunn), FUHSD (Fremont, Homestead,
Lynbrook, Cupertino, Monta Vista) and more. My calculus tutoring experience includes all levels: Basic (limi...
9 Subjects: including calculus, physics, geometry, ASVAB
...I was an instructor at some of the top test prep companies, so among my specialties is preparation for standardized tests like SAT, ACT, SSAT, and many others. I am very effective in helping
students improve their test scores: in a past testing year all of my SAT students scored 700 and above! Some of them were scoring in mid-500, when I started working with them.
14 Subjects: including calculus, statistics, geometry, algebra 2 | {"url":"http://www.purplemath.com/Menlo_Park_calculus_tutors.php","timestamp":"2014-04-16T19:07:19Z","content_type":null,"content_length":"24104","record_id":"<urn:uuid:f7937d42-1484-4c71-a588-8f45323e6890>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hooke's Law. The distance d when a spring is stretched by a hanging object varies directly as the weight w of the object. If the distance is 23 cm when the weight is 3 kg, what is the distance when
the weight is 5 kg? I need the answer badly and I do not understand how to do the problem.
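Direct variation means d = k·w for some constant k. A quick sketch (not from the page; names are made up) fixes k from the given pair and evaluates at 5 kg:

```python
# Direct variation: d = k * w. Fix k from the known pair
# (w = 3 kg, d = 23 cm), then evaluate at w = 5 kg.

def stretch(w, k):
    """Distance stretched for weight w under direct variation."""
    return k * w

k = 23.0 / 3.0        # cm per kg
d = stretch(5.0, k)
print(round(d, 2))    # 38.33 cm
```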
Reciprocal of i
04-12-2003 #1
i is defined as the square root of -1, (sqr(-1) or (-1)^(.5) )
A friend just asked me what 1/i is, so I punched it into the TI-89 and got -i. This would be great, except I can't get the algebra to do that.
i != -i, so I decided to try it with 2/i:
x = 2/i, x^2 = 4/i^2 = -4, so x = 2 * sqr(-1) = 2i
Same result...I tried again for 3 and ended up with 3i. I don't see any errors in my algebra, but as far as I know, the TI-89 is infallible. I once saw it described as "a massive beast of
knowledge" on these boards, and I agree. Would someone please enlighten me as to the nature of my error?
Last edited by confuted; 04-12-2003 at 04:36 PM.
When you solve for a square root you get two answers plus and minus hence:
x = 1/i
x^2 = 1/(i^2)
x^2 = 1/(-1)
x^2 = -1
x = +/- sqr(-1)
x = +/- i
Since x cannot be +i:
1/i = i
1 = i^2
1 = -1 ..... bzzzt
it must be - i:
1/i = -i
1 = i * -i
1 = -(i*i)
1 = -(-1)
1 = 1
Last edited by Clyde; 04-12-2003 at 06:46 PM.
Thank you, I feel dumb now
no need to square at all, just rationalize:
Multiply by i / i,
which is 1/i = (1 * i) / (i * i) = i / (-1) = -i
Last edited by Polymorphic OOP; 04-12-2003 at 08:23 PM.
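Python's built-in complex numbers give a quick check of both arguments (an illustration, not from the thread; `1j` is Python's notation for the imaginary unit i):

```python
# Python's complex type confirms the algebra above.

recip = 1 / 1j
assert recip == -1j           # 1/i = -i, as derived

# Rationalizing by i/i gives the same result: i / i^2 = i / (-1) = -i
assert 1j / (1j * 1j) == -1j

# Sanity check: multiplying back by i recovers 1
assert recip * 1j == 1
print("1/i =", recip)
```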
Theoretical yield
May 17th 2008, 06:00 PM #1
Theoretical yield
I think that the theoretical yield is the max amount of the products that can be formed. But I don't see how to find it.
The problem that is stumping me is this:
Given the equation $Cu_2 + O_2 \to 2Cu + SO_2$, what is the theoretical yield of $SO_2$ if $8.20 g$ of $0_2$ are used ?
Will someone simply and clearly explain how to go about solving this?
You have 8.20 g of $O_{2}$ so we must "convert" it into terms of $SO_{2}$. However, remember that a chemical reaction is representative of each species in moles. So that is where we will start
off with:
$\text{8.20 g } O_{2} \times \frac{\text{1 mol } O_{2} }{\text{32.00 g }O_{2} } = \text{0.25625 mol } O_{2}$
Now, we know that for every mole of $SO_{2}$ produced, 1 mole of $O_{2}$ was used up. So we have a 1-1 ratio.
$\text{0.25625 mol } O_{2} \times \frac{\text{1 mol } SO_{2} }{\text{1 mol } O_{2} } = \text{0.25625 mol } SO_{2} \text{ produced}$
Note how I set up the ratio so that the units "mol $O_{2}$" cancels out. That is pretty much the key in solving stoichiometry questions.
So you have the moles of $SO_{2}$ produced. Now all that is left is converting it into terms of grams and that should be your theoretical yield.
that makes complete sense! thank you.
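To finish the calculation, the moles of SO₂ are converted to grams. A sketch (not from the thread; molar masses are standard approximate values):

```python
# Finishing the conversion sketched above (molar masses in g/mol are
# approximate textbook values).

M_O2 = 2 * 16.00              # 32.00
M_SO2 = 32.07 + 2 * 16.00     # 64.07

mol_O2 = 8.20 / M_O2           # 0.25625 mol, as computed above
mol_SO2 = mol_O2               # 1:1 mole ratio from the balanced equation
grams_SO2 = mol_SO2 * M_SO2    # theoretical yield in grams

print(round(grams_SO2, 1))     # 16.4 g of SO2
```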
Combinatorics: A Problem Oriented Approach
The unique format of this book combines features of a traditional textbook with those of a problem book. The subject matter is presented through a series of approximately 250 problems with connecting
text where appropriate, and is supplemented by some 200 additional problems for homework assignments. While intended primarily for use as the text for a college-level course taken by mathematics,
computer science, and engineering students, the book is suitable as well for a general education course at a good liberal arts college, or for self-study.
Table of Contents
Part I. Basics
Section A: Strings
Section B: Combinations
Section C: Distributions
Section D: Partitions
Part II: Special Counting Methods
Section E: Inclusion and Exclusion
Section F: Recurrence Relations
Section G: Generating Functions
Section H: The Pólya-Redfield Method
List of Standard Problems
Dependence of Problems
Answers to Selected Problems
Excerpt: Section. G General Functions (p. 87)
In section E, we saw how the principle of inclusion and exclusion could be used to count combinations with limited repetition. In this section we will solve problems of this type, including more
complicated variations, by a different method.
Example Find the number of ways to select a three-letter combination from the set {A, B, C} if A can be included at most once, B at most twice, and C at most three times.
G1 Do this by counting directly.
Another approach to this problem is to form the expression
(1 + A)(1 + B + B^2)(1 + C + C^2 + C^3)
and notice that when this is multiplied out, each term represents a different combination of letters. For example, the term B^2C represents the combination BBC. (Notice how the term 1 in the first
factor allows A to be missing from this combination.)
The terms A^iB^jC^k having a total degree i + j + k = 3 correspond to the three-letter combinations counted in problem G1. This observation suggests the following idea: If we replace A, B, and C by X
everywhere they appear, then the number of combinations counted in problem G1 is equal to the number of times X^3 occurs in the expansion of the product
(1 + X)(1 + X + X^2)(1 + X + X^2 + X^3).
In other words, the number of combinations is the coefficient of X^3 in the product.
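The coefficient can be extracted mechanically by multiplying coefficient lists (an illustrative sketch, not from the book; `poly_mul` is a made-up helper):

```python
# Polynomials as coefficient lists, where index = power of X.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# (1 + X)(1 + X + X^2)(1 + X + X^2 + X^3)
product = poly_mul(poly_mul([1, 1], [1, 1, 1]), [1, 1, 1, 1])
print(product[3])  # 6: the combinations ABB, ABC, ACC, BBC, BCC, CCC
```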
About the Author
Daniel A. Marcus received his PhD from Harvard University. He was a J. Willard Gibbs Instructor at Yale University from 1972-74. He has published research papers in the areas of combinatorics, graph
theory, and number theory, and is the author of Number Fields, and Differential Equations : An Introduction.
MAA Review
Combinatorics: a problem oriented approach is a book on Combinatorics that mainly focuses on counting problems and generating functions. By restricting himself to an accomplishable goal, without
attempting to be encyclopedic, the author has created a well-focused, digestible treatise on the subject. Continued... | {"url":"http://www.maa.org/publications/books/combinatorics-a-problem-oriented-approach?device=desktop","timestamp":"2014-04-18T08:47:40Z","content_type":null,"content_length":"95303","record_id":"<urn:uuid:e0d1653e-3a44-4ed4-a6a4-b02a7d74fa77>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dealing with missing values
March 8, 2009
By Paolo
Two new quick tips from 'almost regular' contributor
Handling missing values in R can be tricky. Let's say you have a table
with missing values you'd like to read from disk. Reading in the table
read.table( fileName )
might fail. If your table is properly formatted, then R can determine
what's a missing value by using the "sep" option in read.table:
read.table( fileName, sep="\t" )
This tells R that all my columns will be separated by TABS regardless of
whether there's data there or not. So, make sure that your file on disk
really is fully TAB separated: if there is a missing data point you must
have a TAB to tell R that this datum is missing and to move to the next
field for processing.
Lastly, don't forget the "header=T" option if you have a header line in
your file.
Here's the 2nd tip:
Some algorithms in R don't support missing (NA) values. If you have a
data.frame with missing values and quickly want the ROWS with any
missing data to be removed then try:
myData[rowSums(is.na(myData))==0, ]
Base R's complete.cases does the same thing: myData[complete.cases(myData), ].
To find NA values in your data you have to use the "is.na" function.
To leave a comment for the author, please follow the link and comment on his blog: One R Tip A Day.
Extra Attacks, Haste, and you. - Public Discussion
So primarily I'm concerned about Flurry + Sword spec.
2 2.0 speed, 50dps weapons
30% crit rate.
1 minute interval
Ignoring yellow damage so we have,
60 swings per minute.
(1-(1-c)^3) * 30% is the amount of haste you have over an interval.
.657 * .3 = .1971
This is to say, we have 19.71% more swings essentially.
This raises our swings per minute to 71.826.
Although this is not the focus of what I wanted to establish, I'll see what adding 5% more crit does.
(1-(1-c)^3) * 30% is the amount of haste you have over an interval.
0.725375 * .3 = .2176125
21.761235% more swings,
73.05675 swings a minute.
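The two calculations above can be reproduced with a small helper (a sketch; the function name is made up, and the 60-swing baseline, 30% flurry haste, and three-swing uptime model are taken from the post's own assumptions):

```python
# Reproducing the flurry arithmetic: uptime is the chance that at least
# one of the previous three swings crit, and haste = uptime * 30%.

def swings_per_minute(crit, base_swings=60.0, flurry_haste=0.30):
    uptime = 1 - (1 - crit) ** 3
    return base_swings * (1 + uptime * flurry_haste)

print(round(swings_per_minute(0.30), 5))  # ~71.826
print(round(swings_per_minute(0.35), 5))  # ~73.05675
```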
So what do we get with the 30% crit rate, and 5% chance to proc a sword proc?
Well first we have to understand how sword spec works. What I'm having trouble modeling is the effect sword spec has in this situation. Even if we assume sword spec doesn't make you "lose time" when
we look only at white damage (meaning it does not reset swings that have not completed). It seems like it will have different effect depending on the state you are in. So you have either crit, or you
have hit; and then the sword spec has crit or hit itself. What's more you may or may not be in flurry time when this occurs. This is very difficult for me to make into an equation. While I think on
this, does anyone else have any ideas?
So the Extra attack:
- Outside of flurry: gives an extra attack - nothing special.
- Inside of flurry: eats a flurry charge.
- Outside of flurry: starts flurry.
- Inside of flurry: refreshes flurry.
Grr. Still thinking... | {"url":"http://forums.elitistjerks.com/topic/9171-extra-attacks-haste-and-you/","timestamp":"2014-04-17T12:34:40Z","content_type":null,"content_length":"60155","record_id":"<urn:uuid:a5badc95-b2b1-4e40-8c40-e730eb218c7d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
MySQL Aggregate Functions
Summary: in this tutorial, you will learn how to use the MySQL aggregate functions including AVG, COUNT, SUM, MAX and MIN.
Introduction to MySQL aggregate functions
For example, you cannot get the total amount of an order directly from the orderdetails table because the orderdetails table stores only the quantity and price of each item. You have to select the quantity and price of each item in the order and calculate the order's total. To
perform such calculations in a query, you use aggregate functions.
By definition, an aggregate function performs a calculation on a set of values and returns a single value.
MySQL provides many aggregate functions including AVG, COUNT, SUM, MIN , MAX, etc. An aggregate function ignores NULL values when it performs calculation except for the COUNT function.
AVG function
The AVG function calculates the average value of a set of values. It ignores NULL values in the calculation.
You can use the AVG function to calculate the average buy price of all products in the products table by using the following query:
SELECT AVG(buyPrice) average_buy_price
FROM products
See the AVG function tutorial for more detail.
COUNT function
The COUNT function returns the number of the rows in a table. For example, you can use the COUNT function to get the number of products in the products table as the following query:
SELECT COUNT(*) AS Total
FROM products
The COUNT function has several forms such as COUNT(*) and COUNT(DISTINCT expression). For more information, check out the COUNT function tutorial.
SUM function
The SUM function returns the sum of a set of values. The SUM function ignores NULL values. If no matching row found, the SUM function return NULL.
To get the total sales of each product, you can use the SUM function in conjunction with the GROUP BY clause as follows:
SELECT productCode, SUM(priceEach * quantityOrdered) total
FROM orderdetails
GROUP BY productCode
To see the result in more detail, you can join the orderdetails table to the products table as the following query:
SELECT P.productCode,
       P.productName,
       SUM(priceEach * quantityOrdered) total
FROM orderdetails O
INNER JOIN products P ON O.productCode = P.productCode
GROUP BY productCode
ORDER BY total
See the SUM function tutorial for more detail.
MAX function
The MAX function returns the maximum value in a set of values.
For example, you can use the MAX function to get the most expensive product in the products table as the following query:
SELECT MAX(buyPrice) highest_price
FROM products
See the MAX function tutorial for more detail.
MIN function
The MIN function returns the minimum value in a set of values.
For example, the following query uses the MIN function to find the product with the lowest price in the products table:
SELECT MIN(buyPrice) lowest_price
FROM products
See the MIN function tutorial for more detail.
In this tutorial, we have shown you how to use the most commonly used MySQL aggregate functions. | {"url":"http://www.mysqltutorial.org/mysql-aggregate-functions.aspx","timestamp":"2014-04-18T18:50:14Z","content_type":null,"content_length":"44566","record_id":"<urn:uuid:fe91ac88-bcf6-4ac6-8740-e5cdf9a53f01>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
A New Colloquium
A New Colloquium: Applied Mathematics and Computational Science
A new colloquium series on Applied Mathematics and Computational Science—supported by the Provost along with SAS, SOM, and SEAS—begins September 9 at 2 p.m. in A8 DRL. The first speaker, Dr. Peter
Lax, from the Courant Institute at NYU, will discuss Oscillation and Overshoot in the Numerical Solution of Partial Differential Equations. Dr. Lax is considered to be one of the greatest
mathematicians of the twentieth century.
The talks are being presented by the Working Group in Applied Mathematics and Computational Science (WGAMCS) —faculty and students from a wide variety of disciplines, who share a common interest in
fostering and understanding the applications of mathematics to problems in empirical science. The group is working to provide a more coherent structure for education and research in applied math and
computational science at Penn, eventually leading to the formation of a graduate group in this field.
The WGAMCS provides a forum for researchers from fields that involve mathematics in a significant way to meet and discuss problems of common interest. “Mathematics is not just the language of science;
it also provides the most comprehensive and incisive tools for modeling, analysis and quantification in empirical science and engineering. The most important developments in mathematics have grown
out of the demands of empirical science and the natural human imperative to efficiently organize knowledge,” said Dr. Charles L. Epstein, Francis J. Carey Term Professor of Mathematics. For a list of
the affiliated faculty, see www.amcs.upenn.edu/affiliated.html.
They will hold two series of talks: a monthly colloquium series and a weekly seminar series. Starting this fall, the colloquium series is being supported, in part, by a grant from the Provost’s
Interdisciplinary Seminar Fund, as well as by contributions from departments in SAS, SEAS, and SOM. A tentative schedule can be found at www.amcs.upenn.edu/.
The colloquium aims to bring important ideas from applied mathematics to a large, technically sophisticated audience. The speakers are all world-renowned experts in various branches of applied
mathematics, though not all are mathematicians, per se. The speakers range in age from the mid 30s to almost 80, and come from all parts of the country.
Dr. Lax is the recipient of the Abel Prize in mathematics. This is an award intended to provide a “Nobel” prize in mathematics. It is awarded by the Norwegian Academy and was worth $980,000 this
year. Dr. Lax is one of the great figures of both pure and applied math. He started out, at age 18, working at Los Alamos on the Manhattan Project, and went on to make fundamental, and in some cases
seminal, contributions to a vast array of subjects.
On October 7, Dr. Stephen Smale, a Fields Medalist, who has made fundamental contributions in fields from topology, to dynamical systems and the analysis of computer algorithms, will speak at 2 p.m.
The monthly series continues on November 4 with Dr. Yannis Kevrekidis, a chemical engineer, working at the cutting edge of mathematical modeling in complex biological systems.
The weekly seminar, will feature somewhat more technical talks, also on applications of mathematics in empirical science. Both the colloquium and seminar meet on Fridays at 2 p.m. For up-to-date
information visit: www.amcs.upenn.edu/Seminar.html.
Almanac, Vol. 52, No. 2, September 6, 2005
Student Support Forum: 'cubic polynomial' topic
I have a problem: I want to solve this equation:
x^3+(3c-1)(x^2)-4cx-4(c^2)=0 with respect to x.
It has to have a real solution because it is continuous
in x. c is a positive parameter.
If I solve it numerically, plugging in numbers for c, then it is fine. But I would like an analytical solution: in this case I get only one solution which should give me a real value for x
(the other two are imaginary), but
it has a square root with all negative members (all terms with c, which is positive, with a minus sign in front, so imaginary). How is that possible? What procedure does Mathematica
use to solve cubic expressions?
How can I express the expression in a nicer way to get rid of these negative terms? My feeling is that the program is not able to simplify the expression for the solution.
Could you please help me? If I write the expression I find in the paper I am writing, no one will believe it is real!
And even funnier, if I plug numbers into the solution I find for x, I get the same number as when I plug numbers directly into the function I want to solve, except for the last part,
which is an imaginary number which should tend to zero.
What is going on?
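What is likely going on is the classical casus irreducibilis: when a cubic has three distinct real roots, Cardano's formula necessarily routes through complex cube roots, and symbolic solvers return that form even though the imaginary parts cancel. A numeric sketch (not from the thread; c = 1 is an arbitrary sample value) suggests the cubic here really does have three real roots:

```python
# A numeric sanity check for c = 1, where the cubic becomes
# x^3 + 2x^2 - 4x - 4 = 0. The sign changes below show THREE real roots,
# so the complex-looking radicals in the closed form must cancel.

def f(x, c=1.0):
    return x**3 + (3*c - 1)*x**2 - 4*c*x - 4*c**2

def bisect(lo, hi, tol=1e-12):
    """Root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# f(-3) < 0 < f(-2), f(-1) > 0 > f(0), f(1) < 0 < f(2)
roots = [bisect(-3, -2), bisect(-1, 0), bisect(1, 2)]
print(roots)  # three real roots despite the complex-looking closed form
```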
American Mathematical Society Translations--Series 2
Advances in the Mathematical Sciences
2000; 196 pp; hardcover
Volume: 200
ISBN-10: 0-8218-2663-8
ISBN-13: 978-0-8218-2663-8
List Price: US$103
Member Price: US$82.40
Order Code: TRANS2/200
Dedicated to the memory of Professor E. A. Leontovich-Andronova, this book was composed by former students and colleagues who wished to mark her contributions to the theory of dynamical systems. A
detailed introduction by Leontovich-Andronova's close colleague, L. Shilnikov, presents biographical data and describes her main contribution to the theory of bifurcations and dynamical systems.
The main part of the volume is composed of research papers presenting the interests of Leontovich-Andronova, her students and her colleagues. Included are articles on traveling waves in coupled
circle maps, bifurcations near a homoclinic orbit, polynomial quadratic systems on the plane, foliations on surfaces, homoclinic bifurcations in concrete systems, topology of plane controllability
regions, separatrix cycle with two saddle-foci, dynamics of 4-dimensional symplectic maps, torus maps from strong resonances, structure of 3 degree-of-freedom integrable Hamiltonian systems,
splitting separatrices in complex differential equations, Shilnikov's bifurcation for \(C^1\)-smooth systems and "blue sky catastrophe" for periodic orbits.
Graduate students, research and applied mathematicians interested in qualitative theory of differential equations; physicists.
• L. P. Shilnikov -- Evgeniya Aleksandrovna Leontovich-Andronova (1905-1996)
• V. Afraimovich and M. Courbage -- On the abundance of traveling waves in coupled expanding circle maps
• S. A. Alekseeva and L. P. Shilnikov -- On cusp-bifurcations of periodic orbits in systems with a saddle-focus homoclinic curve
• S. Aranson, V. Medvedev, and E. Zhuzhoma -- Collapse and continuity of geodesic frameworks of surface foliations
• V. N. Belykh -- Homoclinic and heteroclinic linkages in concrete systems: Nonlocal analysis and model maps
• A. A. Binstein and G. M. Polotovskiĭ -- On the mutual arrangement of a conic and a quintic in the real projective plane
• N. N. Butenina -- The structure of the boundary curve for planar controllability domains
• V. V. Bykov -- Orbit structure in a neighborhood of a separatrix cycle containing two saddle-foci
• N. Gavrilov and A. Shilnikov -- Example of a blue sky catastrophe
• S. V. Gonchenko -- Dynamics and moduli of \(\Omega\)-conjugacy of 4D-diffeomorphisms with a structurally unstable homoclinic orbit to a saddle-focus fixed point
• V. Z. Grines and R. V. Plykin -- Topological classification of amply situated attractors of \(A\)-diffeomorphisms of surfaces
• M. V. Shashkov and D. V. Turaev -- A proof of Shilnikov's theorem for \(C^1\)-smooth dynamical systems
• L. P. Shilnikov and D. V. Turaev -- A new simple bifurcation of a periodic orbit of "blue sky catastrophe" type
• V. P. Tareev -- On the splitting of the complex loop of a separatrix
• A. A. E. -- Some topological structures of quadratic systems with at least four limit cycles
• J. Guckenheimer and I. Khibnik -- Torus maps from weak coupling of strong resonances
• L. M. Lerman -- Isoenergetical structure of integrable Hamiltonian systems in an extended neighborhood of a simple singular point: Three degrees of freedom | {"url":"http://ams.org/bookstore?fn=20&arg1=trans2series&ikey=TRANS2-200","timestamp":"2014-04-17T21:57:12Z","content_type":null,"content_length":"17701","record_id":"<urn:uuid:3dfa7ec5-3a44-4e23-a7ed-2c87d37484da>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Focus and Directrix in a Quadratic Bézier Curve
Any quadratic Bézier curve (with unit parameter) represents a parabolic segment. This Demonstration illustrates the relationship between the disposition of the control points and the vertex, focus, and
directrix of the corresponding parabola.
You can drag the three control points. The median of the triangle corresponding to the middle control point is perpendicular to the directrix of the parabola, but the vertex and focus are generally not on this median.
The point of maximal curvature in a quadratic Bézier curve is naturally the vertex of the parabola.
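This can be checked numerically; the control points below are an illustrative choice of my own (a symmetric configuration, so the vertex should land at t = 1/2):

```python
import numpy as np

# Assumed control points of a quadratic Bezier curve (symmetric example)
P0, P1, P2 = map(np.array, ([0.0, 0.0], [1.0, 2.0], [2.0, 0.0]))

def point(t):
    return (1 - t)**2 * P0 + 2 * (1 - t) * t * P1 + t**2 * P2

def curvature(t):
    d1 = 2 * (1 - t) * (P1 - P0) + 2 * t * (P2 - P1)  # B'(t)
    d2 = 2 * (P0 - 2 * P1 + P2)                        # B'' is constant
    return abs(d1[0] * d2[1] - d1[1] * d2[0]) / np.linalg.norm(d1)**3

ts = np.linspace(0.0, 1.0, 10001)
t_star = ts[np.argmax([curvature(t) for t in ts])]
print(t_star, point(t_star))  # approximately t = 0.5 and the vertex (1, 1)
```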
You can vary the weight parameter of the middle point, providing a rational
quadratic Bézier curve, as in the Demonstration
"Conic Section as Bézier Curve"
. See the details of that Demonstration for more information about rational Bézier curves.
A weight less than 1 or greater than 1 produces an ellipse or a hyperbola, respectively. | {"url":"http://demonstrations.wolfram.com/FocusAndDirectrixInAQuadraticBezierCurve/","timestamp":"2014-04-20T03:30:40Z","content_type":null,"content_length":"44752","record_id":"<urn:uuid:f4497e03-4be3-4ad4-acfe-ea2fa7ef5b07>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
East Boston, Boston, MA
Cambridge, MA 02139
Polymath tutoring math, science, and writing
...I've also written and edited sections of workbooks, teacher editions, exams, and software packages for algebra. Because of this experience, I know multiple techniques for students to learn algebra
skills, and I'm able to adapt our sessions to your student's strengths...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/geo_East_Boston_Boston_MA_algebra_1_tutors.aspx?d=20&pagesize=5&pagenum=3","timestamp":"2014-04-19T01:07:20Z","content_type":null,"content_length":"62383","record_id":"<urn:uuid:eb5b3f64-6059-4a9b-b969-c2c093eb0e9b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help, with area and perimeter of triangle please!
Hint: Use Sine Rule a/sinA = b/sinB = c/sinC to find missing lengths & angles. Use Area = 1/2abSinC to calculate the area
Use the law of sines to find angle B. See diagram. $\frac{\sin 15}{8}=\frac{\sin B}{11}$, so $\sin B=\frac{11 \sin 15}{8} \approx .3559$ and $B=\sin^{-1}(.3559)\approx 20.8^{\circ}$, giving $\angle C=180^{\circ}-15^{\circ}-20.8^{\circ}=144.2^{\circ}$. Now use the law of
sines again to find the missing side c: $\frac{\sin 15}{8}=\frac{\sin 144.2}{c}$. Once you find c using the formula above, you will have all the sides needed: P = a + b + c. Use Heron's formula to find
the area: $A=\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
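The same steps can be carried out numerically (taking the acute solution for B; if B were obtuse there would be a second triangle):

```python
import math

A = math.radians(15)   # given angle A
a, b = 8.0, 11.0       # given sides opposite A and B

B = math.asin(b * math.sin(A) / a)   # law of sines (acute solution)
C = math.pi - A - B
c = a * math.sin(C) / math.sin(A)    # law of sines again

perimeter = a + b + c
s = perimeter / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
print(round(c, 2), round(perimeter, 2), round(area, 2))  # roughly 18.1, 37.1, 25.77
```

Heron's result agrees with the direct check Area = (1/2)ab sin C.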
Heron's formula is used to find the area of a triangle when only the sides are known. It's pretty straightforward. The sides are represented as a, b, and c, and s stands for half the perimeter. If you
follow all the steps that have been given to you, you will have the lengths of the three sides. After that, it's just plug and chug. | {"url":"http://mathhelpforum.com/trigonometry/52840-help-area-perimeter-triangle-please.html","timestamp":"2014-04-17T12:47:47Z","content_type":null,"content_length":"47007","record_id":"<urn:uuid:080a6bca-7dd4-4b13-8c9c-cbe581eb1d6a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
A boundary-value problem for Hamilton-Jacobi equations in hilbert spaces
July 1991
Volume 24
Issue 1
pp 197-220
We study a Hamilton-Jacobi equation in infinite dimensions arising in optimal control theory for problems involving both exit times and state-space constraints. The corresponding boundary conditions
for the Hamilton-Jacobi equation, of mixed nature, have been derived and investigated in [19], [2], [5], and [15] in the finite-dimensional case. We obtain a uniqueness result for viscosity solutions
of such a problem and then prove the existence of a solution by showing that the value function is continuous.
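Schematically (this display is an editorial sketch, not taken from the paper), the mixed boundary conditions combine a Dirichlet exit-cost condition on part of the boundary with a state-constraint condition on the rest:

$$\begin{cases} u + H(x,Du) = 0 & \text{in } \Omega, \\ u = g & \text{on } \Gamma_1 \quad \text{(exit part)}, \\ u + H(x,Du) \geq 0 & \text{on } \Gamma_2 \quad \text{(state constraint)}, \end{cases}$$

all interpreted in the viscosity sense, with $\partial\Omega = \Gamma_1 \cup \Gamma_2$ and $g$ the exit cost.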
The work of P. Cannarsa was partially supported by the Italian National Project “Equazioni Differenziali e Calcolo delle Variazioni”. H. M. Soner's work was supported by National Science Foundation
Grant DMS-90-02249.
1. Aubin JP, Ekeland I (1984) Applied Nonlinear Analysis. Wiley Interscience, New York
2. Barles G, Perthame B (1988) Exit time problems in optimal control and vanishing viscosity method. SIAM J Control Optim 26:1133–1148
3. Barles G, Perthame B (1987) Discontinuous solutions of deterministic optimal stopping time problems. Math Methods Numer Anal 21(4):557–579
4. Cannarsa P (1989) Regularity properties of solutions to Hamilton-Jacobi equations in infinite dimensions and nonlinear optimal control. Differential Integral Equations 2:479–493
5. Capuzzo-Dolcetta I, Lions PL (to appear) Hamilton-Jacobi equations and state constraints problems. Trans Amer Math Soc
6. Clarke F (1983) Optimization and Nonsmooth Analysis. Wiley, New York
7. Crandall MG, Lions PL (1983) Viscosity solutions of Hamilton-Jacobi equations. Trans Amer Math Soc 277:1–42
8. Crandall MG, Lions PL (1985) Hamilton-Jacobi equations in infinite dimensions, I. J Funct Anal 62:379–396
9. Crandall MG, Lions PL (1986) Hamilton-Jacobi equations in infinite dimensions, II. J Funct Anal 65:368–405
10. Crandall MG, Lions PL (1986) Hamilton-Jacobi equations in infinite dimensions, III. J Funct Anal 68:368–405
11. Crandall MG, Lions PL (preprint) Hamilton-Jacobi equations in infinite dimensions, IV
12. Crandall MG, Evans LC, Lions PL (1984) Some properties of the viscosity solutions of Hamilton-Jacobi equations. Trans Amer Math Soc 282:487–502
13. Federer H (1959) Curvature measures. Trans Amer Math Soc 93:429–437
14. Giles JR (1982) Convex Analysis with Application in Differentiation of Convex Functions. Pitman, Boston
15. Ishii H (to appear) A boundary value problem of the Dirichlet type for Hamilton-Jacobi equations. Ann Scuola Norm Sup Pisa Cl Sci (4)
16. Lions PL (1982) Generalized Solutions of Hamilton-Jacobi Equations. Pitman, Boston
17. Lions PL (1985) Optimal control and viscosity solutions. Proc. Conf. Dynamic Programming, Rome, 1983. Springer-Verlag, Berlin
18. Loreti P (1987) Some properties of constrained viscosity solutions of Hamilton-Jacobi-Bellman equations. SIAM J Control Optim 25:1244–1252
19. Soner HM (1986) Optimal control with state-space constraint, I. SIAM J Control Optim 24:522–561
20. Soner HM (1988) On the Hamilton-Jacobi-Bellman equations in Banach spaces. J Optim Theory Appl 57:429–437
Author Affiliations
□ 1. Dipartimento di Matematica, Via F. Buonarroti 2, 57127, Pisa, Italy
□ 2. Scuola Normale Superiore, 56126, Pisa, Italy
□ 3. Department of Mathematics, Carnegie-Mellon University, 15213, Pittsburgh, PA, USA | {"url":"http://link.springer.com/article/10.1007%2FBF01447742","timestamp":"2014-04-17T06:05:33Z","content_type":null,"content_length":"45473","record_id":"<urn:uuid:3e71097a-ee07-43a3-a216-39568d4e56e1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Apps (With Math Coursemate With Ebook Printed Access Card) 1st Edition | 9780840058225 | eCampus.com
List Price: $63.00
Currently Available, Usually Ships in 24-48 Hours
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 1st edition with a publication date of 1/1/2011.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
Created through a "student-tested, faculty-approved" review process, MATH APPS is an engaging and accessible solution to accommodate the diverse lifestyles of today's learners at a value-based price.
The book's concept-based approach, multiple presentation methods, and interesting and relevant applications keep students who typically take the course--business, economics, life sciences, and social
sciences majors--engaged in the material. An innovative combination of content delivery both in print and online provides a core text and a wealth of comprehensive multimedia teaching and learning
assets, including end-of-chapter review cards, downloadable flashcards and practice problems, online video tutorials, and solutions to exercises, all aimed at supplementing learning outside of the classroom.
Table of Contents
Linear Equations and Functions
Solutions of Linear Equations and Inequalities in One Variable
Linear Inequalities
Relations and Functions
Graphs of Functions
Function Notation
Domains and Ranges
Operations with Functions
Linear Functions
Rate of Change
Slope of a Line
Writing Equations of Lines
Solutions of Systems of Linear Equations
Graphical Solution
Solution by Substitution
Solution by Elimination
Three Equations in Three Variables
Applications of Functions in Business and Economics
Total Cost, Total Revenue, and Profit
Break-Even Analysis
Supply, Demand, and Market Equilibrium
Supply, Demand, and Taxation
Chapter Exercises
Quadratic and Other Special Functions
Quadratic Equations
Factoring Methods
The Quadratic Formula
Quadratic Functions: Parabolas
Business Applications of Quadratic Functions
Supply, Demand, and Market Equilibrium
Break-Even Points and Maximization
Special Functions and Their Graphs
Basic Functions
Polynomial and Rational Functions
Piecewise Defined Functions
Modeling Data with Graphing Utilities (optional)
Chapter Exercises
Chapter 3 Matrices
Operations with Matrices
Addition and Subtraction of Matrices
Scalar Multiplication
Multiplication of Matrices
Product of Two Matrices
Gauss-Jordan Elimination: Solving Systems of Equations
Systems with Unique Solutions
Systems with Nonunique Solutions
Nonsquare Systems
Inverse of a Square Matrix
Matrix Equations
Chapter Exercises
Inequalities and Linear Programming
Linear Inequalities in Two Variables
One Linear Inequality in Two Variables
Systems of Linear Inequalities
Linear Programming: Graphical Methods
Solving Graphically
The Simplex Method: Maximization
The Simplex Method
Tasks and Procedure
Nonunique Solutions: Multiple Solutions and No Solution
The Simplex Method: Duality and Minimization
Dual Problems
Duality and Solving
The Simplex Method with Mixed Constraints
Mixed Constraints and Maximization
Mixed Constraints and Minimization
Chapter Exercises
Exponential and Logarithmic Functions
Exponential Functions
Modeling with Exponential Functions
Logarithmic Functions and Their Properties
Logarithmic Functions and Graphs
Modeling with Logarithmic Functions
Properties of Logarithms
Change of Base
Applications of Exponential and Logarithmic Functions
Solving Exponential Equations Using Logarithmic
Growth and Decay
Economic and Management Applications
Chapter Exercises
Mathematics of Finance
Simple Interest and Arithmetic Sequences
Future Value
Arithmetic Sequences
Compound Interest and Geometric Sequences
Compound Interest. Geometric Sequences
Future Values of Annuities
Ordinary Annuities
Annuities Due
Present Values of Annuities
Ordinary Annuities
Annuities Due
Deferred Annuities
Loans and Amortization
Unpaid Balance of a Loan
Chapter Exercises
Introduction to Probability
Probability and Odds
Sample Spaces and Single Events
Empirical Probability
Unions, Intersections, and Complements of Events
Inclusion-Exclusion Principle
Conditional Probability: The Product Rule
Probability Trees and Bayes' Formula
Probability Trees
Bayes' Formula
Counting: Permutations and Combinations
Permutations, Combinations, and Probability
Chapter Exercises
Probability and Data Description
Binomial Probability Experiments
Describing Data
Statistical Graphs
Types of Averages
Variance and Standard Deviation
Discrete Probability Distributions
Discrete Probability Distributions
Measures of Dispersion
The Binomial Distribution
Binomial Formula
Normal Probability Distribution. z-Scores
Chapter Exercises
Notion of a Limit
Properties of Limits, Algebraic Evaluation
Limits of Piecewise Defined Functions
Continuous Functions
Limits at Infinity
Continuous Functions
Limits at Infinity
Average and Instantaneous Rates of Change: The Derivative
Instantaneous Rates of Change: Velocity
Tangent to a Curve
Differentiability and Continuity
Derivative Formulas
Additional Formulas
Marginal Revenue
The Product Rule and the Quotient Rule
Product Rule
Quotient Rule
The Chain Rule and the Power Rule
Chain Rule
Power Rule
Using Derivative Formulas
Higher-Order Derivatives
Second Derivatives
Higher-Order Derivatives
Derivatives in Business and Economics
Marginal Revenue
Marginal Cost. Marginal Profit
Chapter Exercises
Applications of Derivatives
Relative Maxima and Minima: Curve Sketching
Concavity: Points of Inflection
Points of Inflection
Second-Derivative Test
Optimization in Business and Economics
Absolute Extrema
Maximizing Revenue
Minimizing Average Cost. Maximizing Profit
Applications of Maxima and Minima
Rational Functions: More Curve Sketching
More Curve Sketching
Chapter Exercises
Derivatives Continued
Derivatives of Logarithmic Functions
Using Properties of Logarithms
Derivatives of Exponential Functions
Implicit Differentiation
Related Rates
Percent Rates of Change
Solving Related-Rates Problems
Applications in Business and Economics
Elasticity of Demand
Taxation in a Competitive Market
Chapter Exercises
Indefinite Integrals
The Indefinite Integral
The Power Rule
Integrals Involving Exponential and Logarithmic Functions
Integrals Involving Exponential Functions
Integrals Involving Logarithmic Functions
The Indefinite Integral in Business and Economics
Total Cost and Profit
National Consumption and Savings
Differential Equations
Solution of Differential Equations
Separable Differential Equations
Applications of Differential Equations
Chapter Exercises
Definite Integrals: Techniques of Integration
The Definite Integral: The Fundamental Theorem of Calculus
Estimating the Area under a Curve
Area between Two Curves
Definite Integrals in Business and Economics
Continuous Income Streams
Consumer's Surplus
Producer's Surplus
Using Tables of Integrals
Integration by Parts
Improper Integrals and Their Applications
Chapter Exercises
Functions of Two or More Variables
Functions of Two or More Variables
Partial Differentiation
First-Order Partial Derivatives
Higher-Order Partial Derivatives
Functions of Two Variables in Business and Economics
Joint Cost and Marginal Cost
Production Functions
Demand Functions
Maxima and Minima
Constrained Optimization and Lagrange Multipliers
Chapter Exercises
Answers to Odd-Numbered Exercises
Table of Contents provided by Publisher. All Rights Reserved. | {"url":"http://www.ecampus.com/math-apps-math-coursemate-ebook-printed/bk/9780840058225","timestamp":"2014-04-18T04:00:42Z","content_type":null,"content_length":"66226","record_id":"<urn:uuid:3464a611-5710-4f8f-8db2-2cc97521cb32>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Confidence and prediction intervals for forecasted values
The 95% confidence interval for the forecasted value ŷ of x is

ŷ ± t_crit · s_y.x · √(1/n + (x − x̄)²/SSx)

where t_crit is the two-tailed critical value of the t distribution with n − 2 degrees of freedom, s_y.x is the standard error of the estimate, x̄ is the mean of the x values, and SSx = Σ(x_i − x̄)².
This means that there is a 95% probability that the true linear regression line of the population will lie within the confidence interval of the regression line calculated from the sample data.
Figure 1 – Confidence vs. prediction intervals
In the graph on the left of Figure 1, a linear regression line is calculated to fit the sample data points. The confidence interval consists of the space between the two curves (dotted lines). Thus
there is a 95% probability that the true best-fit line for the population lies within the confidence interval (e.g. any of the lines in the figure on the right above).
There is also a concept called prediction interval. Here we look at any specific value of x, x[0], and find an interval around the predicted value ŷ[0] for x[0] such that there is a 95% probability
that the real value of y (in the population) corresponding to x[0] is within this interval (see the graph on the right side of Figure 1).
The 95% prediction interval of the forecasted value ŷ[0] for x[0] is

ŷ[0] ± t_crit · s_pred

where the standard error of the prediction is

s_pred = s_y.x · √(1 + 1/n + (x[0] − x̄)²/SSx)
For any specific value x[0] the prediction interval is more meaningful than the confidence interval.
Example 1: Find the 95% confidence and prediction intervals for the forecasted life expectancy for men who smoke 20 cigarettes in Example 1 of Method of Least Squares.
Figure 2 – Confidence and prediction intervals for data in Example 1
Referring to Figure 2, we see that the forecasted value for 20 cigarettes is given by FORECAST(20,B4:B18,A4:A18) = 73.16. The confidence interval, calculated using the standard error 2.06 (found in
cell E12), is (68.70, 77.61).
The prediction interval is calculated in a similar way using the prediction standard error of 8.24 (found in cell J12). Thus life expectancy of men who smoke 20 cigarettes is in the interval (55.36,
90.95) with 95% probability.
Example 2: Test whether the y-intercept is 0.
We use the same approach as that used in Example 1 to find the confidence interval of ŷ when x = 0 (this is the y-intercept). The result is given in column M of Figure 2. Here the standard error is

s_y.x · √(1/n + x̄²/SSx)

i.e. the Example 1 confidence-interval standard error evaluated at x[0] = 0, and the confidence interval is computed in the same way.
Since 0 is not in this interval, the null hypothesis that the y-intercept is zero is rejected.
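Both examples can be reproduced outside Excel. The sketch below uses made-up data (the article's cigarette/life-expectancy table is not reproduced here), but the interval calculations follow the formulas described above:

```python
import math
import numpy as np

# Hypothetical data: x = cigarettes per day, y = life expectancy
x = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([82.0, 79.0, 77.0, 73.0, 72.0, 69.0, 65.0])

n = len(x)
b1, b0 = np.polyfit(x, y, 1)           # slope and intercept of the fit
s_yx = math.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
ss_x = float(np.sum((x - x.mean())**2))
t_crit = 2.571                          # two-tailed 95% t value for df = n - 2 = 5

x0 = 20.0
y0 = b0 + b1 * x0                       # forecasted value at x0
se_conf = s_yx * math.sqrt(1/n + (x0 - x.mean())**2 / ss_x)
se_pred = s_yx * math.sqrt(1 + 1/n + (x0 - x.mean())**2 / ss_x)

print("confidence interval:", (y0 - t_crit * se_conf, y0 + t_crit * se_conf))
print("prediction interval:", (y0 - t_crit * se_pred, y0 + t_crit * se_pred))
```

The prediction interval is always the wider of the two, because its standard error carries the extra 1 under the square root.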
9 Responses to Confidence and prediction intervals for forecasted values
1. Dr. Zaiontz,
Very neat and concise example. I’m particularly interested in a one sided C.I. (lower bound)
Would you agree to use
\hat{y} – t_{crit} s.e.
where t_{crit} should be calculated in Excel using =TINV(2*\alpha,df),
where \alpha = 1-p?
□ Joaquin,
I believe that what you wrote is correct.
2. Hi,
Whats the formula in J12? Cannot get the same results…
□ Hi Kristian,
J12 contains the same value as cell E9. The formula in E9 is =FORECAST(E8,B4:B18,A4:A18).
☆ Hi Charles,
I’m refering to J12, not J11
What formula is in cell J12??
I think it is in the (x – x_)^2 that something is wrong!
○ Hi Kristian,
The formula in cell J12 is =E10*SQRT(1+1/E5+(E8-E7)^2/E11).
3. Hi Charles,
Great. Thank u.
4. Please help how u got value of SSx which I suppose to be:-271.6
□ Anu,
SSx (cell E11) is calculated by the formula =DEVSQ(A4:A18). It has the value 2171.6. | {"url":"http://www.real-statistics.com/regression/confidence-and-prediction-intervals/","timestamp":"2014-04-20T08:15:37Z","content_type":null,"content_length":"45790","record_id":"<urn:uuid:3eaad70a-edaa-41db-973a-390b92084bb0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berezin integral
A Berezinian integral is an integral over a supermanifold that is a superpoint $\mathbb{R}^{0|q}$.
For the infinite-dimensional version see fermionic path integral.
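For concreteness (an editorial aside): on a single odd variable $\theta$ (so $\theta^2 = 0$), the Berezin integral is fixed, up to sign conventions, by the rules

$$\int d\theta\, 1 = 0, \qquad \int d\theta\, \theta = 1,$$

extended linearly. On $\mathbb{R}^{0|q}$ one integrates out each odd coordinate in turn, so the integral of a function of $\theta_1, \dots, \theta_q$ picks out the coefficient of the top monomial $\theta_1 \cdots \theta_q$.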
An exposition of the standard lore is here:
A general abstract discussion in terms of D-module theory is in
Revised on November 10, 2012 17:15:43 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/Berezin+integral","timestamp":"2014-04-20T23:28:59Z","content_type":null,"content_length":"27884","record_id":"<urn:uuid:23cc5fa8-c02c-432b-ac55-2df2411465af>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Where did the logic go?
Most programmers assume that there is something odd going on with data typing but the answer is operator precedence.
The cause of the problem is the pesky assignment operator =, which has a higher precedence than the logical operator and.
What this means is that:
$inrange=$a and $b;
is evaluated as
($inrange=$a) and $b;
because the assignment is higher priority and so is performed first.
That is, $inrange is assigned the logical value stored in $a, which in our example is true, and then the and is evaluated as the expression:
true and $b
but the result of the operation is thrown away as the assignment operator has already been evaluated.
What this means is that you might have been expecting $inrange to be false, i.e. the result of $a and $b, but it turns out to be true because it's equal to $a.
The solution to the problem is simple - brackets.
Instead of writing:
$inrange=$a and $b;
you have to write:
$inrange=($a and $b);
with this change you get the expected result of 'out range' just as in the original form of the if statement.
This problem is very difficult to spot unless you have a table of operator precedences stored in your head.
The only foolproof pattern that avoids precedence problems without having to check the precedence of every operator is to overuse brackets.
It is often said that brackets are free so go ahead and overuse them, but clarity can be lost if you really write things like:
which gives the same answer if you leave out all the brackets.
In the case of logical operators the problem described arises because there are two versions of and and of or.
You can write and or &&, and the only difference is precedence, with the first having a lower precedence than assignment and the second a higher precedence. What this means is that:
$inrange=$a && $b;
gives the correct, or rather the expected, answer of
$inrange=($a && $b);
In the same way there are two versions of or, namely or and ||, and the first has a precedence lower than assignment and the second a higher precedence.
So it looks as if the best advice is to always use && and || and forget that and and or exist.
Unfortunately this isn't a complete solution as there is no alternative form of xor with a higher precedence and hence the xor operator always has a lower precedence than
assignment so the problem occurs again if you write
$inrange=$a xor $b;
expecting it to mean
$inrange=($a xor $b);
when it actually means
($inrange=$a) xor $b;
You have to use brackets.
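As an aside (not part of the original puzzle): Python avoids this particular trap by design, because = is a statement rather than an operator, so the whole right-hand side is evaluated before anything is assigned:

```python
a, b = True, False
inrange = a and b   # '=' is a statement: the full RHS is evaluated first
print(inrange)      # prints False; no precedence trap is possible here
```

Languages that inherited the low-precedence word forms of the operators from Perl, such as Ruby, behave like PHP here.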
Copyright © 2014 i-programmer.info. All Rights Reserved. | {"url":"http://www.i-programmer.info/programmer-puzzles/142-php/1542-where-did-the-logic-go.html?start=1","timestamp":"2014-04-17T21:26:33Z","content_type":null,"content_length":"39279","record_id":"<urn:uuid:e6e72814-e16f-43c4-807e-7cd96d806372>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
solution combinations
December 22nd 2008, 12:41 PM #1
Dec 2008
solution combinations
I will have 7 numbers from 1 to 39. They can be any 7 numbers from 1 to 39. So into how many parts do I have to divide those numbers? For example it can be 1,6,9,23,33,35,37 or other different numbers. So
can I have a solution for that?
Are you asking: "How subsets of seven numbers are there using the integers 1 to 39?"
Hello, faruk_fin!
I will have 7 numbers from 1 to 39. They can be any 7 numbers of 1 to 39.
So how many parts i have to divide of those numbers.
Example: it can be 1,6,9,23,33,35,37 or other differents number.
So can i have a solution of that?
I will assume the question is: In how may ways can we take 7 of 39 objects?
The answer is: . $_{39}C_7 \;=\;{39\choose7} \;=\;\frac{39!}{7!\,32!} \;=\;15,\!380,\!937$ ways.
Thanks a lot.
But I want to know how many times I have to write those numbers 1 to 39, and each time there will be 7 numbers. So if someone gives me any 7 numbers from 1 to 39, there will be a similarity at least once.
Plz help me. Thanks again.
Thanks a lot.
But i want to know how many times i have to write those numbers 1 t0 39 n each time their will be 7 numbers. So if someone give me any 7 numbers of 1 to 39 at least their will be similarity once.
And how have to write all those numbers different time? Plz help me. Thanks again.
It's not that kind. I want to know how many times I have to write those numbers and how many parts there will be. Each time there will be 7 numbers, and they must be from 1 to 39. So for any 7 numbers I get there
will be a similarity. Plz help.
Last edited by mr fantastic; December 23rd 2008 at 06:33 PM.
It not that kind. I want know how many times i have to write those numbers n how many parts they will be. Each time ther will be 7 numbers they must to be 1 to 39. So any 7 numbers i get there
will be similarity. Plz help.
As much as you seem to dislike it, Soroban's answer is the number for which you are asking.
If you disagree with me then you must make your question more mathematically meaningful.
Hello again, faruk_fin!
I'll take a guess at what you are asking . . .
We have a list of the 15,380,937 sets of seven numbers from the set 1-to-39.
. . $\begin{array}{c} \{1,2,3,4,5,6,7\} \\ \{1,2,3,4,5,6,8\} \\ \{1,2,3,4,5,6,9\} \\ \vdots \\ \{33,34,35,36,37,38,39\} \end{array}$
How many times does a particular number (say, 23) appear on the list?
A set of seven that contains 23 is completed by choosing the other six numbers from the remaining 38, so any particular number appears in $\tfrac{7}{39}$ of the sets.
Therefore, "23" appears $_{38}C_6 \;=\;{38\choose6} \;=\;2,\!760,\!681$ times.
Lexington, MA (USA) | {"url":"http://mathhelpforum.com/statistics/65822-solution-combinations.html","timestamp":"2014-04-17T05:51:08Z","content_type":null,"content_length":"51321","record_id":"<urn:uuid:af2eb2bc-e26c-4be8-8dc1-692a8be4cba7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
pkgsrc-changes: CVS commit: pkgsrc
Subject: CVS commit: pkgsrc
To: None <pkgsrc-changes@netbsd.org>
From: Jason Beegan <jtb@netbsd.org>
List: pkgsrc-changes
Date: 03/07/2001 22:57:53
Module Name: pkgsrc
Committed By: jtb
Date: Wed Mar 7 20:57:52 UTC 2001
Update of /cvsroot/pkgsrc/math/pari
In directory netbsd.hut.fi:/tmp/cvs-serv2123
Log Message:
Initial import of pari.
PARI-GP is a package which is aimed at efficient computations in
number theory, but also contains a large number of other useful
functions. It is somewhat related to a Computer Algebra System, but
is not really one since it treats symbolic expressions as mathematical
entities such as matrices, polynomials, series, etc..., and not as
expressions per se. However it is often much faster than other CAS,
and contains a large number of specific functions not found elsewhere,
essentially for use in number theory.
This package can be used in an interactive shell (GP) or as a C/C++
library (PARI). It is free software, in the sense of freedom AND 'free
of charge'.
Vendor Tag: TNF
Release Tags: pkgsrc-base
N pkgsrc/math/pari/Makefile
N pkgsrc/math/pari/pkg/PLIST
N pkgsrc/math/pari/pkg/DESCR
N pkgsrc/math/pari/files/md5
N pkgsrc/math/pari/files/patch-sum
N pkgsrc/math/pari/patches/patch-aa
N pkgsrc/math/pari/patches/patch-ab
N pkgsrc/math/pari/patches/patch-ac
No conflicts created by this import
This protocol defines the interface for objects that represent custom terms in a force-field. Objects conforming to this protocol can be added to AdForceField objects thus extending the force field.
AdForceFieldTerm compliant objects operate on a system, i.e. an AdSystem or AdInteractionSystem object, calculating the energy and/or force due to the interaction which they represent. The potential
energy calculated by the term can be broken into multiple components and the value of each reported individually. For example a term representing an implicit solvent interaction could calculate both
polar and non-polar solvation energies.
Objects conforming to this protocol can accumulate their forces into an external ::AdMatrix structure instead of using an internal one. This feature increases calculation speed (more efficient memory usage, fewer additions) at the cost of not always being able to retrieve the interaction's contribution to the total force.
AdForceField assumes such objects observe AdSystemContentsDidChangeNotification from their systems if necessary and can update themselves on receiving such a notification.
Objects that conform to AdForceFieldTerm do not have to calculate both forces and energies, though obviously they must calculate at least one or the other to be useful.
Hypothesis tests and social science example
Consider a hypothetical illness. Let us call it "monthitis". It is thought that there is a serious reason why the condition might be more common among people born in a particular month. A study is to be done, but we do not have any advance knowledge of which month that would be. For simplicity, each month is modelled as having the same length; that is to say, the fact that a month may be 28, 29, 30 or 31 days long can be disregarded in an initial model to avoid unnecessarily high complexity.
(Obviously if you can program a computer to work out a better solution then by all means go ahead. I have used a binomial model with a normal approximation, without continuity correction, to get an estimate. Really a multinomial model is best here, but I cannot do that without a computer package to help, because the calculations are likely to be very complicated and error prone unless someone has a bright idea for an easy way of doing this by hand in a simplified analytical fashion.)
A normal approximation to a binomial model with p = (1/12) and n = 1000 (sample size 1000), and hence q = (11/12), will be considered preferable, to make this something that could be done using a calculator and a good table of Normal Z values.
Correct me if I am wrong, but this is what I think:
mean = np = 83.333...
variance = npq = 76.388...
Therefore we want the model N(83.333, 76.388...) [N = Normal distribution with mean then variance]
The standard deviation is 8.74 (approx.)
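The arithmetic above can be checked in a few lines; this is a minimal sketch using only the standard library:

```python
import math

n, p = 1000, 1 / 12      # sample size and per-month probability
q = 1 - p
mean = n * p             # np = 83.333...
var = n * p * q          # npq = 76.388...
sd = math.sqrt(var)      # about 8.74
```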
Now how many standard deviations do we need for a robust piece of evidence that allows the null hypothesis to be conventionally
rejected at the 1% level of significance? [You may be able to phrase that in better statistical terms]
Should we divide the value SP = 0.01 by 12 to allow for the month not being stated in advance?
This intuitively seems right to me, but is it statistically correct to divide by 12?
A value of SP = 0.005 (halving the SP to account for a two-tailed test) gave me about 2.58 standard deviations.
However if you divide that by 12 then SP = 0.0004166....
this gave me a figure of 3.34 standard deviations (ie. 29.1916 which added to the mean of 83.333 is 112.5... or let us say 113 people)
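The tail calculation above can be reproduced numerically. This sketch assumes scipy is available; note that dividing SP by 12 is the post's proposed adjustment, not an established rule:

```python
from scipy.stats import norm

sp = 0.01 / 2 / 12             # halve for two tails, then divide by 12 months
z = norm.ppf(1 - sp)           # about 3.34 standard deviations
threshold = 83.333 + z * 8.74  # about 112.5, rounded up to 113 people
```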
So if we have 113 people with monthitis in the most commonly encountered month for people with this condition, is this the point at
which statistical significance can be argued to be reasonable at the 1% level or have I missed something out ?
Intuitively I would expect the figure to be higher. (eg. 160 approx.) [Perhaps this is a reflection of the unwise notion of not stating the month in advance with a reason for the choice.]
Question 1:
Can anyone do this with a more “exact” model using a computer assuming 28 days in February and the correct number of days for all
other months, using a multinomial model (or Loglinear generalized linear model)?
Obviously I ask just out of interest here (I am not studying anything) and as a challenge/exercise/discussion.
In reality of course a proper scientific theory would be needed to justify the initial hypothesis. It would probably mean in practice that
an illness could be more likely to be caught in a particular season such as winter or summer for instance (the fact that I mentioned
birth is irrelevant really in terms of statistical testing, so a similar principle can be applied to a question involving the start date of
the illness).
A better question might therefore be:
Question 2
If a statistical test were done to establish whether the common cold were more likely to be caught in the winter (ie. From the 1st
December to 28th February) in a country in which the cold season is at this time of year according to climate statistics, what threshold
would need to be crossed to confirm using a hypothesis test in terms of number of people out of 1000 cases of a cold being caught
(using a suitable design of a study) being in winter relative to the number caught at a different time of year?
(A method would be nice for that ideally, personally I would start by calculating the number of days from the 1st of December to the
28th of February and divide by 365, then use this as p. Then use the binomial model and use a normal approximation if no computer is
available, but a pocket calculator and a normal distribution table is. Use SP = 0.01 – I wonder whether any adjustment is necessary. It
is not really two tailed. Good idea to check whether a continuity correction makes a significant difference – I don’t think that the
difference is going to matter much in this case.)
The second question is better because a natural climate related reason has been given together with the potential for a plausible
theory. The first question is not good practice in science because no good reason has been given in advance for a particular month of
birth of the monthitis sufferer having a greater likelihood of getting the condition later on in life. It is very unusual for an illness to be
just most likely in a particular month without the neighbouring month or months being also more likely to a lesser extent (whether it is
birth or catching the illness that is being considered).
There was a suggestion that I heard about in a serious study* where because of the education system starting the academic year in
September, a psychological or even physical bias had been noticed to a better performance towards those born near the start of the
academic year (September) relative to the end (August). [*IFS according to the BBC website in Oct 2007.]
This is very different to question 1 of course because I have given a scientific basis for the hypothesis (also the response variable of the
academic performance example above is a numeric concept allowing more detail not true or false as with my made up monthitis). You
would expect a similar result to a lesser extent in people born in October to the people born in September and a sudden jump from
September to August then a minor improvement in July. I suppose with that one you could define the start of the year to be on a
certain day (with careful checks on the official start dates in the country concerned), and do a test based upon the number of days into
the academic year rather than use months. Exactly what statistical test would be used there I do not know (correlation/more linear
statistical modelling?). Obviously a correlation would not prove a causal relationship (the correlation is probably quite weak and does
not prove the matter completely even if it is strong; the sample size however in the real study was probably massive which is still not
proof of course, but makes the evidence stronger). No statistics can ever really prove a causal relationship, but on the other hand what
other method would we use for a social science related study?
Last edited by SteveB (2013-07-07 05:13:12)
Re: Hypothesis tests and social science example
That is too much of a question.
General questions such as those are already covered in the theory of the distribution I intend to use.
Never underestimate the power of a demonstration- Edward Muntz
Please provide some data. It should consist of one column, labeled months ( Jan - Dec ) and the next column having the number connected with that month. Then I can solve the problem.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hypothesis tests and social science example
Question 1 was an entirely fictional example so I have made up some data.
Okay here goes:
Off the top of my head though suppose for instance that it was:
January: 114 people born in this month suffer from monthitis
February: 77
March: 78
April: 77
May: 74
June: 82
July: 72
August: 77
September: 71
October: 89
November: 93
December: 96
Right, so the question is: given that no statement has been made in advance about January having a higher frequency
of cases than any other month, and that a two-tailed test is needed, how statistically significant is the above result if it were
a study by a research group using a conventional hypothesis test?
Obviously my question is entirely made up including all of the data.
The sample size is n = 1000
All numbers for each month are confirmed cases of the fictional medical condition or illness.
Last edited by SteveB (2013-07-08 17:36:57)
Re: Hypothesis tests and social science example
That will work fine:
I will use a criterion of 5%.
Theory: there is no correlation between months and the number of illnesses.
Chi-squared = 20.936 with 11 degrees of freedom, giving p = 0.034 (3.404%).
By conventional criteria, this difference is considered to be statistically significant.
The p value answers this question: If the theory that generated the expected values were correct, what is the probability of observing such a large discrepancy (or larger) between observed and
expected values? A small P value is evidence that the data are not sampled from the distribution you expected.
I would reject the theory that months and illness are not correlated, since there is only about a 3% chance of seeing a discrepancy this large if the theory were true.
If your test wanted 1%, then we cannot rule out that the above data arose by chance.
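For what it's worth, the figures quoted in this thread can be reproduced with scipy's goodness-of-fit routine (a sketch, assuming scipy is available):

```python
from scipy.stats import chisquare

observed = [114, 77, 78, 77, 74, 82, 72, 77, 71, 89, 93, 96]
stat, p = chisquare(observed)  # expected counts default to the mean, 1000/12
# stat is about 20.936 and p about 0.034, i.e. roughly 3.4%
```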
Re: Hypothesis tests and social science example
That will work fine:
I will use a criterion of 5%.
Theory: there is no correlation between months and the number of illnesses.
x^2 = 20.936
Was the "x squared" bit based upon a statistical test? Was it a Chi-Squared Test? Are you sure that this is the correct test?
Or was this based upon linear regression?
There was ONLY SUPPOSED to be EXACTLY ONE illness concerned which ALL the people were known to have.
When you say "illnesses", was that a typing error, or did you model it as if you were counting how many illnesses in total
were reported to have occurred overall in that month? At the time of writing I have not been able to check how you could
model this. Perhaps linear regression can be used if this is the assumption, but that would be your question, not mine.
You are probably much more experienced than me in this, better qualified and have better software and a newer computer
so don't get me wrong I am not arguing with you it's just that you have used a different test to the one I would have leapt
for had I still had access to Genstat. Out of interest do you have a statistical software package? If so which one if you don't
mind me asking? (Is there a free tool for this with Wolfram? A Wolfram website I looked at some time ago did not seem to do this.)
(3.404 %)
My calculation seemed to suggest about 0.9 % but as you could tell I did NOT trust my answer at all. 3.404% seems right roughly.
That agrees with my rough intuition better than my attempt at a calculation. Was that a linear regression related probability?
Or Chi-Squared? Or Loglinear? Or Multinomial simulation?
By conventional criteria, this difference is considered to be statistically significant.
The p value answers this question: If the theory that generated the expected values were correct, what is the probability of observing such a large discrepancy (or larger) between observed and
expected values? A small P value is evidence that the data are not sampled from the distribution you expected.
I would reject the theory that months and illness are not correlated since there is only a 3% chance that the above result happened by chance.
If your test wanted 1% then we can not rule out that the above data is chance.
Yes I seemed to have "concluded" that we could, but realised that there was something wrong with my attempt.
I thought that using a graphics calculator to fudge a rough solution by trying to "simplify" things was not really going
to give a very accurate answer. I was going to suggest a Loglinear Contingency Table style solution to this, but needed
Genstat to remind me as to whether this was appropriate. I have still got the text book that accompanied the course
which I studied in 2006 on this topic (for which I got a grade 2 pass in a system where grade 1 is best and grade 4 is
just an ordinary pass grade so I was above average, but not exactly amazing at Linear Statistical Modelling).
I think I need a bit of a refresher course on this if I ever do get to use it for something serious, but as yet I have never
had a job which I would need to do this, but you never know what might happen.
I have decided that I accept your answer as correct, but do not know how you reached the answer nor what formal test
you did. You may even have done a simulation of the "exact" situation to see how many times something as significant
or more significant would happen. In which case you could consider that an unbeatable answer.
Last edited by SteveB (2013-07-09 01:29:15)
Re: Hypothesis tests and social science example
This is a non-parametric test called chi-squared goodness of fit.
January: 114 people born in this month suffer from monthitis
February: 77
March: 78
April: 77
May: 74
June: 82
July: 72
August: 77
September: 71
October: 89
November: 93
December: 96
The chi squared goodness of fit test just computed whether the above data could possibly come from the hypothesis that each month should have the same number. It is unlikely that is true.
There was ONLY SUPPOSED to be EXACTLY ONE illness concerned which ALL the people were known to have.
The calculation did precisely that.
(Is there a free tool for this with Wolfram? A Wolfram website I looked at some time ago did not seem to do this.)
Re: Hypothesis tests and social science example
Thankyou I am now convinced that you have answered the question 1 bit correctly.
I quite often find that I am okay once I have established which test is the right one to use
out of a very large number of hypothesis tests that have been invented.
Of course I deliberately designed the pseudo-study to make it unusual: it would only be valid to think
of the months in a categorical way if there were an artificial construct that made one unknown month
very different (perhaps) from all of the others, or that at least gave each month its own independent character.
If as in the "common cold" case you wanted to test for whether cold times of the year had
some link to the frequency of catching the cold virus you would not of course consider there
to be a sudden jump from February to March for instance. A natural occurrence rather than
a sudden artificial leap would be happening, so the study would be best if it were designed
in a way to give us more accurate information to test correlation between temperature and
the number of cases of the cold illness.
I suppose if a factory were giving off a pollutant in January only and in no other month, then if the
chemical were to make monthitis more likely to occur at birth, there might be a very rare use for this.
On the other hand, why would the newborn infant suddenly be able to "catch" the "condition" only
exactly at birth and not afterwards?
In practice it would not happen like that, so even in this case the study design would be daft.
To be frank I cannot at this time think of a serious reason why this would be a statistically desirable
hypothesis test study design.
On the other hand given the question that I asked I would give your answer full marks.
Nice one, and a good refresher lesson for me in Chi-Squared tests.
Last edited by SteveB (2013-07-09 04:58:39)
Re: Hypothesis tests and social science example
I think what I had meant really is perhaps initially to help understand things work out the distribution formula/formulas
of the maximum of the 12 values for the numbers in each month when distributed as you would expect when putting
1000 individuals randomly into the 12 categories.
Then to use this to work out the 99th percentile of the maximum of an arbitrary instance of 1000 individuals being randomly (uniformly)
put into 12 categories, or even according to the actual proportions of the months taking into account the exact number
of days. (The bit above is not really needed, but probably a good idea to try first if one were to try to calculate it exactly.)
Maybe it is better to say "Work out the 99th percentile having found a formula for the distribution of the maximum of a
given set in which 1000 people have been put into 12 categories according to a uniform distribution with
either equal proportions for all 12, or even more difficultly 12 non-equal proportions."
The answer is a single real number which should probably be rounded upwards to the nearest whole number above.
(A few rather vague guesses might be 113, 129, 123, etc., but without more evidence I do not know. Chi-Squared does not
give the correct answer to this because it treats all 12 categories as potentially biased, whereas I am considering,
as the null hypothesis, that either 11 are uniform and 1 is biased, or that all 12 are uniform. As I have explained,
this is so ridiculous in real-life contexts that I cannot think why a well-designed serious study would really do it.
An MIT lecture used something similar as a deliberately bad example of poor study design,
since it "draws a dartboard around the result after the data was collected". In my words, it either uses an unjustified and unsound SP, or
leaves the statistician the job of working out a suitably increased level of significance so that the hypothesis
COULD still be rejected, but only if some very remarkable result were obtained in which one month were very
unusual indeed. I am trying to formalise my attempt of dividing the significance probability by 12 (or 24), but intuitively it
probably needs a very complicated adjustment; it is more difficult than I thought, and probably more like dividing SP (0.01) by 132, say.
The SP fudge factor may of course be an irrational number with factorials, square roots and all sorts all over the place.)
I suppose I have been taught the ground work to be able to program a computer to do a simulation, but it is rather
awkward - not something I would want to spend too long on, and not an entirely satisfactory answer even then.
If someone finds things like this easy or has a software package that can solve this in less than
30 minutes work (eg. bobbym perhaps if his software packages are very powerful indeed.)
then it would be interesting to see if you could answer this to help build my intuition.
If a serious course taught how to solve this they would probably want the algebra that proves
and/or illustrates why it works and a numerical answer (eg. 121) would not get any marks.
My intuition is this is of Masters Degree Level, but someone might prove me wrong. (I had intended it to be only A-Level or BSc)
I wonder whether the loaded dice formulas are of any use with this... (Here we go again...)
The problem is rather like trying to prove a matter concerning a 12 sided die by rolling it 1000 times.
I have tried looking at the biassed coin wikipedia page and I cannot really adapt it for this because
that is fine for two states (heads,tails) but not for {1,2,3,4,5,6,7,8,9,10,11,12} - 12 states.
It is at least one level of difficulty above a problem where I found that the wiki site did something
I did not understand; anonimnystefy did not really give a very full account of it in the
Exercises - Statistics thread some time ago, though he sounded like he knew and understood it.
I can apply the formula, but I do not fully understand the derivation, and it would have to be
made even more complex to give a full solution to this problem.
Last edited by SteveB (2013-07-10 05:19:37)
Re: Hypothesis tests and social science example
The problem is rather like trying to prove a matter concerning a 12 sided die by rolling it 1000 times.
What is needed is a full understanding of the term "matter."
What property are you trying to prove? Please state it as simply as you can and try to think of an example. Then I can work on it.
Re: Hypothesis tests and social science example
Here is my simple version of the problem to illustrate the distribution change caused by taking
the maximum of a uniform set:
Consider taking a set of n values from a set containing m integers.
Let us take a pair {1,2} from {1,2,3,4,5}.
You can always define a maximum of the set {1,2} in this case it is 2.
Therefore it is reasonable to ask: What is the median and/or mean maximum of the pair taken from {1,2,3,4,5}?
So the pairs are:
{1,2} {1,3} {1,4} {1,5} [Maximums:2,3,4,5]
{2,3} {2,4} {2,5} [Maximums:3,4,5]
{3,4} {3,5} [Maximums:4,5]
{4,5} [Maximum: 5]
So the mean is: 4
The list of 10 items is {2,3,3,4,4,4,5,5,5,5} so the median is 4 (slight variations are possible).
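The enumeration above can be reproduced directly; a small standard-library check:

```python
from itertools import combinations
from statistics import mean, median

# All 2-element subsets of {1,...,5} and their maxima
maxima = sorted(max(c) for c in combinations(range(1, 6), 2))
# maxima is [2, 3, 3, 4, 4, 4, 5, 5, 5, 5]; both mean and median are 4
```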
To the nearest integer you could call the 99th percentile 5, but conventions could make it
slightly lower. I have decided that 4.97, based upon 2 + (5 - 2)*0.99, is the most sensible way
of calculating a non-integer version of the 99th percentile. Unless I have misunderstood the
Wikipedia entry, the standard formulas do not really work for this case: either you get a boring 5,
or a number that exceeds 5, which is obviously wrong (or at least not a good idea). I notice that the normal
model N(4, 1), that is, mean = 4 and variance = 1 so standard deviation = 1, gives a silly value of
6.33 for the 99th percentile if you take Z = 2.33.
Obviously the normal model assumes an unbounded real-valued domain, so it allows values that
cannot actually occur; 4.97 is my best attempt at this version of the problem.
Regarding the more difficult problem with 12 months, 1000 sampled and maximums taken, then
estimate the 99th percentile of repeats.
I have written in Java using the Math.random() function (subject to any issues regarding the
degree of correctness of the random distribution from 0 to 1 of the random generator),
a simulation of 1000 iterations of 1000 sampled days of the year converted into a month.
It assumes that there are 28 days in February, and 30 or 31 days in the other months in the
usual way you would expect them in an ordinary western calendar (UK, USA, etc..). So a non leap year.
I have then selection sorted the list of 1000 runs, and looked at about the last 20 items in
the list and made a judgement upon things like (989th, 990th, 991st cases) - for 99th percentile.
then (995th and 996th cases) - for 99.5th percentile (more robust two tailed test).
My conclusion after 5 runs of this 1000 of 1000 simulation:
(1) The 99th percentile I was trying to describe is 113 (once 112, but 113 four times).
(2) The 99.5th percentile I was also looking for is 114 (twice 113, but 114 three times).
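The Java simulation described above can be sketched in Python with a seeded generator. This is a re-run of the same idea (month lengths for a non-leap year, 1000 runs of 1000 people), not the original program; the seed is an arbitrary choice:

```python
import random
from bisect import bisect_right
from itertools import accumulate

DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # non-leap year
CUM = list(accumulate(DAYS))                              # cumulative day boundaries
rng = random.Random(42)                                   # arbitrary seed

maxima = []
for _ in range(1000):                 # 1000 simulated studies
    counts = [0] * 12
    for _ in range(1000):             # 1000 people per study
        day = rng.randrange(365)      # uniform day of the year
        counts[bisect_right(CUM, day)] += 1
    maxima.append(max(counts))

maxima.sort()
p99 = maxima[989]                     # roughly the 99th percentile of the maxima
```

Runs with different seeds land the 99th percentile near the 112-114 region reported above.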
It looks like my original normal approximation in post #1 was correct in terms of the logic
of what I was thinking about. My test data that followed was a good borderline case.
The Chi-Squared answer is a good answer to a slightly different question.
If there were 12 colours {red,yellow,blue,green,orange,purple,brown,black,white,grey,pink,indigo}.
Let us suppose that 1000 people were surveyed and a test was done to see whether the result
could be a random set of equally likely (p = (1/12)) with the people asked to give their favourite,
then the Chi-Squared test would be a very good test to do.
Provided that it is acceptable to think of the months as categories, ignoring the fact that time
is a major feature and that months merge into each other gradually in terms of the physical (e.g. climate)
effects thought to influence the thing being studied, the
Chi-Squared test would have done, but really it does not model the situation well. For instance, all
of the month deviations are taken into account, not just the 'maximum' month.
However, to set a threshold T such that the chance of the (unnamed) highest monthly frequency exceeding T
is 1%, that is, to answer "what is the borderline threshold T?", you would need my version of the problem.
Using this sort of analysis with data involving months is usually a study design weakness, but
perhaps there are exceptional cases when it is okay (I cannot think of any).
Last edited by SteveB (2013-07-17 07:27:57)
Re: Hypothesis tests and social science example
I do not know whether this will help or not but the first question appears to be
Consider taking a set of n values from a set containing m integers.
Let us take a pair {1,2} from {1,2,3,4,5}.
You can always define a maximum of the set {1,2} in this case it is 2.
Therefore it is reasonable to ask: What is the median and/or mean maximum of the pair taken from {1,2,3,4,5}?
Re: Hypothesis tests and social science example
I have checked your formula for the mean in the case: n = 3 and m = 8
I worked out the mean of 6.75 using my rather tedious method, and your formula also reassuringly gave 6.75 for this case.
I will therefore assume that the formula is correct and works for the mean.
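The n = 3, m = 8 check can be done by brute force. The closed form n(m+1)/(n+1) below is the standard textbook result for the mean maximum of n distinct values drawn from {1,...,m}; it agrees with 6.75 here, though it may not be the exact form of the formula quoted earlier:

```python
from fractions import Fraction
from itertools import combinations

n, m = 3, 8
maxima = [max(c) for c in combinations(range(1, m + 1), n)]
mean_max = Fraction(sum(maxima), len(maxima))   # 378/56 = 27/4 = 6.75
closed_form = Fraction(n * (m + 1), n + 1)      # n(m+1)/(n+1)
```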
Now what about median (50th percentile) and other percentiles like 99th percentile (and 99.5, 95, 97.5) ?
The question for that part is harder than I had bargained for because it is not a continuous set that we are
taking the numbers from and we have formulas that are correspondingly discrete so we cannot just integrate
the formula and use an area under the curve argument to define the percentile such as for the median:
median = (point at which area up to that point from lowest range value = 0.5)
With n = 3 and m = 8:
The boring answer is 7 on the basis of ((56 + 1)/ 2) = 28.5 then take the average of the 28th and 29th data item of 56 items.
This is okay for an integer version of the median problem, but some conventions might want a curve fitted then some
transformed normal or another continuous approximation used.
If we use the integer version then anything from about the 64th percentile onwards the answer is just 8.
I was hoping for a continuous analogue solution to give some answer inbetween 7 and 8 for 80th, 90th, 95th percentile etc....
In other words something like for the 90th percentile: (56+1)*0.9 = "51.3 th item" (informally)
Then some clever weighted average is done and ends up with something like 7.73
eg. (27 * 8) + (10 * 7 ) = 286
then 286 / 37 = 7.72972.... = 7.73 to 2 d.p.
based on the 63rd to 64th cut-off between 7 and 8, and the fact that (90 - 63) = 27 and (100 - 90) = 10
then invert the balance with 27 * 8 and 10 * 7 and divide by the frequency.
However that was just one thought I had about that particular case which might give some strange results in others.
It either needs formalizing. Or maybe it just doesn't give the right statistical answers.
It is difficult enough trying to do that for a simple case. The months case is a lot more difficult because there are
two phases of random generation: First populate 12 values using 1000 samples, then take the maximums, and
then repeat a large number of times (eg. 100, 1000, etc.) and work out a 99th percentile or perhaps the 99.5th percentile.
It would be a struggle to derive a general formula for a percentile for the case n = 3 and m = 8.
That was with a uniform sequence {1,2,3,4,5,6,7,8} and select 3 and take the maximum.
The set with the months is not a uniform set of 12 numbers. They are randomly distributed multinomially from 1000 samples.
Last edited by SteveB (2013-08-04 01:40:12)
Re: Hypothesis tests and social science example
With the case of taking 3 items from 8 numbers from one to 8 that is n = 3 and m = 8, and with the example of trying
to calculate the 90th percentile I have worked out this illustration of using a continuous approximation and using the
area under the curve:
A good approximation to the frequencies is the curve f(x) = (x - 1)(x - 2)/2, which passes through all six points.
By the area under a curve definition, integrating from 3 to 8, I am getting A = 535/12 = 44.5833333...
So 0.9A = 40.125
Using the area under the curve formula and by solving the cubic polynomial I am getting p = 7.7803 (to 4 d.p.)
Looks like a sensible answer and close to my rough estimate in the previous post.
By a similar process using the above approximation I am getting 6.6912 (to 4 d.p.) for the median.
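Assuming the fitted curve is f(x) = (x - 1)(x - 2)/2 (which passes through all six frequency points and integrates to the quoted 535/12), the percentile can be found by bisection on the cumulative area instead of solving the cubic by hand:

```python
def F(p):
    # antiderivative of (x-1)(x-2)/2, shifted so F(3) = 0
    return p**3 / 6 - 3 * p**2 / 4 + p - 0.75

A = F(8)               # total area, 535/12, about 44.5833
target = 0.9 * A       # area up to the 90th percentile
lo, hi = 3.0, 8.0
for _ in range(60):    # bisection; F is increasing on [3, 8]
    mid = (lo + hi) / 2
    if F(mid) < target:
        lo = mid
    else:
        hi = mid
p90 = (lo + hi) / 2    # about 7.7803
```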
Last edited by SteveB (2013-08-04 17:43:28)
Re: Hypothesis tests and social science example
You could argue that the curve between the points is not really justified and that perhaps a straight line between them
is better so you could start with the frequency chart and then draw a triangles and rectangles chart from it by joining
the points (X, Frequency) for X = 3 to X = 8.
X Frequency
3 1
4 3
5 6
6 10
7 15
8 21
The rectangle areas are: 1 + 3 + 6 + 10 + 15 = 35
The triangle areas are: 0.5 * (2 + 3 + 4 + 5 + 6) = 10
For the 90th percentile:
45 * 0.9 = 40.5
The first four rectangles are: 1 + 3 + 6 + 10 = 20
The first four triangles are: 0.5 * (2 + 3 + 4 + 5) = 7
Total of first four = 27
So 13.5 more needed to make up the 90th percentile.
This is out of 18 remaining.
Using a diagram I concluded that 15x + 3x^2 = 13.5
The valid solution is x = 0.778719262151...
Hence by adding 7 to this, my percentile estimate was 7.7787 (to 4 d.p.), assuming I haven't made any mistakes with that.
By a similar argument for the 50th percentile (the median) I am getting 6.683281573
or 6.6833 to 4 d.p.
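For the straight-line version, the same numbers drop out of the quadratic formula. This is my own sketch: inside [7, 8] the area of a strip of width x is 15x + 3x^2 (rectangle of height 15 plus a triangle with slope 6), and inside [6, 7] it is 10x + 2.5x^2 (height 10, slope 5).

```python
import math

def solve_strip(rect_h, slope, area):
    # solve rect_h*x + (slope/2)*x**2 = area for the positive root
    a, b, c = slope / 2, rect_h, -area
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# 90th percentile: 13.5 of the last 18 units of area are needed past X = 7
p90 = 7 + solve_strip(15, 6, 13.5)
# median: 22.5 units needed in total, 14.5 reached by X = 6, so 8 more past X = 6
med = 6 + solve_strip(10, 5, 8.0)
print(round(p90, 4), round(med, 4))  # 7.7787 6.6833
```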
The discrete integer median is 7 and the discrete integer 90th percentile is 8.
The non integer versions are really approximations based upon a continuous curve that either passes through the
points or gives a good approximation of the distribution. In this case I have made a curve through the points for the
first pair of estimates, and a series of straight lines for the second.
The original solution that I gave to the 12 months maximums problem used an approximation using the normal
distribution model without any transformation apart from using an appropriate standard deviation and mean.
It assumed that the distribution was symmetrical, whereas in reality it had a right skew caused by the greater capacity for
maximums higher than the mean compared to a squashed distribution below the mean. I could try transforming the
normal curve and some statistics software you can get nowadays makes this easy to do. I should think that the 99.5th
percentile is either 114 or 115, but it is difficult to be more exact without better software or writing some code myself
to do a better simulation. For the time being I am calling it 114.
(That was the 99.5th percentile estimate of maximums taken from 1000 samples put into 12 month categories randomly.)
Last edited by SteveB (2013-08-08 18:42:21)
Re: Hypothesis tests and social science example
I did 10000 repeats of 1000 samples, each run giving a maximum count for the highest month, then tallied frequencies
for each value of that maximum. My first proper run of this, after debugging, had a result of 114 for the 99.5th percentile,
but it was very close to 115 being the 99.5th percentile:
84 frequency 0
85 frequency 0
86 frequency 0
87 frequency 0
88 frequency 5
89 frequency 33
90 frequency 92
91 frequency 173
92 frequency 308
93 frequency 491
94 frequency 589
95 frequency 761
96 frequency 793
97 frequency 875
98 frequency 872
99 frequency 842
100 frequency 725
101 frequency 691
102 frequency 590
103 frequency 463
104 frequency 344
105 frequency 315
106 frequency 268
107 frequency 202
108 frequency 141
109 frequency 118
110 frequency 84
111 frequency 79
112 frequency 48
113 frequency 25
114 frequency 24
115 frequency 11
116 frequency 13
117 frequency 12
118 frequency 3
119 frequency 4
120 frequency 1
121 frequency 1
122 frequency 2
123 frequency 1
126 frequency 1
Median: 99
99.5th percentile: 114
Notice that 49 items are 115 or higher. With only one run of the code 115 could still be argued to be a good answer.
Technically the 99.5th percentile is 114, but it is so close that more data is needed.
EDIT/UPDATE: With an adjustment to the program to make it do 100,000 runs I have decided that 115 is the 99.5th percentile,
but it is not exactly that clear cut even then, with 400 runs of 116 or higher, and 553 runs of 115 or higher.
Assuming that run of 100,000 is typical though it is reasonable to call the 99.5th percentile 115.
So in a two tailed strong result (at the 1% significance level) for this at least 115 is needed in the highest month.
In the one tailed version at the 1% significance level, at least 113 is needed for a strong result according to the 100,000 repeats run.
(That is to say the 99th percentile was 113.)
For a two tailed (97.5th percentile) moderate evidence (5% level significance) result at least 110 was needed.
For a one tailed (95th percentile) moderate evidence (5% level significance) result at least 108 was needed.
The 90th percentile was 105, the median (50th percentile) was 98.
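For anyone wanting to reproduce this, here is a rough Python sketch of the simulation (not the original program; it assumes each sample falls in one of the 12 months with equal probability, and uses fewer repeats, so the tail estimates are cruder):

```python
import random

def max_month(samples=1000, months=12):
    # drop `samples` items uniformly into `months` bins, return the busiest bin
    counts = [0] * months
    for _ in range(samples):
        counts[random.randrange(months)] += 1
    return max(counts)

maxima = sorted(max_month() for _ in range(2000))
median = maxima[len(maxima) // 2]
p995 = maxima[int(0.995 * len(maxima))]
print(median, p995)  # typically around 98 and 114-115
```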
Last edited by SteveB (2013-08-09 06:34:57)
Re: Hypothesis tests and social science example
For the somewhat easier question (2) my answer is:
December has 31 days, January has 31 days, February has 28 days (assuming the study is done in a non-leap year).
31 + 31 + 28 = 90 days in winter period as defined in the question
365 - 90 = 275 days in non winter period
Let the sample size be n = 1000
mean in winter period using binomial model = np = (90/365) x 1000 = 246.57....
variance using binomial model = npq = (90/365) x (275/365) x 1000 = 185.775....
standard deviation = 13.62996....
Use normal approximation with 2.33 standard deviations from the mean for a 1% significance test, one tailed:
2.33 x 13.62996.... + 246.57.... = 278.3331....
Rounding up to allow a little extra, 279 would be my answer (using the normal approximation) to question 2.
So at least 279 cases of catching a cold in winter out of 1000 cases of catching a cold would give strong evidence
in a one tailed scientific study.
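The arithmetic can be checked with a few lines of Python (a sketch of the calculation above, using the same z = 2.33 one-tailed cutoff):

```python
import math

n, p = 1000, 90 / 365
mean = n * p                       # np = 246.575...
sd = math.sqrt(n * p * (1 - p))    # sqrt(npq) = 13.63...
threshold = mean + 2.33 * sd       # 278.33...
print(math.ceil(threshold))        # 279
```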
In practice the scientist would probably want to log every exact date combined with the recent weather conditions of each,
including maximum and minimum temperature of recent days prior to catching a cold, then do some sort of regression analysis of
temperature against the number of cold cases per unit of time. Exactly how to do a hypothesis test for this data I am not
sure at the moment. I think something similar did get taught in my course in 2006, but I cannot remember the details at present.
I have lost some of my notes and materials, but do still have the textbook that came with the course.
A scattergraph would probably show an interesting picture, but how to formalize it I am not sure. I think it was made easy
with some software which avoided the need for hand calculations.
Last edited by SteveB (2013-08-11 06:02:46)
Cary Millsap
I gave my two boys an old puzzle to solve yesterday. I told them that I’d give them each $10 if they could solve it for me. It’s one of the ways we do the “allowance” thing around the house.
Here’s the puzzle. A piece of string is stretched tightly around the Earth along its equator. Imagine that this string along the equator forms a perfect circle, and imagine that to reach around that
perfect circle, the string has to be exactly 25,000 miles long. Now imagine that you wanted to suspend this string 4 inches above the surface of the Earth, all the way around it. How much longer
would the string have to be to do this?
Before you read any further, guess the answer. How much longer would the string have to be? A few inches? Several miles? What do you think?
Now, my older son Alex was more interested in the problem than I thought he would be. He knows the formula for computing the circumference of a circle as a function of its diameter, and he knew that
raising the string 4 inches above the surface constituted a diameter change. So the kernel of a solution had begun to formulate in his head. And he had a calculator handy, which he loves to use.
We were at
for dinner. The rest of the family went in to order, and Alex waited in the truck to solve the problem “where he could have some peace and quiet.” He came into the restaurant in time to order, and he
gave me a number that he had cooked up on his calculator in the truck. I had no idea whether it was correct or not (I haven’t worked the problem in many years), so I told him to explain to me how he
got it.
When he explained to me what he had done, he pretty quickly discovered that he had made a unit conversion error. He had manipulated the ‘25,000’ and the ‘4’ as if they had been expressed in the same
units, so his answer was wrong, but it sounded like conceptually he got what he needed to do to solve the problem. So I had him write it down. On a napkin, of course:
The first thing he did was draw a sphere (top center) and tell me that the diameter of this sphere is 25,000 miles divided by 3.14 (the approximation of π that they use at school). He started
dividing that out on his calculator when I pulled the “Whoa, wait” thing where I asked him why he was dividing those two quantities, which caused him, grudgingly, to write out that C = 25,000 mi, that C = πd, and that therefore d = C/π. So I let him figure out that d ≈ 7,961 mi. There’s loss of precision there, because of the 3.14 approximation, and because there are lots of digits to the right of the decimal point after ‘7961’, but more about that later.
I told him to call the length of the original string C (for circumference) and to call the 4-inch suspension distance of the string h (for height), and then write me the formula for the length of the 4-inch high string, without worrying about unit conversion issues. He got the formula pretty close on the first shot. He added 4 inches to the diameter of the circle instead of adding 4 inches to the radius (you can see the ‘4’ scratched out and replaced with an ‘8’ in the “8 in/63360 in” expression in the middle of the napkin). Where did the ‘63360’ come from, I asked? He explained that this is the number of inches in a mile (5,280 × 12). Good.
But I asked him to hold off on the unit conversion stuff until the very end. He wrote the correct formula for the length of the new string, which is [(C/π) + 2h]·π (bottom left). Then I let him run the formula out on his calculator. It came out to something bigger than exactly 25,000; I didn’t even look at what he got. This number he had produced minus 25,000 would be the answer we were looking for, but I knew there would be at least two problems with getting the answer this way:
• The value of π is approximately 3.14, but it’s not exactly 3.14.
• Whenever he had to transfer a precise number from one calculation to the next, I knew Alex was either rounding or truncating liberally.
So, I told him we were going to work this problem out completely symbolically, and only plug the numbers in at the very end. It turns out that doing the problem this way yields a very nice little result.
Here’s my half of the napkin:
I called the new string length Cʹ and the old string length C. The answer to the puzzle is the value of Cʹ − C.
The new circumference Cʹ will be π times the new diameter, which is C/π + 2h, as Alex figured out. The second step distributes the π factor through the addition, resulting in Cʹ − C = πC/π + 2πh − C. The πC/π term simplifies to just C, and it’s the final step where the magic happens: Cʹ − C = C + 2πh − C reduces simply to Cʹ − C = 2πh. The difference between the new string length and the old one is 2πh, which in our case (where h = 4 inches) is roughly 25.133 inches.
So, problem solved. The string will have to be about 25.133 inches longer if we want to suspend it 4 inches above the surface.
Notice how simple the solution is: the only error we have to worry about is how precisely we want to represent π in our calculation.
Here’s the even cooler part, though: there is no ‘C’ in the formula for the answer. Did you notice that? What does that mean?
It means that the original circumference doesn’t matter. It means that if we have a string around the Moon that we want to raise 4 inches off the surface, we just need another 25.133 inches. How about a string stretched around Jupiter? Just 25.133 more inches. Betelgeuse, a star whose diameter is about the same size as Jupiter’s orbit? Just 25.133 more inches. The whole solar system? Just 25.133 more inches. The entire Milky Way galaxy? Just 25.133 more inches. A golf ball? Again, 25.133 more inches. A single electron? Still 25.133 inches.
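If you’d rather check the “no C in the answer” claim numerically than symbolically, here’s a quick sketch comparing two very different circle sizes (the golf-ball circumference of 5.28 inches is just an illustrative value):

```python
import math

def extra_length(C, h=4.0):
    # new circumference minus old, for a string raised h above a
    # circle of circumference C (all lengths in inches)
    return math.pi * (C / math.pi + 2 * h) - C

earth = extra_length(25000 * 63360)  # 25,000 miles, in inches
golf = extra_length(5.28)            # roughly a golf ball
print(round(earth, 3), round(golf, 3))  # 25.133 25.133
```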
This is the kind of insight that solving a problem symbolically provides. A numerical solution tends to answer a question and halt the conversation. A symbolic formula answers our question and
invites us to ask more.
The calculator answer is just a fish (pardon the analogy, but a potentially tainted fish at that). The symbolic answer is a fishing pole with a stock pond.
So, did I pay Alex for his answer? No. Giving two or three different answers doesn’t close the deal, even if one of the answers is correct. He doesn’t get paid for blurting out possible answers. He doesn’t even get paid for answering the question correctly; he gets paid for convincing me that he has created a correct answer. In the professional world, that is the key: the convincing.
Imagine that a consultant or a salesman told you that you needed to execute a $250,000 procedure to make your computer application run faster. Would you do it? Under what circumstances? If you just
trusted him and did it, but it didn’t do what you had hoped, would you ever trust him again? I would argue that you shouldn’t trust an answer without a compelling rationale, and that the
recommender’s reputation alone is not a compelling rationale.
The deal is, whenever Alex can show me the right answer and convince me that he’s done the problem correctly, that’s when I’ll give him the $10. I’m guessing it’ll happen within the next three days
or so. The interesting bet is going to be whether his little brother beats him to it. | {"url":"http://carymillsap.blogspot.co.uk/2012_03_01_archive.html","timestamp":"2014-04-17T09:52:53Z","content_type":null,"content_length":"73356","record_id":"<urn:uuid:51f65ccc-bff9-4cea-9268-f43f8c11d660>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
LRC - Trouble With Physics - Larsonian Physics and Einstein's Plague
A friend called me the other day and asked, “What is ‘Larsonian physics,’ as distinguished from ‘Newtonian physics?’” To compare the two systems (see here), it is helpful to understand Hestenes’
description of Newtonian physics. He writes: ^1
Newtonian mechanics is, therefore, more than a particular scientific theory; it is a well defined program of research into the structure of the physical world…. [The foundation of the program to
this day is that] a particle is understood to be an object with a definite orbit in space and time. The orbit is represented by a function x = x(t), which specifies the particle’s position x at
each time t. To express the continuous existence of the particle in some interval of time, the function x(t) must be a continuous function of the variable t in that interval. When specified for
all times in an interval, the function x(t) describes a motion of the particle.
This continuous function, representing the motion of an object from one location to another over time, which “expresses the continuous existence of a particle” and forms the foundation of Newtonian
physics, is replaced in Larsonian physics, as developed here at the LRC, by a progression of space and time, independent of any object. When either the space or time aspect of the space|time
progression oscillates, at a given point in the progression, a point in space (or time) is established, the position x of which is represented by the continuous function x = x(t) = 0 (or x = x(s) =
0); that is, the point’s spatial (or temporal) position, in the progression, no longer changes with the progression, due to the oscillation. In other words, it becomes stationary (non-progressing) in
space (or time); Its fixed position, relative to the progression, is actually generated and maintained dynamically by its oscillation (as a 1D analogy, we can think of a stationary fish swimming
against the current of a flowing stream.)
Thus, in the LRC’s Larsonian physics, the continuous function expresses a spatial or temporal location that had no prior existence, while in Newtonian physics, the continuous function expresses a
change between pre-existing locations. Subsequently, the change that Einstein introduced into the Newtonian program replaced Newton’s ideas of absolute time and absolute space, so that
spatio-temporal locations are no longer pre-existent in the Newtonian system, but are dynamically generated from a gravitational field in general relativity.
Of course, the fixed background of locations in Newton’s concept is still necessary to define motion in the quantum field theory of particle physics, where only Einstein’s special relativity is
employed and his general relativity is ignored, causing the immense trouble with the attempts to unify modern physics, which is the subject of this blog: The theory of gravity generates the space and
time continuum, while the theory of matter pre-supposes it, even though they are both field-theoretic constructs.
However, the important thing to understand about the LRC’s Larsonian physics is that the dynamic creation of points out of the space|time progression provides the basis for a physics of the
discretium, without the need to resort to the continuous field concept, or, as Einstein would have expressed it, it provides a basis for an algebraic physics, as opposed to a physics of the
continuum, or a geometric physics.
Though it’s not widely known, Einstein actually would have preferred a discretium based, or algebraic physics, but was unable to find a way to get to such a system. He was convinced, as today’s
physicists are too, now, after pursuing unification as diligently as he did (even though he was mocked for doing so at the time) that the space continuum is “doomed,” as Witten puts it.^2 In fact,
according to Arkani-Hamed, “The idea that [the space continuum] might not be fundamental is pretty well accepted…”^3 in the legacy physics community.
But, in his day, Einstein suffered alone, “plagued” with his thoughts that the assumption of a space and time continuum was probably the wrong approach, given that physical phenomena are quantized.
Nevertheless, all the while, he is celebrated as the champion who revolutionized continuum physics. John Stachel, of the University of Boston’s Center for Einstein Studies, who first discovered this “other side of Einstein,” explains: ^4
If one looks at Einstein’s work carefully, including his published writings, but particularly his correspondence and reminiscences of conversations with him, one finds strong evidence for the
existence of another Einstein, one who questioned the fundamental significance of the continuum. His skepticism seems to have deepened over the years, resulting late in his life in a profound
pessimism about the field-theoretical program, even as he continued to pursue it.
What Einstein would have discovered, had he lived to study the algebraic physics that we are developing at the LRC, is that, while the continuum is something that can be conceived by the human mind,
it isn’t necessary to conceive of it as an a priori construction needed to develop a discrete set of points, which was the great obstacle that baffled him. As Stachel points out, Einstein wrote to
Walter Dallenbach, confirming that his former student had also correctly grasped the “drawback” of the continuum, which drawback is essentially that it seems that one needs to have a continuum in
order to have a discontinuum:
The problem seems to me [to be] how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a
supplementary construction, not justified by the essence of the problem, [a construction] which corresponds to nothing “real.” But we still lack the mathematical structure unfortunately. How much
have I already plagued myself in this way.
Of course, the mathematics of the time were still going the opposite way. Mathematicians were happily following Dedekind and Cantor in constructing a continuum (infinite sets and smooth functions)
from a discontinuum (discrete numbers). In fact, Stachel, speculates that Einstein’s doubts about the reality of the continuum stem, in part, from his reading of Dedekind, from whom he borrows his
oft used phrase “free inventions of the human mind,” that did anything but endear him to Larson. Dedekind argues against the continuum by insisting that discontinuity in the concept of space does not
affect Euclidean geometry in the least:
For a great part of the science of space, the continuity of its configuration is not even a necessary condition…If anyone should say that we cannot conceive of space as anything else than
continuous, I should venture to doubt it and call attention to the fact that a far advanced, refined, scientific, training is demanded in order to perceive clearly the essence of continuity and
to comprehend that besides rational quantitative relations also irrational, and besides algebraic, transcendental quantitative relations, are conceivable.
Of course, it was highly unlikely that Einstein was aware that Dedekind’s intellectual journey into irrational numbers and infinite sets began some fifty years previously with his exposure to
Hamilton’s work, who had defined irrational numbers, but in the context of numbers derived, not from the abstract notion of a set, but from the intuition of the progression of time. And while
Hamilton’s work on irrational numbers in his “Algebra of Pure Time,” is little regarded today, who could have known that it would have been eventually synthesized by Clifford with the Grassmann
numbers as an “operationally interpreted” number, leading to Hestenes’ pioneering work in the recognition of geometric algebra as the unification of geometry and algebra through the geometric
While the bottom line can only be sketched at this point, all indications are that the mathematical structure, which Einstein pined for, that would enable him to be able to define a discontinuum,
without the aid of a continuum, appears to be at hand. To be sure, he outlined major conceptual obstacles with both concepts in his letter to Dallenbach:
Yet, I see difficulties of principle here too. The electrons (as points) would be the ultimate entities in such a system (building blocks). Are there indeed such building blocks? Why are they all
of equal magnitude? Is it satisfactory to say: God in his wisdom made them all equally big, each like every other because he wanted it that way? If it had pleased him, he could also have created
them different. With the continuum viewpoint one is better off in this respect, because one doesn’t have to prescribe elementary building blocks from the beginning. Further, the old question of
the vacuum! But these considerations must pale beside the overwhelming fact: The continuum is more ample than the things to be described.
Thus, Einstein was left to plague himself with these thoughts, but now with a knowledge of Hamilton’s “algebraic numbers,” Larson’s “speed displacements,” and Clifford’s “operational interpretation”
of numbers, and his multi-dimensional algebras, there is much more to work with than there was in Einstein’s day. Perhaps, it’s time to now stand with the legendary icon of physics and say:
[O]ne does not have the right today to maintain that the foundation [of physics] must consist of a field theory in the sense of Maxwell. The other possibility leads, in my opinion, to a
renunciation of the space-time continuum and to a purely algebraic physics.
Logically, this is quite possible: The system is described by a number of integers; “Time” is only a possible viewpoint, from which the other “observables” can be considered - an observable
logically coordinated to all the others. Such a theory doesn’t have to be based upon the probability concept…
I hope my friend would understand.
Rational Homotopy Theory
Rational homotopy theory is analogous to the study of linear algebra versus general ring and module theory. On the one hand, it is simpler, but yet has useful applications and predictive power. It
isolates in algebraic topology those questions and techniques that deal with non-torsion data.
For example, if M is an orientable manifold, using the differentiable forms, one can calculate the real cohomology of M. Quillen and Sullivan realized 30 years ago that this was just the top level of
information available-- in fact, information about the entire homotopy type of M is implicit in these differentiable forms. This, along with related methods of localization and completions, marked a
change in the approach to the study of homotopy types of topological spaces.
The rational homotopy type of a space X provides the backbone, to which more detailed information concerning various primes is attached to build a picture of the homotopy type of X. The techniques
involved in studying rational homotopy theory are simpler and more algebraic than those needed in traditional algebraic topology.
This course should be of use to students studying topology, commutative algebra, geometry, and several complex variables. The book by Halperin, et al provides an overall survey of the area.
@book {MR2002d:55014,
AUTHOR = {F{\'e}lix, Yves and Halperin, Stephen and Thomas, Jean-Claude},
TITLE = {Rational homotopy theory},
SERIES = {Graduate Texts in Mathematics},
VOLUME = {205},
PUBLISHER = {Springer-Verlag},
ADDRESS = {New York},
YEAR = {2001},
PAGES = {xxxiv+535},
ISBN = {0-387-95068-0},
MRCLASS = {55P62 (18Gxx 55U35)},
MRNUMBER = {2002d:55014},
MRREVIEWER = {John F. Oprea},
}
Math Forum Discussions - Re: US teachers are overworked and underpaid
Date: Oct 12, 2012 2:43 AM
Author: Paul A. Tanner III
Subject: Re: US teachers are overworked and underpaid
You still have no idea of the actual contract for the teachers in
terms of teaching hours.
Name a school district anywhere in FL where the classes are standard
1-hour classes - actually technically less since there needs to be
time for the students to get to the next class - where you think that
the teachers teach only five such classes per day.
Everything I said in the original post still stands even if one were
to say that based on some technicality that a 1-hour course is
actually only a 50 or 55 minute course due to needed time to get to
the next class, US teachers teach only 5 hours per day on average when
school is in session. This is because we're still talking about only
around 3 hours per day on average when school is in session for those
teachers of those top-performing countries of Finland, Korea, and
Japan. Still a very massive difference.
But then again, if those charts are overstating teaching hours for US
teachers, they could be overstating teaching hours for those countries' teachers as well, in which case
teachers as well, in which case they teach even less than three hours
per day!
On Fri, Oct 12, 2012 at 1:34 AM, Robert Hansen <bob@rsccore.com> wrote:
> We were talking primary school. High school is different.
> Bob Hansen
> On Oct 12, 2012, at 1:15 AM, Paul Tanner <upprho@gmail.com> wrote:
I taught in FL, at the secondary level.
Posts by Reid
Total # Posts: 30
3rd grade math
what is the next number in the sequence 1,3,9,27,______ I'm confused on the skip counting they use.
3rd grade math
sara wants to know how many boxes of cookies her entire girl scout troop sold. How could she use compatible numbers to find the sum? kate 52 boxes sara 43 boxes jen 18 boxes haliey 27 boxes
A car travels along a straight stretch of road. It proceeds for 11.3 mi at 51 mi/h, then 26.3 mi at 45 mi/h, and finally 42.3 mi at 31.1 mi/h. What is the car's average velocity during the entire
trip? Answer in units of mi/h
Bob and Jim had a total of 90 marbles. Bob had half a dozen more than Jim. How many marbles did Jim have?
in traveling to Greece, Jim found the exchange rate to be 148 drachma equal to one American dollar. He bought 3 of the same priced items and spent almost 10 American dollars. What was the maximum
amount of drachmas for each item?
4th grade word problems
if tiles at a store cost .$20 for a 1"x1" square and $.30 for 1"X2" rectangle what would be the cheapest way to buy tiles to make a tiled square 3 inches on each side.
Lee & Lynn started out on a trip at 8:00am and arrived at 4:00 pm. The next day they started back home at 11:00am using the same exact route. If their driving speed was constant for the entire trip,
at what time would they have been at the same place at the same time on both d...
A nine liter tank has 150 atmospheres of bromine in it at 27°C. What is the added mass of the tank due to the gas?
8. The usual partial pressure of oxygen that people get at sea level is 0.20 Atm., that is, a fifth of the usual sea level air pressure. People used to 1 Atm. air pressure begin to become
"light-headed" at about 0.10 Atm oxygen. As a rule of thumb, the air pressure d...
express the ratio 100 to 60 as a fraction
The length of the smaller of the two diagonals of a rhombus is 12 and the length of the larger diagonal is 16. What is the length of a side of the rhombus?
PO2+PHe+PCO2=1.2atm solve for PO2 PO2= 1.2atm-PHe-PCO2 answer is d
English 2
can somebody tell me how to make a body paragraph. How do i put this, whats the paragraph made of?
chemistry - grade 9
answer the question someone!
A rubberband has mass 8.5g. The spring constant is 13.0. The rubberband is pulled back 3.4 cm. What is the velocity when rubberband is let go?
ok.. sorry to post anyother question on here but i got the answer to my previous question but im having trouble with this one.. In the explosion of a hydrogen-filled balloon, 0.43 g of hydrogen
reacted with 7.2 g of oxygen to form how many grams of water vapor? (Water vapor is...
Are the following data sets on chemical changes consistent with the law of conservation of mass? (a) A 15.0 g sample of sodium completely reacts with 19.6 g of chlorine to form 34.6 g of sodium
chloride. the answer is yes, because 15 and 19.6 add up to 34.6 right?
Math - factoring
Oh yes, sorry, I almost forgot: whenever solving for x, take the problem (x-4)(x+3) and make each factor its own equation: 1) (x-4)=0 2) (x+3)=0. Then add the 4 and subtract the 3 and you get x = 4, -3
Math - factoring
First in these type of problems if there is a variable that is squared i always start by putting those in first 1) (x ) (x ) then i decide if im going to need a - or + and in this case because we
want to make it - we have to put one of each 2)(x- )(x+ ) then i just start by pl...
If 76g of Cr2O3 and 27g of Al completely ract to form 51g of Al2O3, how many grams of Cr are formed?
that did not help at all, in order for that expression to equal zero, the deta T or "(Tf-Ti)" term has to equal zero. Im stuck where you are Capacino, idk what im doing wrong, this was a test
question on my last exam and had me stumped.
What caused Mendel to get different ratio results in first and second experiments with crosses?
7th grade
If a man says he has three children who's ages are the product 36,the sum is the house number across the street and his oldest son loves chocolate cake,how old are the kids?
I did not see a second response to my answer to your question of what method?
Can Ms. Sue go back to my science question and see my response?
Yes - I think so?
Doesn't matter - not an exact number - the question is why the unit is used - just a general knowledge question?
Graduated cylinder is the tool you use or length by width by height. Is that what you mean by method?
Why can you express the volume of a gold nugget measured by the method in cubic units?
Solvable groups in which all subgroups are supplemented
Let $G$ be a solvable group in which all non-trivial subgroups are supplemented. (A subgroup $H$ of a group $G$ is called supplemented if $G=HK$ for some proper subgroup $K$. Such a subgroup $K$ is
called a supplement of $H$ in $G$). Suppose further that $G$ has a maximal subgroup $M$ having only finitely many supplements in $G$.
Is $G$ necessarily finite?
Dear Prof. Abdollahi, I am thinking about $Q_8 \times G$, where $G$ is a suitable infinite abelian poly-cyclic group. Did you examined such groups? – Shahrooz Apr 27 '13 at 20:31
@Shahrooz: You mean that if one takes $G=Q_8\times A$ for a suitable infinite finitely generated abelian group $A$, then $G$ may satisfy the conditions given in my question? I think such groups cannot be counterexamples to my question, as the question has a positive answer for supersolvable groups. Note that abelian polycyclic groups are exactly the finitely generated abelian groups. – Alireza Abdollahi Apr 28 '13 at 2:26
Dear Prof. Abdollahi, after some attempts, I could not find suitable $A$. – Shahrooz Apr 28 '13 at 19:20
etd AT Indian Institute of Science: Polarized Line Formation In Turbulent And Scattering Media
Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/911
Title: Polarized Line Formation In Turbulent And Scattering Media
Authors: Sampoorna, M
Advisors: Nagendra, K N
Keywords: Hanle Scattering; Polarized Line Formation; Stochastic Zeeman Line Formation; Light Scattering Theory; Atoms - Light Scattering; Hanle-Zeeman Line Formation; Zeeman Propagation Matrix; Scattering Polarization; Polarized Magnetic Media; Turbulent Magnetic Fields; Polarized Radiation; Solar Polarization; Polarized Line Radiation Transfer; Zeeman Line Radiative Transfer
Submitted: Apr-2008
Series/Report no.: G22427
Abstract: This thesis is devoted to improving our knowledge of the theory of polarized line formation in a magneto-turbulent medium, and in a scattering-dominated magnetized medium, where partial
redistribution (PRD) effects become important. Thus the thesis consists of two parts. In the first part we carry out a detailed investigation on the effect of random magnetic fields on
Zeeman line radiative transfer. In the second part we develop the theory of polarized line formation in the presence of arbitrary magnetic fields and with PRD. We present numerical
methods of solution of the relevant transfer equation in both Part I and Part II. In Chapter 1 we give a general introduction that describes the basic physical concepts required in both parts of the thesis. Chapters 2-6 deal with Part I, namely stochastic polarized Zeeman line formation. Chapters 7-10 deal with Part II, namely the theory and numerics of polarized line formation in scattering media. Chapter 11 is devoted to the future outlook on the problems described in Parts I and II of the thesis. Appendices are devoted to additional mathematical details.

Part I of the Thesis: Stochastic polarized line formation in magneto-turbulent media

Magneto-convection on the Sun has a size spectrum that spans several orders of magnitude and hence develops turbulent elements or eddies whose sizes are much smaller than the spatial resolution of current spectro-polarimeters (about 0.2 arcsec or 150 km at
the photospheric level). We were thus strongly motivated to consider the Zeeman effect in a medium where the magnetic field is random with characteristic scales of variation comparable
to the radiative transfer characteristic scales. In Chapter 2, we consider the micro-turbulent limit and study the mean Zeeman absorption matrix in detail. The micro-turbulent limit refers to the case when the scales of fluctuation of the random field are much smaller than the photon mean free paths associated with the line formation. The ‘mean’ absorption and anomalous dispersion coefficients are calculated for random fields with a non-zero mean value - isotropic or anisotropic Gaussian distributions that are azimuthally invariant about the
direction of the mean field. The averaging method is described in detail, and fairly explicit expressions for the mean coefficients are established. A detailed numerical investigation of
the mean coefficients illustrates two simple effects of the magnetic field fluctuations: (i) broadening of the components by fluctuations of the field strength, leaving the π-components
unchanged, and (ii) averaging over the angular dependence of the π and components. Angular averaging can modify the frequency profiles of the mean coefficients quite drastically, namely,
the appearance of an unpolarized central component in the diagonal absorption coefficient, even when the mean field is in the direction of the line-of-sight. For isotropic fluctuations,
the mean coefficients can be expressed in terms of generalized Voigt and Faraday-Voigt functions, which are related to the derivatives of the Voigt and Faraday-Voigt functions. In
chapter 3, we study these functions in detail. Simple recurrence relations are established and used for the calculation of the functions themselves and of their partial derivatives.
Asymptotic expansions are also derived. In Chapter 4, we consider the Zeeman effect from a magnetic field which has a finite correlation length(meso-turbulence) that can be varied from
zero to infinity and thus made comparable to the photon mean free-path. The random vector magnetic field B is modeled by a Kubo-Anderson process – a piecewise constant Markov process
characterized by a correlation length and a probability distribution function(PDF) for the random values of the magnetic field. The micro- and macro-turbulent limits are recovered when
the correlation length goes to zero or infinity respectively. Mean values and rms fluctuations around the mean values are calculated numerically for a random magnetic field with
isotropic Gaussian fluctuations. The effects of a finite correlation length are discussed in detail. The rms fluctuations of the Stokes parameters are shown to be very sensitive to the
correlation length of the magnetic field. It is suggested to use them as diagnostic tools to determine the scale of unresolved features in the solar atmosphere. In Chapter 5, using a statistical approach, we analyze the effects of random magnetic fields on Stokes line profiles. We consider the micro- and macro-turbulent regimes, which provide bounds for more general
random fields with finite scales of variations. The mean Stokes parameters are obtained in the micro-turbulent regime, by first averaging the Zeeman absorption matrix Φ over the PDF P(B)
of the magnetic field and then solving the concerned radiative transfer equation. In the macro-turbulent regime, the mean solution is obtained by averaging the emergent solution over P
(B). In this chapter, we consider the same Gaussian PDFs that are used to construct (Φ) in chapter 2. Numerical simulations of magneto-convection and analysis of solar magnetograms
provide the empirical PDF for the magnetic field line-of-sight component on the solar atmosphere. In Chapter 6, we explore the effects of different kinds of PDFs on Zeeman line
formation. We again consider the limits of micro and macro-turbulence. The types of PDFs considered are: (a) Voigt function and stretched exponential type PDFs for fields with fixed
direction but fluctuating strength. (b) Cylindrically symmetrical power law for the angular distribution of magnetic fields with given field strength. (c) Composite PDFs accounting for
randomness in both strength and direction obtained by combining a Voigt function or a stretched exponential with an angular power law. The composite PDF proposed has an angular
distribution peaked about the vertical direction for strong fields and is nearly isotropically distributed for weak fields, which could mimic solar surface random fields. We also
describe how the averaging technique for a normal Zeeman triplet may be generalized to the more common case of anomalous Zeeman splitting patterns.

Part II of the Thesis: Polarized line formation in scattering media - Theory and numerical methods

Many of the strongest and most conspicuous lines in the Second Solar Spectrum are strong lines that are formed rather high,
often in the chromosphere above the temperature minimum. From the standard, unpolarized and non-magnetic line-formation theory such lines are known to be formed under the conditions that
are very far from local thermodynamic equilibrium. They are characterized by broad damping wings surrounding the line core. Doppler shifts in combination with collisions cause photons
that are absorbed at a given frequency to be redistributed in frequency across the line profile in a complex way during the scattering process. Two idealized, limiting cases to describe
this redistribution are “frequency coherence” and “complete redistribution” (CRD), but the general theory that properly combines these two limiting cases goes under the name “partial
frequency redistribution” (PRD). Resonance lines which are usually strong can be properly modeled only when PRD is taken into account. To use these strong lines for magnetic field
diagnostics we need a line scattering theory of PRD in the presence of magnetic fields of arbitrary strength. In the second part of the thesis we develop such a theory and derive the
polarized PRD matrices. These matrices are then used in the polarized line transfer equation to compute the emergent Stokes parameters. Polarized scattering in spectral lines is governed
by a 4 x 4 matrix that describes how the Stokes vector is scattered in all directions and redistributed in frequency within the line. In Chapter 7, using a classical approach we develop
the theory for this redistribution matrix in the presence of magnetic fields of arbitrary strength and direction, and for a J = 0 → 1 → 0 transition. This case of arbitrary magnetic
fields is called the Hanle-Zeeman regime, since it covers both the partially overlapping weak and strong-field regimes, in which the Hanle and Zeeman effects respectively dominate the
scattering polarization. In this general regime the angle-frequency correlations that describe the so-called PRD are intimately coupled to the polarization properties. We also show how
the classical theory can be extended to treat atomic and molecular scattering transitions for any combinations of J quantum numbers. In chapter 8 , we show explicitly that for a J = 0 →
1 → 0 scattering transition there exists an equivalence between the Hanle-Zeeman redistribution matrix that is derived through quantum electrodynamics(Bommier 1997b) and the one derived
in Chapter 7 starting from the classical, time-dependent oscillator theory of Bommier & Stenflo (1999). This equivalence holds for all strengths and directions of the magnetic field.
Several aspects of the Hanle-Zeeman redistribution matrix are illustrated, and explicit algebraic expressions are given, which are of practical use for the polarized line transfer
computations. In chapter 9, we solve the polarized radiative transfer equation numerically, taking into account both the Zeeman absorption matrix and the Hanle-Zeeman redistribution
matrix. We compute the line profiles for arbitrary field strengths, and scattering dominated line transitions. We use a perturbation method (see eg. Nagendra et al. 2002) to solve the
Hanle-Zeeman line transfer problem. The limiting cases of weak-field Hanle scattering and strong-field Zeeman true absorption are retrieved. The intermediate regime, where both Zeeman absorption and scattering effects are important, is studied in some detail. The numerical method used to solve the Hanle-Zeeman line transfer problem in Chapter 9 is computationally expensive. Hence it is necessary to develop fast iterative methods like PALI (Polarized Approximate Lambda Iteration). As a first step in this direction, we develop such a method in
Chapter 10 to solve the transfer problem with weak field Hanle scattering. We use a ‘redistribution matrix’ with coupling between frequency redistribution and polarization and no domain
decomposition. Such a matrix is constructed by angle-averaging the frequency dependent terms in the exact weak field Hanle redistribution matrix for a two-level atom with unpolarized
ground level (that can be obtained by taking the weak field limit of the Hanle-Zeeman redistribution matrix). In the past, the PALI technique has been applied to redistribution matrices
in which frequency redistribution is ‘decoupled’ from scattering polarization, the decoupling being achieved by an adequate decomposition of the frequency space into several domains. In
this chapter, we examine the consequences of frequency space decomposition, and the resulting decoupling between the frequency redistribution and polarization, on the solution of the
polarized transfer equation for the Stokes parameters.
URI: http://hdl.handle.net/2005/911
Appears in Physics (physics)
Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
Spacetime Geometry Inside a Black Hole
by Jim Haldenwang
written Nov. 12, 2004
revised July 30, 2012
This paper describes the nature of spacetime in and around black holes. Toward the end of the paper, the reader is taken on an imaginary journey inside a black hole. But first, the reader is
introduced to a very elegant theory of space, time, and gravity – Einstein's general theory of relativity. General relativity gives us the tools we need to understand black holes. In this paper, I
assume the reader is familiar with calculus and special relativity. We start by reviewing the special theory of relativity, introducing a geometric approach that leads naturally to the general
The Special Theory of Relativity
Special relativity is called "special" because it describes a special type of motion: uniform, straight-line motion without any acceleration. Two observers who are in uniform, straight-line motion
relative to one another are called inertial observers. Their frames of reference are called inertial reference frames. If two observers are accelerating relative to one another, they are not inertial
observers, and their situation cannot be handled by special relativity. Einstein developed special relativity in the year 1905, focusing on inertial observers and ignoring acceleration. Over the next
ten years, he was able to generalize his theory to include all types of motion, including acceleration and gravity. Einstein published his general theory of relativity in 1916. (He published his
first paper on the subject in November, 1915.) We will explore the general theory of relativity shortly.
Special relativity is based on two principles. First is the special principle of relativity, which states that the laws of physics are the same in all inertial reference frames. This means that there
is no absolute rest frame. One observer's point of view is as valid as another's. The second principle is Einstein's light postulate, which states that the speed of light in vacuum is constant and
has the same value for all inertial observers. This postulate has been verified by innumerable experiments. These experiments have always found the same value for the speed of light in empty space,
regardless of the (uniform) motions of the measuring devices. The speed of light in vacuum is about 300,000 km (kilometers) per second. This speed is represented by the letter c, which stands for
celeritas (Latin for "swift").
The constancy of the speed of light runs counter to our ordinary, everyday experience of motion. Normally when we measure the speed of an object, we measure that speed with respect to something. For
example, if I am walking at 3 miles per hour, it is understood my speed is measured with respect to the ground I'm walking on. If I walk at 3 mph on a moving train, an observer standing next to the
tracks might measure the train's speed to be 60 mph, and my speed as 63 mph. A passenger sitting in the train would measure my speed to be 3 mph, however. We say that motion is relative. Light does
not behave like this, however. If I'm in a spaceship flying away from the Sun at 100,000 km/sec and I measure the speed of the Sun's light rays streaming past me, I will not get a value of 200,000 km
/sec, as one might expect. Instead, the measurement will turn out to be 300,000 km/sec – the same value obtained by observers here on Earth.
In order to understand this curious result, we must take a closer look at what we are measuring when we measure an object's speed. In order to measure the speed of anything, we divide the distance in
space by the amount of time it took to travel that distance. In other words, we measure speed in units of space divided by time. So if the speed of light is truly constant for all observers
regardless of their relative motions, it must be the units of space and time that change. Einstein's insight was to realize that space and time are not absolute and unchanging, as previously thought.
Instead, space and time are relative and depend on the motion of the observer.
In his special theory of relativity, Einstein worked out exactly how the units of space and time must vary in order for the speed of light to remain constant for all inertial observers. For example,
suppose two observers, Alice and Bob, both decide to measure the speed of a passing light ray from the Sun. Alice is in a spaceship zipping away from the Earth and Sun at 150,000 km/s (half the speed
of light), while Bob is at rest in his observatory back on Earth. Both Alice and Bob come up with the same speed for the light ray, namely 300,000 km/s. They both measured the distance the light ray
traveled over some period of time. In order for Alice's measurement on board her moving spaceship to come out the same as Bob's, Bob concludes that Alice's measuring stick must be shorter than his
own, and her stop watch must be running slower than his. From Bob's point of view, space has contracted in Alice's direction of motion and time for Alice has slowed down or dilated. Space contracts
and time dilates by exactly the amount needed so that when Alice measures the speed of a passing light ray, she obtains the same value that Bob does. Two handy mnemonics for remembering how space and
time change with motion are: "moving sticks are shortened" and "moving clocks run slow."
In order to quantify the amount that space and time change with uniform motion, Einstein introduced a factor called γ (gamma). In SI (standard international) units of meters (m) and seconds (s), γ is
given by

γ = 1 / √(1 − v^2/c^2) .
For example, if Alice's spaceship is traveling at half the speed of light, then v = 150,000,000 m/s, so 1 − v^2/c^2 = 1 − (150,000,000 m/s)^2 / (300,000,000 m/s)^2 = 0.75, and so γ = 1/√(0.75) ≈
1.15. To determine the length of Alice's measuring stick relative to Bob, we divide by γ. If her measuring stick is one meter long when she is at rest, then the length of Alice's stick when she is
moving at half the speed of light is, according to Bob, 1 m / 1.15 = 0.87 m. According to Alice, however, her stick is still exactly one meter long. To determine how much slower her clock runs
relative to Bob, we multiply by γ. Since 1 s × 1.15 = 1.15 s, Bob finds that Alice's clock runs about 15% slower than his.
In general relativity, SI units are not very convenient to work with. Instead, we use what are known as "geometrized units." With geometrized units, we measure both distance and time in meters. To
convert time from seconds to meters, we ask, how long does it take light to travel a distance of one meter? Since light travels 300 million meters in one second, light travels one meter in just one
300-millionth of a second. One meter of time is equal to 1/300,000,000 s. In geometrized units, the speed of light is 1 and dimensionless, since light travels 1 meter of distance in 1 meter of time
(c = 1 m / 1 m = 1). Geometrized units are much more convenient to work with in relativity theory. Also, as we shall soon see, relativity treats space and time much alike, so it is appropriate to use
the same units for both.
If we substitute c = 1 into the formula for γ given above, we obtain γ in geometrized units:

γ = 1 / √(1 − v^2) .
Returning to our example of Alice and Bob, we can now calculate γ using geometrized units. Since the spaceship is traveling at half the speed of light, v = 0.5, so 1 − v^2 = 1 − (0.5)^2 = 0.75, and γ
= 1/√(0.75) ≈ 1.15, the same result we obtained earlier.
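The same arithmetic is easy to check by machine. This short Python sketch (an illustration only; `gamma` is just a helper name) reproduces Bob's numbers for Alice at half the speed of light:

```python
import math

def gamma(v):
    """Lorentz factor in geometrized units (v as a fraction of c, |v| < 1)."""
    return 1.0 / math.sqrt(1.0 - v * v)

g = gamma(0.5)            # Alice's speed: half the speed of light
print(round(g, 2))        # 1.15
print(round(1.0 / g, 2))  # contracted length of a 1 m stick: 0.87 m
```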
In relativity theory, we often work with two coordinate systems (reference frames) that are in motion relative to one another. Each coordinate system represents spacetime as experienced by one of the
observers. Since spacetime has four dimensions (three of space and one of time), we need to specify four coordinates. If we know the coordinates of a point in one coordinate system, say (t, x, y, z),
we may want to know the coordinates of that point in the other coordinate system. In Newtonian mechanics, we can find these other coordinates, say (t′, x′, y′, z′), by using the Galilean
transformation. Consider two observers O and O′, where O′ moves with constant velocity v along the positive x axis of a Cartesian coordinate system, and the positions of O and O′ coincide with the
origin at time t = 0. See figure 1. In this configuration (known as the standard configuration), the Galilean transformation is given by
t′ = t, x′ = x − vt, y′ = y, z′ = z.
In special relativity, the Galilean transformation will not work. This is because of time dilation and spatial contraction. Notice, for example, how the time coordinate does not change in the
Galilean transformation, even though the two coordinate systems are in motion relative to one another. This reflects the Newtonian world view, in which time is absolute and unchanging. In special
relativity, the Galilean transformation is replaced by the Lorentz transformation. In geometrized units, the Lorentz transformation is given by
t′ = γ(t − vx), x′ = γ(x − vt), y′ = y, z′ = z. (1)
The Lorentz transformation can be derived by applying the special principle of relativity and the light postulate to two coordinate systems in the standard configuration described above.
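Equations (1) translate directly into code. In this Python sketch (the function name and sample event are illustrative choices, not from the text), a single event is re-expressed in a frame moving at half the speed of light:

```python
import math

def lorentz(t, x, y, z, v):
    """Lorentz transformation (1): standard configuration, geometrized units."""
    g = 1.0 / math.sqrt(1.0 - v * v)   # the Lorentz factor gamma
    return (g * (t - v * x), g * (x - v * t), y, z)

# A sample event, as seen from a frame moving at v = 0.5 along x:
print(lorentz(1.0, 0.0, 0.0, 0.0, 0.5))
```

Note that setting v = 0 recovers the identity, as it must: two observers at rest relative to one another share the same coordinates.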
In Euclidean geometry, we measure the spatial distance Δd between two objects or events by using the Pythagorean theorem:
Δd^2 = Δx^2 + Δy^2 + Δz^2 ,
where Δx, for example, is the change in x, that is, the difference between the x coordinates of two events. Suppose, for example, that Alice and Bob are playing catch. Bob throws the ball to Alice.
Let's refer to the place and time where Bob throws the ball as event P and the place and time where Alice catches it as event Q. We can designate the x coordinate of event P as x[P] and the x
coordinate of event Q as x[Q]. Then Δx = x[Q] − x[P]. Similarly, Δy = y[Q] − y[P] and Δz = z[Q] − z[P]. Using the Pythagorean theorem, we can compute Δd, the distance between these two events.
In Newtonian mechanics the Pythagorean theorem is used to measure the distance or spatial separation between two events. The distance Δd is frame-invariant, which means that two Newtonian observers
in relative motion to one another will always obtain the same value when they measure the distance between two events. In order to prove that the Pythagorean distance is invariant under the Galilean
transformation, we start with

Δd′^2 = Δx′^2 + Δy′^2 + Δz′^2

and replace x′, y′ and z′ using the Galilean transformation. For example, for two events measured at the same time t,

Δx′ = x′[Q] − x′[P] = (x[Q] − vt) − (x[P] − vt) = Δx .

After completing these algebraic manipulations, the reader can verify that

Δd′^2 = Δd^2 ,
which shows that the distance is frame-invariant under the Galilean transformation. The Pythagorean distance is not invariant under the Lorentz transformation, however. This is because in special
relativity, spatial distances can vary for observers who are moving relative to one another.
Although Einstein found that space and time measurements can vary in different reference frames, his math teacher Minkowski found a way to measure space and time together in such a way that the
measurement does not vary. This discovery was of fundamental importance to the subsequent development of general relativity. Minkowski replaced the Euclidean distance with the spacetime interval. The
frame-invariant spacetime interval Δs between two events is defined by
Δs^2 = −Δt^2 + Δx^2 + Δy^2 + Δz^2 .
Unlike ordinary distance, the spacetime interval measures the separation between two events in both space and time. In order to prove that the spacetime interval is invariant under the Lorentz
transformation, we start with

Δs′^2 = −Δt′^2 + Δx′^2 + Δy′^2 + Δz′^2

and replace t′, x′, etc. using the Lorentz transformation, as given above (1). After some algebraic manipulations, the reader can verify that

Δs′^2 = Δs^2 ,
which shows that the interval is frame-invariant. Minkowski's discovery allows us to use the spacetime interval in place of the theorem of Pythagoras in order to measure "distance" (spacetime
separation) in the non-Euclidean geometry of special relativity.
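The invariance of the interval can also be checked numerically. The following Python sketch (the sample event and the boost speed of 0.6 are arbitrary choices of mine) applies the Lorentz transformation and confirms that Δs^2 comes out the same in both frames:

```python
import math

def interval_sq(t, x, y, z):
    """Squared spacetime interval from the origin, geometrized units."""
    return -t * t + x * x + y * y + z * z

def lorentz(t, x, y, z, v):
    """Lorentz transformation (standard configuration)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return (g * (t - v * x), g * (x - v * t), y, z)

event = (5.0, 3.0, 1.0, 2.0)          # arbitrary event coordinates
boosted = lorentz(*event, v=0.6)      # same event in a moving frame
print(interval_sq(*event))            # -11.0
print(round(interval_sq(*boosted), 10))  # -11.0 (unchanged)
```

The boosted coordinates are quite different from the originals, yet the interval is identical: this is the "stick" whose spacetime length all inertial observers agree on.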
What does the frame-invariance of the spacetime interval really mean? Minkowski said: "From henceforth, space by itself, and time by itself, have vanished into the merest shadows and only a kind of
blend of the two exists in its own right." Consider a three-dimensional object, say a stick. It casts a two-dimensional shadow against a wall. As we turn the stick about, the length of the shadow
changes, even though the stick itself remains the same length. In an analogous fashion, we can imagine a four-dimensional spacetime "object." All inertial observers agree that this object has the
same "length" (interval) in spacetime. However, different observers see different lengths for the three-dimensional "shadow" of the object in space.
Using the spacetime interval and geometrized units, we can classify the separation between two events in one of three ways:
1. If Δs^2 < 0, the events are said to have timelike separation. For any two events with timelike separation, the separation in space is less than the separation in time. To see this, consider the inequality

Δs^2 = −Δt^2 + Δd^2 < 0 ,

which shows that Δd < Δt. In this case, it is possible for one inertial observer to experience both events. We say the events lie on the world line of such an observer. The observer is traveling
slower than the speed of light, since Δd/Δt < 1.
2. If Δs^2 = 0, the events are said to have null or lightlike separation. In this case the events lie on the world line of a light ray. The separation in space and the separation in time are equal.
(Δd = Δt, or Δd/Δt = 1, the speed of light in geometrized units.)
3. If Δs^2 > 0, the events are said to have spacelike separation. Two events with spacelike separation cannot lie on the same world line, since Δd/Δt > 1 (greater than the speed of light). In this
case there always exists some inertial reference frame in which the events are simultaneous (separated in space but not in time).
In special relativity, the paths of material particles are restricted to timelike world lines, and the paths of photons (light rays) are restricted to null or lightlike world lines. Spacelike world
lines are excluded. (Spacelike world lines correspond to paths that are faster than the speed of light, or that go backward in time.) See figure 2. In this diagram, the vertical axis t represents
time and the horizontal axis x represents one dimension of space. The origin O represents the present moment for some observer. The observer's future lies somewhere between the two lightlike world
lines, with t > 0. The observer's past lies somewhere between these two world lines, with t < 0. If we were to add a dimension by adding a y axis perpendicular to the x and t axes, the lightlike
world lines would form two cones meeting at the origin. These are called "light cones."
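The three cases can be gathered into a small classifier. In this Python sketch (the function name and the sample separations are illustrative), coordinate differences between two events determine the type of separation:

```python
def classify(dt, dx, dy, dz):
    """Classify the separation between two events by the sign of Δs²."""
    ds2 = -dt * dt + dx * dx + dy * dy + dz * dz
    if ds2 < 0:
        return "timelike"   # can lie on a material particle's world line
    if ds2 == 0:
        return "lightlike"  # lies on the world line of a light ray
    return "spacelike"      # no single observer can experience both events

print(classify(2.0, 1.0, 0.0, 0.0))  # timelike
print(classify(1.0, 1.0, 0.0, 0.0))  # lightlike
print(classify(1.0, 2.0, 0.0, 0.0))  # spacelike
```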
When two events have timelike separation (Δs^2 < 0), we define the proper time interval between the events to be Δτ (delta tau), where

Δτ^2 = −Δs^2 = Δt^2 − Δx^2 − Δy^2 − Δz^2 .
The proper time, also known as the wristwatch time, is the time measured by an observer who experiences both events. Since the proper time for lightlike world lines is zero, we can say that photons
do not experience the passage of time. (Δτ = |Δs| = 0 for photons.)
Let's consider the spacetime interval of a photon traveling in the x direction. For this photon, Δy and Δz are zero. Setting Δs^2 = 0, we obtain
0 = −Δt^2 + Δx^2, or

Δx^2 = Δt^2 ,

so that

|Δx/Δt| = 1 ,
which is the result to be expected. In all inertial frames of reference, the speed of light is constant and equal to one in geometrized units.
As another example, let's consider the case of the twin sisters, Mary and Jane. Jane decides to travel to the nearest star, Alpha Centauri. She has at her disposal a very fast spaceship, one that can
travel almost as fast as light. Her sister Mary is somewhat of a homebody, however, and prefers to stay on Earth. Let's compare the proper times of the two sisters, using Mary's reference frame for
our calculations. We ignore acceleration in this example, for which we would need to use general relativity. Also, instead of meters, we use years for t and light-years for x.
Mary says good-bye to Jane and waits 12 years. After 12 years, Jane returns from her journey to Alpha Centauri, 4.5 light-years away. Mary calculates her own proper time interval as follows:

Δτ = √(Δt^2 − Δx^2) = √(12^2 − 0^2) = 12 years.

Since she stayed at home, she used Δx = 0 in her calculation.

Mary is surprised to find that Jane has not aged as much as she has, however. She is not so surprised after calculating Jane's proper time interval. Using Δx = 9 light-years, she obtains:

Δτ = √(12^2 − 9^2) = √63 ≈ 7.9 years.
See figure 3. Again, this calculation is only approximately correct, since no adjustment was made for the time Jane spent accelerating and decelerating. However, this example illustrates an
interesting fact: the longest path through Minkowski spacetime is actually the one that involves no movement through space, only time. This is because we subtract the spatial components of the path
in order to compute the interval. Minkowski spacetime (our spacetime!) is truly non-Euclidean in character.
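The twins' bookkeeping takes only a few lines. This Python sketch works in years and light-years and, like the text, ignores acceleration:

```python
import math

def proper_time(dt, dx):
    """Proper time for motion along x, from Δτ² = Δt² − Δx² (geometrized units)."""
    return math.sqrt(dt * dt - dx * dx)

mary = proper_time(12.0, 0.0)   # stayed home: Δx = 0
jane = proper_time(12.0, 9.0)   # 4.5 light-years out and back
print(mary)              # 12.0 years
print(round(jane, 1))    # about 7.9 years
```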
The General Theory of Relativity
Special relativity does not take acceleration into account. Acceleration occurs whenever an observer changes her speed or direction of motion. In 1907, Einstein had what he called "the happiest
thought of my life," that the effects of acceleration cannot be distinguished from the effects of gravity. Einstein called this the principle of equivalence. For example, suppose you are traveling in
a spaceship far away from any planet. Suppose the rockets fire, causing the spaceship to accelerate forward at a rate of 10 m/s^2. You would soon find yourself on the "floor" of the spaceship (the
rear bulkhead). You would experience a force which could not be distinguished from gravity. You could stand up and walk around the rear bulkhead, and it would feel the same to you as if you were
walking around on the Earth.
Once Einstein realized that the effects of gravity and acceleration are equivalent, he was able to extend the principle of relativity from inertial reference frames to all reference frames. He did
this by showing that the laws of physics that describe acceleration could also be used to describe gravity.
In order to understand how Einstein made use of his principle of equivalence, imagine a playground merry-go-round (the type children grab onto and run next to in order to build up speed and then jump
onto). When the merry-go-round spins around, any children on board will experience an outward-directed force, called centrifugal force. Now imagine placing hundreds of toothpicks on the
merry-go-round, forming a series of concentric rings, some near the center of the merry-go-round, some out near the edge, and some in between. Imagine the toothpicks are glued in place and arranged
so that they all point in the direction of the merry-go-round's rotation. Now imagine standing next to the merry-go-round and watching it spin around, with the concentric rings of toothpicks whirling
around with it. By special relativity, since the toothpicks are in motion relative to you, they will become shorter ("moving sticks are shortened"). The toothpicks near the outside of the
merry-go-round move fastest, so they contract the most. Just as the toothpicks are shortened, so space itself is shortened or warped. Next, replace the toothpicks with small clocks, hundreds of them
all over the merry-go-round. Now when the merry-go-round whirls around the clocks run slower, with the ones near the outside running more slowly than the ones near the center. Time slows down on the
merry-go-round, especially near the outer edge. Thought experiments like this showed Einstein that spacetime must be warped in the presence of acceleration. And since he believed that acceleration is
equivalent to gravity, Einstein saw that spacetime must also be warped in the presence of gravity. He soon came to believe that matter and energy cause spacetime to warp, and this warping of
spacetime is what we experience as gravity.
According to Einstein, a kid on a merry-go-round is traveling through warped spacetime, which she experiences as centrifugal force. She is attempting to follow a path through warped spacetime that
leads away from the merry-go-round. Similarly, our travels through the warped spacetime near planet Earth cause us to experience the force we call gravity. (And note that even when we are standing
still, we are still traveling through time, so we still feel the force of gravity. We are always moving through spacetime.) With these insights to guide him, Einstein went on to develop his general
theory of relativity.
General relativity is Einstein's theory of gravity. It is founded on two core principles: (1) The principle of equivalence: the effects of acceleration and the effects of gravity are essentially
equivalent. (2) The general principle of relativity: the laws of physics are the same in all reference frames, including accelerating frames. An accelerating frame of reference without gravity is
equivalent to a non-accelerating frame with gravity. For example, the occupant of an accelerating car is pressed back against his seat. Alternatively, the occupant could choose to view himself as
stationary while the rest of the universe accelerates past him. From this alternative viewpoint, the gravitational field of the rest of the universe accounts for the force he experiences pressing him
back against his seat. General relativity tells us that these two viewpoints are equivalent.
General relativity is a geometric theory. Gravity is no longer treated as a force of attraction between two bodies. Instead, gravity is associated with the warping of space and time. The warping or
curvature of spacetime is caused by massive objects such as stars or planets. In fact, according to general relativity, matter, energy, and pressure all cause spacetime to warp.
How does the warped spacetime around Earth cause objects to fall toward it? Free-falling objects in a gravitational field follow geodesics, which are paths of extremal length. They are either the
longest or the shortest possible paths between any two points in spacetime. According to general relativity, free-falling objects take the longest possible paths through spacetime, so they follow
geodesics. We perceive free-falling objects in a gravitational field to be falling because they are following geodesics through curved spacetime.
Light rays also follow geodesics, but these geodesics cannot be defined in terms of length, since all lightlike paths have the same length (spacetime interval), namely zero. Instead, we define these
geodesics as the straightest possible paths through curved spacetime. This definition is made precise through the techniques of differential geometry, the generalized non-Euclidean geometry developed
in the 19th century by the great mathematicians Gauss and Riemann, and utilized by Einstein in general relativity.
In differential geometry, a curved surface is divided up into an infinite number of infinitesimally small pieces. Each piece is measured by the "metric" or "line element" ds, a differential which
gives the distance between two points that are infinitesimally close together on the surface. See figure 4. The techniques of differential geometry can be used to find an equation for the metric that
is valid anywhere on the curved surface. Unlike conventional geometry, with differential geometry distances can be determined on a curved surface without reference to anything outside of the surface.
For example, if we consider a sphere as a two-dimensional curved surface, the circumference of the sphere can be determined without knowing its radius, which lies outside the surface of the sphere.
The techniques of differential geometry will work in general, for any continuous curved surface whatsoever, in any number of dimensions. Just what Einstein needed for his theory of gravity.
In general, in the four dimensions of spacetime, the equation for the metric assumes the following form:
ds^2 = g[11] dt^2 + g[22] dx^2 + g[33] dy^2 + g[44] dz^2
+ 2 g[12] dt dx + 2 g[13] dt dy + 2 g[14] dt dz + 2 g[23] dx dy + 2 g[24] dx dz + 2 g[34] dy dz. (3)
The ten coefficients g[αβ] are, in general, functions of t, x, y and z. In three-dimensional Euclidean space, we set
g[22] = g[33] = g[44] = 1,
and the other coefficients equal to zero. We then obtain the conventional Pythagorean formula
ds^2 = dx^2 + dy^2 + dz^2,
which measures the distance between two infinitesimally close points in Euclidean space. To find the distance Δs between two points a finite distance apart, we sum up the differentials by integrating
the metric along the geodesic connecting the two points. Of course, in Euclidean space we know the geodesic is just the straight line connecting the two points, and we don't have to use integration
to find the distance. However, the technique of taking the line integral of the metric along the geodesic between two points to find the distance between them will work in any geometry whatsoever,
Euclidean or non-Euclidean, flat or curved.
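The line-integral idea can be illustrated numerically: approximate a path by many short segments, apply the metric to each, and sum. A sketch for the Euclidean case (function name mine):

```python
import math

def path_length(points):
    """Approximate the line integral of ds = sqrt(dx^2 + dy^2 + dz^2)
    along a path given as a list of closely spaced (x, y, z) points."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        total += math.sqrt((x1 - x0)**2 + (y1 - y0)**2 + (z1 - z0)**2)
    return total

# A straight line from (0,0,0) to (3,4,0), sampled in 1000 steps,
# recovers the ordinary Euclidean distance of 5.
n = 1000
line = [(3 * i / n, 4 * i / n, 0.0) for i in range(n + 1)]
print(path_length(line))   # ~5.0
```

The same summation scheme works for any metric, which is why the technique carries over to curved spacetime.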
The general formula for the metric given above (3) can be called the generalized Pythagorean theorem for four-dimensional spacetime. In the flat Minkowski spacetime of special relativity, we set
g[11] = −1, g[22] = g[33] = g[44] = 1,
and the other coefficients equal to zero. We then obtain the Minkowski metric
ds^2 = −dt^2 + dx^2 + dy^2 + dz^2. (4)
By integrating this metric along the world line connecting two events in flat (Minkowski) spacetime, we can find the interval Δs between the events. Again, there are easier ways to find the interval
in flat spacetime, but in the curved spacetime of general relativity, taking the line integral of the metric is the only way that will always work.
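The same segment-by-segment summation works for the Minkowski metric: integrating dτ = √(dt^2 − dx^2) along a sampled world line yields the interval even for a smoothly moving traveler. A sketch in geometrized units (c = 1); the world line chosen here is purely illustrative and keeps the speed below 1 throughout:

```python
import math

def proper_time_along(worldline):
    """Integrate dtau = sqrt(dt^2 - dx^2) along a list of (t, x) events
    on a timelike world line in flat (Minkowski) spacetime."""
    tau = 0.0
    for (t0, x0), (t1, x1) in zip(worldline, worldline[1:]):
        dt, dx = t1 - t0, x1 - x0
        tau += math.sqrt(dt**2 - dx**2)   # each segment must be timelike
    return tau

# A traveler who glides 3 light-years out and back over 12 years:
# x(t) = 3 sin(pi t / 12); peak speed 3*pi/12 ~ 0.79 < 1.
n = 10_000
wl = [(12 * i / n, 3 * math.sin(math.pi * i / n)) for i in range(n + 1)]
tau = proper_time_along(wl)
print(tau)   # less than 12 years: the moving traveler ages less
```

No closed-form shortcut is needed here; the integral alone gives the answer, which is the point of the technique.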
When gravity or acceleration is present, spacetime becomes warped, so the formula for the metric becomes more complex than the ones used for Euclidean or Minkowski space. We will see an example of
such a metric shortly, when we discuss black holes.
According to general relativity, the metric ds is frame-invariant. We can use it to find the interval between two events in any frame of reference, no matter what motion or acceleration may be
present. In order to change from one reference frame to another, however, the coefficients g[αβ] must be transformed (using an appropriate transformation formula). The value of ds is not changed by
this transformation, even though the coefficients of the metric do change.
Along with differential geometry, general relativity uses a mathematical tool called tensor analysis. Tensors are real-valued, multi-dimensional functions of vectors. They are useful because they are
frame-invariant. In other words, a tensor will give the same real number regardless of the reference frame in which the vector components are calculated. This property makes tensors a useful
short-hand for representing the frame-invariant metrics of general relativity.
In general relativity, a tensor representing the curvature of spacetime is set equal to a tensor representing the stress-energy content of spacetime (the matter, energy and pressure present). This is
the way that general relativity mathematically models the concept that matter, energy and pressure cause spacetime to warp, which we experience as gravity. This tensor equation can be written as
G^αβ = 8π T^αβ,
where G and T are the tensors representing the curvature and the stress-energy content of spacetime, respectively. This equation is the most important result of general relativity. It can be used to
construct mathematical models of the universe, as well as of stars and black holes. The expansion of this tensor equation gives the "Einstein field equations," a system of ten non-linear partial
differential equations. These cannot be solved in general. Instead, initial conditions and simplifying assumptions, such as spherical symmetry, are used to obtain a simpler, solvable set of
equations. When these equations are solved, the result is the metric tensor, which contains the ten coefficients g[αβ] of the metric (3). The resulting metric describes the geometry of the spacetime
being modeled (for example a star, a black hole, or the universe). (With simplifying assumptions, some of the ten coefficients will be zero.)
In the simplest case, that of flat spacetime (no acceleration or gravitational field), the metric tensor expands to give the Minkowski metric (4). When the methods of general relativity are applied
to a curved spacetime geometry (such as we find inside or near a star), the result is a more complex metric.
In general, the proper time interval between two infinitesimally close events on a timelike world line can be obtained from the metric by finding dτ = |ds|.
The proper time interval Δτ between two events P and Q that lie on a timelike world line W is defined to be the line integral of dτ along W from P to Q. See figure 5. In general relativity, world
lines can be curved due to acceleration or gravity, but the slope must remain less than 45 degrees (in geometrized units).
The proper time Δτ from P to Q is equal to the time measured on a clock that moves along W from P to Q. (This is known as the Clock Hypothesis.)
Our experience of physical reality tells us that time has a direction. We assume that it is not possible for an observer to travel along the world line W from Q to P (backward in time).
The Schwarzschild Metric
In 1915, the German physicist Karl Schwarzschild solved the Einstein field equations for the special case of a spherical, non-rotating mass (such as a star or black hole). The so-called Schwarzschild
metric can be used to describe the curvature of spacetime caused by a non-rotating black hole. The Schwarzschild metric was derived using the reference frame in which the spherical object is
stationary. This means that the coefficients of the Schwarzschild metric are valid only for observers who are "at rest" or motionless relative to the object. "At rest" means that the spatial
coordinates of these observers do not change over time.
The Schwarzschild metric can be applied only to non-rotating black holes. A solution for rotating black holes was found by Kerr. As the Kerr metric is more complex, this paper will only consider the
Schwarzschild metric. (Most of the results given here are valid for Kerr's solution, also.)
Consider a black hole of mass M (in geometrized units), centered at a point r = 0. For r > 2M, the Schwarzschild metric is given in spherical coordinates (t, r, θ, φ) by
ds^2 = −(1 − 2M/r) dt^2 + (1 − 2M/r)^−1 dr^2 + r^2 (dθ^2 + sin^2 θ dφ^2).
There are many interesting and surprising features of the geometry described by this metric. First I mention that the "radius" r = 2M is called the Schwarzschild radius or event horizon of the black
hole. The event horizon can be thought of as the boundary that separates the inside from the outside of a black hole.
As r approaches 2M, the coefficient of dt^2 approaches zero and the coefficient of dr^2 approaches infinity (since 1 − 2M/2M = 1 − 1 = 0). For a long time, it was not known whether the infinite
"singularity" at r = 2M was a real, physical singularity which could not be passed through by an infalling observer, or just a coordinate singularity. (A familiar example of a coordinate singularity
is the one at r = 0 in polar coordinates, where θ can take on any value and is therefore indeterminate. A transformation to Cartesian (rectangular) coordinates eliminates this singularity.) In
general, a coordinate singularity indicates a mathematical problem, not a real, physical problem, and can be eliminated by a coordinate transformation.
Although the singularity at r = 2M was long suspected to be a coordinate singularity, this was not proved until the late 1950s, when a coordinate transformation was found that eliminated the
singularity. Additional coordinate transformations have been discovered since. These will not be considered here, as they are mathematically complex.
In the subsequent analysis, we will often consider the perspective of an observer who is at rest at "infinity," that is, very far away from the black hole. For such an observer, the proper time and
the coordinate time are almost equal. To see this, consider the Schwarzschild metric, given above. As r approaches infinity, the coefficient of dt^2 approaches −1. The spatial coordinates r, θ, and φ
for a resting observer are constant over time, so the differentials dr, dθ, and dφ are zero. Therefore, dτ = |ds| ≈ dt. Hence, the proper time Δτ of an observer at rest far from a black hole is
approximately equal to the coordinate time Δt.
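For an observer at rest at any r > 2M, setting dr = dθ = dφ = 0 in the metric gives dτ/dt = √(1 − 2M/r), which can be checked numerically. A sketch (function name mine; 1477 m is the Sun's geometrized mass):

```python
import math

def static_dtau_dt(r, M):
    """dtau/dt for an observer at rest at Schwarzschild coordinate r > 2M,
    from the metric with dr = dtheta = dphi = 0 (geometrized units)."""
    return math.sqrt(1 - 2 * M / r)

M = 1477.0                            # Sun's mass in meters (geometrized)
print(static_dtau_dt(1e12, M))        # ~1: far away, proper ~ coordinate time
print(static_dtau_dt(2.0001 * M, M))  # ~0: clocks freeze at the horizon
```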
By setting ds^2 = 0 in the Schwarzschild metric, we can study the behavior of a light ray near the event horizon, as seen by an observer at rest at "infinity." We can simplify the analysis by
assuming the ray travels along a radial null geodesic, either directly toward the black hole or directly away from it. Then the θ and φ coordinates of the light ray do not change. This implies that
dθ and dφ are zero, so we can eliminate the angular (right-hand) terms in the Schwarzschild metric. Setting ds^2 = 0, we have
0 = −(1 − 2M/r) dt^2 + (1 − 2M/r)^−1 dr^2,
so that
dt/dr = ±(1 − 2M/r)^−1.
Now as r approaches 2M, dt/dr approaches infinity. This is a time dilation effect. Any message sent via light signal from near the event horizon (r = 2M) to an observer far from the black hole will
be stretched out or dilated. The closer the emitter of the light signal is to the event horizon, the more stretched out the message will appear to the far-away observer. The frequency of the light
signal decreases, or redshifts, because lower frequency light carries less information per unit of time (the far-away observer's wristwatch time). The closer to the event horizon the light signal is
sent from, the greater is the redshift observed from far away. When the emitter is very close to the event horizon, the observed redshift is so great that the light signal disappears altogether. For
this reason, the event horizon is sometimes called the infinite redshift horizon.
Taking the reciprocal, we see that dr/dt approaches 0 as r approaches 2M. Since we solved the Schwarzschild metric for a radial lightlike geodesic (by setting ds^2 = 0), dr/dt corresponds to the
speed of light. Hence the speed of light approaches 0 as r approaches 2M, relative to an observer far from the black hole. At r = 2M, an outward-directed light ray is frozen in time and space, and
never reaches an observer at r > 2M.
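The coordinate speed of a radial light ray, |dr/dt| = 1 − 2M/r, makes this "frozen light" behavior easy to verify. A sketch (function name mine; in geometrized units, the far-away speed of light is 1):

```python
def radial_light_speed(r, M):
    """Coordinate speed |dr/dt| of a radial light ray outside a
    Schwarzschild black hole, from ds^2 = 0 with dtheta = dphi = 0."""
    return 1 - 2 * M / r

M = 1.0
for r in (1000 * M, 10 * M, 2.1 * M, 2.001 * M):
    print(r, radial_light_speed(r, M))
# The speed tends to 1 far from the hole and to 0 as r -> 2M.
```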
Since we said that the speed of light is constant in special relativity, one might ask how the speed of light can vary in general relativity. The answer is that gravitational fields change the
geometry of spacetime, and the speed of light is fundamentally tied to the nature of the spacetime geometry the light is passing through.
According to general relativity, matter, energy and pressure cause spacetime to bend, stretch or compress. Light rays follow geodesics through this bent, stretched or compressed spacetime. The
warping of spacetime warps the paths of the light rays.
Relative to an observer at rest far from a black hole, space is compressed (contracted) near the event horizon and time is stretched out (dilated). Each meter of space is shorter compared to the
space far from a black hole, and each meter of time is longer. Relative to the far-away observer, a light ray, traveling through one meter of space in one meter of time, travels a short distance in a
long time. To the far-away observer, the light ray has slowed down.
Although the speed of light can vary in general relativity, it is still always the case that material objects cannot attain or exceed this speed.
Inside the Black Hole
Now let's consider the Schwarzschild solution for 0 < r < 2M (inside a black hole). A small but very important change must be made to the metric for this case. When r > 2M, the coefficient (1 − 2M/r)
is positive. However, for 0 < r < 2M, this coefficient is negative. In order to work with positive coefficients for this case, we use
2M/r − 1 = −(1 − 2M/r) > 0.
The metric then becomes
ds^2 = −(2M/r − 1)^−1 dr^2 + (2M/r − 1) dt^2 + r^2 (dθ^2 + sin^2 θ dφ^2).
Notice how the minus sign has moved from the t coordinate to the r coordinate. This means that inside the event horizon, r is the timelike coordinate, not t. In relativity, the paths of material
particles are restricted to timelike world lines. Recall the discussion of timelike separation earlier in this paper (2). It is the coordinate with the minus sign that determines the meaning of
"timelike." According to relativity, inside a black hole time is defined by the r coordinate, not the t coordinate. It follows that the inevitability of moving forward in time becomes, inside a black
hole, the inevitability of moving toward r = 0. This swapping of space and time occurs at r = 2M. Thus, r = 2M marks a boundary, the point where space and time change roles. For the observer inside
this boundary, the inevitability of moving forward in time means that he must always move inward toward the center of the black hole at r = 0. All timelike and lightlike world lines inside r = 2M
lead inevitably to r = 0 (the end of time!) Because it is not possible for any particle or photon inside r = 2M to take a path where r remains constant or increases, the boundary r = 2M is called the
event horizon of the black hole. No observer inside the event horizon can communicate with any observer outside the event horizon. The event horizon can be thought of as a one-way boundary.
Earlier, we showed that the speed of light approaches zero near the event horizon, relative to an observer far from the black hole. This means that the far-away observer can never see an infalling
observer reach or cross the event horizon, because any light radiating from the infalling observer slows down and redshifts, with the redshift approaching infinity as the infalling observer nears the
event horizon. Does this mean that the infalling observer does not actually reach or cross the event horizon? No. The infalling observer does in fact cross the event horizon. Remember that the
singularity at r = 2M (the event horizon) was shown to be a coordinate singularity, not a real, physical singularity. Using transformed coordinates, it can be shown that the infalling observer passes
from r > 2M to r = 0 in a finite amount of time (his proper time, or the interval along his world line).
Furthermore, it can be shown that the maximum amount of proper time from r = 2M to r = 0 available to an observer who has fallen through the event horizon, even if he has at his disposal a rocket of
unlimited power, is given by
Δτ ≤ π M meters,
where M is the geometrized mass used in the Schwarzschild metric. M is related to the Newtonian mass m by
M = Gm/c^2 ,
where G is the gravitational constant in SI units. The mass of the Sun is 1,477 meters in geometrized units.
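The conversion M = Gm/c^2 is easy to check with SI constants (values rounded, so small discrepancies in the last digit are expected):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # mass of the Sun, kg

M_geom = G * m_sun / c**2
print(M_geom)        # ~1477 meters, the Sun's geometrized mass
```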
Let's look at a real-life example. Astronomers believe there is a supermassive black hole at the center of our galaxy, with a mass about 4.3 million times the mass of the Sun. The tidal force near
the event horizon of such a large black hole is weak. (The tidal force, or tidal acceleration gradient, is the difference in the gravitational acceleration between two points in a non-uniform
gravitational field. The smaller the black hole, the larger this gradient is near the event horizon, because the curvature of spacetime is greater. An astronaut approaching a stellar-sized black hole
only a few times the mass of the Sun would be torn apart by the tidal force before reaching the event horizon.) Since the tidal force near the supermassive black hole is weak, it is possible that an
astronaut, if well-protected from radiation, could survive to cross the event horizon and continue inward. Let's calculate the maximum time this astronaut could avoid reaching the center of the black
hole. (For simplicity, we'll assume the black hole is not rotating, so the above formula can be used.)
Δτ ≤ π M = π × 4,300,000 × 1,477 meters = 2.0 × 10^10 meters of light-travel time ≈ 67 seconds.
Our intrepid astronaut has only about one minute to explore the black hole! The Schwarzschild radius of this black hole is
r = 2M = 2 × 4,300,000 × 1,477 m = 12.7 million km.
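The two numbers above can be reproduced directly, converting the geometrized results back to seconds and kilometers (the 1477 m solar mass and the factor of 4.3 million are the values used in the text):

```python
import math

c = 2.998e8                  # speed of light, m/s
SUN_GEOM = 1477.0            # Sun's geometrized mass, meters

M = 4.3e6 * SUN_GEOM         # supermassive black hole, geometrized, meters
tau_max = math.pi * M / c    # maximum proper time inside, in seconds
r_s = 2 * M                  # Schwarzschild radius, in meters

print(tau_max)               # ~67 seconds
print(r_s / 1e9)             # ~12.7 (million kilometers)
```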
The Center of the Black Hole
What happens at r = 0? In the Schwarzschild metric, the 2M/r terms approach infinity as r approaches 0. This is a real, physical singularity, not a coordinate singularity. All the mass of a
Schwarzschild black hole is concentrated at r = 0, a point of infinite density, where space and time come to an end. The presence of real singularities in solutions of the Einstein field equations
suggests that general relativity is an incomplete theory of gravity. Physicists are leery of theories that predict infinities. None of the dials on their gauges can register infinite values! It seems
likely that general relativity does not accurately describe what happens at r = 0.
Where might general relativity have a problem? One of the assumptions underlying general relativity is that spacetime is continuous. This can be seen in the usage of differential geometry as the
mathematical basis of the theory. Differentials measure infinitesimally small distances, which only makes sense if spacetime is continuous. It may be that spacetime is actually discrete, or
quantized, with a smallest possible size. Physicists are exploring this idea as they work on new theories of quantum gravity. If this idea is correct, then the differential equations of general
relativity are only approximations of reality. These approximations are valid in the large, even down to the atomic level, but at some point they break down. The smallest possible unit of spacetime
is thought to be near the Planck length, an incredibly tiny 10^−35 m. This is twenty orders of magnitude smaller than the nucleus of an atom. Such small sizes are not currently accessible to science.
To the precision our instruments can now measure, general relativity has been found to be accurate.
A viable theory of quantum gravity must combine general relativity with quantum mechanics. Such a theory must match the predictions of general relativity on large scales, since these predictions have
proven to be accurate. Quantum gravity can deviate significantly from general relativity only on the smallest of scales. The core concept of general relativity is that gravity is spacetime curvature.
A future theory of quantum gravity must honor this concept. Somehow, a quantum of gravity (a graviton) must be related to a quantum of spacetime. A successful theory of quantum gravity will be a
successful theory of quantized spacetime (or vice versa). The harmonious merging of quantum mechanics with general relativity is perhaps the greatest problem facing theoretical physicists today. If
and when a way is found to accomplish this task, we may learn what really happens at the center of the black hole!
Real black holes almost certainly rotate. Such black holes can be modeled by the Kerr metric, a more complex metric whose detailed analysis is outside the scope of this paper. The interested reader
should consult Taylor and Wheeler's fine book, Exploring Black Holes: Introduction to General Relativity. The Kerr black hole does not have a point singularity at r = 0. Instead, the singularity has
a ring structure, and can be avoided by most trajectories. In theory, a particle that avoids the ring singularity can pass through r = 0 to a region where r < 0. It is unlikely that such a region
physically exists. It is more likely that general relativity does not correctly describe the nature of real black holes at r = 0. A future theory of quantum gravity should provide us with a more
realistic description of the center of the rotating black hole.
Physical Interpretation of the Event Horizon
We found earlier that the Schwarzschild metric has a coordinate singularity at the event horizon, where the coordinate time t becomes infinite. Yet a calculation using transformed coordinates shows
that an infalling observer reaches and passes through the event horizon in a finite amount of time (his proper time). How can we understand the nature of a place where time seems to be both finite
and infinite?
Far from the event horizon, the coordinate t approximates an observer's proper time or wristwatch time. This leads us to think of the coordinate t as representing time. This is not true at the event
horizon, however. As an infalling observer nears the event horizon, the coordinate t has less and less to do with time as he perceives it – that is, his proper time. In order to understand how time
is perceived by the infalling observer, we need to focus on his proper time and ignore the coordinate t.
Even though we can never actually see someone fall through the event horizon (due to the infinite redshift), he really does. As the free-falling observer passes through the event horizon, any
inward-directed photons emitted by him continue inward toward the center of the black hole. Any outward-directed photons emitted by him at that instant are forever frozen at the event horizon. So the
far-away observer cannot detect any of these photons, whether directed inward or outward.
Life Inside the Black Hole
Some have speculated that our universe might exist inside a gigantic black hole. Let's explore this idea further, in order to gain more insight into what the interior of a black hole is really like
(and for the fun of it). If our universe is inside a giant black hole, one might ask where the event horizon is. Is there any path we can take that will bring us closer to the event horizon?
According to general relativity, if our universe is inside a black hole, every point in our universe is moving closer to the center of this black hole, and away from the event horizon. There is no
(spatial) direction that will bring us closer to the event horizon. As it is difficult to visualize a four-dimensional curved surface, subtracting a dimension or two makes it easier. Imagine a giant
sphere, and a point on the interior surface of this sphere. This point detaches from the inner surface and moves toward the center, at the same time expanding into a disk. This expanding disk
represents our universe expanding in space as it moves through time. In this model let's suppose that our universe formed on the event horizon of the giant black hole, represented by the surface of
the sphere. We suppose that the Big Bang occurred at the event horizon of the black hole. See figure 6. The expanding disk is a two-dimensional representation of the three spatial dimensions of our
universe. (We could label these spatial dimensions θ, φ, and t.) Every point in our universe (the disk) is moving away from the inner surface of the sphere (the event horizon) toward the center of
the sphere (the singularity of the giant black hole). The dimension through which this disk is moving is a timelike dimension (which we could label r). For every point on the disk (our universe at a
point in time), the event horizon lies in the past and the singularity of the black hole lies unseen in the future. All timelike and lightlike world lines in our universe lead from the event horizon
to the singularity of the black hole. To travel to the event horizon would be to travel backward in time. Therefore, there is no path we can take that will bring us closer to the event horizon.
In this imaginary model, the only point of the spacetime of our universe that is connected to the event horizon of the giant black hole is the point in space and time at which the Big Bang occurred.
With a powerful enough telescope, one can, in theory, look in any direction and see the Big Bang (or at least 380,000 years after the Big Bang, when the universe became transparent). One can look
around in any direction and see the Big Bang, yet one cannot travel toward it, because it lies in the past. This is the way of things inside any black hole. Even a super-powerful rocket cannot
prevail against the gentle timelike acceleration toward the singularity at the center of a black hole. The event horizon lies in the past of any observer inside a black hole, and the central
singularity lies in his future.
If we lived inside a gigantic black hole, could we detect it? If we had sensitive enough instruments, it might be possible to detect the tidal acceleration gradient, at least over astronomical
distances. This might be mistaken for a slight variation in the strength of gravity over very large distances. Also, the expansion of the universe might eventually slow down and reverse, as we move
closer to the central singularity of the black hole. In this model, our universe would eventually shrink down to a single point (the Big Crunch).
Recommended Reading:
• Einstein's Theory of Relativity: M. Born (Dover, 1965).
• A History of Mathematics: C. B. Boyer (Wiley, 1991).
• Relativity: The Special and the General Theory: A. Einstein (Crown, 1952).
• The Fabric of the Cosmos: B. Greene (Random House, 2004).
• Quantum Non-Locality and Relativity (Third Edition): T. Maudlin (Wiley-Blackwell, 2011).
• A First Course in General Relativity: B. F. Schutz (Cambridge, 1990).
• Exploring Black Holes: Introduction to General Relativity: E. F. Taylor and J. A. Wheeler (Addison Wesley Longman, 2000).
• Black Holes and Time Warps: Einstein's Outrageous Legacy: K. Thorne (W. W. Norton, 1995).
I can be contacted at jhaldenwang@cox.net.
NN2012 Denicol Abstract
Derivation of fluid dynamics from the Boltzmann equation, G. Denicol, Frankfurt University, Germany − We present a general derivation of relativistic fluid dynamics from the relativistic Boltzmann
equation using the method of moments. The main difference between our approach and the traditional 14-moment approximation is that we do not close the fluid-dynamical equations of motion by
truncating the expansion of the distribution function. Instead, we keep all the terms in the moment expansion and truncate the exact equations of motion for the moments according to a systematic
power counting scheme in Knudsen and Reynolds number. We apply this formalism to obtain an approximate expression for the non-equilibrium single-particle momentum distribution function. This result
is essential to improve the freeze-out description used in the fluid-dynamical modeling of relativistic heavy ion collisions. In order to investigate the implications of our new formalism, we compute
the distribution function of a simple system composed of pions, kaons and nucleons and compare it to the one obtained via the 14-moment approximation.
Cantor Ternary Set
To construct a Cantor set, start with the interval [0,1] and throw away the middle third (1/3,2/3). You now have two pieces, [0,1/3] ∪ [2/3,1]. From each of these, throw away the middle third,
leaving four pieces. From each of those, throw away the middle third. Etc...
What is left? Each step removes one third of the total remaining length. After n steps, the total length of the intervals in the set is (2/3)^n. Since the Cantor set can be covered with intervals whose total
length is arbitrarily small (let n go to infinity), it has measure zero, or zero length.
However, despite having no length, the Cantor set has an infinite number of points. The endpoints of every removed middle-third interval for example, are part of the Cantor set, and there are an
infinite number of these.
Is there anything else in the Cantor set? Consider the numbers in [0,1] when expressed in base-3. Any number which has a base-3 representation that avoids the digit "1" is part of the Cantor
set; any number whose base-3 representation necessarily contains a "1" is not. (The distinction matters for endpoints: 1/3 is 0.1 in ternary, but it is also 0.0222..., so it belongs to the set.) Therefore, numbers such as 1/4 (0.020202020202... in ternary), which will never be endpoints, are nonetheless in the Cantor set.
In fact, it is possible to map the Cantor set to the real numbers [0,1] in one-to-one correspondence! Note that when expressed in ternary, the Cantor set contains every sequence of the two digits 0
and 2. Now replace the 2's with 1's and consider those sequences as binary expansions...
A consequence of this is that the Cantor set is uncountable.
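The digit criterion above translates directly into an exact membership test for rational numbers. Here is a rough Python sketch (the function is illustrative, not part of the original write-up):

```python
from fractions import Fraction

def in_cantor_set(x) -> bool:
    """Exact Cantor-set membership test for a rational x in [0, 1].

    x belongs to the set iff some base-3 expansion of x avoids the digit 1.
    Rationals have eventually periodic expansions, so remembering previously
    seen remainders guarantees the loop terminates.
    """
    x = Fraction(x)
    if x < 0 or x > 1:
        return False
    if x == 1:
        return True                   # 1 = 0.222... in ternary
    seen = set()
    while x not in seen:
        seen.add(x)
        digit, x = divmod(3 * x, 1)   # next ternary digit and remainder
        if digit == 1:
            # A digit 1 is only harmless if it ends the expansion:
            # 0.abc1000... equals 0.abc0222..., which avoids the digit 1.
            return x == 0
    return True                       # expansion cycled using only digits 0 and 2

print(in_cantor_set(Fraction(1, 4)))  # True: 0.020202... in ternary
print(in_cantor_set(Fraction(1, 2)))  # False: removed with the first middle third
print(in_cantor_set(Fraction(1, 3)))  # True: an endpoint, 0.0222... in ternary
```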
Part III – Compound Interest and Consumer Debt
By The Banker | Blog Posts, Inequality, Personal Finance, Wall Street
14 Feb 2013
Please see earlier posts Part I – Why don’t they teach this in school And Part II – Compound interest and Wealth
So in the last post I wrote about the incredible power of compound interest, and the possibility it suggests about wealth creation over time.
Unfortunately, there’s also bad news.
On the debt side of things, how much does your credit card company earn if you carry just an average of a $5,000 credit card balance, paying, say, 22% annual interest rate (compounding monthly) for
the next 10 years?
In your mind you owe a balance of only $5,000, which is not a huge amount, especially for someone gainfully employed. After all, $5,000 is just a quick Disney trip, or a moderately priced ski-trip,
or that week in Hawaii. You think to yourself, “how bad could it be?”
The answer, including the cost of monthly compounding[1], is $44,235, or about 9 times what it appears to cost you at face value.[2]
I hate to be the Scrooge, but the power of compound interest transformed that moderate credit card balance of $5,000 into an extraordinarily expensive purchase.[3]
Compound interest: Why the poor stay poor and the rich stay rich
To take another example, let’s think of compound interest on credit cards for the average American household.
Let’s say you are an average American household, and you carry an average balance of $15,956 in credit card debt.
Also, as an average American household, let’s assume you pay an average current rate of 12.83%.[4]
Finally, let’s assume you carry this average balance for 40 years, between ages 25 and 65. How much did your credit card company make off of you and your extreme averageness?
Answer: $2,629,618.64[5]
So, in sum, your credit card company will earn $2.6 million from the average American household carrying a credit card balance for 40 years. [6]
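The two figures above fall straight out of the compound-interest formula in the footnotes. A short Python sketch to check them (the function name is mine, not from the post):

```python
def credit_card_cost(balance, annual_rate, months):
    """Future value of a balance compounding monthly: balance * (1 + r/12)^n."""
    return balance * (1 + annual_rate / 12) ** months

# Footnote [2]: $5,000 at 22% APR, compounding monthly for 120 months
print(round(credit_card_cost(5_000, 0.22, 120)))     # about 44,235

# Footnote [5]: $15,956 at 12.83% APR for 480 months
print(round(credit_card_cost(15_956, 0.1283, 480)))  # about 2,629,619
```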
If you’re wondering why rich people tend to stay rich, and poor people tend to stay poor, may I offer you Exhibit A:
Compound Interest.
Now, your math teacher might not have done this demonstration for you in junior high, because he didn’t know about it. Mostly, I forgive him. Although not completely.
You can be damned sure, however, that credit cards companies know how to do this math. THIS MATH IS THEIR ENTIRE BUSINESS MODEL.
Which same business model would work a lot less well if everyone knew how to figure this stuff out on his own.
Hence, my theory about the Financial Infotainment Industrial Complex suppressing the teaching of compound interest. They don’t want you to learn how to figure out this math on your own.[7]
[1] But importantly, excluding all late fees, overbalance fees or penalty rates of interest.
[2] We get this result using the same formula, although Yield is divided by 12 to account for monthly compounding, and the N reflects the number of compounding periods, which is 120 months. So the
math is: $5,000 * (1+.22/12)^120
[3] Have you ever wanted to take a $45K vacation to Hawaii and pretend you’re a high roller? Congratulations! By carrying that $5K balance for 10 years, you did it! You took a $45,000 Hawaiian
vacation. You’re a high roller! Yay!
[5] We express this again dividing yield by 12 to account for monthly compounding, and raising it to the power of 480 months, the number of compounding periods. Hence the math is $15,956 * (1+.1283/12)^480.
[6] I’m assuming for the purposes of this calculation that the debt balance stays constant for 40 years, but your household pays interest on the balance. In calculating this result, please note I
have framed the question in terms of “How much does the credit card company earn” off of your household carrying this average balance for 40 years. Which is not the same question as “How much do you
pay as a household?” Embedded in my assumptions, and the compound interest formula, is the idea that the credit card company can continue to earn a fixed 12.83% on money you pay them. Which I think
is a fair way of analyzing how much money they can earn off your balance. Since there are no shortages of other household credit card balances for the credit card company to fund at 12.83%, I
believe this to be the most accurate way of calculating the credit card company’s earnings on your balance.
[7] Here’s where, for the sake of clarifying sarcasm on the internet – which sometimes doesn’t translate well on the electronic page – I should point out that I’m (mostly) kidding about the
suppression of the compound interest formula. Among the main reasons I started Bankers Anonymous was that the dim dialogue we have about finance as a society allows conspiracy theories to grow in
darkness. Just as pre-scientific societies depend on magic to explain mysterious phenomena, I think financially uninformed societies gravitate toward conspiracies to explain complex financial
events. As a former Wall Streeter who does not actually ascribe to conspiracy theories, I feel some obligation ‘to amuse and inform’ and thereby reduce the amount of conspiracy-mongering. So, I
don’t really think there’s a conspiracy here. As far as you know. Or maybe, that’s just what I want you to think.
Tags: compound interest, credit cards, discounted cash flows, financial infotainment industrial complex, high rolling hawaiian vacations, useful math
11 Comments
1. Hey Mike, As I have mentioned before, my wife and I have eight children (20 – 32) and none of them have a clue of this subject as it relates to their lives except that three have found a way to
get out of credit card debt and do not want to go back there again. Two have gotten a sniff of compound interest after they have purchased cars on time. I am with you as far as your sense that
the financing industry despises disclosure. I believe that the two kids that viewed the total pay out information as a result of their car purchase started to get it. Anyway keep up the good work
and maybe children at ages earlier than high school can learn to work the HP12C and get it using monopoly money in some form of game.
□ Or I’ll publish the book sometime soon so you can buy it for them!
2. Maybe I’m an idiot but I can’t make the math run on the credit card debt. If I enter 5000 x 1.22^10 it gives me 36,000 odd. What am I doing wrong?
3. Ah shit just realised why I screwed it up. Monthly interest, right?
□ monthly compounding! 5000 * (1+.22/12)^120. Divide by 12 because it’s monthly, raise to the power of 120 because that’s the number of compounding periods. I’m glad you’re working out the math!
4. Thanks! I would have realized from the footnote but my browser doesn’t like the links so I just get a few humourous/informative snippets at the end of each post!
5. This is highly misleading. The author has the interest rate compound by assuming that the lender makes the same interest rate going forward. Of course, if the borrower pays the interest currently,
there isn’t any compounding of the interest, either monthly or for 45 years. This is a sleight of hand and very, very deceptive in my opinion. If you pay the interest, it costs you the interest
rate times the balance times 45 years. There won’t be any compounding.
□ You’re right that the borrower only pays the interest that he pays. Which is why I phrased it as “This is how much the credit card companies make,” rather than “This is how much you pay.” I
think it is fair to assume credit card companies can earn the same interest going forward from other borrowers. But that is open to interpretation and disagreement. So, you’ve made a fair point.
6. Finally, let’s assume you carry this average balance for 40 years, between ages 25 and 65. How much did your credit card company make off of you and your extreme averageness?
Answer: $2,629,618.64
What? The family is paying over $50,000 a year in interest alone?
Something is fishy here.
□ Not that the family paid that much out of pocket, but rather the credit card company can earn, through compounding and reinvestment of its cash in other credit card balances, that much money
over time
7. […] Also, as an average American household, let’s assume you pay an average current rate of 12.83%.[4] […]
topological quantum field theory
Results 1 - 10 of 90
- Jour. Math. Phys , 1995
"... For a copy with the hand-drawn figures please email ..."
- Adv. Math , 1996
"... We begin with a brief sketch of what is known and conjectured concerning braided monoidal 2-categories and their relevance to 4d TQFTs and 2-tangles. Then we give concise definitions of
semistrict monoidal 2-categories and braided monoidal 2-categories, and show how these may be unpacked to give lon ..."
Cited by 53 (9 self)
We begin with a brief sketch of what is known and conjectured concerning braided monoidal 2-categories and their relevance to 4d TQFTs and 2-tangles. Then we give concise definitions of semistrict
monoidal 2-categories and braided monoidal 2-categories, and show how these may be unpacked to give long explicit definitions similar to, but not quite the same as, those given by Kapranov and
Voevodsky. Finally, we describe how to construct a semistrict braided monoidal 2-category Z(C) as the `center' of a semistrict monoidal category C, in a manner analogous to the construction of a
braided monoidal category as the center of a monoidal category. As a corollary this yields a strictification theorem for braided monoidal 2-categories. 1 Introduction This is the first of a series of
articles developing the program introduced in the paper `Higher-Dimensional Algebra and Topological Quantum Field Theory' [1], henceforth referred to as `HDA'. This program consists of generalizing
algebraic concep...
- Journal of Mathematical Physics , 1995
"... ABSTRACT: We investigate the possibility that the quantum theory of gravity could be constructed discretely using algebraic methods. The algebraic tools are similar to ones used in constructing
Topological Quantum Field theories. The algebraic structures are related to ideas about the reinterpretati ..."
Cited by 51 (3 self)
ABSTRACT: We investigate the possibility that the quantum theory of gravity could be constructed discretely using algebraic methods. The algebraic tools are similar to ones used in constructing
Topological Quantum Field theories. The algebraic structures are related to ideas about the reinterpretation of quantum mechanics in a general relativistic context. I.
- J. Knot Theory Ramifications , 1999
"... In the course of the development of our understanding of topological quantum field theory (TQFT) [1,2], it has emerged that the structures of generators and relations for the construction of low
dimensional TQFTs by various combinatorial methods are equivalent to the structures of various fundamenta ..."
Cited by 48 (2 self)
In the course of the development of our understanding of topological quantum field theory (TQFT) [1,2], it has emerged that the structures of generators and relations for the construction of low
dimensional TQFTs by various combinatorial methods are equivalent to the structures of various fundamental objects in abstract algebra.
- Selecta Math. (N.S , 1999
"... ..."
, 2009
"... To each graph without loops and multiple edges we assign a family of rings. Categories of projective modules over these rings categorify U − q (g), where g is the Kac-Moody Lie algebra
associated with the graph. ..."
Cited by 41 (5 self)
To each graph without loops and multiple edges we assign a family of rings. Categories of projective modules over these rings categorify U − q (g), where g is the Kac-Moody Lie algebra associated
with the graph.
"... Just as knots and links can be algebraically described as certain morphisms in the category of tangles in 3 dimensions, compact surfaces smoothly embedded in R 4 can be described as certain
2-morphisms in the 2-category of ‘2-tangles in 4 dimensions’. Using the work of Carter, Rieger and Saito, we p ..."
Cited by 35 (10 self)
Just as knots and links can be algebraically described as certain morphisms in the category of tangles in 3 dimensions, compact surfaces smoothly embedded in R 4 can be described as certain
2-morphisms in the 2-category of ‘2-tangles in 4 dimensions’. Using the work of Carter, Rieger and Saito, we prove that this 2-category is the ‘free semistrict braided monoidal 2-category with duals
on one unframed self-dual object’. By this universal property, any unframed self-dual object in a braided monoidal 2-category with duals determines an invariant of 2-tangles in 4 dimensions. 1
- ADV. MATH , 1999
"... ..."
- in ``10th Brazilian Topology Meeting, Sa~ o Carlos, July 22 26, 1996,'' Mathematica Contempora^ nea , 1996
"... This series of lectures reviews the remarkable feature of quantum topology: There are unexpected direct relations among algebraic structures and the combinatorics of knots and manifolds. The 6j
symbols, Hopf algebras, triangulations of 3-manifolds, Temperley-Lieb algebra, and braid groups are rev ..."
Cited by 21 (2 self)
This series of lectures reviews the remarkable feature of quantum topology: There are unexpected direct relations among algebraic structures and the combinatorics of knots and manifolds. The 6j
symbols, Hopf algebras, triangulations of 3-manifolds, Temperley-Lieb algebra, and braid groups are reviewed in the first three lectures. In the second lecture, we discuss parentheses structures and
2-categories of surfaces in 3-space in relation to the Temperley-Lieb algebras. In the fourth lecture, we give diagrammatics of 4 dimensional triangulations and their relations to the associahedron,
a higher associativity condition. We prove that the 4-dimensional Pachner moves can be decomposed in terms of singular moves, and lower dimensional relations. In our last lecture, we give a
combinatorial description of knotted surfaces in 4-space and their isotopies. MRCN: 57Q45 Key words: Reidemeister Moves, 2-categories, Movie Moves, Knotted Surfaces 1 1 Introduction In this series of
- Advances in Math. 146 , 1998
"... Crane and Frenkel proposed a state sum invariant for triangulated 4-manifolds. They defined and used new algebraic structures called Hopf categories for their construction. Crane and Yetter
studied Hopf categories and gave some examples using group cocycles that are associated to the Drinfeld double ..."
Cited by 20 (5 self)
Crane and Frenkel proposed a state sum invariant for triangulated 4-manifolds. They defined and used new algebraic structures called Hopf categories for their construction. Crane and Yetter studied
Hopf categories and gave some examples using group cocycles that are associated to the Drinfeld double of a finite group. In this paper we define a state sum invariant of triangulated 4-manifolds
using Crane-Yetter cocycles as Boltzmann weights. Our invariant generalizes the 3-dimensional invariants defined by Dijkgraaf and Witten and the invariants that are defined via Hopf algebras. We
present diagrammatic methods for the study of such invariants that illustrate connections between Hopf categories and moves to triangulations.
[SOLVED] Inequality Question
July 30th 2009, 11:42 AM #1
How is this solved?
$(x-1) \left( x+\frac{1}{2} \right) \geq 0$
Also, I have no idea what I entered wrong with Latex.
Last edited by mr fantastic; July 31st 2009 at 04:09 AM. Reason: Fixed the latex
you should be able to determine the two values where the left side of the inequality equals 0.
you should also know that the graph of the function on left side is a parabola that opens up ... the graph should also tell you where the parabola is positive.
The "escape" is a back-slash, not a forward slash. I think you meant the following...?
$(x\, -\, 1)\left(x\, +\, \frac{1}{2}\right)\, \geq\, 0$
If so, note that this is a positive quadratic, so you know what shape it has. Look at the factors to determine where the graph crosses the axis, and then use your knowledge of the shape of the
graph to determine where the graph is at or above the x-axis.
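Putting the two answers together: the zeros of the left side are at $x = 1$ and $x = -\frac{1}{2}$, and since the parabola opens upward it is nonnegative on and outside the roots:

$(x-1)\left(x+\frac{1}{2}\right) \geq 0 \quad \Longleftrightarrow \quad x \leq -\frac{1}{2} \ \text{ or } \ x \geq 1$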
Who consistently hit the longest home runs in 2011?
Jose Bautista may have hit the greatest number of home runs in 2011, and Prince Fielder may have hit the single longest dinger, but there’s another way of measuring home run “wow factor”—distance
Or to be more specific, among the biggest power threats in baseball, who had the longest home run distance average? One giant blast over the fence is impressive, but there’s just so much
entertainment value in watching a slugger crank out home runs into the third deck on a regular basis. To find out, I grabbed some data from the invaluable Hit Tracker Online and averaged out the home
run distances of the top 15 home run hitters last year.
Your distance champion? Not Bautista, Fielder, or even Pujols, but Mike Stanton of the Marlins, who averaged 417 feet in his 34 home runs on the year.
But why not go a bit further? Hit Tracker Online calculates a whole lot more than just distance, so I plotted the average home run flight path trajectory for each player and overlaid them all
together, with the longest and shortest home runs in 2011 for reference. Click to enlarge.
It’s important to note that this isn’t a proxy for batted ball speed. The bottom two in the table are both Yankees, due in part to the short home runs that sneak over the fence in Yankee Stadium’s
right field porch. Those short home runs will drag their distance average down, so it’s not a very good measure of quality of contact. What it is, though, is a fun measurement of being able to break
a windshield in the parking lot.
References & Resources
Hit Tracker Online. Every few months I’ll spend hours thumbing through it, and then wonder why I don’t do it more often.
1. David Wade said...
I don’t think Ike Davis and Justin Upton were slighted so much as they just weren’t among the top 15 home run hitters last year, which was the group Dan said he was looking at.
2. Dave Studeman said...
Seriously, people? When the article says this?
“I…averaged out the home run distances of the top 15 home run hitters last year.”
Nice job, Dan. Love the graphic.
3. ettin said...
I’m truly surprised that Mark Trumbo wasn’t on that list?
4. ettin said...
Nevermind…. but I bet if you added Trumbo you’d see his were among the longest hit.
5. Alan Nathan said...
Just curious about where on the site you find the actual trajectories.
6. Dan Lependorf said...
They don’t have them on the site. But with total distance, apex height, some algebra and a dab of calculus…
7. Alan Nathan said...
Dan…OK. My own technique would be to use the speed off bat and two angles, the total distance, and hang time as input to a calculation that would adjust a few parameters to get the full
trajectory. Unfortunately, Greg does not give the hang time online, so I would have to figure out how to make use of the apex instead. Somehow I suspect that it involves more than “a dab of calculus.”
8. Dan Lependorf said...
Alan, I actually take a different approach entirely. I estimate each path as two parabolas, one from bat to apex, and one from apex to ground. Since we know the apex is the vertex of both
parabolas, the slope there is zero, meaning the derivative of the general parabola equation equals zero at the apex. Also, I assume the apex x-coordinate is directly proportional to the total
distance. It’s not exact, but it’s a good enough guess. Then, with those three pieces of information (apex coordinates, ground coordinates, and zero slope at vertex), we can plot a parabola.
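The two-parabola estimate described above can be sketched in a few lines of Python. This is a rough illustration, not the code behind the article's chart; apex_frac, the assumed horizontal position of the apex, is a guess (drag actually pushes the apex past the midpoint):

```python
def flight_path(total_dist, apex_height, apex_frac=0.5, n=51):
    """Approximate a home-run flight path as two parabolas joined at the apex.

    Both pieces have their vertex (zero slope) at the apex; the rising piece
    passes through (0, 0) and the falling piece through (total_dist, 0).
    Returns parallel lists of x and y sample points.
    """
    xa = apex_frac * total_dist                    # assumed apex x-coordinate
    xs = [total_dist * i / (n - 1) for i in range(n)]
    ys = []
    for x in xs:
        half = xa if x <= xa else total_dist - xa  # width of the active piece
        ys.append(apex_height * (1 - ((x - xa) / half) ** 2))
    return xs, ys

# Example: a 417-foot average blast peaking at 100 feet
xs, ys = flight_path(417, 100)
```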
9. Alan Nathan said...
Dan…I suppose your approximations are as good as any, given that Greg does not actually measure the apex, nor does he measure the initial velocity parameters. All he really measures is the
landing point and hang time. The rest is inferred from his fitting procedure, which utilize his own models for lift, drag, and the ball-bat collision. Given all those approximations, it probably
doesn’t make much sense to be any more precise than you have been.
The technique I have used in the past to calculate a trajectory uses the initial velocity vector (actually measured from either HITf/x or Trackman) and the landing point and hang time (from Greg
for home runs or from Trackman from other fly balls). As it turns out, those pieces of information do an excellent job constraining the full trajectory, as confirmed from experiments that obtain
the full trajectory from Trackman. In principle, FIELDf/x data could do the same thing, but I don’t know that anyone has done that kind of analysis yet.
10. Dbacks fan said...
I thought Justin Upton held the longest HR avg distance at 423.6ft last year.
11. DD said...
Ike Davis, who was injured about one month into the 2011 season, hit home runs of 411.1 feet average length. In 2010, his rookie season, his average was 415.1.
I am learning to translate the articles I read at Hardball Times. Recently a discussion of the most effective pitchers left out any mention of Mets pitcher R. A. Dickey, who was 6th in the NL in
WAR last year. Ike Davis was injured last year of course, but since he will probably figure in the longest home runs leaders in 2012, One would think he’d merit a mention.
But then, he’s a Met. As I said, I am learning to make the translation.
12. BostonDave said...
I just wanted to say that I just thought this article was fun!
13. Dale said...
Poor Justin. If he’d managed one more squeaker (like the Asdrubal 320 ft home run), he’d have averaged 420 feet and be lauded as the leader in “wow factor”.
14. Bing Tsang said...
The Jim Thome catwalk home run in Tampa Bay 3-4 years ago has to be among the all time dingers.
15. Alan Nathan said...
Rd Bing: Possibly, but I would bet that catwalk home runs are among the most difficult for getting an accurate distance, since the observation point (in the catwalk) is so far from ground level.
16. Brandon said...
Cool article. I really Ike the trajectory chart. I would complain about my favorite slugger being omitted but i don’t just look pictures, I actually read the article. ;D
17. Bojan said...
Good stuff Dan!
1. The definition of a rectangle is a four-sided figure or shape with four right angles that isn't a square.
An example of a rectangle is the shape of an 8x10 picture frame.
1. any four-sided plane figure with four right angles
2. any such figure or shape that is not a square; oblong
Origin of rectangle
French ; from Medieval Latin rectangulum ; from Classical Latin rectus, straight (see recti-) + angulus, angle
A four-sided plane figure with four right angles.
Origin of rectangle
French, from Medieval Latin rectangulum, a right triangle, from Late Latin : Latin rectus, straight ; see reg- in Indo-European roots + Latin angulus, angle
Journal of Engineering
Volume 2013 (2013), Article ID 946829, 5 pages
Research Article
The Prediction of Concrete Temperature during Curing Using Regression and Artificial Neural Network
^1Department of Geology, Engineering Faculty, Science and Research Branch, Islamic Azad University, Poonak Square, Tehran, Iran
^2Department of Mining Engineering, Engineering Faculty, Science and Research Branch, Islamic Azad University, Toward Hesarak, End of Ashrafi Esfahani, Poonak Square, Tehran 1477893855, Iran
Received 5 December 2012; Accepted 12 February 2013
Academic Editor: İlker B. Topçu
Copyright © 2013 Zahra Najafi and Kaveh Ahangari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Cement hydration plays a vital role in the temperature development of early-age concrete due to the heat generation. Concrete temperature affects the workability, and its measurement is an important
element in any quality control program. In this regard, a method, which estimates the concrete temperature during curing, is very valuable. In this paper, multivariable regression and neural network
methods were used for estimating concrete temperature. In order to achieve this purpose, ten laboratory cylindrical specimens were prepared under controlled conditions, and concrete temperature was
measured by thermistors existent in vibrating wire strain gauges. Input data variables consist of time (hour), environment temperature, water to cement ratio, aggregate content, height, and specimen
diameter. Concrete temperature has been measured in ten different concrete specimens. Nonlinear regression achieved a coefficient of determination (R^2) of 0.873. By using the same input set, the artificial
neural network predicted concrete temperature with a higher R^2 of 0.999. The results show that the artificial neural network method can be used to predict concrete temperature when regression
results do not have appropriate accuracy.
1. Introduction
Temperature prediction in fresh concrete is of great interest for designers and contractors because cement hydration is an exothermic process and the heat generation may lead to very early onset of
thermal cracks in absence of any load [1]. Therefore, utilizing a method that estimates temperature during curing is very beneficial.
Cement hydration produces a rise in concrete internal temperature. Temperature rise varies by many parameters including cement composition, fineness and content, aggregate content and CTE
(coefficient of thermal expansion), section geometry, placement, and ambient temperatures [2]. After reaching the maximum temperature, the temperature of concrete decreases [3].
Pours with a large volume to surface area ratio are more susceptible to thermal cracking. Cements used for mass concrete should have a low C3S and C3A content to reduce excessive heat during
hydration. Cement with a lower fineness hydrates more slowly, reducing temperature rise. Mass concrete mixtures should contain as low a cement content as possible to achieve the desired strength.
This lowers the heat of hydration and subsequent temperature rise. A higher coarse aggregate content (70–85%) can be used to lower the cement content, reducing temperature rise. The CTE (coefficient
of thermal expansion) of the coarse aggregate has the main influence on the CTE of the concrete. Lower CTE aggregates tend to have a higher thermal conductivity; thus, heat is released fast from the
core. Lower ambient temperatures produce less temperature rise. Lower volume to surface ratio produces less temperature rise. has a large effect on temperature rise. The lower is, the less
temperature rises [2].
Measuring concrete temperature during curing requires instrumentation and entails high costs. Commonly used concrete temperature prediction methods include the Portland Cement Association (PCA) method,
the ACI graphical method, Schmidt's method [4], and the ConcreteWorks software package [5].
The PCA method calculates a 10°F temperature rise for every 100 lb of cement, provides no information on the time of maximum temperature, does not allow the quantification of temperature differences, and
assumes that the least dimension of the concrete member is at least 1.8 m (6 ft). The ACI graphical method uses charts and equations based on empirical data and assumed boundary
conditions. Generally, this method underestimates the maximum temperature and is a poor predictor of the time to reach it. Schmidt's method provides little guidance for boundary conditions and is
difficult to model; moreover, it can be complicated and should be performed by an experienced engineer [5]. In addition to these shortcomings, none of the three above-mentioned methods predicts continuous
concrete temperature. The ConcreteWorks software package, used for predicting continuous concrete temperature, requires measurements of concrete air content, slump, specified final compressive strength,
the coefficient of thermal expansion of the concrete, and its thermal properties. Such measurements demand considerable time and cost. Thus, a quick and easy prediction method for continuous concrete
temperature, whose input parameters can be measured easily and inexpensively, would be very useful.
The aim of this study is to predict the temperature during concrete curing using time, environment temperature, water-to-cement ratio, aggregate content, specimen diameter, and specimen height as
variables. The required data come from laboratory experiments. Multivariate regression (SPSS software) and an artificial neural network (MATLAB) were used for the prediction.
2. Experimental Procedures
In order to predict temperature during concrete curing, the temperature must be measured continuously using thermistors located inside the concrete samples. The necessary data
were obtained from ten experiments carried out on different cylindrical concrete specimens at the Institute of Geotechnical Engineering and Mine Surveying of the Technical University of Clausthal, Germany.
Different types of vibrating wire strain gauges were installed in each concrete specimen; the gauges are equipped with thermistors, by which the concrete temperature was
measured. During the different stages of concreting, the concrete was appropriately compacted with a manual vibrator.
Measurement began immediately after specimen concreting and continued during the curing process. Temperature was recorded until 30 hours after concreting, by which time the temperature changes had largely stopped.
To predict the temperature more accurately, temperatures measured in specimens with similar strain gauges were, where possible, used as the concrete temperatures.
The cement used in this study, produced by the German company Deuna, is a Portland cement of the CEM class.
The characteristics of the specimens are presented in Table 1. The aggregates used in all specimens are coarse and of silica type. Specimen no. 9 was kept in cold weather (−2°C to +1.84°C) after
concreting and during the curing process. For the specimens with a water-to-cement ratio of 50% and for specimen no. 9 (concreted in cold weather), 30 mL of plasticizer was used per kilogram of cement.
The measured temperature changes during curing for specimens are presented in Figure 1.
3. Data Analysis and the Results
3.1. Multivariable Regression
In this study, both linear and nonlinear regressions were used to develop equations relating concrete temperature to the input variables. The stepwise variable selection procedure was applied to prepare
the equations. The statistical parameters of the input variables are shown in Table 2.
Using the least squares method, the correlations of time, environment temperature, water-to-cement ratio, aggregate content, specimen height, and diameter with concrete
temperature were calculated as 0.486, 0.704, 0.181, −0.617, 0.032, and 0.228, respectively. The results show that concrete temperature rises with increasing environment temperature and elapsed time
and decreases with increasing aggregate content. The effect of the other parameters on concrete temperature is not significant.
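As a concrete illustration of these intercorrelations, the pairwise Pearson correlation coefficient can be computed as follows. This is only a sketch; the function name and the sample numbers are made up for demonstration and are not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative only: correlate elapsed time with concrete temperature.
time_h = [1, 2, 3, 4, 5]
temperature_c = [21.0, 24.5, 29.0, 31.5, 33.0]
print(round(pearson_r(time_h, temperature_c), 3))
```

A value near +1 indicates that temperature rises with the variable (as with time and environment temperature here), while a negative value (as with aggregate content) indicates the opposite trend.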
The linear equation between the input variables and concrete temperature is given as (1), and the nonlinear equation as (2) [equations not reproduced in this copy], in which the variables are time (h),
environment temperature (°C), aggregate content (kg), water-to-cement ratio, specimen diameter (mm), and specimen height (mm), respectively.
The distribution of the differences between the concrete temperatures predicted from (1) and (2) and the actual measured values is shown in Figures 2 and 3. The results indicate that (2) gives a
useful estimate of concrete temperature during curing for concrete made from this cement.
3.2. Artificial Neural Network Procedure
Among the many existing neural network (NN) paradigms, feed-forward artificial neural networks (FANNs) are the most popular because of their flexible structure, good representational
capabilities, and the large number of available training algorithms [6–8].
The basic structure of a multilayer feed-forward network model can be made of one input layer, one or more hidden layers, and one output layer [9].
Neural network training can be made more efficient by specific preprocessing. In this paper, all input and output parameters were preprocessed by normalizing the inputs and targets, so that after the
preprocessing stage their mean and standard deviation are 0 and 1, respectively: xn = (x − m) / s, in which x is an actual parameter, m is the mean of the actual parameters, s is their
standard deviation, and xn is the normalized parameter [10].
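The normalization described above is the standard z-score transform and can be sketched in a few lines. The function name and the sample values here are illustrative only, not taken from the paper:

```python
def zscore(values):
    """Normalize a sample to zero mean and unit standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

# Illustrative (made-up) temperature readings:
normalized = zscore([18.0, 22.0, 26.0, 30.0])
print([round(v, 3) for v in normalized])
```

After this transform, the inputs fed to the network all live on a comparable scale, which is what makes training more efficient.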
In this part of the study, an ANN model is presented to predict concrete temperature during curing. A multilayer feed-forward network model was trained with the BP (back propagation) training algorithm.
Different neural networks were designed, and the best parameter values were obtained by trial and error; the main aim was to obtain a neural network with the smallest dimensions and the least
error. The most appropriate results were obtained from a network model in which a hyperbolic tangent sigmoid function and a linear function were used as the activation functions for the hidden and
output layer neurons, respectively. The variables selected in (1) were determined to be the best variables for predicting concrete temperature; those variables, used as inputs to the ANN, are listed in Table 3.
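The chosen architecture — tanh-sigmoid hidden neurons feeding a single linear output neuron — can be sketched as a forward pass in a few lines. The weights below are arbitrary toy values, not the trained network's parameters:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: tanh-sigmoid hidden layer, linear output neuron."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Toy weights for a 2-input, 2-hidden-neuron, 1-output network.
y = forward([0.5, -0.2],
            w_hidden=[[0.1, 0.4], [-0.3, 0.2]],
            b_hidden=[0.0, 0.1],
            w_out=[0.7, -0.5],
            b_out=0.2)
print(round(y, 4))
```

Back propagation then adjusts `w_hidden`, `b_hidden`, `w_out`, and `b_out` to reduce the error between such outputs and the measured temperatures.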
The data were separated into three sets: training, validation, and test, with the test set used after training. The training and validation process was stopped after 245 epochs.
The performance function is the mean square error (MSE), the average squared error between the network's predicted outputs and the target outputs, which equals 0.00044 for training. The
correlation coefficients for the validation and training stages are presented in Table 4.
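For reference, the MSE performance function is simply the mean of the squared errors; a minimal sketch with illustrative numbers:

```python
def mse(targets, predictions):
    """Mean squared error between target values and predicted outputs."""
    n = len(targets)
    return sum((t - p) ** 2 for t, p in zip(targets, predictions)) / n

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.0]))
```

A training MSE of 0.00044 on normalized targets means the network's predictions are, on average, within a few hundredths of a standard deviation of the measured values.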
Figures 4, 5, and 6 show a graphical comparison of the experimentally determined temperatures and those predicted by the artificial neural network in the validation, training, and test processes.
The distribution of the differences between the temperatures predicted by the ANN and the actual values in the test process is presented in Figure 7.
Concrete temperature prediction using the ANN procedure proved more accurate and satisfactory than the other methods.
4. Conclusions
(i) Concrete temperature during curing was studied by experimenting on ten cylindrical concrete specimens under controlled conditions, in which the concrete temperature was measured by strain gauges equipped with thermistors. Data recording began right after specimen concreting and continued until 30 hours afterwards.
(ii) The model built for these concrete specimens, which were prepared with a CEM-class Portland cement, was used to estimate concrete temperature with stepwise regression and artificial neural network methods.
(iii) The correlations between the input variables and concrete temperature showed that higher environment temperature and longer elapsed time result in higher concrete temperature, and that higher aggregate content results in lower concrete temperature; no other parameters were significant.
(iv) The linear and nonlinear equations can estimate the concrete temperature with correlation coefficients of 0.814 and 0.873, respectively.
(v) The artificial neural network procedure can predict the concrete temperature with a correlation coefficient of 0.999, so its results are much better than those of multivariate regression.
(vi) The results show that the artificial neural network is a reliable method for predicting concrete temperature during curing.
Conflict of Interests
The authors hereby disclose that this paper is solely a contribution to the advancement of science; they used the SPSS and MATLAB software packages only as mathematical tools for the prediction. The
authors have no direct financial relation with the commercial entities mentioned in the paper (SPSS and MATLAB).
1. P. F. Siew, T. Puapansawat, and Y. H. Wu, “Temperature and heat stress in a concrete column at early ages,” ANZIAM Journal, vol. 44, no. E, pp. C705–C722, 2003.
2. R. Moser, “Mass Concrete, CEE8813A—Material science of concrete. 2. Lecture Overview,” http://people.ce.gatech.edu/~kk92/massconcrete.pdf.
3. H. Weigler and S. Karl, Junger Beton: Beanspruchung-Festigkeit-Verformung, vol. 40, Betonwerk Fertigteil-Technik, 1974.
4. K. A. Riding, J. L. Poole, A. K. Schindler, M. C. G. Juenger, and K. J. Folliard, “Evaluation of temperature prediction methods for mass concrete members,” ACI Materials Journal, vol. 103, no. 5,
pp. 357–365, 2006.
5. ConcreteWorks, IHEEP Conference, San Antonio, Tex, USA, 2009.
6. C. T. Leondes, Neural Network Systems Techniques and Applications: Algorithms and Architectures, Academic Press, New York, NY, USA, 1998.
7. R. P. Lippmann, “An introduction to computing with neural nets,” IEEE ASSP Magazine, vol. 4, no. 2, pp. 4–22, 1987.
8. D. Sarkar, “Methods to speed up error back-propagation learning algorithm,” ACM Computing Surveys, vol. 27, no. 4, pp. 519–542, 1995.
9. F. Özcan, C. D. Atiş, O. Karahan, E. Uncuoǧlu, and H. Tanyildizi, “Comparison of artificial neural network and fuzzy logic models for prediction of long-term compressive strength of silica fume
concrete,” Advances in Engineering Software, vol. 40, no. 9, pp. 856–863, 2009.
10. H. Demuth and M. Beale, Neural Network Toolbox for Use with MATLAB, Handbook, 2002.
Docutils: Documentation Utilities
On 10/22/11 9:43 AM, Guenter Milde wrote:
> On 2011-10-20, Paul Tremblay wrote:
>> On 10/20/11 6:14 AM, Guenter Milde wrote:
>>>>>>>> MathML is pretty essential for XML. Can it be put in the XML
>>>>>>>> writer?
> ...
>>> The idea would be to define a new transform
>>> (transforms.math.latex2mathml, in a file
>>> docutils/docutils/transforms/math.py say) that would
>>> replace the content of math and math-block nodes.
>>> The code would be a mixture of examples from other transforms and the
>>> visit_math() method in the html writer. (to avoid duplicating code,
>>> once it is in place and tested, the html writer should be modified to
>>> use it as well)
>> Following your directions, I created math.py in the docutils/transform
>> directory. To the __init__ .py in writers, I added:
>> from docutils.transforms import math
> ...
>> def get_transforms(self):
>> return Component.get_transforms(self) + [
>> universal.Messages,
>> universal.FilterMessages,
>> universal.StripClassesAndElements,
>> math.Math_Block,
> this should go into /docutils/writers/docutils_xml.py, otherwise it
> affects all writers (inheriting from docutils.writers.Writer).
>> math.py looks like this;
>> """
>> math used by writers
> I'd use something like """math related transformations""" as docstring
> for the transforms.math module.
>> from docutils import writers
>> from docutils.transforms import writers
>> """
> What is this for?
>> __docformat__ = 'reStructuredText'
>> from docutils import nodes, utils, languages
>> from docutils.transforms import Transform
>> from docutils.math.latex2mathml import parse_latex_math
>> class Math_Block(Transform):
> Do we need separate classes for Math_Block vs. Math_Role or could these be
> put into one class?
> Considering that `transforms.math` might be used for several math-related
> transforms (equation numbering comes to my mind), I'd use a more telling
> name, `LaTeXmath2MathML`, say.
>> """
>> Change the text in the math_block from plain text in LaTeX to
>> a MathML tree
>> """
>> default_priority = 910 # not sure if this needs to be loaded
>> earlier or not
>> def apply(self):
>> for math_block in self.document.traverse(nodes.math_block):
>> math_code = math_block.astext()
>> mathml_tree = parse_latex_math(math_code, inline=False)
>> # need to append the mathml_tree to math_block
>> I have a few questions.
>> (1) How do you get just the text from a node.Element? In my code, the
>> math_block.astext actually returns a text representation of the node,
>> including the elements tags, etc. I looked everywhere in
>> docutils/nodes.py for a method to get just text, but could not find one.
>> Somehow, feeding the string with the tags to parse_latex_math worked
>> anyway (following the example in the html writer).
> Strange. How can I reproduce this?
> I did a small test inserting
> print node.astext().encode('utf8')
> in the visit_math_block() method of the html writer and did get just the
> content, no tags.
>> (2) How do I append the resulting tee to the math_block? I tried
>> math_block.append() and other methods, but it seems the latext2mathml.py
>> returns a different type of tree then that already created.
> I think so. Remember that latex2mathml is taken from a user-contributed
> add-on in the sandbox and is only intended to produce a MathML
> representation to put into HTML pages.
>> I could convert the mathml tree to an XML string and then create a tree
>> from that, and then append the tree? I'm just not sure how to do this.
> I see several ways forward from here:
> * your proposal (convert to string and parse this to a compatible tree).
> Is there a XML parser in the minidom module?
> * modify latex2mathml to use "compatible" tree nodes based on Docutils'
> nodes.
>> (3) How do I make this transformation optional, depending on an options
>> by the user. The user might have put asciimath in the math_block
>> element, in which case it should not be transformed by the
>> latex2mathml.py module.
> Here, you can look at examples for customizable transforms. E.g. the
> sectnum_xform setting is defined in frontend.py and works on the
> SectNum(Transform) in transforms/parts.py.
> Günter
Okay, I've followed all of your suggestions.
docutils/writers/docutils_xml.py now has the following changes:

from docutils.transforms import math

class Writer(writers.Writer, Component):
    # subclassing Component in order to add the transformation

    supported = ('xml',)
    """Formats this writer supports."""

    settings_spec = (
        '"Docutils XML" Writer Options',
        'Warning: the --newlines and --indents options may adversely affect '
        'whitespace; use them only for reading convenience.',
        (('Generate XML with newlines before and after tags.',
          ['--newlines'],
          {'action': 'store_true', 'validator': frontend.validate_boolean}),
         ('Generate XML with indents and newlines.',
          ['--indents'],
          {'action': 'store_true', 'validator': frontend.validate_boolean}),
         ('Omit the XML declaration. Use with caution.',
          ['--no-xml-declaration'],
          {'dest': 'xml_declaration', 'default': 1, 'action': 'store_false',
           'validator': frontend.validate_boolean}),
         ('Omit the DOCTYPE declaration.',
          ['--no-doctype'],
          {'dest': 'doctype_declaration', 'default': 1,
           'action': 'store_false', 'validator': frontend.validate_boolean}),
         # Add an option for --latex-mathml
         ('Convert LaTeX math in math_block and math to MathML',
          ['--latex-mathml'],
          {'dest': 'latex_mathml', 'default': False,
           'action': 'store_true', 'validator': frontend.validate_boolean}),
         # Add an option for ASCII math
         ('Convert ASCII math in math_block and math to MathML',
          ['--ascii-mathml'],
          {'dest': 'ascii_mathml', 'default': False,
           'action': 'store_true', 'validator': frontend.validate_boolean}),))

    def get_transforms(self):
        return Component.get_transforms(self) + [
            # add the 2 new transforms
            math.LaTeXmath2MathML,
            math.Asciimath2MathML,
            ]
The file docutils/transforms/math.py looks like this:

# $Id: writer_aux.py 6433 2010-09-28 08:21:25Z milde $
# Author: Lea Wiemann <LeWiemann@...>
# Copyright: This module has been placed in the public domain.

"""
math used by writers
"""

__docformat__ = 'reStructuredText'

from docutils import nodes, utils, languages
from docutils.transforms import Transform
from docutils.math.latex2mathml import parse_latex_math
from xml.dom.minidom import parse, parseString, Node
import sys
class LaTeXmath2MathML(Transform):

    """
    Change the text in the math_block and math from plain text in LaTeX to
    a MathML tree
    """

    default_priority = 910  # not sure if this needs to be loaded
                            # earlier or not

    def apply(self):
        latex_mathml = self.document.settings.latex_mathml
        if not latex_mathml:
            return
        for math_block in self.document.traverse(nodes.math_block):
            math_code = math_block.astext()
            try:
                mathml_tree = parse_latex_math(math_code, inline=False)
                math_xml = ''.join(mathml_tree.xml())
            except SyntaxError, err:
                err_node = self.document.reporter.error(err)
                continue
            new_math_block = nodes.Element(rawsource=math_code)
            new_math_block.tagname = 'math_block'
            convert_string_to_docutils_tree(math_xml, new_math_block)
            math_block.replace_self(new_math_block)
        for math in self.document.traverse(nodes.math):
            math_code = math.astext()
            try:
                mathml_tree = parse_latex_math(math_code, inline=True)
                math_xml = ''.join(mathml_tree.xml())
            except SyntaxError, err:
                err_node = self.document.reporter.error(err)
                continue
            new_math = nodes.Element(rawsource=math_code)
            new_math.tagname = 'math'
            convert_string_to_docutils_tree(math_xml, new_math)
            math.replace_self(new_math)
class Asciimath2MathML(Transform):

    """
    Change the text in the math_block and math from plain text in ASCII to
    a MathML tree
    """

    default_priority = 910  # not sure if this needs to be loaded
                            # earlier or not

    def apply(self):
        ascii_mathml = self.document.settings.ascii_mathml
        if not ascii_mathml:
            return
        try:
            import asciimathml
            from xml.etree.ElementTree import Element, tostring
        except ImportError as msg:
            err_node = self.document.reporter.error(msg)
            return
        for math_block in self.document.traverse(nodes.math_block):
            math_code = math_block.astext()
            math_tree = asciimathml.parse(math_code)
            math_tree.set('xmlns', 'http://www.w3.org/1998/Math/MathML')
            math_xml = tostring(math_tree, encoding="utf-8")
            math_xml = math_xml.decode('utf8')
            new_math_block = nodes.Element(rawsource=math_code)
            new_math_block.tagname = 'math_block'
            convert_string_to_docutils_tree(math_xml, new_math_block)
            math_block.replace_self(new_math_block)
        for math in self.document.traverse(nodes.math):
            math_code = math.astext()
            math_tree = asciimathml.parse(math_code)
            math_tree.set('xmlns', 'http://www.w3.org/1998/Math/MathML')
            math_xml = tostring(math_tree, encoding="utf-8")
            math_xml = math_xml.decode('utf8')
            new_math = nodes.Element(rawsource=math_code)
            new_math.tagname = 'math'
            convert_string_to_docutils_tree(math_xml, new_math)
            math.replace_self(new_math)
def convert_string_to_docutils_tree(xml_string, docutils_node):
    minidom_dom = parseString(xml_string.encode('utf8'))
    _convert_tree(minidom_dom, docutils_node)

def _convert_tree(minidom_node, docutils_node):
    for child_node in minidom_node.childNodes:
        if child_node.nodeType == Node.ELEMENT_NODE:
            tag_name = child_node.nodeName
            node_text = ''
            for grand_child in child_node.childNodes:
                if grand_child.nodeType == Node.TEXT_NODE:
                    node_text += grand_child.nodeValue
            if node_text.strip() != '':
                Element = nodes.TextElement(text=node_text)
            else:
                Element = nodes.Element()
            Element.tagname = tag_name
            attrs = child_node.attributes
            if attrs:
                for attrName in attrs.keys():
                    attrNode = attrs.get(attrName)
                    attrValue = attrNode.nodeValue
                    attr_string_name = attrNode.nodeName
                    Element[attr_string_name] = attrValue
            docutils_node += Element
            if len(child_node.childNodes) != 0:
                _convert_tree(child_node, Element)
I've done simple tests with the math.txt in
test/functional/input/data/math.txt, as well as with my own
math_ascii.rst file, and the code seems to work.
It obviously needs some documentation. Also, there is apparently a bug
with minidom when using Python 3. I could write another simple function
to supplement _convert_tree(minidom_node, docutils_node), except using
the xml.etree module, which is considered more up-to-date than minidom
but which does not work with Python older than 2.5.
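As a rough starting point for that xml.etree variant, the minidom walker could be ported along these lines. This is only a sketch: `make_node` and `add_child` are hypothetical stand-ins for constructing docutils `Element`/`TextElement` nodes and attaching them (with docutils one would pass small wrappers around `nodes.Element()` and `parent += child`), and I have not tested this against docutils itself:

```python
from xml.etree.ElementTree import fromstring

def etree_to_tree(elem, make_node, add_child):
    """Recursively mirror an ElementTree element into another node type."""
    text = (elem.text or '').strip()
    node = make_node(elem.tag, text, dict(elem.attrib))
    for child in elem:
        add_child(node, etree_to_tree(child, make_node, add_child))
    return node

# Demonstration with plain dicts standing in for docutils nodes:
tree = etree_to_tree(
    fromstring('<math display="block"><mi>x</mi></math>'),
    lambda tag, text, attrs: {'tag': tag, 'text': text,
                              'attrs': attrs, 'children': []},
    lambda parent, child: parent['children'].append(child))
print(tree['tag'], tree['children'][0]['text'])
```

Keeping the node construction behind the two callables would let the same walker serve both minidom and etree input, and would make it easy to unit-test without a docutils installation.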
Information loss, determinism and quantum mechanics
Seminar Room 1, Newton Institute
Since the probabilistic interpretation of Quantum Mechanics requires unitarity, conventional Quantum Mechanics does not allow for a notion such as information loss. However, it is possible to
interpret Quantum Mechanics in terms of an underlying deterministic theory, and it is here, in the classical sense, that information loss can be introduced. We argue that information loss then indeed
may be an essential ingredient of the theory, helping us to understand how the notion of holography can be reconciled with locality. Thus, determinism and information loss may become crucial
ingredients of Planck Scale Physics, and we explain why exactly this should lead to the unitary quantum mechanical framework of the Standard Model of the subatomic world.
FOM: defining "mathematics"
Matt Insall montez at rollanet.org
Sun Jan 2 10:10:11 EST 2000
-----Original Message-----
From: owner-fom at math.psu.edu [mailto:owner-fom at math.psu.edu]On Behalf Of
Vladimir Sazonov
Sent: Thursday, December 23, 1999 12:34 PM
To: fom at math.psu.edu
Subject: Re: FOM: defining "mathematics"
I prefer more general definition of mathematics (related with recent
postings of J. Mycielski; cf. also my reply to him and some other
postings to FOM):
Mathematics is a kind of *formal engineering*, that is engineering
of (or by means of, or in terms of) formal systems serving as
"mechanical devices" accelerating and making powerful the human
thought and intuition (about anything - abstract or real objects or
whatever we could imagine and discuss).
I like this ``definition'' also, as far as it goes. The problem I see in
all this is that the ``definitions'' all seem to appeal to terms not
previously defined. Thus I would not really call this a definition, because
too much of the description is undefined. In particular, how does one
define ``engineering'', or ``human thought and intuition''? I guess this
might make a good Webster's type of definition, but it is hardly
mathematical itself.
Finally, I would like to stress that mathematics actually deals
nothing with truth. (Truth about what? Again Platonism?) Of course
we use the words "true", "false" in mathematics very often.
But this is only related with some specific technical features of
FOL. This technical using of "truth" may be *somewhat* related
with the truth in real world. Say, we can imitate or approximate
the real truth. This relation is extremely important for possible
applications. But we cannot say that we discover a proper
"mathematical truth", unlike provability. This formalist point of
view is not related with rejection of intuition behind formal
systems. But the intuition in general is extremely intimate thing
and cannot pretend to be objective. Also intuition is *changing*
simultaneously with its formalization. (Say, recall continuous
and nowhere differentiable functions.) Instead of saying that
a formal system is true it is much more faithful to say that it is
useful or applicable, etc. Some other formalism may be more
useful. There is nothing here on absolute truth.
Okay, so if we do not deal with truth, then what would you say is the
``truth in the real world'' of the following statement: ``If f is a
continuous function defined on the real numbers, then f has the intermediate
value property.'' I submit that as mathematicians, we do, and should, care
about the ``truth in the real world'' of such a statement. The problem I
see is that it is either true or false, but not both, but the formalist
approach would have us believe that no one even knows what the statement
means. If this were correct about such statements as this, then do we, as
human beings (not, per se, as mathematicians in particular) know what
anything means? In fact, would you say, professor Sazonov, that there is no
such thing as ``truth in the real world''? For if it is because we
formalize Mathematics that we lose meaning, is it not the case that even the
very statements we make about the ``real world'' are formalizations, of a
sort, and so can be interpreted any way one may choose. After all, whether
we are doing mathematics or not, we are only putting marks on the page.
Thus, even the statement that `` mathematics actually deals nothing with
truth'' has no meaning outside the virtual marks on the virtual page on my
computer monitor. When you restrict mathematics to the tenets of pure
formalism, everything must be so restricted.
By the way, as an example of useful and meaningful formal system
I recall *contradictory* Cantorian set theory. (What if in ZFC or
even in PA a contradiction also will be found? This seems
would be a great discovery for the philosophy of mathematics!)
I think this would be a disaster. It is bothersome enough that
``Cantorian'' set theory (I think you actually mean Fregean set theory.
Cantor's approach was decidedly NOT formalistic.) is considered to be
contradictory. Why should ``philosophers of mathematics'' be so biased?
Let's find out what is true about the consistency of ZFC, PA, etc., and look
for contradictions, but I suggest we not hope for one. It is entirely
reasonable to believe that a system such as PA or ZFC is consistent, even
though current formal logic systems cannot determine whether it is
consistent or not. Why not accept the fact that we may not ever know that
PA is consistent in the same way that we know the group axioms are
consistent, but to still search for an answer to the question? I guess what
I'm trying to say is that as research programmes, searching for a
contradiction in modern mathematics is a fine programme, as is the attempt
to find a formalization of the mathematics that has been done to this day
which is provably consistent. Not only that, success in either of these
programmes would constitute a worthwhile contribution to mathematics and
metamathematics, and the philosophy of mathematics. But an even better
contribution would be to find an appropriate formal logic system in which
Gödel's argument cannot be carried out, because the notion of ``proof'' in
the given system is different enough from our previous notions to allow one
to prove the consistency of PA. The philosophical duty would then be to
explain why the new system is, or should be, acceptable to Mathematicians.
I am wary that this type of programme is doomed to failure, if only because
the current philosophical climate leans toward formalism in the area of
foundations of mathematics.
Matt Insall
montez at rollanet.org
More information about the FOM mailing list
The Exponential Curve
Check out this post on Robert Talbert's Casting Out Nines.
He is referring to an essay by Peter Wood that predicts that education schools will be gone by 2036. It seems like there isn't a lot of support out there for the status quo. I wonder if there will be
much change in the near future?
I gave my own comments on the issue on Joanne Jacob's blog.
If you are interested in elementary math education, I recommend the book Knowing and Teaching Elementary Mathematics, by Liping Ma. It shows some very scary stuff about elementary school teachers'
lack of mathematical knowledge. Most of the American elementary teachers they studied couldn't correctly divide fractions (let alone come up with appropriate story problems to illustrate the
operation).
already been teaching for a couple of years (I took afternoon/evening classes over a period of 2 years while teaching on my intern credential), and the ideas really resonated with me, based on what I
saw from my students. One of the units I developed in the class was the slope unit that I mentioned in an earlier post, and when I implemented that unit, it worked much better with my students than
anything I had previously tried. We read a lot of research and several case studies, and there seemed to be a lot of evidence supporting the benefits of this approach. Plus, it just made a lot of
sense to me.
This philosophy is different from what has been described as the purely constructivist, or inquiry-based, learning that many people seem to abhor. And it is quite different from the behaviorist
(teacher as source of all knowledge) philosophy.
I have read many posts and articles from mathematicians who are opposed to the NCTM and its beliefs. But here is a quote from their standards document pertaining to high school level:
Because students' interests and aspirations may change during and after high school, their mathematics education should guarantee access to a broad spectrum of career and educational options. They
should experience the interplay of algebra, geometry, statistics, probability, and discrete mathematics. They need to understand the fundamental mathematical concepts of function and relation,
invariance, and transformation. They should be adept at visualizing, describing, and analyzing situations in mathematical terms. And they need to be able to justify and prove mathematically based
This seems reasonable to me, and I wonder if those of you who take issue with the NCTM's beliefs could comment specifically on what the concerns are. Is it the content of their standards, or the way
they are implemented, or something else?
I am bound by the CA content standards and STAR testing. I feel that most of the standards are things that a student taking algebra, for example, should know how to do. But there seems to be very
little emphasis on problem solving, critical thinking, and application. And the STAR tests themselves do not really assess these things, only the fundamental "tools" of algebra. As teachers know, if
it is not assessed, it will not get done - especially when teachers and schools are under the gun to raise test scores. So is the plan that students will master the tools of math in high school, and
then will somehow be able to become problem solvers in college? And what about the students who don't go to college?
What thoughts do people have on the CA standards (or, if you are familiar with another state's standards)? Are they a subset of a good math education? Do they mesh with the NCTM standards at all? Are
the state standards counterproductive? (As for the STAR testing, that will probably be a good topic for a later posting).
And finally, why is there such hostility over these issues? What I've read seems more like political partisanship, and less like people trying to collaboratively build a consensus as to how best to
teach our country's students.
In my note-taking post, I talked about structuring my summer Geometry class conceptually.
I am interested in doing the same thing for my Algebra 2 Honors class this fall. I want to figure out an overarching structure for the Algebra 2 standards. Here is my very initial idea for the categories:
- Manipulating Expressions
(simplifying, factoring, polynomial operations, etc.)
- Solving Equations
(solving by factoring, quadratic formula, completing the square, finding roots of a graph, using logarithms, etc.)
- Working with Inequalities
(absolute value inequalities, polynomial inequalities, graphing inequalities, linear programming)
- Graphing Functions
(quadratics, higher order polynomials, rationals, logs and exponentials, etc.)
- The Number System
(sets of numbers, properties)
Any thoughts? If you have experience with Algebra 2 concepts, how would you categorize them? This definitely needs revision and I'm sure I've forgotten things. Also, there is the problem of topics
that cross over sections. I am thinking about including some sort of cross-referencing strategy (but maybe that's too much). Good thing I have all summer to mull this one over.
So far, I've had one student tell me (unsolicited) that she really likes the use of the 3-column note taking system. That's not bad for only 3 days of use!
...didn't seem that tough at the time... I thought I was going to have a nice, easy summer. I don't know what I was thinking. Planning 5 hours of lecture, handouts, selected readings, sketchpad labs,
quizzes, and so on is pretty demanding. So far, it's taken me about 4 - 5 hours of planning a day. So, instead of leaving school at 5 like I was hoping, I'm there till 7:30 or 8. I'm thinking that I
will get more efficient at it as we get farther into the summer - well, here's hoping anyway. But it's going pretty well - the students have actually commented on the fact that the day goes by
relatively quickly.
The note taking system seems to be working pretty well so far. As the daily quizzes start happening (we've only had one so far), I'll have more of a sense as to whether they are able to process and
retain the vast amount of material I am trying to teach them.
We finally got a computer lab at school, and it's pretty cool. I am able to present on the projector while they work, and with Remote Desktop, I can take control of their machines and even send them
to the projector. Today, I sent a student's screen to the projector so he could show the class how he solved a problem (figuring out how to construct a pair of complementary angles). He talked it
through from his seat, using his mouse as a pointer - then, as I walked around, I saw other students begin to copy his technique and run with the problem. Technology is pretty nice when you have it,
and it works!
The nice thing about having a 5 hour class is that we can work on the same content in multiple ways during the same day - from guided exploration on sketchpad to lecture / reading to practice
problems, followed up with a formative assessment the following morning. I'm hoping that this reinforcement from multiple ways of presenting information will help them absorb it.
...just invert and multiply!
This seems to be one of the fundamental philosophical questions in math education. Do you teach tricks and rules, or do you let students explore and construct their own knowledge? Does it matter what
type of students you are working with? Behaviorist, Constructivist, Socio-Constructivist?
I, personally, believe in Socio-Constructivism. The idea is that the teacher creates a well-structured pathway for learning that takes students through the levels of understanding. Students start off
being given basic facts or information that they will need, and then are given time to work (individually or collaboratively) on exploration or inquiry based activities. Through this process, they
begin to generate conjectures and create a quasi-mathematical understanding. Then, through class discussion and direct instruction, the teacher helps correct misunderstandings and formalize the
knowledge (i.e. algorithms, processes, etc.) The drawback to this, of course, is that it takes a lot more time - both to plan the materials, and actual class time. Is it worth it? That's the real
question. I believe it is, but I think there are those who disagree.
One of the classes that I have developed is called Numeracy, in which we put freshmen who test below 7th grade level when they arrive (this is typically 70 - 80% of the class). We start off at basic
operations and place value concepts - this takes the entire first semester. The second semester is all fractions. We spend a few weeks working with fraction circles, drawing fraction bars, using
reasoning, etc. to compare, order, and evaluate fractions. We work on determining if a fraction is closer to 0, 1/2, or 1 whole. We figure out how you can compare 7/8 and 8/9 by reasoning. Then we
spend 6 weeks adding and subtracting, with manipulatives, with pictures, and finally, with the algorithm. Then there are 6 weeks dedicated to multiplication and a few more for division. Yet at the
end of all this, I still have a large number of students who haven't learned to work with fractions fluently. Sometimes, there is the temptation to teach the rules and then practice them to death,
but in my heart I believe this won't work. Plus, if you blindly memorize the "flip and multiply" rule, you haven't really learned anything about division and you won't be able to apply that knowledge
to other situations (i.e. Algebra). Also, if you develop no context for your algorithm, you have no way of knowing if what you are doing makes any sense.
I think students have been trained to want algorithms in math. They resist exploration. A noisy class will quiet down and get to work when a worksheet is put in front of them - why? Even if it is not
being graded, they will rush to try and get answers down, regardless of whether they are learning or not. Like lemmings, they seem compelled to "finish the worksheet". It boggles the mind! "Mr.
Greene, just tell us the easy way! Stop asking us questions!" How many times a day do I hear that in Numeracy? Students know that drawing fraction circles, for example, will help them solve a
problem, but they would rather ask me for help, or just skip the question. When I force them to draw a picture, they begrudgingly do so, then look at their picture and say, "Oh, that's all you want
us to do? That's easy!" and proceed to answer the question with little difficulty. Yet, on the next question, the process will repeat itself. Patience is definitely a learned skill!
I am happy to say that, at the end of Numeracy, most of the students have learned that, when adding or subtracting fractions, you don't add across!
Any thoughts on the matter?
In trying to get up to speed on Geometry teaching, I have been reviewing quite a few different textbooks. Each one, of course, has its good and its bad, and this is relative to the audience of the
book (I believe). One reader emailed me to say that, in the course of supplementing her children's math education, she came to detest the book that was being used (Key Curriculum Press: Discovering
Geometry) and recommends Geometry, 2nd Edition by Harold Jacobs for its logical development, clear definitions, and foundations built on postulates. These are actually two of the books that I have
been looking at, and I see some really good things in both of them.
We actually have a couple of shelves filled with the Harold Jacobs book which we purchased early on, and then found out that our students were unable to have much success with it. We are going to be
reevaluating our Geometry curriculum for the following year, including testing out some different texts. Keep in mind that we are beholden to the state standards and STAR testing just like any public
school... In all our decisions, we have to struggle between what is best for our students' mathematical development and access to college level work, and the spectre of the STAR test and API
rankings. Believe it or not, these do not always coincide!
If you teach (or have taught) Geometry, what texts are you using? Why do you like/dislike them? What type of student population are you working with?
Tomorrow starts our very first summer Geometry course.
All of our incoming students take Algebra 1 as freshmen (even though many have "passed" already in 8th grade, the majority don't know anything about Algebra). For our students who are better or more
interested in math, this poses a problem, as it does not give them enough time to get to Calculus by their senior year.
Our original sequence was the standard Algebra 1, Geometry, Algebra 2, Precalculus. To get to Calculus as seniors, motivated students were able to take Algebra 2 as an intensive, 5-week course after
their sophomore year, and Precalculus as juniors. I taught the Calculus class for the first two years, and it was extremely difficult, because the students were definitely underprepared in their
algebra skills, and much of our time was spent relearning Algebra concepts.
This year, we have switched the sequence to (what we think is a more logical) Algebra 1, Algebra 2, Geometry, Precalculus. The summer class then becomes Geometry. This gives students an entire year
to work on their Algebra 2 skills instead of a ridiculously short 5 weeks. Of course, the same thing will happen in Geometry, but our feeling is that there are significantly fewer skills in Geometry
that are needed to be successful at higher level math. It's hard to make these sacrifices, but we have to do so all the time.
So, that being said, tomorrow I will start teaching the summer Geometry class. I've never taught Geometry at all, and I have to figure out how to boil down the essentials into 5 weeks of daily 5-hour
classes. (Whatever else happens, I must say I'm impressed with the students who have elected to spend their vacation taking this class! We talk about ganas a lot at DCP, and these kids really
exemplify that ideal!)
What successes / failures have you had (or heard of) in trying to bring low skilled, underserved, or underachieving students to higher levels of math?
This is a huge problem at our school! Our students have never learned how to effectively take notes in class, and then how to actually use them for completing homework and studying.
When I was in high school (and even college), all I did was write down what was on the board and important things said by the teacher, and it was clear to me how to use these notes to study from. I
don't know how or when I learned to do this - and, since I don't remember learning how to make sense of this, I don't have a good idea of how to teach it.
The "just write it down and study it" method does not work at all for most of my students. I can get them to write things down pretty well, but I have had no success with getting them to actually
keep and use these notes effectively. My students end up having binders stuffed with notes and handouts, but whenever they need to find something, they start at the beginning and flip randomly
through it until they either find it (rare) or give up (frequent).
This coming year, a colleague of mine and I have decided to have students organize their binders conceptually instead of chronologically. He will do this for Geometry and I will do this for Algebra
2. Our idea is that, at the beginning of the year, we will come up with a conceptual structure for the entire year, and students will be required to keep their binders structured that way. For
example, Geometry may have the following categories: Points, Lines, Planes, and Angles; Triangles; Quadrilaterals; General Polygons; Circles and Spheres; Logic and Proof; Synthesis. These will be the
sections in their binders. Additionally, there will be a summary sheet at the beginning divided into these same categories. At the end of each lesson, students will be asked to file their notes in
the appropriate section (this will be scaffolded away as the year progresses), and to make an entry on their summary sheets. Homework and other handouts will also be filed in the appropriate section,
next to any relevant notes.
We think that this will help students access their notes much more effectively - which will encourage them to actually use them for studying! For example, if they see a problem with a diagram of a
right triangle and a missing side length, they may not know what information they need, or when that information was taught (do we even remember when we taught what?) but they will know to look in
the "triangle" section. Then, if their notes are complete, it should be a lot easier to find something useful.
Any thoughts on this? Has anyone tried this? How do you get students to effectively keep and use their notes?
Hi! I have been teaching high school math for the last 6 years at Downtown College Prep, a charter school in San Jose, CA. Our students are primarily Latino, are far below grade level in their math
and reading skills, and will be the first in their families to go to college. We refer to our students as being on an exponential learning curve: the average level in math of our incoming freshmen is
5th grade, and we need to get them to a 12th grade level in 4 short years.
Everything that I know about teaching math comes from what I have learned over the years by reading, experimenting, and collaborating with colleagues. I search the web a lot for ideas, but it's hard
to find good, consolidated material that targets this population. Every so often, I find a good article in the NCTM magazine or a book that I can adapt, which provides a springboard for a great new
unit - but I know there are a ton of successful and innovative ideas and strategies out there. My hope in starting this blog is to try to start a forum in which math teachers can collaborate and
share their ideas for creatively and effectively teaching specific concepts and structuring their courses.
I plan on posting strategies that I am trying; but more importantly, I plan on posting questions I have, and am looking forward to the comments and discussion that will follow. If you have a question
or topic that you want feedback on, just email me and I'll post it for comments. Also, please forward the address of this blog to any math teachers that you know so the ideas can multiply.
A. Dynamics
B. Trajectory harmonic initial conditions
C. Potential energy surfaces and vibrational states
D. Mode populations
A. Hydrogen-atom kinetic energy distributions
B. Choice of coupled-surface trajectory and initial conditions algorithms
C. Effect of large initial excitations
D. Effect of the diabatic coupling
E. Intermode couplings
F. Reaction mechanism
convert numbers from arabic to english
I want to do the reverse of this function: convert numbers from Arabic digits to English digits.
<script type="text/javascript">
function numentofa(n) {
    var digits = [],
        r;
    do {
        r = n % 10;
        n = (n - r) / 10;
        digits.unshift(String.fromCharCode(r + 1776)); // 1776 = 0x06F0, the Persian digit zero
    } while (n > 0);
    return digits.join('');
}

window.onload = aa; // assign the function itself; aa() would run immediately and assign its result

function aa() {
    var oooook = numentofa(121); // "۱۲۱"
}
</script>
javascript jquery javascript-events
@zerkms i think this platform is for help to people with their coding problems so if you can help then you do if you can't then you not do this kind of comment – The Mechanic May 16 '13 at 5:23
@The Mechanic: you're absolutely right. We're here to help solving tasks, not to do other people's job for free. OP didn't provide anything, but a task for us to do. – zerkms May 16 '13 at 5:24
then you can ask for him if he'll provide then you'll help if not then you don't – The Mechanic May 16 '13 at 5:25
1 @The Mechanic: then I don't "what"? OP should have already gotten the point. – zerkms May 16 '13 at 5:27
@zerkms and i'll appreciate it thanks :-) – The Mechanic May 16 '13 at 5:35
closed as too localized by Will May 17 '13 at 13:48
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally
applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
1 Answer
Assuming that the number you wish to convert is already in a string, then something like the following snippet of code will work:
function convertDigitIn(enDigit) { // PERSIAN, ARABIC, URDU
    var newValue = "";
    for (var i = 0; i < enDigit.length; i++) {
        var ch = enDigit.charCodeAt(i);
        if (ch >= 48 && ch <= 57) {
            // european digit range: 48 + 1584 = 1632 = 0x0660, the Arabic-Indic zero
            var newChar = ch + 1584;
            newValue = newValue + String.fromCharCode(newChar);
        } else {
            newValue = newValue + String.fromCharCode(ch); // keep non-digit characters as-is
        }
    }
    return newValue;
}
Originally taken from here: Convert from English Digits to Arabic ones in html page
this function converts eng to Arabic but not arabic numbers to english my input is '۱۲۳۴۵' and output i want is 12345 – Mir Shakeel Hussain May 16 '13 at 6:07
b.t.w thankx @Mechanic – Mir Shakeel Hussain May 16 '13 at 6:07
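The commenter is right: the accepted snippet maps English digits to Arabic ones, not the reverse. Here is a sketch of the direction the asker wanted (not from the original thread); it handles both the Arabic-Indic digits U+0660-U+0669 and the Persian/Extended Arabic-Indic digits U+06F0-U+06F9, which are what the input '۱۲۳۴۵' uses:

```javascript
// Convert Arabic-Indic or Persian digits in a string to ASCII 0-9,
// leaving every other character untouched.
function toEnglishDigits(s) {
  return s.replace(/[\u0660-\u0669\u06F0-\u06F9]/g, function (ch) {
    const code = ch.charCodeAt(0);
    const zero = code >= 0x06F0 ? 0x06F0 : 0x0660; // start of the matched digit block
    return String.fromCharCode(48 + (code - zero)); // 48 is ASCII '0'
  });
}

// toEnglishDigits("۱۲۳۴۵") returns "12345"
```

Because the regex only matches the two digit ranges, mixed strings such as "abc 42" pass through unchanged.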
Grouped Linear Regression with Covariance Analysis
Menu location: Analysis_Regression and Correlation_Grouped Linear_Covariance.
This function compares the slopes and separations of two or more simple linear regression lines.
The method involves examination of regression parameters for a group of xY pairs in relation to a common fitted function. This provides an analysis of variance that shows whether or not there is a
significant difference between the slopes of the individual regression lines as a whole. StatsDirect then compares all of the slopes individually. The vertical distance between each regression line
is then examined using analysis of covariance and the corrected means are given (Armitage and Berry, 1994).
• Y replicates are a random sample from a normal distribution
• deviations from the regression line (residuals) follow a normal distribution
• deviations from the regression line (residuals) have uniform variance
This is just one facet of analysis of covariance; there are additional and alternative methods. For further information, see Kleinbaum et al. (1998) and Armitage and Berry (1994). Analysis of
covariance is best carried out as part of a broader regression modelling exercise by a Statistician.
Technical Validation
Slopes of several regression lines are compared by analysis of variance as follows (Armitage, 1994):
SS[common] = (Σ SxY[j])² / Σ Sxx[j]
SS[between] = Σ (SxY[j]² / Sxx[j]) - SS[common]
SS[total] = Σ SYY[j]

(sums taken over the j = 1 … k groups)

- where SS[common] is the sum of squares due to the common slope of k regression lines, SS[between] is the sum of squares due to differences between the slopes, SS[total] is the total sum of squares and the separate residual sum of squares is SS[total] minus both SS[common] and SS[between]. Sxx[j] is the sum of squares about the mean x observation in the jth group, SxY[j] is the sum of products of the deviations of xY pairs from their means in the jth group and SYY[j] is the sum of squares about the mean Y observation in the jth group.
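To make the partition concrete, here is a small sketch of the same computation (illustrative only; this is not StatsDirect's implementation, and the function names are my own):

```javascript
// Corrected sums of squares and products for one group of (x, Y) pairs.
function correctedSums(x, y) {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let sxx = 0, sxy = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxx += (x[i] - mx) ** 2;
    sxy += (x[i] - mx) * (y[i] - my);
    syy += (y[i] - my) ** 2;
  }
  return { sxx, sxy, syy };
}

// ANOVA partition for comparing the slopes of k regression lines:
//   SS[common]  = (sum SxY[j])^2 / sum Sxx[j]
//   SS[between] = sum(SxY[j]^2 / Sxx[j]) - SS[common]
//   residual    = sum SYY[j] - SS[common] - SS[between]
function slopeAnova(groups) { // groups: [{x: [...], y: [...]}, ...]
  const sums = groups.map(g => correctedSums(g.x, g.y));
  const sxxAll = sums.reduce((a, s) => a + s.sxx, 0);
  const sxyAll = sums.reduce((a, s) => a + s.sxy, 0);
  const ssTotal = sums.reduce((a, s) => a + s.syy, 0);
  const ssCommon = sxyAll ** 2 / sxxAll;
  const ssSeparate = sums.reduce((a, s) => a + s.sxy ** 2 / s.sxx, 0);
  return {
    commonSlope: sxyAll / sxxAll,
    ssCommon: ssCommon,
    ssBetween: ssSeparate - ssCommon,
    ssResidual: ssTotal - ssSeparate
  };
}
```

As a sanity check, two groups lying on parallel lines give a between-slopes sum of squares of zero, as expected.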
Vertical separation of slopes of several regression lines is tested by analysis of covariance as follows (Armitage, 1994):
SS[within] = SYY[within] - (SxY[within])² / Sxx[within]
SS[total] = SYY[total] - (SxY[total])² / Sxx[total]
SS[between] = SS[total] - SS[within]

- where SS are corrected sums of squares within the groups, total and between the groups (subtract within from total). The constituent sums of products or squares are partitioned between groups, within groups and total as above.
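The corrected means reported by the covariance analysis can be sketched in the same spirit: each group's mean Y is moved along the pooled within-group slope b to the grand mean of x, i.e. Y'[j] = ybar[j] + b(xbar - xbar[j]). This is an illustrative sketch, not StatsDirect's code:

```javascript
// Adjusted (corrected) group means from analysis of covariance.
function mean(a) { return a.reduce((s, v) => s + v, 0) / a.length; }

function adjustedMeans(groups) { // groups: [{x: [...], y: [...]}, ...]
  let sxxW = 0, sxyW = 0; // pooled within-group corrected sums
  for (const g of groups) {
    const mx = mean(g.x), my = mean(g.y);
    for (let i = 0; i < g.x.length; i++) {
      sxxW += (g.x[i] - mx) ** 2;
      sxyW += (g.x[i] - mx) * (g.y[i] - my);
    }
  }
  const b = sxyW / sxxW;                  // common within-group slope
  const grandX = mean(groups.flatMap(g => g.x));
  // Slide each group mean along slope b to the grand mean of x.
  return groups.map(g => mean(g.y) + b * (grandX - mean(g.x)));
}
```

If two groups lie on one common line, their adjusted means coincide, which is what the vertical-separation test below is probing.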
Data preparation
If there are equal numbers of replicate Y observations, or single Y observations, for each x then it is best to prepare and select your data using a group identifier variable. For example with three
replicates you would prepare five columns of data: group identifier, x, y1, y2, and y3. Remember to choose the "Groups by identifier" option in this case.
If there are unequal numbers of replicate Y observations for each x then you must prepare the x data in separate columns by group, prepare the Y data in separate columns by group and observation
(i.e. Y for group 1 observation 1… r rows long where r is the number of repeat observations). Remember to choose the "Groups by column" option in this case. This is done in the example below.
From Armitage and Berry (1994).
Test workbook (Regression worksheet: Log Dose_Std, BD 1_Std, BD 2_Std, BD 3_Std, Log Dose_I, BD 1_I, BD 2_I, BD 3_I, Log Dose_F, BD 1_F, BD 2_F, BD 3_F).
Three different preparations of Vitamin D are tested for their effect on bones by feeding them to rats that have an induced lack of mineral in their bones. X-ray methods are used to test the
re-mineralisation of bones in response to the Vitamin D.
For the standard preparation:
Log dose of Vit D
0.544 0.845 1.146
Bone density score
0 1.5 2
0 2.5 2.5
2.75 6 4
2.75 4.25 5
1.75 2.75 4
2.75 1.5 2.5
2.25 3 3.5
2.25 3
2.5 2
For alternative preparation I:
Log dose of Vit D
0.398 0.699 1.000 1.301 1.602
Bone density score
0 1 1.5 3 3.5
1 1.5 1 3 3.5
0 1.5 2 5.5 4.5
0 1 3.5 2.5 3.5
0 1 2 1 3.5
0.5 0.5 0 2 3
For alternative preparation F:
Log dose of Vit D
0.398 0.699 1.000
Bone density score
2.75 2.5 3.75
2 2.75 5.25
1.25 2.25 6
2 2.25 5.5
0 3.75 2.25
0.5 3.5
To analyse these data in StatsDirect you must first enter them into 14 columns in the workbook appropriately labelled. The first column is just three rows long and contains the three log doses of
vitamin D for the standard preparation. The next three columns represent the repeated measures of bone density for each of the three levels of log dose of vitamin D which are represented by the rows
of the first column. This is then repeated for the other two preparations. Alternatively, open the test workbook using the file open function of the file menu. Then select covariance from the groups
section of the regression and correlation section of the analysis menu. Select the columns marked "Log Dose_Std", "Log Dose_I" and "Log Dose_F" when you are prompted for the predictor (x) variables,
these contain the log dose levels (logarithms are taken because, from previous research, the relationship between bone re-mineralisation and Vitamin D is known to be log-linear). Make sure that the
"use Y replicates" option is checked when you are prompted for it. Then select the outcome (Y) variables that represent the replicates. You will have to select three, five and three columns in just
three selection actions because these are the number of corresponding dose levels in the x variables in the order in which you selected them.
Alternatively, these data could have been entered in just three pairs of workbook columns representing the three preparations with a log dose column and column of the mean bone density score for each
dose level. By accepting the more long winded input of replicates, StatsDirect is encouraging you to run a test of linearity on your data.
For this example:
Grouped linear regression
Source of variation SSq DF MSq VR
Common slope 78.340457 1 78.340457 67.676534 P < 0.0001
Between slopes 4.507547 2 2.253774 1.946984 P = 0.1501
Separate residuals 83.34518 72 1.157572
Within groups 166.193185 75
Common slope is significant
Difference between slopes is NOT significant
Slope comparisons:
slope 1 (Log Dose_Std) v slope 2 (Log Dose_I) = 2.616751 v 2.796235
Difference (95% CI) = 0.179484 (-1.576065 to 1.935032)
t = -0.203808, P = 0.8391
slope 1 (Log Dose_Std) v slope 3 (Log Dose_F) = 2.616751 v 4.914175
Difference (95% CI) = 2.297424 (-0.245568 to 4.840416)
t = -1.800962, P = 0.0759
slope 2 (Log Dose_I) v slope 3 (Log Dose_F) = 2.796235 v 4.914175
Difference (95% CI) = 2.11794 (-0.135343 to 4.371224)
t = -1.873726, P = 0.065
Covariance analysis
Source of variation YY xY xx DF
Between groups 17.599283 -3.322801 0.988515 2
Within 166.193185 25.927266 8.580791 8
Total 183.792468 22.604465 9.569306 10
Source of variation SSq DF MSq VR
Between groups 42.543829 2 21.271915 1.694921
Within 87.852727 7 12.55039
Total 130.396557 9
P = 0.251
Corrected Y means ± SE for mean x mean 0.85771:
Y'= 2.821356, ± 2.045448
Y'= 1.453396, ± 1.593641
Y'= 3.317784, ± 2.054338
Line separations (common slope =3.021547):
line 1 (Log Dose_Std) vs line 2 (Log Dose_I) Vertical separation = 1.367959
95% CI = -4.760348 to 7.496267
t = 0.527831, (7 df), P = 0.6139
line 1 (Log Dose_Std) vs line 3 (Log Dose_F) Vertical separation = -0.496428
95% CI = -7.354566 to 6.36171
t = -0.171164, (7 df), P = 0.8689
line 2 (Log Dose_I) vs line 3 (Log Dose_F) Vertical separation = -1.864388
95% CI = -8.042375 to 4.3136
t = -0.713594, (7 df), P = 0.4986
The common slope is highly significant and the test for difference between the slopes overall was non-significant. If our assumption of linearity holds true we can conclude that these lines are
reasonably parallel. Looking more closely at the individual slopes preparation F is almost shown to be significantly different from the other two but this difference was not large enough to throw the
overall slope comparison into a significant heterogeneity.
The analysis of covariance did not show any significant vertical separation of the three regression lines.
Copyright © 2000-2014 StatsDirect Limited, all rights reserved.
2 solo x 12s d2
• Re: 2 solo x 12s d2
• Re: 2 solo x 12s d2
Just sold my solo x locally to me. Considering buying it back and running 2. Glws op
• Re: 2 solo x 12s d2
hit me up i have a 18 solo x 4123542700
• Re: 2 solo x 12s d2
i have a solo x 18 text me 4123542700
• Re: 2 solo x 12s d2
any interest in a pair of rockford t2 12s? would come with a custom box 7.5cf @30hz
• Re: 2 solo x 12s d2
• Re: 2 solo x 12s d2
• Re: 2 solo x 12s d2
ive changed a lot of opinions with these, here is a video
2 Rockford Fosgate T2 12's and MMATS 3000.1d - YouTube
• Re: 2 solo x 12s d2
• Re: 2 solo x 12s d2
R u interested in two 2011 12" Fi Qs? Still stiff I've got a WTT thread up with pics and info
• Re: 2 solo x 12s d2
Why are these marked as 'new'?
They've obviously been used at some point... Screw holes scratched up... missing motor boots...
• Re: 2 solo x 12s d2
I have 2 kx2500s also if anyone would be interrested in a complete package deal at a good price. One amp has a few scratches on it but works good the other is clean.
• Re: 2 solo x 12s d2
Inbox me the package prices
• Re: 2 solo x 12s d2
• Re: 2 solo x 12s d2
Will make a nice package deal. I got subs amps and batteries. | {"url":"http://www.caraudio.com/forums/subwoofer-classifieds/540981-2-solo-x-12s-d2-2-print.html","timestamp":"2014-04-18T23:55:26Z","content_type":null,"content_length":"16382","record_id":"<urn:uuid:8c1a5560-c288-417e-baae-bc3f5c2645eb>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: 1-4 scale
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: 1-4 scale
From David Hoaglin <dchoaglin@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: 1-4 scale
Date Sun, 5 Aug 2012 12:59:34 -0400
Good points. Fortunately, the variables that Ebru is asking about are
potential predictor variables.
David Hoaglin
On Sun, Aug 5, 2012 at 1:52 PM, Richard Williams
<richardwilliams.ndu@gmail.com> wrote:
> At 10:57 AM 8/5/2012, David Hoaglin wrote:
>> Dear Ebru,
>> People often analyze data from Likert scales as equally spaced, so you
>> can use each of the eight items in your model as a numerical variable,
>> with values 1 to 4. You simply need to be aware that you are treating
>> the four categories as equally spaced.
> It is a debatable practice though. Consider the following (warm has 4
> values):
> use "http://www.indiana.edu/~jslsoc/stata/spex_data/ordwarm2.dta", clear
> reg warm yr89 male white age ed prst
> rvfplot
> According to the reference manual discussion of rvfplot, "In a well-fitted
> model, there should be no pattern to the residuals plotted against the
> fitted
> values...Any pattern whatsoever indicates a violation of the least-squares
> assumptions."
> Clearly, there is a pattern in the above rvfplot, i.e. you get 4 parallel
> straight lines. Further, it isn't unique to this example; any 4 category
> dependent variable will show the same thing.
> In fairness, if your dependent variable had 17 possible values, you would
> have 17 straight lines -- but your eye probably wouldn't detect that because
> everything would seem so cluttered. There is probably some point where there
> are enough possible values that violations of OLS assumptions aren't
> important, but I would be hesitant to say that point is met with a DV that
> only has 4 categories.
>> Earlier you asked about centering those variables. Centering will do
>> no harm. As far as the model is concerned, it affects only the
>> definition of the intercept. If you do decide to "center" the
>> variables, you may want to use one of the four values. If the data on
>> an item are not concentrated at one end, you could use 2 or 3 or
>> perhaps 2.5 as the centering constant. (In a 5-point Likert scale
>> with a neutral category at 3, using 3 would often be a reasonable
>> choice.)
>> When you have the results from the model with the eight separate
>> items, you may want to see whether the coefficients for the four items
>> within a heading are similar. If they are, and it makes sense, you
>> could consider replacing those four items with their sum (or average)
>> --- a composite score.
>> David Hoaglin
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-08/msg00226.html","timestamp":"2014-04-17T04:37:22Z","content_type":null,"content_length":"11360","record_id":"<urn:uuid:9adf84d4-67ca-4650-9f76-c9144e962940>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: May 2000 [00222]
Re: Parametric Solving Question From 14 Year Old
• To: mathgroup at smc.vnet.net
• Subject: [mg23531] Re: Parametric Solving Question From 14 Year Old
• From: Ronald Bruck <bruck at math.usc.edu>
• Date: Tue, 16 May 2000 22:30:02 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
In article <8fqr85$h40 at smc.vnet.net>, Alan <alana at mac.com> wrote:
: I am 14 and am wondering how to solve parametric equations directly
:without graphing in Mathematica? I am figuring out when a projectile in
:motion hits the ground only due to the even force of gravity acting upon
:it. The parametric equation is:
:I want to find the value x(t) and t when y(t)=0.
[Because the email above was sent with some 8bit characters
it is possible that the missing Degree mentioned below did not
get transmitted. -- moderator]
Be careful, you may not get what you expect--Mathematica computes
Cos[60] to be negative, because that's sixty RADIANS. Use 1.0472 (60
degrees in radians), or write it as 60 Degree (which will do the
conversion for you).
To solve the second equation for t, do something like
x[t_] = 15 t Cos[60 Degree];
y[t_] = 15 t Sin[60 Degree] - 9.80665/2 t^2
(you can also refer to the 9.80665 as GravityAcceleration, if you load the
"Miscellaneous`StandardAtmosphere`" package). Then
sol = Solve[y[t] == 0,t]
{{t -> 0.}, {t -> 2.6493}}
The first solution is obvious, and the second is what you want.
To get the range, type
x[2.6493]
to get 19.8698. (Not a very powerful projectile.)
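(For readers without Mathematica, the same calculation can be reproduced in a few lines of Python, using the same assumed inputs: 15 m/s launch speed, a 60 degree angle, and g = 9.80665.)

```python
import math

v, theta, g = 15.0, math.radians(60), 9.80665

# y(t) = v*sin(theta)*t - g/2 * t^2 = 0  =>  t = 0 or t = 2*v*sin(theta)/g
t_land = 2 * v * math.sin(theta) / g
x_range = v * math.cos(theta) * t_land

print(round(t_land, 4))   # 2.6493
print(round(x_range, 4))  # 19.8698
```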
But at age 14, instead of using Mathematica, you should be solving the
quadratic equation by hand. You're much better off reading an algebra
book and playing with things by hand than using a CAS at this age.
When I was your age, I was solving cubic equations, because I read
"Algebra for the Practical Man", which showed how. I didn't understand
the REASON the solution worked, but understanding comes later. Nor will
I comment on how useful symbolic solutions to cubic equations are to the
"Practical" man...
--Ron Bruck
Due to University fiscal constraints, .sigs may not exceed one line.
MathFiction: Misfit (Robert A. Heinlein)
Contributed by Donald Hackler
This is a classic Heinlein story, focusing on his constant theme of the totally competent and complete individual. The title character, one Andrew Jackson Libby, has reprise appearances in at least two later novels, "Methuselah's Children" and "Time Enough for Love". Libby is portrayed as being one of those rare, but valuable, mathematical geniuses. The difference from the usual portrayal is that he may be a tad focused, but is still perfectly able to function in the world. In other words, he is not the typical one-dimensional mad scientist.
Of course, the details of Libby's thought processes are not explained (how could they be?). The other mathematically interesting thing here is a fairly good exposition of basic celestial mechanics (I
believe they are calling it astrodynamics, these days) and some other engineering mathematics. When you consider that the story was published in 1939, when space travel was widely derided as utter
bunk, this is quite an accomplishment.
I stumbled across this story in an anthology in the mid-60's, as I was just entering high school. I'd always done well in math (and in other subjects), but this story actually whetted my interest in
mathematics, in terms of its being useful for something besides passing a required course. In fact, I now make my living as a mathematician and engineering/scientific programmer. | {"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf365","timestamp":"2014-04-20T14:19:53Z","content_type":null,"content_length":"10375","record_id":"<urn:uuid:1103da4e-6cac-4f34-8101-3ad421d94cff>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from FOCS (part 5)
[Again, edited by Justin Thaler, who organized all this. The following was written by Michael Forbes of MIT.]
Title: Higher Cell Probe Lower Bounds for Evaluating Polynomials
Author: Kasper Green Larsen
Larsen gave a very understandable talk about data structure lower bounds, where he managed to survey the two relevant prior techniques and introduce his new technique. The lower bounds he discusses
are in Yao's cell-probe model. On a basic level, this model is to data structures as communication complexity is to algorithmic complexity. That is, the cell-probe model (and communication
complexity) are really only concerned with the ability of a data structure (or algorithm) to move information around. These models ignore computational factors. This makes their results (as lower
bounds) hold generally (even when we do care about computation), and also allows us to actually derive results (since getting unconditional lower bounds on computation is hard, but lower bounds for
information are possible).
Specifically, the cell-probe model thinks of a data structure as a random-access array of cells, each with a word of some fixed size. Each word can store some number, or some sort of pointer to another
cell. To allow pointers to be meaningful, we allow the word size to be at least log(n), and typically O(log(n)) is the word size considered, as this models real-life usage of numbers. A data
structure then has two parts: (1) a preprocessing step for storing information into the cells, and (2) a method for probing these cells to support queries on the original information. In this model,
we will only charge for (possibly adaptive) probes to cells. Once we have a cell's information, we can perform arbitrary computation on it to determine the next probe.
The cost of any conventional data structure (as designed, say, in the word RAM model) can only decrease in the cell-probe model, as we consider computation as free. Thus, any lower bounds in the
cell-probe model will necessarily apply to any reasonable model for data structures one could consider.
The first technique (by Miltersen-Nisan-Safra-Wigderson) used to prove cell-probe lower bounds was based in communication complexity. As mentioned above, the data structure can be seen as divided:
the cells storing the information, and the probes used to answer queries on this information. Given this division, it seems natural to divide these two parts between two players in a communication
game, albeit one that is fairly asymmetric in the inputs the players have. That is, Alice has the query that we are trying to answer, and Bob has original information. They are tasked with outputting
the answer to Alice's query on Bob's data. Notice that there are two trivial protocols: Alice could send her query to Bob, or Bob could send all of the original information to Alice. These are
asymmetric, as typically Alice's query will have a very small description, while Bob has many bits of information.
A data structure can then be converted into a communication protocol as follows: Bob will alone construct the cells of the data structure, and Alice will send indices of the cells she wishes to
probe. Bob will return the values of those cells, and Alice will generate the next probe, and so on, until the query can be answered. With this transformation, we can now apply the techniques of
communication complexity to get the needed lower bounds.
While the above idea is a good one, it is not good enough for the regime that Larsen is considering. This regime is that of data structures that only have a poly(n) number of possible queries. A good
example of this is where the original information is a set of n points in the two-dimensional grid [n]x[n], where each point has a weight at most n. The data structure seeks to return the sum of the
weights of the points contained in a given geometric rectangle SxT, where S,T are intervals in [n]. There are n^4 possible rectangles and thus n^4 many queries. To compare, encoding the points takes
O(n log(n)) bits. Clearly, in this regime a data structure using n^4 cells can answer queries in O(1) probes. The more interesting question is the time/space trade-off: how many probes are needed
when we allow the data structure to only use as many bits as needed to describe the original information?
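(As an aside not from the talk: the O(1)-probe upper bound for rectangle sums has a textbook flavor. A standard 2D prefix-sum sketch in Python shows how any rectangle query reduces to combining four probes of a precomputed table; for an n x n grid of weights this table has only O(n^2) cells rather than one cell per query.)

```python
def build_prefix(grid):
    # P[i][j] = sum of grid[0..i-1][0..j-1]; one extra row/column of zeros.
    n, m = len(grid), len(grid[0])
    P = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            P[i + 1][j + 1] = grid[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return P

def rect_sum(P, r1, c1, r2, c2):
    """Sum of grid[r1..r2][c1..c2] via four probes (inclusion-exclusion)."""
    return P[r2 + 1][c2 + 1] - P[r1][c2 + 1] - P[r2 + 1][c1] + P[r1][c1]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
P = build_prefix(grid)
print(rect_sum(P, 0, 0, 1, 1))  # 1+2+4+5 = 12
```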
It turns out that in this regime, the above lower bound technique cannot give lower bounds better than a constant number of probes. This is provable, in the sense that we cannot prove good
communication lower bounds because there are good upper bounds: the above trivial protocols are too efficient in this setting.
A second technique (by Patrascu-Thorup), more geared to this regime, uses again the above communication framework, but now gives Alice a harder problem. Alice is given d queries to answer on the
single instance Bob holds. The reason this becomes interesting is because given a data structure for this problem, we can now develop relatively better communication protocols: Alice can run all d
queries in parallel. Doing this naively will not gain much, but Patrascu-Thorup observed that in one round of Alice sending the indices of the cells she wishes to probe, the order of these cells is
irrelevant, as Bob will simply return their contents anyways. Thus, Alice can then send fewer bits by encoding her messages appropriately. This approach turns out to yield lower bounds of the form lg
(n)/lg(lg(n)), and cannot do better for the same reason as with the first technique: the trivial protocols are too efficient.
Finally, the technique Larsen introduces diverges from the above themes. Panigrahy-Talwar-Wieder introduces a cell-sampling technique for proving lower bounds, and Larsen takes this further. So far,
he can only apply the idea to the polynomial evaluation problem. In this problem, we are given a degree n polynomial over a finite field of size n^2. We will support queries to the evaluation of this
polynomial to any of the n^2 points of the field. As before, we can do this in O(1) probes if we have n^2 space, and rather we wish to study the question in the O(n * polylog(n)) space regime. Larsen
shows a lower bound of log(n) probes in this regime, which notably is better than any of the previous techniques could hope to prove.
To prove this lower bound, consider any small probe data structure. If we subsample the cells of the data structure (delete any given cell independently and at random) then because any query can be
answered by few probes, there will be many queries that can still be answered from the sub-sampled cells. As degree n polynomials can be recovered from any n+1 evaluations, if the polynomial
evaluation data structure can still answer n+1 queries with the sub-sampled cells then this means that the sub-sampled cells contain at least as much information as the space of degree n polynomials.
However, if the number of probes is small enough, then we can subsample so few cells that this will reach a contradiction, as the number of sub-sampled cells will be too small to recover an arbitrary
degree n polynomial. Setting the parameters correctly then yields the lower bound.
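(The step that a degree n polynomial is recovered from any n+1 evaluations is classical Lagrange interpolation. Here is a small Python sketch over a prime field; this is standard material for illustration, not code from the paper, and the example modulus and polynomial are invented.)

```python
def lagrange_interpolate(points, p):
    """Recover the coefficient list (lowest degree first) of the unique
    polynomial of degree < len(points) over GF(p) through all (x, y) points."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]   # i-th Lagrange basis polynomial, as coefficients
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # Multiply the basis polynomial by (x - xj), mod p.
            basis = [(a - xj * b) % p for a, b in zip([0] + basis, basis + [0])]
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p   # Fermat inverse; needs p prime
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % p
    return coeffs

p = 101
pts = [(0, 3), (1, 8), (2, 21)]      # evaluations of 3 + x + 4x^2 mod 101
print(lagrange_interpolate(pts, p))  # [3, 1, 4]
```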
The Structural Analysis - mathematics of the future
Hi 21122012
I have a problem with the integration at the start of your post.
I have made a spreadsheet using Excel.
The first row shows values of 'r' from 0 to 1 in steps of 0.1
The second row shows values of f(r) = pi r^2
The third row uses the trapezium rule to calculate (approximately) the area of each section.
The fourth row adds these up to give the area under the graph of f(r)
Using the final value as a correct volume of a cone I calculated the height of the cone.
The fifth row shows the height for each r value.
The final row shows the volume, calculated using the correct formula.
As expected this value agrees with your area under graph calculation for the final column (P). This should happen because I used the column P values to compute 'h'.
But it does not show that your area under graph figures are correct.
So the integration at the start of this post does not give correct volumes.
Why do you think this is ?
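(Here is a Python sketch reproducing that spreadsheet; it is my own illustration, not the actual file. It applies the trapezium rule to f(r) = pi r^2 on [0, 1] in steps of 0.1 and compares the running total with the exact integral pi/3, which is the volume of a cone of base radius 1 and height 1 by the formula (1/3) pi r^2 h.)

```python
import math

# Replicate the spreadsheet: r = 0, 0.1, ..., 1.0 and f(r) = pi * r^2.
rs = [i / 10 for i in range(11)]
f = [math.pi * r * r for r in rs]

# Trapezium rule: the area of each strip, then the running total.
strips = [(f[i] + f[i + 1]) / 2 * 0.1 for i in range(10)]
running = [sum(strips[:i + 1]) for i in range(10)]

# The final total approximates (slightly overestimates, since f is convex)
# the exact integral pi/3 ~ 1.0472.
exact = math.pi / 3
print(round(running[-1], 4), round(exact, 4))  # 1.0524 1.0472
```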
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=247792","timestamp":"2014-04-18T08:19:20Z","content_type":null,"content_length":"64407","record_id":"<urn:uuid:8257f752-a0c7-4af5-85ff-633026c885cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dalworthington Gardens, TX Trigonometry Tutor
Find a Dalworthington Gardens, TX Trigonometry Tutor
...I can help with query strategies and with syntax of most query statements. I am a certified tutor in all math topics covered by the ASVAB. In my career I have used and taught many mathematical
15 Subjects: including trigonometry, chemistry, physics, calculus
...While in college, I spent 3 years tutoring high school students in math, from algebra to AP Calculus. I also tutored elementary students in reading and spent 6 months homeschooling first and
third grade. When I was in high school, I would help my classmates in every subject from English to government to calculus.
40 Subjects: including trigonometry, reading, chemistry, calculus
...I have taught Algebra I and Geometry, but am certified to teach any math in public schools. I am a Texas certified teacher in 4th grade through 12th grade mathematics. Please contact me so that
we can schedule your child's future tutoring sessions.I'm a Texas certified 4-12 mathematics teacher.
10 Subjects: including trigonometry, geometry, statistics, algebra 1
...I have a patient and clear way to explain things that most tutors can't do, even if they know the material. I always try to point out the common mistakes and review the important points. I can
take you through the short-cut way of doing things, make it simple, and most important - you remember it.
15 Subjects: including trigonometry, chemistry, physics, statistics
I have taught Mathematics at the High School level for the previous three years. Classes that I have taught include Algebra 2, Geometry, Precalculus, Calculus and a couple of Engineering courses.
The way that I tutor is by building students confidence in their abilities, starting with basic proble...
13 Subjects: including trigonometry, chemistry, calculus, physics | {"url":"http://www.purplemath.com/Dalworthington_Gardens_TX_trigonometry_tutors.php","timestamp":"2014-04-21T05:10:17Z","content_type":null,"content_length":"24907","record_id":"<urn:uuid:fc740731-fbdf-4a87-bdc7-dd090a548f10>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Institute for Mathematics and its Applications (IMA)
- Advances in Random Matrix Theory
Random matrix theory (RMT) has seen dramatic progress in the last decade. The mathematical study of random matrices was initiated in the late 1940's, with important contributions by Wigner, and
developed by mathematical physicists, most notably Dyson, Mehta, Gaudin, Pastur in the 1960's. As a mathematical discipline, progress was less intensive until the early 1990's, when Tracy and Widom
made an explicit link of RMT with integrable systems. Shortly after, the theory of Riemann Hilbert problem was shown, by Deift, Zhou and co-workers, to shed light on asymptotics and in particular
universality results for classes of random matrices with explicit expressions for their joint distribution of eigenvalues. Arguably the crowning achievement of these developments was the proof, by
Baik, Deift and Johansson, of random matrix limits for the asymptotics of the length of the longest increasing subsequence in a random permutation. In another direction, the introduction of free
probability by Voiculescu shed light on combinatorial and algebraic aspects of RMT.
In the last decade, much effort has gone into expansions of the basic theory in directions where explicit expressions and tools like orthogonal polynomials are not available. The proposed course will
highlight some of these developments. The following topics will be covered, each in about 8 hours of lecture time.
1. Universality in Random Matrix Theory
2. Beta Ensembles
3. Multi-matrix models and expansions
4. Support properties for polynomials in independent matrices
Topic 1 has seen dramatic recent progress through work of Tao, Vu, Erdos, Schlein, Yau and others. Topic 2 gives an extension of classical random matrix models through tri-diagonal realizations
proposed by Dumitriu and Edelman. Topic 3 deals with the combinatorial applications of random (multi)-matrix models and the identification of coefficients in their partition functions, a topic going back to work of physicists like 't Hooft and others but now on firm mathematical ground. The last topic is influenced by work of Haagerup and collaborators, and has a strong complex-analytic flavor.
The speakers are:
• Greg W. Anderson (U. Minnesota) for topic #4.
• Alice Guionnet (ENS - Lyon) for topic #3.
• Balint Virag (U. Toronto) for topic #2.
• Ofer Zeitouni (U. Minnesota/Weizmann Institute) for topic #1.
In addition, shorter presentations (2-3 hours each) on material related to the main topics of the courses will be offered by guest lecturers. In order to keep abreast of current developments, the
list of speakers in those presentations will be decided at a later point. We anticipate 5-6 such speakers.
The course is intended for researchers at all levels who have had prior exposure to RMT and that are interested in learning recent techniques and results in RMT.
A partial bibliography for random matrix theory is available here | {"url":"http://www.ima.umn.edu/2011-2012/ND6.18-29.12/","timestamp":"2014-04-16T16:01:32Z","content_type":null,"content_length":"48372","record_id":"<urn:uuid:940785e8-c6ac-4a05-a3a8-d49146e75f3e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
The ESTEREL programming language: Design, semantics and implementation
Results 1 - 10 of 16
- Proceedings of the Ninth Annual IEEE Symposium on Logic in Computer Science , 1994
"... We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras,
we develop some combinators expressible in this model, and reconcile their operational, logical and denota ..."
Cited by 89 (10 self)
We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras, we
develop some combinators expressible in this model, and reconcile their operational, logical and denotational character. We show how programs may be compiled into finite-state machines with loop-free
computations at each state, thus guaranteeing bounded response time. 1 Introduction and Motivation Reactive systems [12,3,9] are those that react continuously with their environment at a rate
controlled by the environment. Execution in a reactive system proceeds in bursts of activity. In each phase, the environment stimulates the system with an input, obtains a response in bounded time,
and may then be inactive (with respect to the system) for an arbitrary period of time before initiating the next burst. Examples of reactive systems are controllers and signal-processing systems. The
primary issu...
- Journal of Symbolic Computation , 1996
"... Synchronous programming (Berry (1989)) is a powerful approach to programming reactive systems. Following the idea that "processes are relations extended over time" (Abramsky (1993)), we propose
a simple but powerful model for timed, determinate computation, extending the closure-operator model for u ..."
Cited by 62 (11 self)
Synchronous programming (Berry (1989)) is a powerful approach to programming reactive systems. Following the idea that "processes are relations extended over time" (Abramsky (1993)), we propose a
simple but powerful model for timed, determinate computation, extending the closure-operator model for untimed concurrent constraint programming (CCP). In (Saraswat et al. 1994a) we had proposed a
model for this called tcc--- here we extend the model of tcc to express strong time-outs: if an event A does not happen through time t, cause event B to happen at time t. Such constructs arise
naturally in practice (e.g. in modeling transistors) and are supported in synchronous programming languages. The fundamental conceptual difficulty posed by these operations is that they are
nonmonotonic. We provide a compositional semantics to the non-monotonic version of concurrent constraint programming (Default cc) obtained by changing the underlying logic from intuitionistic logic
to Reiter's default logic...
- IN PROCEEDINGS OF IJCAI-2001 , 2001
"... In the future, webs of unmanned air and space vehicles will act together to robustly perform elaborate missions in uncertain environments. We coordinate these systems by introducing a reactive
model-based programming language (RMPL) that combines within a single unified representation the flex ..."
Cited by 48 (20 self)
In the future, webs of unmanned air and space vehicles will act together to robustly perform elaborate missions in uncertain environments. We coordinate these systems by introducing a reactive
model-based programming language (RMPL) that combines within a single unified representation the flexibility of embedded programming and reactive execution languages, and the deliberative reasoning
power of temporal planners. The KIRK planning system takes as input a problem expressed as a RMPL program, and compiles it into a temporal plan network (TPN), similar to those used by temporal
planners, but extended for symbolic constraints and decisions. This intermediate representation clarifies the relation between temporal planning and causal-link planning, and permits a single task
model to be used for planning and execution. Such a
, 1994
"... This paper explores the expressive power of the tcc paradigm. The origin of the work in the integration of synchronous and constraint programming is described. The basic conceptual and mathematical framework developed in the spirit of the model-based approach characteristic of theoretical computer sc ..."
Cited by 34 (4 self)
This paper explores the expressive power of the tcc paradigm. The origin of the work in the integration of synchronous and constraint programming is described. The basic conceptual and mathematical framework developed in the spirit of the model-based approach characteristic of theoretical computer science is reviewed. We show that a range of constructs for expressing timeouts, preemption and other complicated patterns of temporal activity are expressible in the basic model and language-framework. Indeed, we present a single construct on processes, definable in the language, that can simulate the effect of other preemption constructs
- In Symposium on Principles of Programming Languages , 1999
"... Vineet Gupta (vgupta@mail.arc.nasa.gov, Caelum Research Corporation, NASA Ames Research Center), Radha Jagadeesan (radha@cs.luc.edu, Dept. of Math. and Computer Sciences, Loyola University--Lake Shore Campus), Prakash Panangaden (prakash@cs.mcgill.ca, School of Computer Science, McGill University) ..."
Cited by 29 (1 self)
Vineet Gupta (vgupta@mail.arc.nasa.gov), Caelum Research Corporation, NASA Ames Research Center, Moffett Field CA 94035, USA; Radha Jagadeesan (radha@cs.luc.edu), Dept. of Math. and Computer Sciences, Loyola University--Lake Shore Campus, Chicago IL 60626, USA; Prakash Panangaden (prakash@cs.mcgill.ca), School of Computer Science, McGill University, Montreal, Quebec, Canada. Abstract: This paper describes a
stochastic concurrent constraint language for the description and programming of concurrent probabilistic systems. The language can be viewed both as a calculus for describing and reasoning about
stochastic processes and as an executable language for simulating stochastic processes. In this language programs encode probability distributions over (potentially infinite) sets of objects. We
illustrate the subtleties that arise from the interaction of constraints, random choice and recursion. We describe operational semantics of these programs (programs are run by sampling random
choices), deno...
"... A central challenge in computer science and knowledge representation is the integration of conceptual frameworks for continuous and discrete change, as exemplified by the theory of differential
equations and real analysis on the one hand, and the theory of programming languages on the other. We take ..."
Cited by 23 (3 self)
A central challenge in computer science and knowledge representation is the integration of conceptual frameworks for continuous and discrete change, as exemplified by the theory of differential
equations and real analysis on the one hand, and the theory of programming languages on the other. We take the first steps towards such an integrated theory by presenting a recipe for the
construction of continuous programming languages --- languages in which state dynamics can be described by differential equations. The basic idea is to start with an untimed language and extend it
uniformly over dense (real) time. We present a concrete mathematical model and language (the Hybrid concurrent constraint programming model, Hybrid cc) instantiating these ideas. The language is
intended to be used for modeling and programming hybrid systems. The language is declarative --- programs can be understood as formulas that place constraints on the (temporal) evolution of the
system, with parallel compositio...
- Hybrid Systems II, volume 999 of LNCS , 1995
"... . We present a language, Hybrid cc, for modeling hybrid systems compositionally. This language is declarative, with programs being understood as logical formulas that place constraints upon the
temporal evolution of a system. We show the expressiveness of our language by presenting several examples, ..."
Cited by 20 (7 self)
. We present a language, Hybrid cc, for modeling hybrid systems compositionally. This language is declarative, with programs being understood as logical formulas that place constraints upon the
temporal evolution of a system. We show the expressiveness of our language by presenting several examples, including a model for the paperpath of a photocopier. We describe an interpreter for our
language, and provide traces for some of the example programs. 1 Introduction and Motivation The constant marketplace demand of ever greater functionality at ever lower price is forcing the artifacts
our industrial society designs to become ever more complex. Before the advent of silicon, this complexity would have been unmanageable. Now, the economics and power of digital computation make it the
medium of choice for gluing together and controlling complex systems composed of electro-mechanical and computationally realized elements. As a result, the construction of the software to implement,
monitor, c...
- in: Proceedings of IJCAI-01 , 2001
"... Deductive mode-estimation has become an essential component of robotic space systems, like NASA’s deep space probes. Future robots will serve as components of large robotic networks. Monitoring
these networks will require modeling languages and estimators that handle the sophisticated behaviors of r ..."
Cited by 19 (14 self)
Deductive mode-estimation has become an essential component of robotic space systems, like NASA’s deep space probes. Future robots will serve as components of large robotic networks. Monitoring these
networks will require modeling languages and estimators that handle the sophisticated behaviors of robotic components. This paper introduces RMPL, a rich modeling language that combines reactive
programming constructs with probabilistic, constraint-based modeling, and that offers a simple semantics in terms of hidden Markov models (HMMs). To support efficient realtime deduction, we translate
RMPL models into a compact encoding of HMMs called probabilistic hierarchical constraint automata (PHCA). Finally, we use these models to track a system’s most likely states by extending traditional
HMM belief update. 1
, 1995
"... We extend the model of [SJG94b] to express strong timeouts (and pre-emption): if an event A does not happen through time t, cause event B to happen at time t. Such constructs arise naturally in
practice (e.g. in modeling transistors) and are supported in languages such as Esterel (through instanta ..."
Cited by 8 (1 self)
We extend the model of [SJG94b] to express strong timeouts (and pre-emption): if an event A does not happen through time t, cause event B to happen at time t. Such constructs arise naturally in
practice (e.g. in modeling transistors) and are supported in languages such as Esterel (through instantaneous watchdogs) and Lustre (through the "current" operator). The fundamental conceptual
difficulty posed by these operators is that they are non-monotonic. We provide a simple compositional semantics to the non-monotonic version of concurrent constraint programming (CCP) obtained by
changing the underlying logic from intuitionistic logic to Reiter 's default logic [Rei80]. This allows us to use the same construction (uniform extension through time) to develop Default Timed CCP
(Default tcc) as we had used to develop Timed CCP (tcc) from CCP [SJG94b]. Indeed the smooth embedding of CCP processes into Default cc processes lifts to a smooth embedding of tcc processes into
Default t...
- IN HENZINGER ALUR AND SONTAG, EDITORS, HYBRID SYSTEMS WORKSHOP, DIMACS. PROCEEDINGS OF HYBRID SYSTEMS III, LNCS 1066 , 1996
"... Synchronous programming. Discrete event driven systems [HP85, Ber89, Hal93] are systems that react with their environment at a rate controlled by the environment. Such systems can be quite
complex, so for modular development and re-use considerations, a model of a composite system should be built ..."
Cited by 7 (1 self)
Synchronous programming. Discrete event driven systems [HP85, Ber89, Hal93] are systems that react with their environment at a rate controlled by the environment. Such systems can be quite complex,
so for modular development and re-use considerations, a model of a composite system should be built up from models of the components compositionally. From a programming language standpoint, this
modularity concern is addressed by the analysis underlying synchronous languages [BB91, Hal93, BG92, HCP91, GBGM91, Har87, CLM91, SJG95], (adapted to dense discrete domains in [BBG93]): -- Logical
concurrency/parallelism plays a role in determinate reactive system programming analogous to the role of procedural abstraction in sequential programming --- the role of matching program structure to
the structure of the solution to the problem at hand. -- Preemption --- the ability to stop a process in its tracks --- is a fundamental progr...
This question has been answered · 9 replies
Do these sentences mean the same? If they both have the same meaning which of them sounds better?
1. This car is twice as cheaper as that car.
2. This car is half as expensive as that car.
1. Our house is twice as small as their house.
2. Our house is half as big as their house.
Thank you in advance.
Half is used with nouns which are measurable quantities that can be divided equally. Here is an example of half + adjective.
The optimist sees a glass as half full. The pessimist sees the same glass as half empty.
1. I have half as much money as he has ~~him~~.
2. I have twice as little money as him. >> This does not make sense. say instead: I have twice as much money as he has.
Alex: 1. This car is twice as cheaper as that car. >> Incorrect. Use "cheap," not "cheaper."
2. This car is half as expensive as that car. >> OK.
Usually I would say "This car costs twice as much as that one." or "This car is half the price of that one." Cheap has the connotation of being poorly constructed, not just less expensive.
1. Our house is twice as small as their house.
2. Our house is half as big as their house. This one sounds better. But I would say "Our house is half the size of theirs."
Thank you very much for your answer.
In all your examples “half” is used with nouns.
"This car is half the price of that one."
"Our house is half the size of theirs."
Can you give me some good examples where “half” would be used with adjectives?
And what can you say about these sentences?
1. I have half as much money as him.
2. I have twice as little money as him.
I’d be very obliged to you if you could explain me one more thing.
Q #1: Can I use “twice” if I want to say than one thing is smaller (shorter, lower, worse …) by half than the other one? Or I must use only “half”?
1. This house is twice as lower as that house.
2. This road is twice as short as that road.
If these sentences are incorrect could you correct them?
I have half as much money as he has ~~him~~.
Q#2: You corrected “him” and put “he has”. Does it mean than “him” can’t be used in that sentence?
2. I have twice as little money as him. >> This does not make sense. say instead: I have twice as much money as he has.
Probably you wanted to say: “I have half as much money as he has.” ?
Alex: Probably you wanted to say: “I have half as much money as he has.” ?
No, the problem was in the original sentence which was not understandable. "twice" always means doubling in size, an increased quantity.
"Half" always means 50 percent, a decreased amount.
"Twice as little" makes no sense - you have to say "twice as much" or "half as much".
Alex: 1. This house is twice as lower as that house.
2. This road is twice as short as that road.
1) You cannot use the comparative form in the construction "twice as, half as". You can say:
This house is lower than that house.
This house is half the height of that house.
This house is 50 percent lower than that house.
2) "Twice as short" does not make any sense. Twice means "more, larger". Short is not going in an increasing direction, it is going in the opposite direction. Also, we do not measure how short
something is. We do measure how long something is - its length. You cannot use "short" in these constructions.
This road is half as long as that road.
This road is twice as long as that road.
Alex: Q#2: You corrected “him” and put “he has”. Does it mean than “him” can’t be used in that sentence?
"Him" is object case. Under strict grammar rules, there are 2 subjects.
I (subject) have half as much as he (subject) has.
In modern usage, this rule has been relaxing. You will frequently hear the object case (him / her / them) used.
AlpheccaStars, thank you for your answer.
As I understood, structures with “twice” always mean “more”. Can I use “three (four, five..) times” in phrases that mean “less / fewer”?
Example #1:
Road A is 300 km and road B is 100 km, therefore:
Road A is three times longer than road B. = Road A is three times as long as road B.
Can I say: “Road B is three times shorter than road A.” or “Road B is three times as short as road A.“
Example #2 :
Jack has 10 apples and Ann has 2 apples, therefore:
Jack has five times more apples than Ann. = Jack has five times as many apples as Ann.
Can I say: “Ann has five times fewer apples as Jack.” or “Ann has five times as few apples as Jack.”
Results 1 - 10 of 807
- in Arithmetic and Geometry , 1983
Cited by 203 (0 self)
The goal of this paper is to formulate and to begin an exploration of the enumerative geometry of the set of all curves of arbitrary genus g. By this we mean setting up a Chow ring for the moduli space $M_g$ of curves of genus g and its compactification $\overline{M}_g$, defining what seem to be
Cited by 173 (25 self)
We prove a localization formula for the virtual fundamental class in the general context of C∗-equivariant perfect obstruction theories. Let X be an algebraic scheme with a C∗-action and a
C∗-equivariant perfect obstruction theory. The virtual fundamental class [X] vir in
- Bull. Amer. Math. Soc. (N.S
Cited by 109 (3 self)
Abstract. We describe recent work of Klyachko, Totaro, Knutson, and Tao, that characterizes eigenvalues of sums of Hermitian matrices, and decomposition of tensor products of representations of GLn
(C). We explain related applications to invariant factors of products of matrices, intersections in Grassmann varieties, and singular values of sums and products of arbitrary matrices. Contents 1.
Eigenvalues of sums of Hermitian and real symmetric matrices 2. Invariant factors 3. Highest weights 4. Schubert calculus
- J. Differential Geom , 2000
Cited by 105 (5 self)
We briefly review the formal picture in which a Calabi-Yau n-fold is the complex analogue of an oriented real n-manifold, and a Fano with a fixed smooth anticanonical divisor is the analogue of a
manifold with boundary, motivating a holomorphic Casson invariant counting bundles on a Calabi-Yau 3-fold. We develop the deformation theory necessary to obtain the virtual moduli cycles of [LT],
[BF] in moduli spaces of stable sheaves whose higher obstruction groups vanish. This gives, for instance, virtual moduli cycles in Hilbert schemes of curves in P 3, and Donaldson – and Gromov-Witten
– like invariants of Fano 3-folds. It also allows us to define the holomorphic Casson invariant of a Calabi-Yau 3-fold X, prove it is deformation invariant, and compute it explicitly in some
examples. Then we calculate moduli spaces of sheaves on a general K3 fibration X, enabling us to compute the invariant for some ranks and Chern classes, and equate it to Gromov-Witten invariants of
the “Mukai-dual ” 3-fold for others. As an example the invariant is shown to distinguish Gross ’ diffeomorphic 3-folds. Finally the Mukai-dual 3-fold is shown to be Calabi-Yau and its cohomology is
related to that of X. 1
Cited by 90 (18 self)
this paper we extend the results of [3, 4] to the whole variety G0. We will try to make the point that the natural framework for the study of G0 is provided by the decomposition of G into the disjoint union of double Bruhat cells $G^{u,v} = BuB \cap B_- v B_-$; here $B$ and $B_-$ are two opposite Borel subgroups in G, and u and v belong to the Weyl group W of G. We believe these double cells to be a very interesting object of study in its own right. The term "cells" might be misleading: in fact, the topology of $G^{u,v}$ is in general quite nontrivial. (In some special cases, the "real part" of $G^{u,v}$ was studied in [20, 21]. V. Deodhar [9] studied the intersections $BuB \cap B_- vB$ whose properties are very different from those of $G^{u,v}$.) We study a family of birational parametrizations of $G^{u,v}$, one for each reduced expression i of the element (u, v) in the Coxeter group $W \times W$. Every such parametrization can be thought of as a system of local coordinates in $G^{u,v}$; call these coordinates the factorization parameters associated to i. They are obtained by expressing a generic element $x \in G^{u,v}$ as an element of the maximal torus $H = B \cap B_-$ multiplied by the product of elements of various one-parameter subgroups in G associated with simple roots and their negatives; the reduced expression i prescribes the order of factors in this product. The main technical result of this paper (Theorem 1.9) is an explicit formula for these factorization parameters as rational functions on the double Bruhat cell $G^{u,v}$. Theorem 1.9 is formulated in terms of a special family of regular functions $\Delta_{\gamma,\delta}$ on the group G. These functions are suitably normalized matrix coefficients corresponding to pairs of extremal weights $(\gamma, \delta)$ in some fundamental representation of G. Again, we b...
, 2008
Cited by 89 (5 self)
We describe all almost contact metric, almost hermitian and G2-structures admitting a connection with totally skew-symmetric torsion tensor, and prove that there exists at most one such connection.
We investigate its torsion form, its Ricci tensor, the Dirac operator and the ∇-parallel spinors. In particular, we obtain solutions of the type II string equations in dimension n = 5, 6 and 7.
, 1996
Cited by 87 (12 self)
Contents: 0. Introduction (1); 1. Stable maps and their moduli spaces (10); 2. Boundedness and a quotient approach (12); 5. The construction of $\overline{M}_{g,n}(X, \beta)$ (25); 6. The boundary of $\overline{M}_{0,n}(X, \beta)$ (29); 7. Gromov-Witten invariants (31); 8. Quantum cohomology (34); 9. Applications to enumerative geometry (38); 10. Variations (43); References (46).
0. Introduction 0.1. Overview. The aim of these notes is to describe an exciting chapter in the recent development of quantum cohomology. Guided by ideas from physics (see [W]), a remarkable structure on the solutions of certain rational enumerative geometry problems has been found: the solutions are coefficients in the multiplication table of a quantum cohomology ring. Associativity of the ring yields non-trivial relations among the enumerative solutions. In many cases, these relations suffice to solve the enumerative problem. For example, let $N_d$ be the number of degree d rational plane curves passing through $3d - 1$ general points in $P^2$. Since there is a un
- Invent. Math
Cited by 74 (2 self)
1.1. Topological classification of ramified coverings of the sphere. For a compact connected genus g complex curve C let f: C → CP 1 be a meromorphic function. We treat this function as a ramified
covering of the sphere. Two ramified coverings (C1; f1), (C2; f2) are called topologically
- ACTA NUMERICA , 2006
Braingle: 'The Impossible Sequence' Brain Teaser
The Impossible Sequence
Series teasers are where you try to complete the sequence of a series of letters, numbers or objects.
Puzzle ID: #13140
Category: Series
Submitted By: Piffle
What is the next number in the sequence?
deriving the area and circumference of a circle
April 1st 2009, 04:12 PM
Could anyone show me the proof for circle area and circumference? I saw an animated argument that gave me the basic theory on circle area but it went too fast to get a firm understanding.
April 3rd 2009, 04:07 PM
What, precisely, are you seeking to have proved? What formula or relationship?
What theorems, formulas, etc, are you allowed to use in your proof?
Thank you! :D
April 3rd 2009, 06:38 PM
further explanation
I have actually found the argument for the area of the circle. The one where you cut it up into tiny pieces and reassemble it as a rectangle and then use the area of the rectangle $r\pi\times r$.
My question now is how do we come up with the circumference of the circle. I'm not looking for a formal proof necessarily, just a practical explanation.
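One practical route to both formulas, sketched below in Python (not from the thread; the function name and the choice of 12 doublings are arbitrary), is Archimedes' method: inscribe a hexagon in a unit circle and repeatedly double the number of sides. The polygon's perimeter approaches the circumference $2\pi r$, and slicing the polygon into thin triangles gives an area approaching $\pi r^2$, the same cut-and-reassemble idea as the rectangle argument.

```python
import math

def circle_constants(doublings=12):
    """Inscribe a hexagon (side = radius) in a unit circle, then
    repeatedly double the number of sides.  The chord-halving step
    uses s' = s / sqrt(2 + sqrt(4 - s^2)), which is algebraically
    equal to sqrt(2 - sqrt(4 - s^2)) but numerically stable."""
    n, s = 6, 1.0                      # regular hexagon, side length 1
    for _ in range(doublings):
        s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
        n *= 2
    perimeter = n * s                  # -> circumference 2*pi*r  (r = 1)
    apothem = math.sqrt(1.0 - (s / 2.0) ** 2)
    area = n * (s * apothem / 2.0)     # n thin triangles -> pi*r^2
    return perimeter, area

C, A = circle_constants()
print(C, A)   # ~6.283185..., ~3.141592...
```

With 12 doublings the polygon has 24,576 sides and both estimates agree with $2\pi$ and $\pi$ to about seven decimal places.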
April 3rd 2009, 07:15 PM
mr fantastic
Read this: http://www.fiu.edu/~rudomine/pi.pdf
Triangle Inequality
Date: 09/14/97 at 14:05:03
From: Joe
Subject: IMP Math program, word problem
I am totally stuck on this problem, which is a Problem Of The Week in
IMP II.
Suppose that you have a straight pipe cleaner of a given length. The
two ends are points A and C.
Now suppose that the pipe cleaner is bent into 2 portions, with one
portion twice as long as the other. Label the place where it's bent
as B, so that the segment from B to C is twice as long as from A to B.
Then a fourth point, X, is chosen at random somewhere on the longer
section, and the pipe cleaner is bent at that point, too.
Can the 3 segments of the pipe cleaner be made into a triangle by
changing the angles at the two bends? As you might suppose, the
answer depends on the location of point X.
My question to you is: if point X is chosen at random along the
section from B to C, what is the probability that the 3 segments of
the pipe cleaner can be made into a triangle?
Date: 09/14/97 at 18:48:00
From: Doctor Anthony
Subject: Re: IMP Math program, word problem
You use the property of a triangle that any two sides must be greater
than the third side.
A           B       D             C
|<--- 1 --->|<- x ->|<--- 2-x --->|
If you look at the above figure, suppose the extra bend is made at D,
a distance x from the point B, as shown. Then for a triangle to be
possible in this configuration with D closer to B than C we require
1 + x > 2-x
2x > 1 and so x > 1/2
Similarly, if D is closer to C than B the condition for a triangle to
be possible is:
x < 1 + (2-x)
2x < 3 so x < 3/2
Putting these two inequalities together we get
1/2 < x < 3/2
Now x can vary with uniform probability between 0 and 2, and its
permitted range, 1/2 to 3/2, is exactly half this range. So the
probability that a triangle can be formed is 1/2.
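As a sanity check on this answer, here is a small Monte Carlo simulation (a Python sketch, not part of the original exchange; the trial count and seed are arbitrary): drop the bend point X uniformly on the length-2 section, form the three pieces 1, x, and 2 - x, and test the triangle inequality.

```python
import random

def can_form_triangle(a, b, c):
    # Three lengths form a triangle iff each one is shorter
    # than the sum of the other two.
    return a + b > c and b + c > a and a + c > b

def estimate_probability(trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.uniform(0.0, 2.0)          # bend point X on segment BC
        if can_form_triangle(1.0, x, 2.0 - x):
            hits += 1
    return hits / trials

print(estimate_probability())   # close to 1/2
```

With 200,000 trials the estimate lands within a few tenths of a percent of 1/2, matching the exact answer above.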
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Exact solutions have an important role in the study of nonlinear wave equations, particularly for understanding blow-up or dispersive behaviour, attractors, and critical dynamics, as well as for
testing numerical solution methods. This talk will illustrate the application of a novel symmetry-group method to obtain explicit solutions to semilinear Schrodinger equations in
Semilinear Schrodinger equations provide a model of many interesting types of nonlinear waves, e.g. laser beams in nonlinear optics, oscillations of plasma waves, and free surface water waves.
Many explicit new solutions are obtained which have interesting analytical behavior connected with blow-up and dispersion in such models. These solutions include new similarity solutions and
other new group-invariant solutions, as well as new solutions that are not invariant under any symmetries of the Schrodinger equation. In contrast, standard symmetry reduction methods lead to
nonlinear ODEs for which few if any explicit solutions can be derived by familiar integration methods.
We consider singular limits of the three-dimensional Ginzburg-Landau functional for a superconductor with thin-film geometry, in a constant external magnetic field. The superconducting domain has
characteristic thickness on the scale $\epsilon>0$, and we consider the simultaneous limit as the thickness $\epsilon\to 0$ and the Ginzburg-Landau parameter $\kappa\to\infty$. We assume that the
applied field is strong (on the order of $\epsilon^{-1}$ in magnitude) in its components tangential to the film domain, and of order $\log\kappa$ in its dependence on $\kappa$. We prove that the
Ginzburg-Landau energy $\Gamma$-converges to an energy associated with a two-obstacle problem, posed on the planar domain which supports the thin film. The same limit is obtained regardless of
the relationship between $\epsilon$ and $\kappa$ in the limit. Two illustrative examples are presented, each of which demonstrating how the curvature of the film can induce the presence of both
(positively oriented) vortices and (negatively oriented) antivortices coexisting in a global minimizer of the energy. This is joint work with Stan Alama and Bernardo Galvão-Sousa.
In this talk, we will discuss global and blowup solutions of the quasilinear parabolic equation $u_t = \alpha(x,u, \nabla u) \Delta u + f(x, u, \nabla u)$ with homogeneous Dirichlet boundary
conditions. We will give sufficient conditions such that the solutions either exist globally or blow up in a finite time for any smooth initial values. In special cases, a necessary and
sufficient condition for global existence is given.
We prove some estimates for convex ancient solutions (the existence time for the solution starts from $-\infty$) to the generalized curve shortening flow (convex curve evolving in its normal
direction with speed equal to a power of its curvature, the power is assumed to be bigger than $\frac{1}{2}$). As an application, we show that, if the convex compact ancient solution sweeps the
whole space $\textbf{R}^{2}$, it must be a shrinking circle. By exploiting the affine invariance of the affine curve shortening flow (when the power equals $\frac{1}{3}$), we are also able to
show that the only convex compact ancient solution must be a shrinking ellipse.
We present the construction of local minimizers to the Ginzburg-Landau functional of superconductivity in the presence of an external magnetic field. We investigate the existence of stable states
where the number of vortices $N$ is far from optimal (as dictated by the energy formulation), is prescribed, and blows up as the parameter epsilon, the inverse of the Ginzburg-Landau parameter kappa,
tends to zero. We treat the case of $N$ as large as log $\varepsilon,$ and a wide range of intensity of external magnetic field. This is joint work with Sylvia Serfaty.
We review some recent results about local Hardy spaces and discuss possible applications to nonlinear PDE.
I will study dynamics of interfaces in solutions of the equation $\varepsilon \Box u + \frac 1 \varepsilon f_\varepsilon(u)=0$, for $f_\varepsilon$ of the form $f_\varepsilon(u) = (u^2-1)(2u- \
varepsilon\kappa)$, for $\kappa\in \mathbb{R}$, as well as more general, but qualitatively similar, nonlinearities. I will show that for suitable initial data, solutions exhibit interfaces that
sweep out timelike hypersurfaces of mean curvature proportional to $\kappa$.
This is a joint work with Robert Jerrard (University of Toronto).
The talk is concerned with regularity of weak solutions to second order infinitely degenerate elliptic equations. One of the ways to describe the regularity is in terms of the operator being
subelliptic or hypoelliptic. A criterion of subellipticity for linear operators has been given by Fefferman and Phong in terms of subunit metric balls associated to the operator. In particular,
it follows that an infinitely degenerate operator cannot be subelliptic. Hypoellipticity is a weaker property, and for a certain class of such operators has been recently shown by Rios, Sawyer
and Wheeden under the a priori assumption that weak solutions are continuous. We use the subunit metric approach to show continuity of weak solutions to a certain class of degenerate quasilinear
equations. This together with the result by Rios et al completes the proof of hypoellipticity of a class of infinitely degenerate quasilinear operators.
We will discuss the regular set of weak solutions to strongly coupled elliptic systems. We do not assume that the solutions are bounded, only BMO, and the ellipticity constants can be unbounded.
In this talk, we consider Nicholson's blowflies equations. When the ratio of birth rate and death rate satisfies $p/d>e$, the equation loses its monotonicity, and the traveling waves are
non-monotone and oscillating when the delay time $r$ is big. The stability of such waves has been open and challenging, as far as we know. In this talk, we prove that, when $e<\frac{p}{d} < e^2$,
for any delay time $r>0$, the traveling waves $\phi(x+ct)$ with $c>c_*>0$ ($c_*>0$ is the minimum wave speed) are asymptotically stable, when the initial perturbation is small enough; while, when
$p/d \ge e^2$, we prove that these oscillating traveling waves are stable only for a small delay time $r\ll 1$, and unstable for $r\gg 1$. All these theoretical results are also confirmed
numerically by some computing simulations.
This is a joint work with C.-K. Lin, C.-T. Lin and Y.-P. Lin.
Motivated by mappings of finite distortion, we consider degenerate p-Laplacian equations whose ellipticity condition is satisfied by the distortion tensor and the inner distortion function of
such a mapping. Assuming certain Muckenhoupt-type conditions on the weight involved in the ellipticity condition, we describe the set of continuity of solutions.
In this talk we will consider the subelliptic PDE $$-\Delta_Hu=|u|^\frac{4}{Q-2}u$$ on the Heisenberg group $\mathbf{H}^n$, where $\Delta_H $ is the sublaplacian operator on $\mathbf{H}^n$ and $Q
=2n+2$ is its homogeneous dimension. The differential operator is linear, second order, degenerate elliptic, and it is hypoelliptic being the sum of squares of (smooth) vector fields satisfying
the Hormander condition.
The critical growth $\frac{Q+2}{Q-2}$ in the nonlinearity of the equation is related to a loss of compactness of the continuous embeddings of suitable anisotropic Sobolev-type spaces in standard
$L^p$ spaces (on bounded domains of $\mathbf{H}^n$), which occurs at the critical exponent $p^*=\frac{2Q}{Q-2}$.
Such an equation is related for instance to problems of differential geometry on Cauchy--Riemann manifolds, such as the CR-Yamabe problem of prescribing the Tanaka-Webster scalar curvature on $\
mathbf{H}^n$ under a conformal change of the contact structure that identifies its CR structure.
We will show existence of infinitely many geometrically distinct sign changing solutions of the equation on $\mathbf{H}^n$ using variational techniques, exploiting the abundant symmetries of the
equation and some notions of differential geometry in order to circumvent the lack of compactness of the energy functional, that arises at the considered critical growth.
This is a joint work with P. Mastrolia (Università degli Studi di Milano)
The elliptic Monge-Ampère equation is a fully nonlinear Partial Differential Equation which originated in geometric surface theory, and has been applied in dynamic meteorology, elasticity,
geometric optics, image processing and image registration. Solutions can be singular, in which case standard numerical approaches fail.
We build a finite difference solver for the Monge-Ampère equation, which converges to the unique viscosity solution of the equation. Regularity results are used to select a priori between a
stable, provably convergent monotone discretization and an accurate finite difference discretization. The resulting nonlinear equations are then solved by Newton's method.
Computational results in two and three dimensions validate the claims of accuracy and solution speed. A computational example is presented which demonstrates the necessity of the use of the
monotone scheme near singularities.
I will discuss joint work with Young-Heon Kim on an optimal transport problem with several marginals on a Riemannian manifold, with cost function given by the average distance squared from
multiple points to their barycenter. Under a standard regularity condition on the first marginal, we prove that the optimal measure is unique and concentrated on the graph of a function over the
first variable, thus inducing a Monge solution. This result generalizes McCann's polar factorization theorem on manifolds from two to several marginals, in the same sense that a well known result
of Gangbo and Swiech generalizes Brenier's polar factorization theorem on $\mathbb{R}^n$.
In this work, we consider the KdV equation in the exponentially weighted spaces of Pego and Weinstein. We prove local well-posedness of the perturbation (weighted and unweighted) in the Bourgain
$X^{1,b}$ space, allowing us to recreate the Pego-Weinstein result via iteration. By combining this result with the $I$-method, we expect ultimately to obtain soliton stability for KdV with
initial data too rough to be in $H^1$.
Recent progress on the regularity of weak solutions to a class of degenerate quasi-linear second order equations with rough coefficients will be discussed. An equation in the class considered has
the form $\text{Div}\left(A(x,u,\nabla u)\right) = B(x,u,\nabla u)$, where the functions $A$ and $B$ are assumed to satisfy specific structural conditions
related to those described by J. Serrin (1964) and N. Trudinger (1967). The main focus of the talk will be issues associated to the development of a Harnack inequality for weak solutions. There
are several equations of interest included in the class studied. The degenerate p-Laplacian is one such example.
We discuss preliminary work on two weight inequalities for Calderon-Zygmund singular integrals in Euclidean space, with emphasis on the energy condition. This is joint work with Chun-Yen Shen and
Ignacio Uriarte-Tuero.
In this talk we consider the Cauchy problem of the nonlinear Schrodinger equation with combined power type nonlinearities. By introducing a family of potential wells we obtain some sharp
conditions for global existence and finite time blow up of solutions. Within the framework of this work, we do not require negative initial energy, which resolves some open problems in the known literature.
Placentia Math Tutor
Find a Placentia Math Tutor
...I have grown up in a Christian home and, since the age of nine, have had a stepfather who pastors a local Bible-believing church. I have served on the worship team and in church leadership since I
was twelve. I have had a passion for photography since my first exposure to it my junior year in high...
26 Subjects: including algebra 1, algebra 2, prealgebra, SAT math
...English, literature, writing, spelling, grammar, proofreading, reading, and vocabulary have always been my strong points. English and Literature were always my easiest subjects, even when I
started taking AP classes. I have passed all algebra tests by WyzAnt and I have previous experience in tutoring elementary children in math.
26 Subjects: including geometry, precalculus, English, linear algebra
With over 20 years as an Engineer in the aerospace industry and dual Master's degrees in Applied Mathematics and Mechanical Engineering, I have a knowledge base that is hard to beat. I'm
passionate about knowledge and will go above and beyond to help you understand not only the material, ...
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...I passed CSET math sections. I have five years' combined experience with all different levels of students. I recently obtained a teaching assistant position (fieldwork experience) at a high
school in Orange County.
15 Subjects: including calculus, chemistry, Korean, CBEST
...The school also used me to tutor high school students with special needs until they could pass the CA High School Exit Exam. I tutor all subjects from pre-school through middle school (8th
grade). I can also coach the high school exit exam. I have experience with all forms of learning challeng...
21 Subjects: including prealgebra, English, reading, elementary (k-6th)
What is the domain and range of f(x)?
`f(x) = (2x+3)/(x-1)`
The domain is the set of values of x (the input values) allowing the function to work properly. In this case `f(x) = (2x+3)/(x-1)`
x cannot be 1 `x!=1` ( as this would render the denominator as zero which is undefined.)
`therefore x:{x in RR; x!=1}`
The range is the set of y values (output) that the function can produce. In this case there is one restriction: solving `y = (2x+3)/(x-1)` for x gives `x = (y+3)/(y-2)`, so a preimage exists for every y except `y = 2` (which would make this denominator zero).
`therefore y:{y in RR; y!=2}`
Domain `{x in RR, x!=1}`
Range `{y in RR, y!=2}`
Dividing the Continent
Long Division
However tricky the divide problem may prove to be, a correct algorithm surely does exist, since nature somehow solves the problem. If all else fails, one can emulate the natural algorithm. The idea
is to let raindrops fall on each grid point, and then follow the runoff as it drains toward lower elevations. The most thorough version of this algorithm pursues every downhill path. That is, if a
point has three lower neighbors, then the algorithm follows droplets that roll along each of the three downhill links. The flow stops when the droplet reaches a pit and has nowhere more to go.
Tracing such paths for all points on the continent should identify the great divide. The divide is just the set of points from which droplets can reach both the Atlantic and the Pacific basins. (This
is, after all, the definition of the divide.)
The rainfall algorithm works, at least for small test cases, but it is fabulously inefficient. An average point on the model landscape has two downhill neighbors, which means the number of paths to
be explored doubles at every step. If a typical path is just 20 steps long, the algorithm will have to map a million paths for each point.
This exponential explosion of pathways should not be necessary. Real water droplets don’t explore all possible routes to the sea; for the most part, they stick to the path of steepest descent. An
algorithm can do the same, which makes the computational burden much lighter. But there are other problems. The divide has to be defined somewhat differently—as a path threading between grid points
rather than a connected series of points. And there is the lake-bottom problem: A path of steepest descent seldom descends continuously from the divide all the way to the ocean. It’s not all downhill
from here.
Somewhere in North Dakota or Minnesota—near another divide that separates Hudson Bay from the Gulf of Mexico—I finally began to settle on an idea that might be called the global-warming algorithm. It
works like this: Given North America in a terrarium, start raising the sea level, and keep the floods coming until the Atlantic and the Pacific just touch. At this moment you have identified one
point—namely the lowest point—on the great divide. Now continue adding water, but as the sea level rises further don’t allow the two oceans to mix. (This would be a difficult trick in a physical
model, and it’s none too easy even in a computer simulation.) Note the succession of points on the land where east meets west, and mark them down as elements of the divide. When the last such point
is submerged, you have succeeded in dividing the continent.
Describing this process in terms of water filling a basin tends to conceal some of the nitty-gritty computational details. Real water is very good at flooding; it just knows how to do it and never
makes a mistake. Simulated water, on the other hand, must meticulously plot its every move. To raise the level one foot, you have to check every point adjacent to the current waterline and decide
which points will be newly submerged. Then you have to look at the neighbors of these selected points, and at their neighbors, and so on. There’s the potential for another exponential explosion here,
although with realistic landscapes it doesn’t seem to happen.
When I finally got a chance to write a program for this process, I found that the algorithm is exquisitely sensitive to the order of operations. Consider the situation just as the Pacific is about to
reach the lowest point on the divide. If the Atlantic has not been raised in synchrony, then the Pacific waters will pour over the saddle point and flood part of the eastern basin, shifting the
divide to an incorrect position.
multivariate hypergeometric distribution
Hi, I'm working on a problem where $\bf{X}=${$x_1,x_2,...,x_N$} has a multivariate hypergeometric distribution $p(\bf{X}=x)=\frac{\binom{N}{x_1}\binom{N}{x_2}\ldots\binom{N}{x_N}}{\binom{n}{T}}$.
I'm interested in papers or books that deal with the probability of certain types of vectors, the order statistics of the k-th element, and so on...
st.statistics pr.probability
You are going to have to be more specific about what you want... «a certain type of vectors» is not going to ring a lot of bells! – Mariano Suárez-Alvarez♦ Sep 27 '10 at 18:46
I am not sure what exactly I want but I only want to see what can I do to deal with these kind of problem. For example if I want the probability (or an upper bound of it) of choosing a vectors
with the first 5 elements greater than 10. – nahum Sep 28 '10 at 10:32
San Clemente Math Tutor
...I have spent several clinical hours in a variety of settings working with children with aspergers. I hold a masters in communication science disorders. I have spent several clinical hours in a
variety of settings working with children with autism.
35 Subjects: including algebra 2, precalculus, ACT Math, SAT math
I have a doctorate in Nuclear Physics and many years of engineering experience designing electronics for industry. For the last two years, I have been teaching at the Community College Level as an
Adjunct Professor. I have volunteer tutored math at a local high school and tutored privately.
8 Subjects: including calculus, algebra 1, algebra 2, geometry
...I emphasize that my students practice the concept with my guidance until it is understood. I also work on organization and test taking skills. I have tutored algebra/algebra 2 for 20 years.
18 Subjects: including prealgebra, trigonometry, physics, precalculus
...Focusing on Shaping the WILL Without shaking the SPIRIT 2. Identifying, recognizing and understanding the dominant learning style of the child. These learning styles include Tactile, Visual
(Spatial), Aural (Auditory/Musical), Verbal (Linguistic), Physical (Kinesthetic), Logical (Mathe...
18 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...Math helps your brain to think fast and accurate, and I also believe that students with poor math skills are more likely to get discourage in the academic life. My goal is to pass my knowledge
in a easy, non stressful way. I will use all the path possibles to make the student learn the topic and became proficient on it.
7 Subjects: including trigonometry, differential equations, linear algebra, algebra 1
Finding the volume of parallelopiped
December 27th 2011, 09:08 AM
Finding the volume of parallelopiped
I am having problems finding the Volume of a parallelepiped (prism) the known elements are its base that is a parallelogram which values are 8 and 2 and the angle of the parallelogram is 60
degrees and the big Diagonal is 10..
I am sorry for my bad english because i dont know the proper names of the shapes
December 27th 2011, 12:06 PM
Re: Finding the volume of parallelopiped
I am having problems finding the Volume of a parallelepiped (prism) the known elements are its base that is a parallelogram which values are 8 and 2 and the angle of the parallelogram is 60
degrees and the big Diagonal is 10..
I am sorry for my bad english because i dont know the proper names of the shapes
something is not quite correct with your problem statement ... a parallelogram with adjacent side lengths of 2 and 8 must have diagonals (both) of length less than 10.
note the triangle inequality ... the sum of any two sides of a triangle must be greater than the length of the third side.
December 27th 2011, 12:13 PM
Re: Finding the volume of parallelopiped
the diagonal which value is 10 is from bottom to the top (from one point of the first base to the opposite point on the second base)
December 27th 2011, 01:10 PM
Re: Finding the volume of parallelopiped
December 27th 2011, 06:33 PM
Re: Finding the volume of parallelopiped
The volume of a parallelepiped is the area of one base × the height (the perpendicular distance between the base and the opposite face). There is sufficient info to calculate it in this problem (the length of the long diagonal is not required, whatever it may be).
December 27th 2011, 06:51 PM
Re: Finding the volume of parallelopiped
unfortunately, this measurement is not mentioned in the OP ...
the known elements are its base that is a parallelogram which values are 8 and 2 and the angle of the parallelogram is 60 degrees and the big Diagonal is 10
December 27th 2011, 10:27 PM
Re: Finding the volume of parallelopiped
I am having problems finding the Volume of a parallelepiped (prism) the known elements are its base that is a parallelogram which values are 8 and 2 and the angle of the parallelogram is 60
degrees and the big Diagonal is 10..
I am sorry for my bad english because i dont know the proper names of the shapes
The bad news first: This question can't be solved. There is at least one value missing.
I've attached a sketch showing 3 different parallelepipeds which all satisfy the given conditions.
December 28th 2011, 03:07 AM
Re: Finding the volume of parallelopiped
I found out today that this is really unsolvable, after asking some professors and contacting the person who wrote the book containing this problem. So thanks to all of you who tried to help me!
Google Treasure Hunt 4
June 17, 2008
Google is promoting a competition in which a number of problems are proposed whose solutions call for computational resources (really, the problems practically require you to use a computer!). The problems are unique for each candidate, so it is impossible (maybe…) to “work in a group” in order to solve them. This competition is called the “Google Treasure Hunt 2008” [http://treasurehunt.appspot.com/]. The problem set presented (actually, there are only 4 problems) is very similar to what we usually see at programming contests such as the International Collegiate Programming Contest (ICPC), promoted by the Association for Computing Machinery (ACM) [www.acm.org]. I just solved the last Google problem (the fourth), and I’m detailing here how I implemented my solution.
The proposed problem was the following:
“Find the smallest number that can be expressed as
the sum of 11 consecutive prime numbers,
the sum of 25 consecutive prime numbers,
the sum of 513 consecutive prime numbers,
the sum of 1001 consecutive prime numbers,
the sum of 1129 consecutive prime numbers,
and is itself a prime number.
For example, 41 is the smallest prime number that can be expressed as
the sum of 3 consecutive primes (11 + 13 + 17 = 41) and
the sum of 6 consecutive primes (2 + 3 + 5 + 7 + 11 + 13 = 41).”
In order to solve this problem, I decided to use the C++ programming language, mainly because of the great data structures the STL offers (sets, vectors, set operations, etc.). First of all, knowing in advance how many prime numbers to use is the tricky part, since the problem doesn't specify a reasonable upper bound for the primes sequence. Rather than generating a large table of primes myself, I obtained a ready-made list of the primes up to 1,000,000 [http://members.aol.com/MICRO46/primes_1.zip], a total of exactly 78498 primes. The reason I used a precomputed table was that my own generation method was not very fast… (later I optimized that procedure to take at worst 0.5 seconds, and wrote its result to a file). With the primes list in hand, I ran the procedure below:
1) Calculate the sums of all the prime subsequences for each of the following interval sizes (11, 25, 513, 1001, 1129) – for size 11, for example, this yields a sequence of 78488 window sums, and for size 1129 it yields 77370;
2) Intersect, two by two, the resulting sets of sums (the sums of all subintervals of size 11, for example, form one such set) – this keeps only the sums common to all the different interval sizes;
3) Sort the resulting set and show the first value (the smallest value).
Now, the method I used for the steps 1 and 2:
void sum_subintervals(const vector<long>& primes, unsigned long max,
                      set<long>* pivot)
{
    unsigned long soma = 0, j = 0, old_sum = 0;
    set<long> sums;
    for (j = 0; j < max; j++)
    {
        soma += primes.at(j);
    } // for
    sums.insert(soma); // sum of the first window
    while ( j < primes.size() )
    {
        old_sum = soma;
        soma = old_sum - primes.at(j - max) + primes.at(j);
        sums.insert(soma);
        j++;
    } // while
    if (pivot->size() != 0)
    {
        // keep only the sums common to this interval size and the previous ones
        set<long> s;
        set_intersection(sums.begin(), sums.end(), pivot->begin(),
                         pivot->end(), inserter(s, s.end()) );
        pivot->clear();
        copy(s.begin(), s.end(), inserter(*pivot, pivot->begin()));
    }
    else
    {
        // first interval size: just record all of its sums
        copy( sums.begin(), sums.end(), inserter(*pivot, pivot->end()) );
    }
}
The code above is not pretty, but I can explain: what it does is take n-sized intervals (where n can be 11, 25, 513, etc.) and sum all the elements in each interval. A very elementary solution would be to sum all the values of every subinterval directly, and to do this for each subinterval size. Computationally, however, that trivial solution repeats unnecessary sum operations (most of the elements would be added two or more times). I started the optimization by removing those redundant additions. For example, the first subinterval of size 11 is the following:
{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}
The next subinterval would be like this:
{3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}
What is the difference between them? The second one gains the element 37 and loses the element 2. Generalizing this rule, each subinterval is obtained from the previous one by appending the next prime and dropping the previous window's first element. Applying this iteratively, the per-window work becomes constant, and the whole computation reduces to a single linear pass whose cost depends only on the size of the primes table.
Just to finalize, the simplest chunk of code to this solution shows the algorithm above running over all the constant fields of this problem:
set<long> all_sums;
int nums[5] = { 11, 25, 513, 1001, 1129 };
int i;
for (i = 0; i < 5; i++)
{
    sum_subintervals(primes, nums[i], &all_sums);
}
if (all_sums.size() > 0)
{
    // the set is kept ordered, so its first element is the smallest common
    // sum (the problem also requires the answer to be prime, so the
    // primality of the result still has to be verified)
    cout << "Solution: " << *(all_sums.begin()) << endl;
}
I do not need to sort the result, because the “set” template class from the Standard Template Library keeps its elements in sorted order. When everything is OK, a successful submission to the Google Treasure Hunt site shows this:
"Correct answer: XXXXXXX
Your answer was: Correct
Your answer was correct! Congratulations, you're part-way there.
There will more questions over the coming weeks, and the first person to answer them all correctly will win a prize, so keep trying!"
Good luck!
LDMOS devices modeling
Under construction...
This page contains some notes about LDMOS devices modeling; while the concepts discussed are general, the main focus is on obtaining models for low/medium power amplifiers for the HF/VHF amateur
radio bands.
First, small-signal model extraction is discussed and a method is presented for obtaining an estimate of the main parameters. The values obtained can then be used as a starting point for an optimization procedure, in order to obtain a model that fits the measured input data even better. Of course, to obtain a better model, a measure of model quality has to be defined first; as discussed in [1], several different choices are possible.
Some practical examples of commonly used RF LDMOS are also presented on their own pages (see menu on the left): these model were optimized using the S-parameters Error Vector Magnitude (EVM) as error
measure [1].
Later on also the extraction of a large-signal model, based on [5] and [6], will be presented.
Small-signal model
The following figure shows a typical MOS equivalent circuit
The Z-parameters of this equivalent circuit can be determined after some tedious calculations; the general idea is first to compute the Y-parameters of the intrinsic part and then add the contributions of the extrinsic resistors and capacitors [2]
There are a number of different methods to determine the parameter values of the small-signal model circuit ("parameter extraction"); one of the most general is described in [2]. The Z-parameters above
are rewritten as ratios of polynomials whose coefficients can be determined by fitting these rational functions to the measured Z-parameters;
For the circuit above, still assuming
Large-signal model
Under construction...
[1] B. Grossman, E. Fledell and E. Acar, "Adventures in Measurement: A Signal Integrity Perspective to the Correlation Between Simulation and Measurement," IEEE Microwave Magazine, vol. 14, no. 3,
pp. 94-102, May 2013.
[2] L. Vestling, "Design and Modeling of High-Frequency LDMOS Transistors," Ph.D. Thesis, ISBN 91-554-5210-8, Uppsala University, Feb.2002.
[3] G. Dambrine, A. Capy, F. Heliodore and E. Playez "A new method for determining the FET small-signal equivalent circuit," IEEE Trans. Microwave Theory and Techniques, vol. 36, no. 7, pp.
1151-1159, 1988.
[4] S. Lee, H.K. Yu, C.S. Kim, J.G. Koo and K.S. Nam "A novel approach to extracting small-signal model parameters of silicon MOSFET's," IEEE Microwave and Guided Wave Letters, vol. 7, no. 3, pp.
75-77, 1997.
[5] C. Fager, J. C. Pedro, N. B. de Carvalho and H. Zirath "Prediction of IMD in LDMOS transistor amplifiers using a new large-signal model," IEEE Trans. Microwave Theory and Techniques, vol. 50, no.
12, pp. 2834-2842, Dec. 2002.
[6] S. Lai, C. Fager, D. Kuylenstierna and I. Angelov "LDMOS Modeling," IEEE Microwave Magazine, vol. 14, no. 1, pp. 108-116, Jan.-Feb. 2013.
[FOM] Re: Addition with Primality: Decidability and Heuristics
Timothy Y. Chow tchow at alum.mit.edu
Thu Aug 5 14:40:39 EDT 2004
Dmytro Taranovsky <dmytro at mit.edu> wrote:
> Question: Is the first order theory of natural numbers under addition
> and primality predicate decidable?
Of course, if the answer is yes, then proving it is hopeless, so I assume
you suspect the answer is no?
> I conjecture that for every (a_1, a_2, ..., a_n), there are infinitely
> many n+1-tuples of primes in the form (m, m+a_1, m+a_2, ..., m+a_n) iff
> for every prime p<n+2, there is (an integer) q such that no member of
> {q, q+a_1, ..., q+a_n} is divisible by p.
I believe this is a special case of Schinzel's hypothesis.
> However, the function n --> the least positive integer m such that there
> are no primes strictly between m and m+n should have a superpolynomial
> rate of growth.
This is closely related to the claim that sequence A000230 in Sloane's
encyclopedia has superpolynomial growth.
ID Number: A000230 (Formerly M2685)
URL: http://www.research.att.com/projects/OEIS?Anum=A000230
Sequence: 2,3,7,23,89,139,199,113,1831,523,887,1129,1669,2477,2971,
Name: Smallest prime p such that there is a gap of 2n between p and next prime.
The difference, I think, is just that you want to replace "gap of 2n" here
by "gap of at least n."
Word problem for dealing with systems of linear inequalities
October 29th 2008, 04:31 PM #1
Oct 2008
Word problem for dealing with systems of linear inequalities
Please help me by checking to see if I set this word problem up right. . .
Problem: You can work at most 20 hours next week. You need to earn at least $92 to cover your weekly expenses. Your dog-walking job pays $7.50 per hour and your job as a car wash attendant pays
$6 per hour. Write a system of linear inequalities to model the situation.
I got 20 > x + y and 92 < 7.5X + 6y
Is this correct?
Please help me by checking to see if I set this word problem up right. . .
Problem: You can work at most 20 hours next week. You need to earn at least $92 to cover your weekly expenses. Your dog-walking job pays $7.50 per hour and your job as a car wash attendant pays
$6 per hour. Write a system of linear inequalities to model the situation.
I got 20 > x + y and 92 < 7.5X + 6y
Is this correct?
October 29th 2008, 09:04 PM #2
Senior Member
Apr 2008
Southfield, MI Algebra Tutor
Find a Southfield, MI Algebra Tutor
...I have taught online mathematics classes as well. My focus is on building the confidence of my students. It is then they can grow and develop to reach their maximum academically.
22 Subjects: including algebra 1, algebra 2, chemistry, calculus
...I especially enjoy helping those students who have always found mathematics to be difficult. I start off tutoring, with asking the student what do they want to achieve through tutoring. I will
give a pre-test, which gives me information on the student's strengths and weaknesses.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I generally work with reading music, counting, and posture; however work in areas that students request as well. I have recently graduated with my bachelor's in Electrical Engineering from
Oakland University. I currently work for Panasonic (several months), but previously worked for Faurecia Interior Systems for over 2 years.
50 Subjects: including algebra 1, algebra 2, chemistry, reading
...Doing practice ASVAB questions will only get you so far. What I offer is a basic assessment to gauge the level of understanding of each of the subjects so the proper starting point is realized
so future learning is sustainable and long lasting. The ACT Math section is nothing more than an IQ test in Math up to and including the Algebra levels of understanding.
16 Subjects: including algebra 2, algebra 1, Spanish, chemistry
...As a high school teacher in Detroit, I infect my students with my contagious enthusiasm for learning, critical thinking, and Math on a daily basis. Here are some direct quotes from one of my
past students: "Ms. Huang is always excited about teaching.
5 Subjects: including algebra 1, algebra 2, geometry, prealgebra
How much profit?
March 15th 2010, 02:53 AM #1
Jan 2009
How much profit?
A girl bought a flower for $6, sold the flower for $7, purchased it back for $8 and sold it again for $9. How much profit did she make? Explain your answer carefully.
I figured out she made a $0 profit?
I am looking for an explanation for this, one thats better than my clumsy one
Hi studentme,
you can look at this in a couple of ways.
She bought the flower twice for a total of $(6+8)=$14
She sold it twice for a total of $(7+9)=$16
So her profit is $2
Or...... She buys it for $6, sells it for $7, making a profit of $1.
Then....She buys it again for $8 and sells it for $9, making another $1 profit.
Hello, studentme!
A girl bought a flower for $6, sold the flower for $7,
purchased it back for $8 and sold it again for $9.
How much profit did she make? Explain your answer carefully.
I figured out she made a $0 profit. . How?
. . $\underbrace{\text{Bought it for \$6, sold it for \$7,}}_{\text{\$1 profit}}\; \underbrace{\text{bought it for \$8, sold it for \$9}}_{\text{\$1 profit}}$
March 15th 2010, 03:36 AM #2
MHF Contributor
Dec 2009
March 15th 2010, 08:22 AM #3
Super Member
May 2006
Lexington, MA (USA)
An Efficient Parallel Algorithm for Matrix-Vector Multiplication
B. A. Hendrickson, R. W. Leland, S. J. Plimpton, Int J of High Speed Computing, 7, 73-88 (1995).
The multiplication of a vector by a matrix is the kernel operation in many algorithms used in scientific computation. A fast and efficient parallel algorithm for this calculation is therefore
desirable. This paper describes a parallel matrix-vector multiplication algorithm which is particularly well suited to dense matrices or matrices with an irregular sparsity pattern. Such matrices can
arise from discretizing partial differential equations on irregular grids or from problems exhibiting nearly random connectivity between data structures. The communication cost of the algorithm is
independent of the matrix sparsity pattern and is shown to scale as O(N/sqrt(P) + log(P)) for an N x N matrix on P processors. The algorithm's performance is demonstrated by using it within the well
known NAS conjugate gradient benchmark. This resulted in the fastest run times achieved to date on both the 1024 node nCUBE 2 and the 128 node Intel iPSC/860. Additional improvements to the algorithm
which are possible when integrating it with the conjugate gradient algorithm are also discussed.
DIY Electric Car Forums - View Single Post - 88 Fiero EV Build
Re: 88 Fiero EV Build
Thank you very much for your encouragement and support. So at 144V and my range, how would I calculate my AH for LiPo batteries?
Of course, I'd like the answer, but if someone could help me with the formula then I can make adjustments to my calcs later on.
For LiPo batteries, is the capacity-to-distance relationship a straight line?
50AH = 10mi
100AH = 20mi
Image selective smoothing and edge detection by nonlinear diffusion. II.
(English) Zbl 0766.65117
[For part I, see ibid. 29, No. 1, 182–193 (1992; Zbl 0746.65091).]
The authors study a class of nonlinear parabolic integro-differential equations for image processing. The diffusion term is modelled in such a way, that the dependent variable diffuses in the
direction orthogonal to its gradient but not in all directions. Thereby the dependent variable can be made smooth near an “edge”, with a minimal smoothing of the edge.
A stable algorithm is then proposed for image restoration. It is based on the “mean curvature motion” equation. Application of the solution is persuasively demonstrated for several cases.
65R10 Integral transforms (numerical methods)
45K05 Integro-partial differential equations
65R20 Integral equations (numerical methods)
49Q20 Variational problems in a geometric measure-theoretic setting
35K55 Nonlinear parabolic equations
35R10 Partial functional-differential equations
49J45 Optimal control problems involving semicontinuity and convergence; relaxation
49L25 Viscosity solutions (infinite-dimensional problems)
65M12 Stability and convergence of numerical methods (IVP of PDE)
94A08 Image processing (compression, reconstruction, etc.)
Learning of Timed Systems
Abstract (Summary)
Regular inference is a research direction in machine learning. The goal of regular inference is to construct a representation of a regular language in the form of a deterministic finite automaton (DFA) based on a set of positive and negative examples. DFAs take strings of symbols (words) as input, and produce a binary classification as output, indicating whether the word belongs to the language or not. There are two types of learning algorithms for DFAs: passive and active learning algorithms. In passive learning, the set of positive and negative examples is given and not chosen by the inference algorithm. In contrast, in active learning, the learning algorithm chooses the examples from which a model is constructed.

Active learning was introduced in 1987 by Dana Angluin. She presented the L* algorithm for learning DFAs by asking membership and equivalence queries to a teacher who knows the regular language accepted by the DFA to be learned. A membership query checks whether a word belongs to the language or not. An equivalence query checks whether a hypothesized model is equivalent to the DFA to be learned. The L* algorithm has been found to be useful in different areas, including black box checking, compositional verification and integration testing. There are also other algorithms similar to L* for regular inference. However, the learning of timed systems has not been studied before. This thesis presents algorithms for learning timed systems in an active learning framework.

As a model of timed systems we choose event-recording automata (ERAs), a determinizable subclass of the widely used timed automata. The advantage of ERAs in comparison with timed automata is that the set of clocks of an ERA, and the points at which clocks are reset, are known a priori. The contribution of this thesis is four algorithms for learning deterministic event-recording automata (DERAs). Two algorithms learn a subclass of DERAs, called event-deterministic ERAs (EDERAs), and two algorithms learn general DERAs.

The problem with DERAs is that they do not have a canonical form. Therefore we focus on a subclass of DERAs that has a canonical representation, EDERAs, and apply the L* algorithm to learn EDERAs. The L* algorithm in the timed setting requires a procedure that learns the clock guards of DERAs. This approach constructs EDERAs which are exponentially bigger than the automaton to be learned. Another procedure can be used to learn smaller EDERAs, but it requires solving an NP-hard problem.

We also use the L* algorithm to learn general DERAs. One drawback of this approach is that inferred DERAs have the form of a region graph, and there is a blow-up in the number of transitions. Therefore we introduce an algorithm for learning DERAs which uses a new data structure for organising the results of queries, called a timed decision tree, and avoids region graph construction. In theory this algorithm can construct a bigger DERA than the L* algorithm, but in the average case we expect better performance.
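The untimed L* loop described in the abstract can be illustrated in miniature. The following is a toy sketch, not the thesis's algorithm: the target language (strings over {a, b} containing "ab"), the brute-force equivalence oracle, and all identifiers are illustrative stand-ins, and nothing timed (clocks, guards, DERAs) is modelled.

```python
from itertools import product

ALPHABET = "ab"

def member(w):
    """Membership oracle for an illustrative target language:
    strings over {a, b} containing the factor 'ab'."""
    return "ab" in w

def equivalence(accepts, max_len=8):
    """Toy equivalence oracle: brute-force search for a short
    counterexample (a real teacher would answer exactly)."""
    for n in range(max_len + 1):
        for w in ("".join(t) for t in product(ALPHABET, repeat=n)):
            if accepts(w) != member(w):
                return w
    return None

def lstar():
    S, E = [""], [""]                   # prefixes (states) and suffixes (tests)
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        while True:                     # make the observation table closed and consistent
            rows = {row(s) for s in S}
            new_s = next((s + x for s in S for x in ALPHABET
                          if row(s + x) not in rows), None)
            if new_s is not None:       # not closed: promote the missing row
                S.append(new_s)
                continue
            new_e = next((x + e for s1 in S for s2 in S for x in ALPHABET
                          for e in E if row(s1) == row(s2)
                          and member(s1 + x + e) != member(s2 + x + e)), None)
            if new_e is not None:       # not consistent: add a separating suffix
                E.append(new_e)
                continue
            break
        rep = {}                        # one representative prefix per distinct row
        for s in S:
            rep.setdefault(row(s), s)
        def accepts(w):
            state = row("")
            for ch in w:
                state = row(rep[state] + ch)
            return state[0]             # E[0] == "" holds the accepting bit
        cex = equivalence(accepts)
        if cex is None:
            return accepts, len(rep)
        for i in range(1, len(cex) + 1):   # add the counterexample's prefixes
            if cex[:i] not in S:
                S.append(cex[:i])

accepts, n_states = lstar()
print(n_states)  # the minimal DFA for "contains ab" has 3 states
```

On this target the loop exercises all three query-driven refinements: a closedness failure, a counterexample from the equivalence oracle, and a consistency failure that adds the suffix "b".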
Bibliographical Information:
School:Uppsala universitet
School Location:Sweden
Source Type:Doctoral Dissertation
Keywords:Datavetenskap; learning regular languages; timed systems; event-recording automata; Datavetenskap
Date of Publication:01/01/2008
Agner Krarup Erlang (1878 - 1929)
Issue 2
May 1997
A.K. Erlang was the first person to study the problem of telephone networks. By studying a village telephone exchange he worked out a formula, now known as Erlang's formula, to calculate the fraction
of callers attempting to call someone outside the village that must wait because all of the lines are in use. Although Erlang's model is a simple one, the mathematics underlying today's complex
telephone networks is still based on his work.
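The loss formula referred to here is known today as the Erlang B formula. The sketch below is a modern illustration, not Erlang's own notation: it evaluates the blocking probability with the standard numerically stable recursion rather than the factorial closed form.

```python
def erlang_b(offered_load, lines):
    """Blocking probability B(E, m) for E erlangs offered to m circuits,
    via the standard recursion
        B(E, 0) = 1,  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)),
    which avoids the overflow-prone factorials in the closed form
        B(E, m) = (E^m / m!) / sum_{k=0..m} E^k / k!."""
    b = 1.0
    for m in range(1, lines + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# e.g. 2 erlangs offered to 2 lines: B = (2^2/2!) / (1 + 2 + 2) = 0.4
print(erlang_b(2.0, 2))
```

For example, 2 erlangs of offered traffic on 2 circuits gives a blocking probability of 0.4, matching the closed form; adding circuits drives the blocking probability down.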
He was born at Lønborg, in Jutland, Denmark. His father, Hans Nielsen Erlang, was the village schoolmaster and parish clerk. His mother was Magdalene Krarup from an ecclesiastical family and had a
well known Danish mathematician, Thomas Fincke, amongst her ancestors. He had a brother, Frederik, who was two years older and two younger sisters, Marie and Ingeborg. Agner spent his early school
days with them at his father's schoolhouse. Evenings were often spent reading a book with Frederik, who would read it in the conventional way and Agner would sit on the opposite side and read it
upside down. At this time one of his favourite subjects was astronomy and he liked to write poems on astronomical subjects. When he had finished his elementary education at the school he was given
further private tuition and succeeded in passing the Præliminæreksamen (an examination held at the University of Copenhagen) with distinction. He was then only 14 years old and had to be given
special entrance permission.
Agner returned home where he remained for two years, teaching at his father's school and continuing with his studies. He also learnt French and Latin during this period. By the time he
was 16 his father wanted him to go to university but money was scarce. A distant family relation provided free accommodation for him while he prepared for his university entrance examinations at the
Frederiksborg Grammar School. He won a scholarship to the University of Copenhagen and completed his studies there in 1901 as an MA with mathematics as the main subject and astronomy, physics and
chemistry as secondary subjects.
Over the next 7 years he taught in various schools. Even though his natural inclination was toward scientific research, he proved to have excellent teaching qualities. He was not highly sociable, he
preferred to be an observer, and had a concise style of speech. His friends nicknamed him "The Private Person". He used his summer holidays to travel abroad to France, Sweden, Germany and Great
Britain, visiting art galleries and libraries. While teaching, he kept up his studies in mathematics and natural sciences. He was a member of the Danish Mathematicians' Association through which he
made contact with other mathematicians including members of the Copenhagen Telephone Company. He went to work for this company in 1908 as scientific collaborator and later as head of its laboratory.
Erlang at once started to work on applying the theory of probabilities to problems of telephone traffic and in 1909 published his first work on it "The Theory of Probabilities and Telephone
Conversations"^[1] proving that telephone calls distributed at random follow Poisson's law of distribution. At the beginning he had no laboratory staff to help him, so he had to carry out all the
measurements of stray currents. He was often to be seen in the streets of Copenhagen, accompanied by a workman carrying a ladder, which was used to climb down into manholes. Further publications
followed, the most important work was published in 1917 "Solution of some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges"^[2]. This paper contained formulae
for loss and waiting time, which are now well known in the theory of telephone traffic. A comprehensive survey of his works is given in "The life and works of A.K. Erlang"^[3].
Because of the growing interest in his work several of his papers were translated into English, French and German. He wrote up his work in a very brief style, sometimes omitting the proofs, which
made the work difficult for non-specialists in this field to understand. It is known that a researcher from the Bell Telephone Laboratories in the USA learnt Danish in order to be able to read
Erlang's papers in the original language.
His work on the theory of telephone traffic won him international recognition. His formula for the probability of loss was accepted by the British Post Office as the basis for calculating circuit
facilities. He was an associate of the British Institution of Electrical Engineers.
Erlang devoted all his time and energy to his work and studies. He never married and often worked late into the night. He collected a large library of books mainly on mathematics, astronomy and
physics, but he was also interested in history, philosophy and poetry. Friends found him to be a good and generous source of information on many topics. He was known to be a charitable man, needy
people often came to him at the laboratory for help, which he would usually give them in an unobtrusive way. Erlang worked for the Copenhagen Telephone Company for almost 20 years, and never having
had time off for illness, went into hospital for an abdominal operation in January 1929. He died some days later on Sunday, 3rd February 1929.
Interest in his work continued after his death and by 1944 "Erlang" was used in Scandinavian countries to denote the unit of telephone traffic. International recognition followed at the end of World
War II^[4].
[1] "The Theory of Probabilities and Telephone Conversations", Nyt Tidsskrift for Matematik B, vol 20, 1909.
[2] "Solution of some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges", Elektroteknikeren, vol 13, 1917.
[3] "The life and works of A.K. Erlang", E. Brockmeyer, H.L. Halstrom and Arns Jensen, Copenhagen: The Copenhagen Telephone Company, 1948.
[4] Proceedings of the C.C.I.F. ("Le comité consultatif international des communications téléphoniques à grande distance"), Montreux, 1946.
find the first term of the arithmetic sequence having the data given. an = 48, Sn = 546, n = 26
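A worked check, not part of the original thread: from Sn = n(a1 + an)/2, solving for the first term gives a1 = 2Sn/n - an.

```python
def first_term(a_n, s_n, n):
    # S_n = n*(a_1 + a_n)/2  =>  a_1 = 2*S_n/n - a_n
    return 2 * s_n / n - a_n

print(first_term(48, 546, 26))  # 2*546/26 - 48 = 42 - 48 = -6
```

As a sanity check, the common difference is then d = (48 - (-6))/25 = 2.16, and summing the 26 terms from -6 in steps of 2.16 indeed gives 546.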
Complex Analysis Question
I'm having trouble with this showing question. Any help would be much appreciated. thanks!
Setting $z= x + i\cdot y$ and $f(z)= u(x,y) + i\cdot v(x,y)$, $f(*)$ is analytic in a region $A$ of the complex plane if and only if in that region is... $\frac{\partial u}{\partial x}=\frac{\partial
v}{\partial y}$ $\frac{\partial u}{\partial y}=- \frac{\partial v}{\partial x}$ (1) But $f(*)$ is a real function in $A$, so $v(z)=0$, and therefore by (1) $\frac{\partial u}{\partial
x} = \frac{\partial u}{\partial y}=0$... Kind regards $\chi$$\sigma$
how did you arrive at f(*) being a real function in A so that v(z) = 0? thanks for the reply!
We had understood the mathematical scheme, and we also had understood that certainly we need the discrete energy levels, and the quantum jumps, and so on. But we could not even explain how such a
thing as an orbit of an electron in a cloud chamber comes about, because they would see the orbit, but still we had no notion of the orbit in our mathematical scheme.
And at that time I remembered a long discussion which I had with Einstein about a year [before] - it was my first meeting with Einstein - I had given a talk on quantum mechanics in the Berlin
colloquium. And Einstein had taken me to his room, and he first asked me about this idea which I had said in my lecture, that one should only use observable quantities in the mathematical scheme. And
he said, he understood the ideas of Mach, Mach's philosophy, but whether I really believed in it, he couldn't see. Well, I told him that I had understood that he has produced his theory of relativity
just on this philosophical basis, as everybody knew. Well, he said, that may be so, but still it's nonsense. And that of course was quite surprising to me.
Then he explained that what can be observed is really determined by the theory. He said, you cannot first know what can be observed, but you must first know a theory, or produce a theory, and then
you can define what can be observed....
And could it not be the other way around? Namely, could it not be true that nature only allows for such situations which can be described with a mathematical scheme? Up to that moment, we had asked
the opposite question. We had asked, given the situations in nature like the orbit in a cloud chamber, how can it be described with a mathematical scheme? But that wouldn't work, because by using
such a word like "orbit", we of course assumed already that the electron had a position and had a velocity. But by turning it around, one could at once see that now it's possible, if I say nature
only allows such situations as can be described with a mathematical scheme, then you can say, well, this orbit is really not a complete orbit. Actually, at every moment the electron has only an
inaccurate position and an inaccurate velocity, and between these two inaccuracies there is this uncertainty relation. And only by this idea it was possible to say what such an orbit was.
From "The Development of the Uncertainty Principle", an audiotape produced by Spring Green Multimedia in the UniConcept Scientist Tapes series, © 1974.
Kneading theory
From Scholarpedia
Kneading theory is a tool, developed by Milnor and Thurston, for studying the topological dynamics of piecewise monotone self-maps of an interval.
Associated to such an interval map \(f\) is its kneading matrix \(N(t)\ ,\) whose entries are elements of \({\mathbb Z}[[t]]\ ,\) the ring of formal power series with integer coefficients. This
matrix contains information about important combinatorial and topological invariants of \(f\ .\)
Milnor and Thurston's work was eventually published in Milnor and Thurston 1988, although the majority of their article had been widely circulated in preprint form since 1977. The use of symbolic
dynamics in the study of interval maps, which is the starting point of their work, was developed earlier (see for example Parry 1966, Metropolis Stein and Stein 1973).
Quite often notation different from Milnor and Thurston's is used, see Alternative notation.
The unimodal case
The theory is most easily understood in the special case of unimodal maps, which is also the most common area of application. In this section \(f:[0,1]\to[0,1]\) is a fixed continuous map (so the
dependence of objects on \(f\) will not be explicitly noted), with the properties that
• there is some \(c\in(0,1)\) such that \(f\) is strictly increasing on \([0,c]\) and strictly decreasing on \([c,1]\ ,\) and
• \(f(0)=f(1)=0\ .\)
A rich source of examples is provided by the logistic family \(f_\mu(x)=\mu x(1-x)\ ,\) where \(0<\mu\le 4\ .\)
The cutting invariant and the lap invariant: topological entropy
Let \(\Gamma\) be the set of preimages of \(c\ ,\) i.e. \[\Gamma = \{x\in[0,1]\,:\,f^i(x)=c \mbox{ for some }i\ge 0\}.\] \(\Gamma\) can be written as the disjoint union of the sets \(\Gamma_i\) (\(i\ge 0\)), where elements \(x\) of \(\Gamma_i\) satisfy \(f^i(x)=c\) but \(f^j(x)\not=c\) for \(j<i\ .\) Let \(\gamma_i\) denote the cardinality of \(\Gamma_i\) (so \(\gamma_i\le 2^i\) for all \(i\)).
The cutting invariant of \(f\) is the formal power series \[\gamma(t) = \sum_{i=0}^\infty \gamma_i t^i \, \in{\mathbb Z}[[t]].\]
Constructing formal power series from sequences of integers in this way will be a common process: where appropriate, these formal power series will be regarded as complex power series without further
comment. A closely related construction is of the lap invariant \(\ell(t)\ :\) let \(\ell_i\) denote the number of monotone pieces (or laps) of \(f^i\) for \(i\ge 1\ ,\) and write \[\ell(t) = \sum_{i
=0}^\infty \ell_{i+1} t^i\, \in{\mathbb Z}[[t]].\] Since \(\ell_i=1+\sum_{j=0}^{i-1}\gamma_j\) it follows that \[\tag{1} \ell(t)=\frac{1}{1-t}\left(1+\gamma(t)\right).\]
Let \(s=\limsup_{i\to\infty}\ell_i^{1/i}\in[1,2]\ ,\) the reciprocal of the radius of convergence of \(\ell(t)\ ,\) and hence also of \(\gamma(t)\ .\) Misiurewicz and Szlenk 1977 show that the
topological entropy \(h(f)\) of \(f\) is given by \(h(f)=\log s\ .\) This quantity \(s\) is called the growth number of \(f\ .\) It can readily be shown that the sequence \((\ell_i^{1/i})\)
converges, so that in fact \(s=\lim_{i\to\infty}\ell_i^{1/i}\ .\)
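These definitions can be checked numerically. The sketch below (an illustration, not from the article; all names are ad hoc) counts \(\gamma_i\) for the logistic map by iterating preimages of \(c=1/2\ ,\) builds the lap numbers via \(\ell_i = 1+\sum_{j<i}\gamma_j\ ,\) and estimates the growth number from the ratio \(\ell_{i+1}/\ell_i\ .\)

```python
import math

def preimages(y, mu):
    """Real preimages of y under f(x) = mu*x*(1-x) lying in [0,1]."""
    disc = 1 - 4 * y / mu
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [(1 - r) / 2, (1 + r) / 2]

def lap_numbers(mu, depth):
    """ell_1 .. ell_depth for the logistic map, via gamma_i = #Gamma_i."""
    level, gammas = [0.5], [1]          # Gamma_0 = {c}
    for _ in range(depth - 1):
        level = [x for y in level for x in preimages(y, mu)]
        gammas.append(len(level))
    ells, total = [], 0
    for g in gammas:
        total += g
        ells.append(1 + total)          # ell_i = 1 + gamma_0 + ... + gamma_{i-1}
    return ells

ells = lap_numbers(3.84, 18)
growth = ells[-1] / ells[-2]            # ratio estimate of the growth number s
print(ells[:7], round(growth, 4))       # growth is close to (1+sqrt 5)/2 = 1.618...
```

The ratio \(\ell_{i+1}/\ell_i\) converges to \(s\) much faster than \(\ell_i^{1/i}\) does, which is why it is used here.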
The kneading determinant
Let \(x\in[0,1]\setminus\Gamma\) (that is, \(x\) is not a preimage of \(c\)). Define the kneading coordinate of \(x\) to be the sequence \(\theta(x)\in\{+1,-1\}^{\mathbb N}\) by \[\theta_0(x)= \begin
{cases} +1, & \mbox{if }x<c\\ -1, & \mbox{if }x>c \end{cases} \qquad \mbox{and} \qquad \theta_i(x)=\theta_{i-1}(x)\theta_0(f^i(x)) \mbox{ for }i\ge1. \] (More succinctly, \(\theta_i(x)=+1\) or \(-1\)
according as \(f^{i+1}\) is locally increasing or locally decreasing at \(x\ .\))
Construct a corresponding power series \(\theta(x,t)=\sum_{i=0}^\infty \theta_i(x) t^i\in{\mathbb Z}[[t]]\ .\)
Any unimodal map \(f\) as defined above has \[\tag{2} \theta(0,t)=\frac{1}{1-t}\quad\mbox{ and }\quad\theta(1,t)=\frac{-1}{1-t}\ .\]
• For \(x=0\ ,\) \(\theta_0(f^i(x))=\theta_0(0)=+1\) for all \(i\ge 0\ ,\) so that \(\theta_i(0)=+1\) for all \(i\ge 0\ ;\)
• for \(x=1\ ,\) \(\theta_0(x)=-1\) and \(\theta_0(f^i(x))=\theta_0(0)=+1\) for all \(i>0\ ,\) so that \(\theta_i(x)=-1\) for all \(i\ge 0\ .\)
Now for any \(x\in[0,1]\ ,\) define elements \(\theta(x^+)\) and \(\theta(x^-)\) of \(\{+1,-1\}^{\mathbb N}\) by \[\theta(x^+)=\lim_{y\searrow x}\theta(y)\qquad\mbox{and}\qquad\theta(x^-)=\lim_{y\nearrow x}\theta(y),\] where the limits are taken through elements \(y\) of \([0,1]\setminus\Gamma\ .\) (These limits can be taken with respect to the product topology on \(\{+1,-1\}^{\mathbb N}\ ,\)
but the situation is very simple. For each \(i\ ,\) let \(z>x\) be the smallest element of \((x,1] \cap \bigcup_{j\le i}\Gamma_j\) (or \(z=1\) if this set is empty): then \(\theta_i(y)\) is constant
for \(y\in(x,z)\ ,\) and \(\theta_i(x^+)\) is this constant value.)
Construct corresponding power series \(\theta(x^+,t)\) and \(\theta(x^-,t)\) by \(\theta(x^\pm,t)=\sum_{i=0}^\infty \theta_i(x^\pm)t^i\in{\mathbb Z}[[t]]\ .\)
The kneading determinant \(D(t)\) of \(f\) (also called the kneading invariant in the unimodal context) is then defined by \[D(t)=\theta(c^-,t)\in{\mathbb Z}[[t]].\] It expresses the discontinuity of
the kneading coordinate across \(x=c\ .\)
Example Let \(f(x)=3.84\,x(1-x)\) (which has a stable period 3 orbit). The orbit of \(c=1/2\) satisfies \(f^{3i+1}(c)>c\ ,\) \(f^{3i+2}(c)<c\) and \(f^{3i+3}(c)<c\) for all \(i\ge 0\ .\) Hence
(abbreviating \(+1\) and \(-1\) to \(+\) and \(-\)) \[ \begin{array}{rcl} \left(\theta_0(f^i(c^-))\right)_{i\ge 0} &=& (+, -, +, +, -, +, +, -, +, +, -, +, +, \ldots),\quad\mbox{and so}\\ \theta(c^-)
&=& (+, -, -, -, +, +, +, -, -, -, +, +, +, \ldots), \quad\mbox{and}\\ D(t) &=& 1-t-t^2-t^3+t^4+t^5+t^6-t^7-t^8-t^9 + \cdots \,\, = (1-t-t^2)/(1+t^3). \end{array} \]
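The sign sequence in this example is easy to reproduce numerically (an illustrative sketch, not part of the article): iterate the critical orbit and accumulate signs according to \(\theta_i = \theta_{i-1}\theta_0(f^i(c^-))\ ,\) using \(\theta_0(c^-)=+1\) and, for \(i\ge 1\ ,\) the side of \(c\) on which \(f^i(c)\) falls.

```python
def kneading_signs(mu, n):
    """First n coefficients theta_i(c^-) of the kneading determinant D(t)
    for the logistic map f(x) = mu*x*(1-x), with c = 1/2."""
    c = 0.5
    f = lambda x: mu * x * (1 - x)
    signs, s, y = [1], 1, f(c)          # theta_0(c^-) = +1; y = f^1(c)
    for _ in range(1, n):
        s *= 1 if y < c else -1         # multiply by theta_0(f^i(c^-))
        signs.append(s)
        y = f(y)
    return signs

print(kneading_signs(3.84, 12))  # [1, -1, -1, -1, 1, 1, 1, -1, -1, -1, 1, 1]
```

The period-6 pattern of signs reflects the attracting period-3 orbit, and matches the coefficients of \(D(t)=(1-t-t^2)/(1+t^3)\) computed above.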
Order \(\{+1,-1\}^{\mathbb N}\) lexicographically with \(+1 \prec -1\ :\) that is, if \(\theta\not=\phi\in\{+1,-1\}^{\mathbb N}\ ,\) then \(\theta \prec \phi\) if and only if \(\theta_n = +1\ ,\)
where \(n\) is least such that \(\theta_n\not= \phi_n\ .\) Then it can easily be shown that \[ x<y \,\,\implies\,\, \theta(x^-) \preceq \theta(y^-). \] That is, the function \(x\mapsto \theta(x^-)\)
(and similarly the function \(x\mapsto\theta(x^+)\)) is increasing.
A non-trivial interval \([a,b]\) on whose interior these functions are constant (that is, whose interior is disjoint from \(\Gamma\)) is called a homterval. \(f\) can only have a homterval \([a,b]\)
if either every point of \([a,b]\) is in the basin of a periodic orbit, or if \([a,b]\) is a wandering interval: the latter possibility cannot occur if \(f\) is \(C^2\) and has a non-flat turning point
(de Melo and van Strien 1993, see also Guckenheimer 1979, Misiurewicz 1981).
The relationship between the kneading determinant and the cutting invariant
The fundamental relationship between the kneading determinant \(D(t)\) and the cutting invariant \(\gamma(t)\) is \[\tag{3} D(t)\gamma(t)=\frac{1}{1-t}. \]
To see why this holds, observe first that \(D(t)=\theta(c^-,t)=-\theta(c^+,t)\) (since "\(f(c^+)=f(c^-)\)"). That is, the discontinuity in \(\theta(x,t)\) across \(x=c\) is \(-2D(t)\ .\) It follows
that the discontinuity across an element \(x\) of \(\Gamma_i\) is \(\theta(x^+,t)-\theta(x^-,t)=-2t^i D(t)\ .\)
Fix \(n\ge 0\ .\) Then \(\theta_n(x^-)\) is constant on the open intervals whose endpoints are the points of \(\bigcup_{i=0}^n\Gamma_i\ .\) Hence \[\theta_n(1)-\theta_n(0)=\sum_{i=0}^n\sum_{x\in\Gamma_i}\left(\theta_n(x^+)-\theta_n(x^-)\right).\] Now the left hand side of this equation is \(-2\) (cf. (2)), while \(\theta_n(x^+)-\theta_n(x^-)=-2\theta_{n-i}(c^-)\) for \(x\in\Gamma_i\ .\) Dividing by \(-2\) gives \[1=\sum_{i=0}^n\sum_{x\in\Gamma_i}\theta_{n-i}(c^-) = \sum_{i=0}^n\gamma_i\theta_{n-i}(c^-).\] However \(\sum_{i=0}^n\gamma_i\theta_{n-i}(c^-)\) is the coefficient of \(t^n\) in \(D(t)\gamma(t)\ ,\)
which establishes the result.
Example Let \(f(x)=3.84\,x(1-x)\ ,\) so that \(D(t)=(1-t-t^2)/(1+t^3)\) as above. It follows by (3) that \(\gamma(t)=(1+t^3)/(1-2t+t^3)\ ,\) and (1) then gives \[ \ell(t) = \frac{2}{1-t}\left(\frac
{1-t+t^3}{1-2t+t^3}\right) = 2+4t+8t^2+16t^3+30t^4+54t^5+94t^6+\cdots. \] Hence, for example, \(f^6\) has 54 laps.
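The series arithmetic in this example can be automated with formal power-series division (a sketch, not from the article): compute \(\gamma(t)=(1+t^3)/(1-2t+t^3)\) from (3), then \(\ell_i = 1+\sum_{j<i}\gamma_j\) from (1).

```python
def series_div(num, den, n):
    """First n coefficients of num(t)/den(t) as formal power series;
    inputs are coefficient lists, and den[0] is assumed to be 1."""
    num = num + [0] * (n - len(num))
    den = den + [0] * (n - len(den))
    out = []
    for k in range(n):
        out.append(num[k] - sum(den[j] * out[k - j] for j in range(1, k + 1)))
    return out

# gamma(t) = (1 + t^3) / (1 - 2t + t^3) for mu = 3.84
gamma = series_div([1, 0, 0, 1], [1, -2, 0, 1], 8)
ells = [1 + sum(gamma[:i]) for i in range(1, 8)]   # ell_i = 1 + sum_{j<i} gamma_j
print(ells)  # [2, 4, 8, 16, 30, 54, 94]
```

The last coefficient reproduces the count of 94 laps for \(f^7\) (and 54 for \(f^6\)) quoted above.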
Topological entropy
Theorem \(\quad\) Let \(f\) be a unimodal map. Then its topological entropy \(h(f)\) is positive if and only if \(D(t)\) has a zero in \(|t|<1\ .\) In this case, \(h(f)=\log\frac{1}{r}\ ,\) where \(r
\) is the smallest zero of \(D(t)\) in \([0,1)\ ,\) and \(D(t)\) has no zeros in \(|t|<r\ .\)
This result follows immediately from (3). By Misiurewicz and Szlenk 1977, \(h(f)=\log \frac{1}{r}\ ,\) where \(\gamma(t)\) has radius of convergence \(r\ .\) Clearly \(D(t)\) has radius of
convergence 1. Hence \(D(t)\) and \(\gamma(t)\) are analytic in \(|t|<r\ ,\) so \(D(t)\) can have no zeros in \(|t|<r\ .\) If \(\frac{1}{r}>1\ ,\) then \(\gamma(t)\) has a pole at \(t=r\ ,\) since
all of its coefficients are positive. Letting \(t\to r\) from below in \(D(t)\gamma(t)=1/(1-t)\) (which is valid as an identity for complex power series in \(|t|<r\)) gives \(D(r)=0\ .\)
Example Let \(f(x)=3.84\,x(1-x)\ ,\) so that \(D(t)=(1-t-t^2)/(1+t^3)\) as above. Hence \(D(t)\) has a unique zero in \([0,1)\ ,\) which is \(r=(\sqrt{5}-1)/2\ ,\) and it follows that \(h(f)=\log(1/r)=\log
((\sqrt{5}+1)/2)\ .\)
In cases such as the above where the coefficients of \(D(t)\) form an eventually periodic sequence, the topological entropy can be calculated by standard Markov partition techniques, but the theorem
has wider applicability. It can also be used as a practical means of estimating the topological entropy of unimodal maps: for example, Figure 2 shows a graph of growth number against parameter \(\mu\)
in the logistic family, calculated using this method.
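The practical method just mentioned can be sketched as follows (an illustration, not the authors' code): truncate \(D(t)\) using numerically computed kneading coefficients, locate its smallest zero in \((0,1)\) by scanning for a sign change and bisecting, and take \(h=\log(1/r)\ .\)

```python
import math

def kneading_coeffs(mu, n):
    """First n coefficients of D(t) = theta(c^-, t) for f(x) = mu*x*(1-x)."""
    c, f = 0.5, (lambda x: mu * x * (1 - x))
    coeffs, s, y = [1], 1, f(c)
    for _ in range(1, n):
        s *= 1 if y < c else -1
        coeffs.append(s)
        y = f(y)
    return coeffs

def entropy(mu, n=60):
    """log of the growth number: h = log(1/r), with r the smallest zero of
    the truncated kneading determinant in (0,1); 0.0 if no zero is found."""
    a = kneading_coeffs(mu, n)
    D = lambda t: sum(ai * t ** i for i, ai in enumerate(a))
    t, step = 0.0, 1e-3
    while t + step < 1 and D(t + step) > 0:    # scan for the first sign change
        t += step
    if t + step >= 1:
        return 0.0
    lo, hi = t, t + step                       # D(lo) > 0 >= D(hi)
    for _ in range(60):                        # bisect down to the zero
        mid = (lo + hi) / 2
        if D(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.log(1 / ((lo + hi) / 2))

print(entropy(3.84))   # ~ log((1 + sqrt 5)/2) ~ 0.4812
print(entropy(4.0))    # ~ log 2
```

The truncation error is controlled because the neglected tail of \(D\) at \(t=r<1\) is at most \(r^n/(1-r)\ ,\) which is far below the bisection tolerance for \(n=60\ .\)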
The Artin-Mazur zeta function
Suppose that \(f\) has only finitely many periodic orbits of each period. The Artin-Mazur zeta function of \(f\) (Artin and Mazur 1965) is the formal power series \[\zeta(t)=\exp\,\sum_{k\ge 1}n(f^k)
\frac{t^k}{k},\] where \(n(f^k)\) denotes the number of fixed points of \(f^k\ .\) One of the most substantial results of Milnor and Thurston 1988 is a relationship between this zeta function and the
kneading determinant.
For the sake of simplicity, assume that \(f\) is differentiable, and that all but finitely many of its periodic orbits are unstable: these conditions hold, for example, if \(f\) is a rational
function, or if it has negative Schwarzian derivative (see sections 8-11 of Milnor and Thurston 1988 for ways in which this assumption can be relaxed).
Then \[\zeta(t)^{-1} = D(t) \prod_{P}\kappa_P(t),\] where the product is over the periodic orbits \(P\) of \(f\) which are not unstable, together with the fixed point \(0\ ,\) and the polynomials \(\kappa_P(t)\) are as follows:
• For the fixed point \(0\ ,\) \(\kappa_P(t)=(1-t)^2\) if the fixed point is stable (on the right), and \(\kappa_P(t)=1-t\) otherwise.
• For each other non-unstable periodic orbit \(P\) of period \(k\ ,\) \(\kappa_P(t)\) is
□ \(1-t^k\) if \(P\) is stable on one side only;
□ \(1-t^{2k}\) if \(P\) is stable and the derivative of \(f^k\) is negative at the points of \(P\ ;\)
□ \((1-t^k)^2\) if \(P\) is stable and the derivative of \(f^k\) is non-negative at the points of \(P\ .\)
In particular, \(\zeta(t)\) has radius of convergence \(r=1/s\ ,\) where \(s\) is the growth number of \(f\ .\)
Example Let \(f(x)=3.84\,x(1-x)\ ,\) so that \(D(t)=(1-t-t^2)/(1+t^3)\) as above. \(f\) has a stable period 3 orbit, at the points of which \(f^3\) has negative derivative (and there are no other
stable periodic orbits since \(f\) has negative Schwarzian derivative). The fixed point at \(x=0\) is unstable. Hence \[\zeta(t)^{-1}=\frac{1-t-t^2}{1+t^3}(1-t)(1-t^6)=1-2t+2t^4-t^6.\] The expression
\[t\zeta'(t)/\zeta(t)=\sum_{k\ge 1}n(f^k)t^k = \frac{2t(1+t+t^2-3t^3-3t^4)}{1-t-t^2-t^3+t^4+t^5}=2t+4{t}^{2}+8{t}^{3}+8{t}^{4}+12{t}^{5}+22{t}^{6}+30{t}^{7}+\cdots\] then makes it possible to
calculate \(n(f^k)\) for each \(k\ .\)
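The expansion in this example can be checked by formal power-series division (a sketch, not from the article): with \(P(t)=\zeta(t)^{-1}=1-2t+2t^4-t^6\ ,\) the series \(t\zeta'(t)/\zeta(t) = -tP'(t)/P(t)\) has \(n(f^k)\) as its coefficient of \(t^k\ .\)

```python
def series_div(num, den, n):
    """First n coefficients of num(t)/den(t); assumes den[0] == 1."""
    num = num + [0] * (n - len(num))
    den = den + [0] * (n - len(den))
    out = []
    for k in range(n):
        out.append(num[k] - sum(den[j] * out[k - j] for j in range(1, k + 1)))
    return out

P = [1, -2, 0, 0, 2, 0, -1]                                  # zeta(t)^{-1}
minus_tP = [0] + [-(i + 1) * c for i, c in enumerate(P[1:])]  # -t P'(t)
counts = series_div(minus_tP, P, 8)                          # n(f^k) = counts[k]
print(counts[1:])  # [2, 4, 8, 8, 12, 22, 30]
```

For instance \(n(f^3)=8\) agrees with a direct count: the two fixed points of \(f\) plus the stable and unstable period-3 orbits.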
Semi-conjugacy to piecewise-linear models
Assume throughout this section that \(f\) has positive topological entropy, i.e. that it has growth number \(s>1\ .\)
By means of kneading theory, it is possible to construct explicitly a continuous increasing surjection \(\lambda:[0,1]\to[0,1]\) which semi-conjugates \(f\) to the tent map \(F_s:[0,1]\to[0,1]\)
defined (see Figure 3) by \[F_s(x)= \begin{cases} sx, & \mbox{if }x\le 1/2\\ s(1-x), & \mbox{if }x\ge 1/2. \end{cases} \] The semi-conjugacy \(\lambda\) is given by \[\tag{4} \lambda(x)=\frac{1}{2}\left(1-(1-r)\,\theta(x^-, r)\right), \]
where \(r=1/s\) (note that the function \(x\mapsto \theta(x^-,r)\) is continuous, since \(D(r)=0\)).
Given a subinterval \(J=[a,b]\) of \([0,1]\ ,\) define \(\ell_i^J\) to be the number of laps of \(f^i|_J\) for each \(i\ge 1\ ,\) and let \(\ell^J(t)\) be the corresponding formal power series \[\ell
^J(t)=\sum_{i=0}^\infty \ell_{i+1}^J t^i\in{\mathbb Z}[[t]].\] Recall that \(\ell(t)=\ell^{[0,1]}(t)\) has radius of convergence \(r\ .\) Notice that \(0< \ell^J(t)\le \ell(t)\) for all \(t\in[0,r)\)
so that the singularity of \(\ell^J(t)/\ell(t)\) at \(t=r\) is removable. The limit \(L(J)=\lim_{t\nearrow r}\frac{\ell^J(t)}{\ell(t)}\) therefore exists and lies in \([0,1]\ .\)
Observe the following:
• If \(x\in(a,b)\ ,\) then \(L(J)=L([a,x])+L([x,b])\ .\)
(For \(\ell^J(t)-\left(\ell^{[a,x]}(t)+\ell^{[x,b]}(t)\right)\) is bounded in \([0,r)\ ,\) while \(\ell(t)\) has a pole at \(t=r\ .\))
• Let \(i\ge 1\ .\) If \((a,b)\) is disjoint from \(\Gamma_j\) for \(0\le j<i\ ,\) then \(L(J)=r^iL(f^i(J))\ .\)
(For in this case \(f^{j+1}|_J\) is monotone for each \(j<i\ ,\) so that \(\ell^J_{j+1}=1\) for \(j<i\) and \(\ell^J_{j+1}=\ell^{f^i(J)}_{j+1-i}\) for \(j\ge i\ .\) Hence \(\ell^J(t)=1+t+\cdots+t^{i-1}+t^i\ell^{f^i(J)}(t)\ ,\) and the result follows on dividing by \(\ell(t)\) and taking the limit as \(t\nearrow r\ .\))
• \(L(J)\) depends continuously on the endpoints of \(J\ .\)
(By the first property above, it suffices to show that, given \(\epsilon>0\ ,\) \(L(J)<\epsilon\) for all sufficiently small \(J\ .\) Let \(i\) be large enough that \(r^i<\epsilon\ :\) then any \(J\)
whose interior is disjoint from the finite set \(\bigcup_{j=0}^{i-1}\Gamma_j\) satisfies \(L(J)=r^iL(f^i(J))\le r^i<\epsilon\ .\))
Now define \(\lambda:[0,1]\to[0,1]\) by \(\lambda(x)=L([0,x])\ .\) Clearly \(\lambda(0)=0\) and \(\lambda(1)=1\ ;\) and, by the properties above, \(\lambda\) is continuous and increasing.
Theorem \(\quad\) \(F_s\circ\lambda = \lambda\circ f\ ,\) and \(\lambda(c)=1/2\ .\)
For the proof, observe that:
• If \(x\in[0,c]\) then \((0,x)\) is disjoint from \(\Gamma_0=\{c\}\ ,\) and hence \(\lambda(x)=L([0,x])=rL(f([0,x]))=r\lambda(f(x))\ ,\) so
\[\lambda\circ f(x)=s\lambda(x).\]
• If \(x\in[c,1]\) then similarly \(\lambda(x)=L([0,c])+L([c,x]) = \lambda(c)+rL([f(x),f(c)])=\lambda(c)+r(\lambda(f(c))-\lambda(f(x)))\ .\) Hence
\[\lambda\circ f(x)=s(\lambda(c)-\lambda(x))+\lambda(f(c)).\]
Thus \(\lambda\circ f\) is continuous, and is of the form \(F\circ\lambda\ ,\) where \(F:[0,1]\to [0,1]\) has slope \(s\) for \(x\in[0,\lambda(c)]\ ,\) and slope \(-s\) for \(x\in[\lambda(c),1]\ .\)
Since \(F(0)=F(1)=0\ ,\) it follows that \(F=F_s\) and \(\lambda(c)=1/2\ .\)
To obtain the expression (4) for \(\lambda(x)\) (which is more accessible for calculations), define \(\gamma_i^J\) to be the cardinality of \(\Gamma_i\cap(a,b)\ ,\) and let \(\gamma^J(t)=\sum_{i=0}^\infty \gamma_i^Jt^i\in{\mathbb Z}[[t]]\) be the corresponding formal power series. As in (1), it is straightforward that
\[\ell^J(t) = \frac{1}{1-t}\left(1+\gamma^J(t)\right),\] so that \(\lambda(x)\) can equivalently be defined as \(\lambda(x)=\lim_{t\nearrow r}\frac{\gamma^{[0,x]}(t)}{\gamma(t)}\ .\)
Analogously to (3), it can be shown that \(\theta(b^-,t)-\theta(a^+,t)=-2D(t)\gamma^J(t)\ .\) Hence \[\frac{\gamma^{[0,x]}(t)}{\gamma(t)} = \frac{\theta(x^-,t)-\theta(0,t)}{\theta(1,t)-\theta(0,t)},
\] from which (4) follows, since \(\theta(0,t)=1/(1-t)\) and \(\theta(1,t)=-1/(1-t)\) by (2).
Example Let \(f(x)=3.84x(1-x)\ .\) The graph of the function \(\lambda\) which semi-conjugates \(f\) to the tent map \(F_{(\sqrt{5}+1)/2}\) is shown in Figure 4. Notice that \(\lambda\) is locally
constant on the basin of attraction of the stable period 3 orbit of \(f\ .\)
See also Parry 1966 for an alternative approach to the proof of this result in the case of transitive maps and Alsedà, Llibre and Misiurewicz 2000 in the general case.
A unimodal map \(f\) is renormalizable if there is a proper subinterval \(J\) of \([0,1]\) and an integer \(n>1\) such that \(f^n|_J\) is itself (topologically conjugate to) a unimodal map (see
Figure 5). Consideration of the longest possible sequence of renormalizations of \(f\) gives rise to a canonical decomposition \(\Omega(f)=\bigcup_{i=0}^p\Omega_i\) of the non-wandering set of \(f\)
into \(f\)-invariant basic sets (see Jonker and Rand 1981). The topological entropies \(h(f|_{\Omega_i})\) are reflected by zeros of \(D(t)\ :\) the following brief description summarizes results
from Jonker and Rand 1981.
A map \(f\) is renormalizable if and only if there is an element \(w\) of \(\{-1,+1\}^n\) for some \(n>1\) and non-negative integers \(b_0, a_1, b_1, a_2, \ldots\) (this may also be a finite sequence
whose last entry is \(\infty\)) such that \[\theta(c^-)=w(-w)^{b_0}w^{a_1}(-w)^{b_1}w^{a_2}\ldots,\] where \(-w\in\{-1,+1\}^n\) is the word obtained from \(w\) by interchanging \(+1\) and \(-1\ .\)
In this case, writing \(R_w\theta(c^-)\) for the sequence \((+1)(-1)^{b_0}(+1)^{a_1}(-1)^{b_1}(+1)^{a_2}\ldots\ ,\) \(R_wD(t)\) for the corresponding formal power series (which is the kneading
determinant of the renormalized map \(f^n|_J\)), and \(w(t)\) for the polynomial \(\sum_{i=0}^{n-1}w_it^i\ ,\) it can easily be seen that \[D(t)=w(t)R_wD(t^n),\] from which the following result can
be derived:
Theorem \(\quad\) Let \(\Omega_i\) be a basic set in the canonical decomposition of the non-wandering set of a unimodal map \(f\ ,\) with \(h(f|_{\Omega_i})=s_i>0\ .\) Then \(D(t)\) has a zero at \(t=e^{-s_i}\ .\)
Note, however, that \(D(t)\) may have zeros in \((0,1)\) which do not correspond to the topological entropy on any basic set.
Example Let \(f(x)=3.85209\,x(1-x)\) (so that \(f\) has a stable period 15 orbit). Then \[\theta(c^-)=(+---++-+++---++)^\infty = (w(-w)^2w(-w))^\infty,\] where \(w=+--\ .\) Hence \(w(t)=1-t-t^2\ ,\)
\(R_wD(t)=(1-t-t^2+t^3-t^4)/(1-t^5)\ ,\) and \[ D(t)=\frac{1-t-t^2-t^3+t^4+t^5-t^6+t^7+t^8+t^9-t^{10}-t^{11}-t^{12}+t^{13}+t^{14}}{1-t^{15}} \] \[ \qquad\,\,=(1-t-t^2)\left(\frac{1-t^3-t^6+t^9-t^{12}}{1-t^{15}}\right) = w(t)R_wD(t^3). \]
\(D(t)\) has two zeros in \((0,1)\ :\) at \((\sqrt{5}-1)/2\ ,\) a zero of \(w(t)\ ,\) corresponding to \(h(f)=h(f|_{\Omega_0})\ ;\) and at about \(0.871\ ,\) the cube root of a zero of \(R_wD(t)\ ,\)
corresponding to \(h(f|_{\Omega_1})\ .\)
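As a numerical sanity check (my addition, not part of the original article), one can locate both zeros of \(D(t)\) in \((0,1)\) directly from the factorization \(D(t)=w(t)R_wD(t^3)\): one is the root of \(w(t)=1-t-t^2\ ,\) namely \((\sqrt{5}-1)/2\approx 0.618\ ,\) and the other is the cube root of the root of the numerator of \(R_wD(t)\ ,\) which comes out near \(0.871\ :\)

```python
# Locate the zeros of D(t) = w(t) * R_w D(t^3) in (0,1) by bisection.
def bisect(f, a, b, tol=1e-13):
    """Find a root of f in [a, b], assuming f changes sign there."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def w(t):
    return 1 - t - t**2            # the factor w(t)

def num_RwD(t):
    return 1 - t - t**2 + t**3 - t**4   # numerator of R_w D(t)

r1 = bisect(w, 0.0, 1.0)                 # root of w: (sqrt(5)-1)/2 ~ 0.618
r2 = bisect(num_RwD, 0.0, 1.0) ** (1/3)  # cube root of root of R_w D ~ 0.871

print(round(r1, 6), round(r2, 6))
```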
The multimodal case
A continuous self-map \(f:I\to I\) of an interval \(I=[a,b]\) is piecewise monotone if there are elements \(c_1<c_2<\cdots<c_{\ell-1}\) of \((a,b)\) such that \(f\) is strictly monotone on each
interval \(I_1=[a,c_1]\ ,\) \(I_{n+1}=[c_n, c_{n+1}]\) (\(1\le n\le \ell-2\)), and \(I_\ell=[c_{\ell-1},b]\ .\)
It will always be assumed that \(\ell\) has been chosen to be as small as possible, so that each \(c_n\) is a local extremum, or turning point, of \(f\ .\)
The kneading theory for piecewise monotone maps is developed analogously to the unimodal case. The kneading coordinate of a point \(x\in I\) must identify which of the \(\ell\) laps of \(f\) each
iterate of \(x\) belongs to, and there are \(\ell-1\) turning points across which the discontinuity of the kneading coordinate needs to be specified: this information gives rise to an \((\ell-1)\times\ell\) matrix with entries in \({\mathbb Z}[[t]]\ ,\) which is called the kneading matrix \(N(t)\) of \(f\ .\) The kneading determinant \(D(t)\) of \(f\) is the (suitably normalised) determinant
of an \((\ell-1)\times(\ell-1)\) submatrix of \(N(t)\ .\)
Additional minor complications compared to the unimodal case (with the definition of unimodal used above) are caused by the facts that the first lap of \(f\) may be either increasing or decreasing,
and that it is not assumed that the endpoints \(\{a,b\}\) of \(I\) form an \(f\)-invariant set.
The cutting invariant, lap invariant, and growth number
The lap invariant \(\ell(t)\) of \(f\) is defined exactly as in the unimodal case: \[\ell(t)=\sum_{i=0}^\infty \ell_{i+1}t^i\in{\mathbb Z}[[t]],\] where \(\ell_i\) is the number of laps of \(f^i\ .\)
Similarly, the cutting invariant \(\gamma(t)\) of \(f\) is defined by \[\gamma(t)=\sum_{i=0}^\infty \gamma_it^i\in{\mathbb Z}[[t]],\] where \(\gamma_i\) is the cardinality of the set \(\Gamma\) of
points \(x\in(a,b)\) with the property that \(f^i(x)\) is a turning point, but \(f^j(x)\) is not a turning point for \(j<i\ .\) The lap invariant and the cutting invariant are related by \[\ell(t)=\frac{1}{1-t}\left(1+\gamma(t)\right),\] and hence they have the same radius of convergence \(1/s\ ,\) where \(s=\limsup_{i\to\infty}\ell_i^{1/i}=\lim_{i\to\infty}\ell_i^{1/i}\in[1,\ell]\ ,\) the
growth number of \(f\ .\) Misiurewicz and Szlenk 1977 show that the topological entropy \(h(f)\) of \(f\) is given by \(h(f)=\log s\ .\)
Slightly more information is provided by the formal power series \[\gamma^n(t) = \sum_{i=0}^\infty \gamma_i^n t^i\in{\mathbb Z}[[t]] \quad (1\le n\le\ell-1),\] where \(\gamma_i^n\) is the cardinality
of the set \(\Gamma_i^n\) of points \(x\in(a,b)\) with the property that \(f^i(x)=c_n\ ,\) but \(f^j(x)\) is not a turning point for \(j<i\ .\) Notice that \(\gamma(t)=\sum_{n=1}^{\ell-1}\gamma^n(t)\ .\)
The kneading matrix and kneading determinant
For each \(n\) with \(1\le n\le \ell\ ,\) let \(\epsilon_n=+1\) if \(f|_{I_n}\) is increasing, and \(\epsilon_n=-1\) if \(f|_{I_n}\) is decreasing. In particular, \(\epsilon_{n+1}=-\epsilon_n\) for \(1\le n < \ell\ .\)
Let \(M\) be the free module with basis \(\{I_1,\ldots,I_\ell\}\) over \({\mathbb Z}\ .\) Given \(x\in[a,b]\setminus\Gamma\ ,\) let \(k(x)=\left(k_i(x)\right)_{i\ge0}\in\{1,\ldots,\ell\}^{\mathbb N}\) be defined by the condition \(f^i(x)\in I_{k_i(x)}\) for each \(i\ge 0\ .\)
The kneading coordinate \(\theta(x)=\left(\theta_i(x)\right)_{i\ge 0}\in M^{\mathbb N}\) of \(x\) is then defined by \[\theta_i(x)=I_{k_i(x)}\prod_{j=0}^{i-1}\epsilon_{k_j(x)}\qquad (x\in[a,b]\setminus\Gamma) .\] More succinctly, \(\theta_i(x)=\pm I_n\ ,\) where \(f^i(x)\in I_n\ ,\) the sign being \(+\) or \(-\) according as \(f^i\) is locally increasing or locally decreasing at \(x\ .\)
(Note the distinction with the unimodal case, where the sign of \(\theta_i(x)\) reflected the local behaviour of \(f^{i+1}\ .\))
Construct a corresponding formal power series \(\theta(x,t)=\sum_{i=0}^\infty \theta_i(x)t^i \in M[[t]]\ ,\) where \(M[[t]]\) is regarded as the free module with basis \(\{I_1,\ldots,I_\ell\}\) over
\({\mathbb Z}[[t]]\ .\)
For arbitrary \(x\in[a,b]\ ,\) define elements \(\theta(x^+)\) and \(\theta(x^-)\) of \(M^{\mathbb N}\) by \[\theta(x^+)=\lim_{y\searrow x}\theta(y)\qquad\mbox{and}\qquad\theta(x^-)=\lim_{y\nearrow x}\theta(y),\] where the limits are taken through elements \(y\) of \([a,b]\setminus\Gamma\ .\) Construct corresponding formal power series \[\theta(x^\pm,t) = \sum_{i=0}^\infty \theta_i(x^\pm)t^i \in M[[t]].\]
The kneading increments \(\nu_m(t)\) of \(f\) (for \(1\le m\le \ell-1\)) express the discontinuity of the kneading coordinate across the turning point \(c_m\ ,\) and are defined by \[\nu_m(t)=\theta(c_m^+,t)-\theta(c_m^-,t)\in M[[t]].\]
The kneading matrix \(N(t)\) of \(f\) is the \((\ell-1)\times\ell\) matrix over \({\mathbb Z}[[t]]\) whose \((m,n)\) entry is the coefficient of \(I_n\) in \(\nu_m(t)\ :\) that is, \(\nu_m(t)=\sum_{n=1}^\ell N_{mn}(t)I_n\ .\)
Notice that \[\nu_m(t) = I_{m+1}-I_m + 2\sum_{i=1}^\infty t^i\theta_i(c_m^+) = I_{m+1}-I_m +2\sum_{i=1}^\infty t^i I_{k_i(c_m^+)}\prod_{j=0}^{i-1}\epsilon_{k_j(c_m^+)}\] (since "\(f(c_m^+)=f(c_m^-)\)").
It follows that \[\sum_{n=1}^\ell N_{mn}(t)(1-\epsilon_n t) = (1-\epsilon_{m+1}t)-(1-\epsilon_m t)+2\sum_{n=1}^\ell \sum_{i\ge 1:\,k_i(c_m^+)=n} t^i(1-\epsilon_n t)\prod_{j=0}^{i-1}\epsilon_{k_j(c_m^+)}\] \[\qquad\qquad\qquad\qquad\quad = 2\left(-\epsilon_{m+1}t+\sum_{i=1}^\infty t^i\left[\prod_{j=0}^{i-1}\epsilon_{k_j(c_m^+)} - t\prod_{j=0}^{i}\epsilon_{k_j(c_m^+)}\right]\right)\] \[\qquad\qquad\qquad\qquad\quad = 0\] for \(1\le m \le \ell-1\ .\)
This linear relationship between the columns of \(N(t)\) implies that \((-1)^{n+1}D_n(t)/(1-\epsilon_n t)\) is independent of \(n\ ,\) where \(D_n(t)\) is the determinant of the submatrix of \(N(t)\) obtained by deleting the \(n\)th column (\(1\le n\le \ell\)).
The kneading determinant \(D(t)\) of \(f\) is defined to be this common value, that is \[D(t) = D_1(t)/(1-\epsilon_1 t),\] where \(D_1(t)\) is the determinant of the submatrix of \(N(t)\) obtained by
deleting its first column.
This definition agrees with that given in Section 1 in the case where \(f\) is a unimodal map.
All of the results of the unimodal case described in Section 1, with the exception of the statement about renormalization, have direct analogues in the general case. The proofs are similar in spirit,
but generally rather more complicated. Full details can be found in Milnor and Thurston 1988.
The analogue of (3) is that the coefficients of \(I_n\) in \(\theta(b^-,t)-\theta(a^+,t)\) are given by the entries of the vector \((\gamma^1(t)\,\,\gamma^2(t)\,\,\cdots\,\, \gamma^{\ell-1}(t)) N(t)\ ,\) that is \[\tag{5} \theta(b^-,t)-\theta(a^+,t) = \sum_{n=1}^\ell\sum_{m=1}^{\ell-1}\gamma^m(t)N_{mn}(t)I_n. \]
From this can be derived
Theorem \(\quad\) The topological entropy \(h(f)\) of \(f\) is positive if and only if \(D(t)\) has a zero in \(|t|<1\ .\) In this case, \(h(f)=\log\frac{1}{r}\ ,\) where \(r\) is the smallest zero
of \(D(t)\) in \([0,1)\ ,\) and \(D(t)\) has no zeros in \(|t|<r\ .\)
The relationship between the kneading determinant and the Artin-Mazur zeta function of \(f\) is also almost identical to that in the unimodal case, the only complication being a greater variety of
different possible behaviours of \(f\) on the endpoints of \(I=[a,b]\ .\) Recall that the condition that all but finitely many of the periodic orbits of \(f\) are unstable is guaranteed if \(f\) is a
rational function, or has negative Schwarzian derivative.
Theorem \(\quad\) Suppose that \(f\) is differentiable, has only finitely many periodic orbits of each period, and that all but finitely many of its periodic orbits are unstable. Let \(\zeta(t)\)
denote the Artin-Mazur zeta function of \(f\ .\) Then \[\zeta(t)^{-1} = D(t)\prod_P\kappa_P(t),\] where the product is over the periodic orbits \(P\) of \(f\) which are either not unstable, or are
contained in the endpoints of \(I\ ,\) and the polynomials \(\kappa_P(t)\) are as follows: if \(P\) has period \(k\) then
• If \(P\) is contained in the endpoints of \(I\ ,\) then \(\kappa_P(t)=(1-t^k)^2\) if \(P\) is stable (on one side), and \(\kappa_P(t)=1-t^k\) otherwise.
• For each other non-unstable periodic orbit \(P\ ,\) \(\kappa_P(t)\) is
□ \(1-t^k\) if \(P\) is stable on one side only;
□ \(1-t^{2k}\) if \(P\) is stable and the derivative of \(f^k\) is negative at the points of \(P\ ;\)
□ \((1-t^k)^2\) if \(P\) is stable and the derivative of \(f^k\) is non-negative at the points of \(P\) (other than any endpoints of \(I\) contained in \(P\)).
In particular, \(\zeta(t)\) has radius of convergence \(r=1/s\ ,\) where \(s\) is the growth number of \(f\ .\)
Just as unimodal maps with positive topological entropy can be semi-conjugated to tent maps, so piecewise monotone maps with positive topological entropy can be semi-conjugated to piecewise-linear maps:
Theorem \(\quad\) Suppose that the growth number \(s\) of \(f\) satisfies \(s>1\ .\) Define a function \(\lambda:[a,b]\to[0,1]\) by \[\lambda(x) = \lim_{t\nearrow 1/s}\frac{\ell^{[a,x]}(t)}{\ell(t)}.\] (Here \(\ell^J(t)=\sum_{i=0}^\infty \ell_{i+1}^J t^i\ ,\) where \(\ell_i^J\) is the number of laps of \(f^i|_J\) for a subinterval \(J\) of \(I\ .\)) Then \(\lambda\) is a continuous increasing
surjection, and there is a piecewise linear map \(F:[0,1]\to[0,1]\ ,\) for which each linear piece has slope \(\pm s\ ,\) such that \[F\circ\lambda = \lambda\circ f.\]
Note that \(F\) may have fewer laps than \(f\ .\)
Let \(f:[0,1]\to[0,1]\) be the bimodal map given by \(f(x)=1-7.03x+18.68x^2-12.65x^3\) (see Figure 6). The two turning points of \(f\) are \(c_1\simeq 0.2534\) and \(c_2\simeq 0.7311\ .\) \(f\) has a
stable fixed point \(P_1\simeq 0.2217\) and a stable period 3 orbit given approximately by \(P_3 = \{0.5753, 0.7295, 0.9016\}\ .\) Since \(f\) has negative Schwarzian derivative, there are no other
stable periodic orbits. The signs associated to the laps of \(f\) are given by \(\epsilon_1 = -1\ ,\) \(\epsilon_2 = +1\ ,\) and \(\epsilon_3 = -1\ .\) The endpoints \(0\) and \(1\) are exchanged by
\(f\ .\)
The turning point \(c_1\) is in the immediate basin of attraction of the fixed point \(P_1\ ,\) and the turning point \(c_2\) is in the immediate basin of attraction of the period 3 orbit \(P_3\ .\)
Hence \[ \begin{array}{rcll} k(c_1^-) &=& 1^\infty & \mbox{ so }\quad\theta(c_1^-,t) = \left(1-\frac{t}{1+t}\right)I_1, \\ k(c_1^+) &=& 2\,1^\infty & \mbox{ so }\quad\theta(c_1^+,t) = \left(\frac{t}{1+t}\right)I_1 + I_2,\\ k(c_2^-) &=& 2\,(3\,2\,2)^\infty \quad & \mbox{ so }\quad \theta(c_2^-,t) = \left(1-\frac{t^2+t^3}{1+t^3}\right) I_2 + \left(\frac{t}{1+t^3}\right)I_3, \mbox{ and}\\ k(c_2^+) &=& 3\,(3\,2\,2)^\infty & \mbox{ so }\quad\theta(c_2^+,t) = \left(\frac{t^2+t^3}{1+t^3}\right)I_2 + \left(1-\frac{t}{1+t^3}\right)I_3. \end{array} \]
The kneading increments are therefore given by \[ \begin{array}{rcl} \nu_1(t) &=& \theta(c_1^+,t)-\theta(c_1^-,t) = \left(-1+\frac{2t}{1+t}\right)I_1 + I_2,\\ \nu_2(t) &=& \theta(c_2^+,t)-\theta(c_2^-,t) = \left(-1+\frac{2(t^2+t^3)}{1+t^3}\right)I_2 + \left(1-\frac{2t}{1+t^3}\right)I_3, \end{array} \] giving the kneading matrix \[ N(t) = \begin{pmatrix} -1 + \frac{2t}{1+t} & 1 & 0 \\ 0 & -1+\frac{2(t^2+t^3)}{1+t^3} & 1-\frac{2t}{1+t^3} \end{pmatrix} . \]
The kneading determinant \(D(t)\) is the determinant of the matrix obtained by deleting the first column of \(N(t)\ ,\) divided by \(1+t\ ,\) that is \[ D(t)=\frac{1}{1+t}\left(1-\frac{2t}{1+t^3}\right) = \frac{(1-t)(1-t-t^2)}{(1+t^3)(1+t)}. \]
The only zero of \(D(t)\) in \([0,1)\) is \(r=(\sqrt{5}-1)/2\ ,\) so \(f\) has growth number \(s=1/r=(\sqrt{5}+1)/2\ ,\) and topological entropy \(h(f)=\log\left(\frac{\sqrt{5}+1}{2}\right)\ .\)
The cutting invariant of \(f\) can be obtained after calculating \(\theta(1,t)-\theta(0,t)=\left(\frac{-1}{1-t}\right)I_1 + \left(\frac{1}{1-t}\right)I_3\ .\) (5) then gives \[ \begin{pmatrix} \gamma^1(t) & \gamma^2(t) \end{pmatrix} \, \begin{pmatrix} -1 + \frac{2t}{1+t} & 1 & 0 \\ 0 & -1+\frac{2(t^2+t^3)}{1+t^3} & 1-\frac{2t}{1+t^3} \end{pmatrix} = \begin{pmatrix} \frac{-1}{1-t} & 0 & \frac{1}{1-t} \end{pmatrix} . \] Therefore \(\gamma^1(t)=\frac{1+t}{(1-t)^2}\ ,\) \(\quad\gamma^2(t)=\frac{1+t^3}{(1-t)^2(1-t-t^2)},\quad\) and \[ \gamma(t) = \gamma^1(t)+\gamma^2(t) = \frac{2(1+t)}{(1-t)(1-t-t^2)}. \]
The lap invariant \(\ell(t)\) is then given by (1) as \[ \ell(t) = \frac{1}{1-t}\left(1+\gamma(t)\right) = \frac{3+t^3}{(1-t)^2(1-t-t^2)} = 3+9t+21t^2+43t^3+81t^4+145t^5+\cdots. \] Thus, for example,
\(f^4\) has 43 laps.
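This series expansion can be checked by elementary power-series arithmetic. The sketch below (my addition) multiplies the claimed rational function back out, recovering the lap numbers \(\ell_1,\ldots,\ell_6 = 3, 9, 21, 43, 81, 145\ :\)

```python
# Expand l(t) = (3 + t^3) / ((1-t)^2 (1-t-t^2)) as a power series:
# if l(t) * B(t) = A(t), the coefficients satisfy a linear recurrence.

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def series_div(num, den, n):
    """First n power-series coefficients of num(t)/den(t), den[0] != 0."""
    c = []
    for k in range(n):
        s = num[k] if k < len(num) else 0
        s -= sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        c.append(s // den[0])
    return c

B = poly_mul(poly_mul([1, -1], [1, -1]), [1, -1, -1])  # (1-t)^2 (1-t-t^2)
laps = series_div([3, 0, 0, 1], B, 6)                  # numerator 3 + t^3
print(laps)  # [3, 9, 21, 43, 81, 145]
```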
The Artin-Mazur zeta function \(\zeta(t)\) of \(f\) is given by \[ \zeta(t)^{-1} = D(t)(1-t^2)^2(1-t^6) = (1-t)^2(1-t^2)(1-t^3)(1-t-t^2), \] since \(f'(P_1)<0\) (giving a factor \(1-t^2\)), \((f^3)'\) is negative at the points of \(P_3\) (giving a factor \(1-t^6\)), and the endpoints \(\{0,1\}\) constitute an unstable period 2 orbit (giving a factor \(1-t^2\)). Hence \[ \frac{t\zeta'(t)}{\zeta(t)} = 3t+7t^2+9t^3+11t^4+13t^5+25t^6+31t^7+\cdots. \] Thus, for example, \(f^5\) has 13 fixed points, comprising 3 fixed points and 2 period 5 orbits of \(f\ .\)
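The logarithmic-derivative series can likewise be verified mechanically. The sketch below (my addition) takes the polynomial \(P(t)=\zeta(t)^{-1}=(1-t)^2(1-t^2)(1-t^3)(1-t-t^2)\) and expands \(t\zeta'(t)/\zeta(t)=-tP'(t)/P(t)\) as a power series, recovering the fixed-point counts of \(f^k\) for \(k=1,\ldots,7\ :\)

```python
# Expand t*zeta'(t)/zeta(t) = -t*P'(t)/P(t) with P(t) = zeta(t)^{-1}.

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def series_div(num, den, n):
    """First n power-series coefficients of num(t)/den(t), den[0] != 0."""
    c = []
    for k in range(n):
        s = num[k] if k < len(num) else 0
        s -= sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        c.append(s // den[0])
    return c

# P(t) = (1-t)^2 (1-t^2) (1-t^3) (1-t-t^2)
P = [1]
for factor in ([1, -1], [1, -1], [1, 0, -1], [1, 0, 0, -1], [1, -1, -1]):
    P = poly_mul(P, factor)

# Coefficients of -t*P'(t): the coefficient of t^k is -k*P[k].
negtPp = [0] + [-(k + 1) * P[k + 1] for k in range(len(P) - 1)]

counts = series_div(negtPp, P, 8)
print(counts[1:])  # fixed points of f^k for k = 1..7: [3, 7, 9, 11, 13, 25, 31]
```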
The graph of the function \(\lambda:[0,1]\to[0,1]\) which semiconjugates \(f\) to a piecewise linear map is shown in Figure 7. For each \(x\ ,\) \(\lambda(x)\) can be approximated as follows:
• Calculate \(\theta(x,t)-\theta(0,t) = a_1(t)I_1+a_2(t)I_2+a_3(t)I_3\) up to the term in \(t^{50}\ .\)
• By (an analogue of) (5),
\[ \begin{pmatrix} \gamma^{[0,x],1}(t) & \gamma^{[0,x],2}(t) \end{pmatrix} \, \begin{pmatrix} -1 + \frac{2t}{1+t} & 1 & 0 \\ 0 & -1+\frac{2(t^2+t^3)}{1+t^3} & 1-\frac{2t}{1+t^3} \end{pmatrix} = \begin{pmatrix} a_1(t) & a_2(t) & a_3(t) \end{pmatrix} \] (here \(\gamma^{[0,x],n}(t)=\sum_{i=0}^\infty \gamma_i^{[0,x],n}t^i\ ,\) where \(\gamma_i^{[0,x],n}\) is the number of points \(y\in(0,x)\) such that \(f^i(y)=c_n\ ,\) but \(f^j(y)\) is not a turning point for \(j<i\)).
• Then \(\lambda(x)\) is obtained by evaluating the limit of \(\frac{\gamma^{[0,x],1}(t)+\gamma^{[0,x],2}(t)}{\gamma(t)}\) as \(t\to r=(\sqrt{5}-1)/2\ .\)
This function \(\lambda\) semi-conjugates \(f\) to the piecewise linear map \(F:[0,1]\to[0,1]\) which has \(F(0)=1\ ,\) \(F(1)=0\ ,\) slope \(\pm s\) in each linear piece, and turning points at \(C_1=(3-\sqrt{5})/2\) and \(C_2 = 3(\sqrt{5}-1)^2/8\) (see Figure 8). Notice that \(C_1\) is a fixed point of \(F\ ,\) and \(C_2\) is a period 3 point of \(F\ .\)
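These last two claims are easy to check numerically. In the sketch below (my addition; the explicit piecewise formula for \(F\) is reconstructed from the stated data \(F(0)=1\ ,\) \(F(1)=0\ ,\) slopes \(-s,+s,-s\ ,\) and the turning points), \(C_1\) is verified to be fixed and \(C_2\) to return to itself after three iterations:

```python
import math

s = (math.sqrt(5) + 1) / 2            # growth number of f
C1 = (3 - math.sqrt(5)) / 2           # first turning point of F (a minimum)
C2 = 3 * (math.sqrt(5) - 1) ** 2 / 8  # second turning point of F (a maximum)

def F(x):
    """Piecewise linear model map: F(0)=1, F(1)=0, slopes -s, +s, -s."""
    if x <= C1:
        return 1 - s * x
    if x <= C2:
        return F(C1) + s * (x - C1)
    return F(C2) - s * (x - C2)

print(abs(F(C1) - C1))   # ~0: C1 is a fixed point of F
x = C2
for _ in range(3):
    x = F(x)
print(abs(x - C2))       # ~0: C2 is a period 3 point of F
```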
Other directions
This article has summarised kneading theory as developed in Milnor and Thurston 1988. Since Milnor and Thurston's work, the theory has been extended to a variety of other contexts, and applied in
other ways. The following is a partial list.
• It is possible to characterize admissible matrices over \({\mathbb{Z}}[[t]]\ ,\) that is, those which can be realized as kneading matrices of some piecewise monotone interval map. Conditions for
a family \(f_\mu\) of piecewise monotone maps to be full (roughly, for the family to exhibit all admissible kneading matrices of the appropriate dimensions) are given by de Melo and van Strien
1993, Galeeva and van Strien 1996.
• The set of kneading coordinates (itineraries) which are compatible with a given kneading matrix can also be characterized. In a family of polynomial interval maps, orbits which are destroyed on
the interval migrate into the complex plane: the theory of complex dynamics plays a vital role in understanding the behaviour of such families. In another direction, pruning theory is an attempt
to understand the set of "kneading coordinates" exhibited by Hénon-type maps of the plane (see for example Cvitanović, Gunaratne and Procaccia 1988).
• Kneading theory can be extended in a natural way to piecewise monotone interval maps which are permitted to have a finite number of discontinuities. See for example Preston 1989 for a summary of
results, and Rand 1978 and Williams 1979 for the special case of Lorenz maps.
• Baladi and Ruelle 1994, Baladi 1995 introduce weighted kneading matrices for the computation of weighted zeta functions of the form
\[\zeta_g(t)=\exp\sum_{k\ge 1}\frac{t^k}{k}\sum_{x\in\mbox{Fix}(f^k)}\prod_{i=0}^{k-1}g(f^i(x)),\] where \(g:[a,b]\to{\mathbb C}\) is of bounded variation.
Alternative notation
Quite often a different notation is used (which results in a less algebraic approach). Namely, instead of formal power series one considers kneading sequences. Letters are assigned to turning points
and laps. The \(n\)-th term of the \(k\)-th sequence is the letter assigned to the set to which \(f^{n+1}(c_k)\) belongs (where \(c_k\) is the \(k\)-th turning point). The sequence terminates if \(f^{n+1}(c_k)\) is a turning point itself. For a unimodal map the letters are usually \(L\) for the left lap, \(R\) for the right one, and \(C\) for the turning point; in this case there is only one
kneading sequence. More information can be found for instance in Collet and Eckmann 1980.
• Ll. Alsedà, J. Llibre and M. Misiurewicz Combinatorial dynamics and entropy in dimension one Second Edition, World Scientific (Advanced Series in Nonlinear Dynamics, vol. 5), Singapore 2000.
• J. Alves and J. Sousa Ramos Kneading theory for tree maps Ergodic Theory Dynam. Systems 24 (2004), no. 4, 957-985.
• M. Artin and B. Mazur On periodic points Ann. of Math. (2) 81 1965 82-99.
• M. Baillif Dynamical zeta functions for tree maps Nonlinearity 12 (1999), no. 6, 1511-1529.
• M. Baillif and A. de Carvalho Piecewise linear model for tree maps Internat. J. Bifur. Chaos Appl. Sci. Engrg. 11 (2001), no. 12, 3163-3169.
• V. Baladi and D. Ruelle An extension of the theorem of Milnor and Thurston on the zeta functions of interval maps Ergodic Theory Dynam. Systems 14 (1994), no. 4, 621-632.
• V. Baladi Infinite kneading matrices and weighted zeta functions of interval maps J. Funct. Anal. 128 (1995), no. 1, 226-244.
• P. Collet and J.-P. Eckmann Iterated Maps on the Interval As Dynamical Systems Birkhauser, Boston 1980.
• P. Cvitanović, G. Gunaratne and I. Procaccia Topological and metric properties of Hénon-type strange attractors Phys. Rev. A (3) 38 (1988), no. 3, 1503-1520.
• R. Galeeva and S. van Strien Which families of \(l\)-modal maps are full? Trans. Amer. Math. Soc. 348 (1996), no. 8, 3215-3221.
• J. Guckenheimer Sensitive Dependence to Initial Conditions for One Dimensional Maps Comm. Math. Phys. 70 (1979), no. 2, 113-160.
• L. Jonker and D. Rand Bifurcations in one dimension. I. The nonwandering set Invent. Math. 62 (1981), no. 3, 347-365.
• N. Metropolis, M. Stein and P. Stein On finite limit sets for transformations on the unit interval J. Combinatorial Theory Ser. A 15 (1973), 25-44.
• M. Misiurewicz Absolutely continuous measures for certain maps of an interval Inst. Hautes Études Sci. Publ. Math. No. 53 (1981), 17-51.
• M. Misiurewicz and W. Szlenk Entropy of piecewise monotone mappings Dynamical systems, Vol. II - Warsaw, pp. 299-310. Asterisque, No. 50, Soc. Math. France, Paris, 1977.
• W. Parry Symbolic dynamics and transformations of the unit interval Trans. Amer. Math. Soc. 122 1966 368-378.
• C. Preston What you need to know to knead Adv. Math. 78 (1989), no. 2, 192-252.
• D. Rand The topological classification of Lorenz attractors Math. Proc. Cambridge Philos. Soc. 83 (1978), no. 3, 451-460.
• R. Williams The structure of Lorenz attractors Inst. Hautes Études Sci. Publ. Math. No. 50 (1979), 73-99.
Internal references
• Tomasz Downarowicz (2007) Entropy. Scholarpedia, 2(11):3901.
• Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
• Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
• Roy Adler, Tomasz Downarowicz, Michał Misiurewicz (2008) Topological entropy. Scholarpedia, 3(2):2200.
Recommended reading
• W. de Melo and S. van Strien One-Dimensional Dynamics, Ergebnisse der Mathematik und ihrer Grenzgebiete (3), 25, Springer, Berlin, 1993.
• J. Milnor and W. Thurston On iterated maps of the interval Dynamical systems (College Park, MD, 1986-87), 465-563, Lecture Notes in Math., 1342, Springer, Berlin, 1988.
Simplicial "universal extensions", the hammock localization, and Ext
Let $M,B$ be $R$-modules, and suppose we're given an n-extension $E_1\to\dots\to E_n$ of $B$ by $M$, that is, an exact sequence $$0\to M\to E_1\to\dots\to E_n \to B\to 0.$$
A morphism of $n$-extensions of $Y$ by $X$ is defined to be a hammock
$$\begin{matrix} &&A_1&\to&A_2&\to&A_3&\to&\ldots&\to &A_{n-2}&\to &A_{n-1}&\to& A_{n}&&\\ &\nearrow&\downarrow&&\downarrow&&\downarrow&&&&\downarrow&&\downarrow&&\downarrow&\searrow&\\ X&&\downarrow&&\downarrow&&\downarrow&&&&\downarrow&&\downarrow&&\downarrow&&Y\\ &\searrow&\downarrow&&\downarrow&&\downarrow&&&&\downarrow&&\downarrow&&\downarrow&\nearrow&\\ &&B_1&\to&B_2&\to&B_3&\to&\ldots&\to &B_{n-2}&\to &B_{n-1}&\to& B_{n}&&\end{matrix}$$
This determines a category $n\operatorname{-ext}(Y,X)$. Further, we're given an almost-monoidal sum on this category given by taking the direct sum of exact sequences, then composing the first and
last maps with the diagonal and codiagonal respectively. Taking connected components, we're left with a set $ext^n(Y,X)$, and the sum reduces to an actual sum on the connected components, which turns
$ext^n(Y,X)$ into an abelian group (and therefore an $R$-module).
It's well known that these functors, called Yoneda's Ext functors are isomorphic to the Ext functors $\pi_n(\underline{\operatorname{Hom}}(sM,sN))$ where $\underline{\operatorname{Hom}}(sM,sN)$ is
the homotopy function complex between the constant simplicial $R$-modules $sM$ and $sN$ (obtained by means of cofibrant replacement of $sM$ in the projective model structure, fibrant replacement of
$sN$ in the injective model structure, or by any form of Dwyer-Kan simplicial localization (specifically the hammock localization)).
In a recent answer, Charles Rezk mentioned that we can compute this in the case $n=1$ as $\pi_0(\underline{\operatorname{Hom}}(sM,sN[1]))$, where $sN[1]$ is the simplicial $R$-module with homotopy concentrated in degree $1$ equal to $N$. That is, these are exactly the maps $M\to N[1]$ in the derived category.
It was also mentioned that for the case $n=1$, there exists a universal exact sequence in the derived category: $$0\to N\to C\to N[1]\to 0$$ where $C$ is weakly contractible such that every extension
of $N$ by $M$ arises as $$N\to M\times^h_{N[1]} C\to QM,$$ which is $\pi_0$-short exact. (And QM is a cofibrant replacement of $M$, for obvious reasons).
Why can we get $\pi_1$ of the function complex by looking at maps into $N[1]$? At least on the face of it, it seems like we would want to look at maps into $N[-1]$, that is, look at maps into the "loop space", not the "suspension" (scare quotes because these are the loop and suspension functors in $sR\operatorname{-Mod}$, not in $sSet$).
What is this "universal extension" $$0\to N\to C\to N[1]\to 0$$ at the level of simplicial modules?
Is there a similar "universal extension" for $n>1$? If so, what does it (one of its representatives) look like at the level of simplicial modules?
Given an $n$-extension $\sigma$ of $B$ by $M$, how can we produce a morphism in the derived category $B\to M[n]$ that generates an $n$-extension in the same connected component of $n\operatorname{-ext}(B,M)$?
Lastly, since Yoneda's construction of Ext looks suspiciously like Dwyer and Kan's hammock localization, and there is homotopy theory involved in the other construction, I was wondering if there was
any connection between the two. That is, I was wondering if there is another construction of Ext using DK-localization directly that shows why Yoneda's construction works.
1 Answer
For your first question:
Given an $n$-extension $\sigma$ of $B$ by $M$, how can we produce a morphism in the derived category $M\to N[n]$ that generates an n-extension in the same connected component of $n\operatorname{-ext}(B,M)$?
Do you mean a morphism $B\to M[n]$? In which case, isn't this the standard construction? Let $E$ be the extension of $B$ by $M$, let $P$ be a projective resolution of $B$, and cover
the identity map of $B$ by a map of chain complexes $P\to E$. The resulting map $P_n\to M$ can be thought of as a morphism $B\to M[n]$ in the derived category, and it's the one that
represents the corresponding component of $n{-}ext(B,M)$.
As for your second question, there's certainly some connection. What you've described is a category $C=n{-}ext(X,Y)$, whose objects are $n$-extensions of $Y$ by $X$. Let $W=C$, and
perform the hammock construction $L_H(C,W)$ of the pair $(C,W)$; then $L_H(C,W)$ is a simplicially enriched category, which represents an $\infty$-groupoid since you inverted everything. The ext group is the set of equivalence classes of objects in $L_H(C,W)$.
Dwyer and Kan showed that if $C=W$, then the simplicial category $L_H(C,W)$ and the simplicial nerve $NC$ of $C$ represent the same $\infty$-groupoid. So $\mathrm{Ext}^n(X,Y)=\pi_0 NC$.
What is the homotopy type of $NC$? It is supposed to be equivalent to the space of maps $\mathrm{map}(X,Y[n])$ in the $\infty$-category associated to the derived category. I'm unaware
of a reference where this is proved, however.
But take a look at Stefan Schwede's very nice paper on An exact sequence interpretation of the Lie bracket in Hochschild cohomology, which shows that $\pi_1(NC)=\mathrm{Ext}^{n-1}(X,Y)
$, and gets some interesting results from that. (He's looking at $A$-bimodule extensions of $A$ by $A$ (i.e., Hochschild cohomology), and he extracts the Gerstenhaber-algebra structure
on the ext-groups by means of homotopical constructions involving the $NC$.)
Nifty, and thanks for the reference. – Harry Gindi Nov 21 '10 at 22:35