Wolfram Demonstrations Project
Parametrization of a Fractal Curve
The parametrization shown here for some well-known fractal curves is interesting because the values of the parameter reflect the hierarchy of the curves.
A self-similar fractal is constructed by recursively applying a given generator at smaller and smaller scales. Start with a segment, apply the generator to this segment, then apply the generator to
every new smaller segment, and so on.
This sort of fractal exhibits a simple hierarchy in which the same shape repeats itself at every discrete scale.
If the generator contains p points, then you can use a number k, with 0 <= k < p, to localize a point on this generator.
Applying this procedure recursively n times, you can locate any point of the fractal with a list of n numbers k_1, k_2, ..., k_n (with 0 <= k_i < p) giving the position of each new generator at the first n scales of the fractal.
Writing these digits in a row, you end up with a number s = 0.k_1 k_2 ... k_n written in base p and containing n digits.
Thus s is a parameter between 0 and 1 that can have an infinite number of places in base p (as n goes to infinity) and which uniquely specifies any point on the fractal curve. This can be used as a curvilinear coordinate
for the fractal, so that it is possible to write the fractal as a set of complex numbers z(s) parametrized by s, where the coefficients of z(s) are functions of the geometric properties of the generator (see [1] for details).
Hence the equality p = r^D, where r is the resolution ratio of the generator and D is the fractal dimension:
• shows the analogous roles played by p and r respectively for the parameter (p as the base of the number) and the complex number (r as the resolution of the curve);
• exhibits simultaneously the hierarchical structure of both (real and complex) representations; and
• gives a meaning to the fractal dimension D = log p / log r as the ratio of the amounts of information contained in both descriptions of the fractal (real s and complex z) at each scale.
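For concreteness, the digit-reading construction can be sketched for the Koch curve, assuming its standard generator of p = 4 similarity maps with scale ratio r = 3 (so D = log 4 / log 3): reading the base-4 digits of s from coarsest to finest scale and composing the corresponding maps locates the point.

```python
import cmath

# The four similarity maps of the (assumed) standard Koch generator;
# each sends the unit segment [0, 1] to one of its four sub-segments.
w = cmath.exp(1j * cmath.pi / 3)   # rotation by 60 degrees
MAPS = [
    lambda z: z / 3,
    lambda z: 1/3 + w * z / 3,
    lambda z: 1/3 + w/3 + w.conjugate() * z / 3,
    lambda z: 2/3 + z / 3,
]

def koch_point(s, depth=20):
    """Point with curvilinear coordinate s in [0, 1): read the base-4
    digits of s, one digit per scale, and compose the matching maps."""
    digits = []
    for _ in range(depth):
        s *= 4
        d = int(s)
        s -= d
        digits.append(min(d, 3))
    z = 0
    for d in reversed(digits):     # finest scale applied innermost
        z = MAPS[d](z)
    return z

print(koch_point(0.5))   # apex of the central triangle, 0.5 + i*sqrt(3)/6
```

Here s = 0.5 has base-4 digits 2000..., so the point sits at the start of the generator's third segment (the apex); increasing `depth` refines the location at smaller scales.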
This Demonstration illustrates this construction for three standard fractal curves.
You can choose the fractal among these three classic curves (the fractal dimension is given), choose the level of recursion, and then move the red point by varying the parameter s.
The fractal is actually composed of points only, and the segments are merely conventional ways of joining the points at a finite scale. You can choose not to draw them with the "lines" checkbox,
giving the fractal the appearance of a cloud of points.
The parameter s is displayed twice:
• in black on the left as a regular number expressed in a base that depends on the chosen fractal, and
• in color on the picture, where each digit is colored according to the scale it represents.
This parametrization is important in the framework of scale relativity [1,2], in which quantum behavior is recovered by abandoning the hypothesis of a differentiable space-time. A space-time
geodesic at quantum scales becomes fractal, and s plays the role of its proper time. See [1] for a detailed study of this parametrization.
[1] L. Nottale,
Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity
, Singapore: World Scientific, 1993.
[2] L. Nottale,
Scale Relativity and Fractal Space-Time: A New Approach to Unifying Relativity and Quantum Mechanics
, London: Imperial College Press, 2011.
Brookline Village Math Tutor
...I have a master's degree in math, but have not lost sight of the difficulties encountered in elementary math. Through earning a grade of "A" in at least four or five college courses, I bring a
good understanding of the deeper principles that algebra grapples with, and I enjoy tutoring the subject ...
29 Subjects: including geometry, trigonometry, statistics, literature
...I was a GRE test-prep tutor at Kaplan. I've tutored several students in linear algebra since 2001. I first learned the subject as an undergraduate engineer at Penn State.
23 Subjects: including algebra 1, algebra 2, biology, calculus
Hello! My name is Susie and I have had many experiences with teaching and tutoring students from ages 8-20: I tutored math to 3rd grade students; taught 11th and 12th graders a business course;
taught creative writing to high school students in Ghana; and tutored an Afghan student English and writing to prepare her for her TOEFL exam. I am a graduate of Bentley University.
11 Subjects: including algebra 1, SAT math, algebra 2, Spanish
...Some have been on meds.; others not, some have been on school IEPs, some not, some have been high school students, others middle and elementary students. My tutoring work for the Lexington
public school system, run for most of the years by the Special Education department, also dealt with many s...
34 Subjects: including prealgebra, ACT Math, SAT math, English
...I also write a lot for my career in insurance. I have worked as an actuarial analyst and consultant for 15 years. I have passed all the beginning exams and VEEs.
90 Subjects: including SAT math, linear algebra, actuarial science, public speaking
Nested loops (Difficult)
I'm having trouble figuring out a problem I was given:
" Write a C++ program to compute 20 students GPA. Each student is taking 4 courses. There are 3 exams per course. Compute average of each course for 3 exams; find average of 4 courses for each
student. The GPA should be calculated based on the following:
Letter grade of "A" if average >=90
Letter grade of "B" if average >=80
Letter grade of "C" if average >=70
Letter grade of "D" if average >=60
Letter grade of "F" if average >=50 "
Isn't the entire point of the "problem" to see if you can figure this out on your own?
Even if this wasn't assigned to you by your teacher, you would need to show us what code you've written thus far, and then we could assist.
The Two-Envelope Paradox: A Complete Analysis?
Department of Philosophy
University of Arizona
Tucson, AZ 85721
[[Doug Hofstadter introduced me to the two-envelope paradox in 1988. This paper corresponds to more or less the position I came up with then. I wrote this up in 1994 after a couple of papers on the
subject appeared in Analysis. I never published it, partly because it came to seem to me that this treatment resolves only part of the paradox: it resolves the "numerical" paradox but not the
"decision-theoretic" paradox. For a more recent treatment of the decision-theoretic paradox, see The St. Petersburg Two-Envelope Paradox.]]
A wealthy eccentric places two envelopes in front of you. She tells you that both envelopes contain money, and that one contains twice as much as the other, but she does not tell you which is which.
You are allowed to choose one envelope, and to keep all the money you find inside.
This may seem innocuous, but it generates an apparent paradox. Say that you choose envelope 1, and it contains $100. In evaluating your decision, you reason that there is a 50% chance that envelope 2
contains $200, and a 50% chance that it contains $50. In retrospect, you reason, you should have taken envelope 2, as its expected value is $125. If your sponsor offered you the chance to change your
decision now, it seems that you should do so. Now, this reasoning is independent of the actual amount in envelope 1, and in fact can be carried out in advance of opening the envelope; it follows that
whatever envelope 1 contains, it would be better to choose envelope 2. But the situation with respect to the two envelopes is symmetrical, so the same reasoning tells you that whatever envelope 2
contains, you would do better to choose envelope 1. This seems contradictory. What has gone wrong?
The paradox can be expressed numerically. Let A and B be the amounts in envelope 1 and 2 respectively; their expected values are E(A) and E(B). For all n, it seems that p(B>A|A=n) = 0.5, so that E(B|
A=n) = 1.25n. It follows that E(B)=1.25E(A), and therefore that E(B) > E(A) if either expected value is greater than zero. The same reasoning shows that E(A) > E(B), but the conjunction is
impossible, and in any case E(A) = E(B) by symmetry. Again, what has gone wrong?
This problem has been discussed in the pages of Analysis by Jackson, Menzies and Oppy [2], and by Castell and Batens [1], but for reasons that will become clear I think that their analyses are
incomplete and mistaken respectively, although both contain insights that are important to the resolution of the problem. I will therefore present my own analysis of the "paradox" below.
Some distractions inessential to the problem arise from the facts that in the real world, money comes in discrete amounts (dollars and cents, pounds and pence) and that there are known limits on the
world's money supply. We can remove these distractions by stipulating that for the purposes of the problem, the amounts in the envelopes can be any positive real number.
There are a number of steps in the resolution of the paradox. The first step is to note (as do the authors mentioned above) that the amounts in the envelopes do not fall out of the sky, but must be
drawn from some probability distribution. Let the relevant probability density function be g, where the probability that the smaller amount falls between a and b is integral[a,b] g(x) dx. We can
think of this distribution as either representing the chooser's prior expectations, or as the distribution from which the actual values are drawn. I will generally write as if it is the second, but
nothing much rests on this. To fix ideas, we can imagine that our sponsor chooses a random variable Z with probability density g, and then flips a coin. If the coin comes up heads, she sets A=Z and B
=2Z; if it comes up tails, she sets A=2Z and B=Z.
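The setup is easy to simulate; here is a minimal sketch, assuming (purely for illustration) that Z is uniform on (0, 1000), confirming that the two envelopes have the same expected value, as symmetry demands:

```python
import random

random.seed(0)
N = 1_000_000
total_a = total_b = 0.0
for _ in range(N):
    z = random.uniform(0, 1000)   # assumed density g: uniform on (0, 1000)
    if random.random() < 0.5:     # the coin flip
        a, b = z, 2 * z
    else:
        a, b = 2 * z, z
    total_a += a
    total_b += b

print(round(total_a / N), round(total_b / N))   # both close to 750 = 1.5 * E(Z)
```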
Recognizing the existence of a distribution immediately shows us that the reasoning that leads to the paradox is not always valid, as Jackson et al note. For example, if the distribution is a uniform
distribution over values between 0 and 1000, with amounts over 1000 being impossible, then if A > 500, it is always a bad idea to switch. It is therefore not true that for all distributions and all
values of n, p(B>A|A=n) = 0.5. In general, E(B|A=n) will not depend only on n; it will also depend on the underlying distribution.
In their analysis, Jackson et al are satisfied with this observation, combined with the observation that limitations on the world's money supply ensure that in practice the relevant distributions
will always be bounded above and below. The paradox does not arise for bounded distributions, as we saw above. When A is a medium value, there may be equal chances that B is larger or smaller, but
when A is large B is likely to be smaller, and when A is small B is likely to be larger, so the paradox does not get off the ground.
This practical observation is an insufficient response to the mathematical paradox, however, as Castell and Batens note. Unbounded distributions can exist in principle if not in practice, and
in-principle existence is all that is needed for the paradox to have its bite. For example, it might seem that if the distribution were a uniform distribution over the real numbers, then p(B>A|A=n) =
0.5 for all n. This would seem to have paradoxical consequences for mathematics, if not for the world's money supply.
This leads to the second step in the resolution of the paradox, which is that taken by Castell and Batens. (We will see that this step is ultimately inessential to the paradox's resolution, but it is
an important intermediate point of enlightenment.) There is in fact no such thing as a uniform probability distribution over the real numbers. To see this, let g be a uniform function over the real
numbers. Then integral[k,k+1] g(x)dx is equal to some constant c for all k. If c=0, then the area under the entire curve will be zero, and if c>0, then the area under the entire curve will be
infinite, both of which contradict the requirement that the integral of a probability distribution be 1. At one point Jackson et al raise the possibility of infinitesimal probabilities, but if this
is interpreted as allowing c to be infinitesimal, the suggestion does not work any better. To see this, note that if the distribution is uniform:
integral[0, infinity] g(x) dx
= integral[0,1] g(x)dx + integral[1,2] g(x)dx + integral[2,3] g(x)dx + ...
= integral[0,1] g(x)dx + integral[2,3] g(x)dx + integral[4,5] g(x)dx + ...
= (integral[0,infinity] g(x)dx)/2
so that the overall integral must be zero or infinite. A uniform distribution over the real numbers can only be an "improper" distribution, whose overall integral is not 1.
The impossibility of a uniform probability distribution over the real numbers is reflected in the fact that every proper distribution must eventually "taper off": for all epsilon > 0, there must
exist k such that integral[k, infinity] g(x)dx < epsilon. It is very tempting to suppose that this "tapering off" supplies the resolution to the paradox, as it seems to imply that if A is near the
high end of the (proper) distribution, it will be more likely that B is smaller; perhaps sufficiently more likely to offset the paradoxical reasoning? This is the conclusion that Castell and Batens
draw. They offer a "proof" that the distribution must be improper for the paradoxical reasoning to be possible.
Unfortunately Castell and Batens' proof is mistaken, and in fact there exist proper distributions for which the paradoxical reasoning is possible. The error lies in their assumption, early in the
paper, that p(B>A|A=n) = g(n)/(g(n) + g(n/2)). This seems intuitively reasonable, but in fact p(B>A|A=n) = 2g(n)/(2g(n) + g(n/2)), which is significantly larger in general.
To see this, note that if A is in the range n +/- dx, then B is either in the range 2n +/- 2dx or in the range n/2 +/- dx/2. The probability of the first, relative to the initial distribution, is g(n)dx; the probability of the second is g(n/2)dx/2. The probabilities that B is greater or less than A therefore stand in the ratio 2g(n):g(n/2), not g(n):g(n/2), as Castell and Batens suppose.
For example, given a uniform distribution between 0 and 1000, if A is around 100, it is in fact twice as likely that B is around 200 as that B is around 50. To dispel any lingering
counterintuitiveness, note that something like this has to be the case to make up for the fact that when A > 500, B is always less than A. To find a distribution where the chances of a gain and a
loss are truly equal for many n, we should turn not to a uniform distribution but to a decreasing distribution, where g(n/2) = 2g(n) for many n. An example is the distribution g(x) = 1/x, where we
cut off the distribution between arbitrary bounds L and U, and normalize so that it has an integral of 1. This distribution will have the property that for all n such that 2L < n < U/2, p(B>A|A=n) =
0.5. To illustrate this intuitively, note that for such a decreasing distribution, the prior probability that the smaller value is between 4 and 8 is the same as the probability that it is between 8
and 16, and so on, if L and U are appropriate. Given the information that 8 < A < 16, it is equally likely that B is in the range above or below.
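The corrected 2g(n):g(n/2) ratio can be checked by simulation; here is a sketch for the uniform-on-(0, 1000) example, conditioning on A landing near 100:

```python
import random

random.seed(1)
near_200 = near_50 = 0          # counts of B > A vs. B < A given A near 100
for _ in range(1_000_000):
    z = random.uniform(0, 1000)                 # the smaller amount
    a, b = (z, 2 * z) if random.random() < 0.5 else (2 * z, z)
    if abs(a - 100) < 1:
        if b > a:
            near_200 += 1
        else:
            near_50 += 1

ratio = near_200 / near_50
print(round(ratio, 2))    # close to 2, not 1: B near 200 is twice as likely
```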
This flaw in Castell and Batens' reasoning nullifies their proof that a distribution must be improper for the paradoxical reasoning to arise, but it does not yet show that the conclusion is false. It
remains open whether there is a proper distribution for which the paradoxical reasoning is possible. The bounded distribution above will not work, as its bound will block the paradoxical reasoning in
the usual fashion; and the unbounded distribution g(x) = 1/x is improper, having an infinite integral. But this can easily be fixed, by allowing the distribution to taper off slightly faster. In
particular, the distribution g(x) = x^(-1.5), cut off below a lower bound L and normalized, allows the paradox to arise. The distribution has a finite integral, and even though for most n, p(B>A|A=n)
< 0.5, it is still the case that for all relevant n, E(B|A=n) > n. To see this, note that if n < 2L, then E(B|A=n) = 2n; and if n >= 2L, then
p(B>A|A=n) : p(B < A|A=n)
= 2g(n):g(n/2)
= 2n^(-1.5):(n/2)^(-1.5)
= 1:sqrt(2).
The expected value E(B|A=n) is (2n+sqrt(2)n/2)/(1+sqrt(2)), which is about 1.12n. The paradox therefore still arises.
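A Monte Carlo sketch (with the lower cutoff set to L = 1 and conditioning on A near n = 8) confirms the factor of roughly 1.12:

```python
import random

random.seed(2)
L = 1.0

def sample_z():
    # inverse-CDF sampling from g(x) proportional to x^(-1.5) on [L, infinity):
    # F(x) = 1 - sqrt(L/x), so x = L / (1 - u)^2
    u = random.random()
    return L / (1.0 - u) ** 2

n, tol = 8.0, 0.2
ratios = []
for _ in range(1_000_000):
    z = sample_z()
    a, b = (z, 2 * z) if random.random() < 0.5 else (2 * z, z)
    if abs(a - n) < tol:
        ratios.append(b / a)

mean = sum(ratios) / len(ratios)
print(round(mean, 3))   # close to (2 + sqrt(2)/2) / (1 + sqrt(2)) = 1.121...
```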
The distribution here may be unintuitive, but it is easy to illustrate a similar distribution intuitively. Take a distribution in which the probability of a value between 1 and 2 is c, the
probability of a value between 2 and 4 is just slightly less, say 0.9c, the probability of a value between 4 and 8 is 0.81c, and so on. This distribution has a finite integral, as the integral is the
sum of a decreasing geometric series; and it is sufficiently close to the case in which the probability of a value between 2^k and 2^(k+1) is constant that the paradoxical reasoning still arises.
Even though p(B < A|A=n) is now slightly less than 0.5, due to the incorporated factor of 0.9, it has decreased by a sufficiently small amount that E(B|A=n) remains greater than n. The case g(x) =
x^(-1.5) is just like this, except that the factor of 0.9 is replaced by a factor of 1/sqrt(2), which is around 0.7.
The paradox has therefore not yet been vanquished; there are perfectly proper distributions for which the paradoxical reasoning still applies. This leads us to the third and final step in the
resolution of the paradox. Note that although the distributions above have finite integrals, as a probability distribution should, they have infinite expected value. The expected value of a
distribution is integral[0,infinity] xg(x)dx. When g(x) = x^(-1.5) (cut off below L), the expected value is integral[L,infinity] x^(-0.5) dx, which is infinite. But if the expected value of the
distribution is infinite, there is no paradox! There is no contradiction between the facts that E(B) = 1.12 E(A) and E(A) = 1.12 E(B) if both E(A) and E(B) are infinite. Rather, we have just another
example of a familiar phenomenon, the strange behavior of infinity.[*]
*[[[Castell and Batens note some similar consequences of infinite expected values in another context, in which the distribution is over a countable set. They say that infinite expected values are
"absurd", but I do not see any mathematical absurdity.]]]
To fully resolve the paradox, we need only demonstrate that for distributions with finite expected value, the paradoxical situation does not arise. To do this, we need to precisely state the
conditions expressing the paradoxical situation. In its strongest form, the paradoxical situation arises when E(B|A=n) > n for all n. However, it arises more generally whenever reasoning from B's
dependence on A leads us to the conclusion that there is expected gain on average (rather than all the time) by switching A for B. This will hold whenever E(K-A) > 0, where K is the random variable
derived from A by the transformation x -> E(B|A=x). We therefore need to show that when E(A) is finite, E(K-A) = 0.
Let h be the density function of A. Then h(x) = (g(x) + g(x/2)/2)/2 = (2g(x)+g(x/2))/4. (Note that h != g, as g is the density function of the smaller value.) Then
E(K-A) = integral[0,infinity] h(x) (E(B|A=x) - x) dx
= integral[0,infinity] (2g(x) + g(x/2))/4 . ((2x.2g(x) + x/2.g(x/2))/(2g(x)+g(x/2)) - x) dx
= integral[0,infinity] (2xg(x) - x/2 . g(x/2))/4 dx
= (integral[0,infinity] 2xg(x)dx - integral[0,infinity] 2yg(y)dy)/4
= 0.
Note that the fourth and fifth steps above are valid only if integral[0,infinity] xg(x)dx is finite, which holds iff E(A) is finite. (If integral[0,infinity] xg(x)dx is infinite, it is possible that
integral[0,infinity] 2xg(x)-x/2.g(x/2)dx != 0, even though integral[0,infinity] 2xg(x)dx = integral[0,infinity] x/2.g(x/2) dx.)
It follows that when E(A) is finite, consideration of the dependence of B on A will not lead one to the conclusion that one should switch A for B. A corollary of the result is that when E(A) is
finite, it is impossible that E(B|A=n) > n for all n, so that the strong form of the paradox certainly cannot arise.
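The cancellation can be illustrated numerically; here is a sketch assuming an exponential density g(x) = e^(-x) for the smaller amount (which has finite mean), integrating the expected gain from switching by the midpoint rule:

```python
import math

def g(x):      # density of the smaller amount: exponential, finite mean
    return math.exp(-x)

def h(x):      # density of A, as derived above: (2g(x) + g(x/2)) / 4
    return (2 * g(x) + g(x / 2)) / 4

def k(x):      # E(B | A = x): weight 2g(x) for doubling, g(x/2) for halving
    return (2 * x * 2 * g(x) + (x / 2) * g(x / 2)) / (2 * g(x) + g(x / 2))

# E(K - A) = integral of h(x) * (k(x) - x); midpoint rule on (0, 60]
N, hi = 100_000, 60.0
dx = hi / N
total = sum(h((i + 0.5) * dx) * (k((i + 0.5) * dx) - (i + 0.5) * dx) * dx
            for i in range(N))
print(abs(total) < 1e-4)   # True: no expected gain from switching
```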
If E(A) is infinite, this result does not hold. In such a case, it is possible that E(A) = E(K) (both are infinite) but that E(K-A) > 0. Here, the "paradoxical" reasoning will indeed arise. But now
the result is no longer paradoxical; it is merely counterintuitive. It is a consequence of the fact that given infinite expectations, any given finite value will be disappointing. The situation here
is somewhat reminiscent of the classical St. Petersburg paradox: both "paradoxes" exploit random variables whose values are always finite, but whose expected values are infinite. The combination of
finite values with infinite expected values leads to counterintuitive consequences, but we cannot expect intuitive results where infinity is concerned.[*]
[1] P. Castell and D. Batens, `The Two-Envelope Paradox: The Infinite Case'. Analysis 54:46-49.
[2] F. Jackson, P. Menzies, and G. Oppy, `The Two Envelope "Paradox"', Analysis 54:43-45.
Philippe Tondeur
e-mail: tondeur@math.uiuc.edu
Department of Mathematics
University of Illinois at Urbana-Champaign
273 Altgeld Hall
1409 W. Green Street
Urbana, Illinois 61801-2975
General Information
Philippe Tondeur is a research mathematician and educator, and a consultant on mathematics, science and technology policy. His current interests include mathematics research and education; the role
of mathematics in science and society; innovation and science policy; institutional governance; and leadership development.
He retired a few years ago as Director of the Division of Mathematical Sciences at the National Science Foundation (NSF). Previously, he served as Chair of the Department of Mathematics at the
University of Illinois in Urbana-Champaign (UIUC).
He earned an Engineering degree in Zurich, and a Ph.D. degree in Mathematics from the University of Zurich. He subsequently was a Research Fellow and Lecturer at the University of Paris, at Harvard
University, the University of California at Berkeley, an Associate Professor at Wesleyan University, before joining the UIUC faculty in 1968, where he became a Full Professor in 1970.
He published approximately 100 articles and monographs, mainly on his research in differential geometry and topology, in particular the geometry of foliations and geometric applications of partial
differential equations. His bibliography lists nine books. He was continually supported by grants from the NSF from 1967 to 1990. He served as Editor and then as Managing Editor of the Illinois
Journal of Mathematics, and edited the collected works of K.T. Chen. He served on numerous committees at universities and professional societies.
Philippe Tondeur has been a Visiting Professor at the Universities of Buenos Aires, Auckland (New Zealand), Heidelberg, Rome, Santiago de Compostela, Leuven (Belgium), as well as at the
Eidgenoessische Technische Hochschule in Zurich, the Ecole Polytechnique in Paris, the Max Planck Institute for Mathematics in Bonn, Keio University in Tokyo, Tohoku University in Sendai, and
Hokkaido University in Sapporo. He has given approximately 200 invited lectures at various institutions around the world.
Philippe Tondeur has been an Invited Hour Speaker of the American Mathematical Society (AMS) in 1976, a recipient of a 1985 Award for Study in a Second Discipline (Physics) at UIUC, the 1994 William
F. Prokasy Award for Excellence in Undergraduate Teaching at UIUC, a 2002 Frederick A. Howes Commendation for Public Service from the Society of Industrial and Applied Mathematics (SIAM), and the
2008 SIAM Prize for Distinguished Service to the Profession. In 2009 he was selected in the first class of Fellows of SIAM. In 2010 he was selected a Fellow for the American Association for the
Advancement of Science (AAAS). In 2012 he was elected in the first class of Fellows of the AMS.
Since his retirement from NSF Philippe Tondeur chaired the Board of Governors of the Institute for Mathematics and its Applications at the University of Minnesota, and served on the National
Advisory Council of the Statistical and Applied Mathematical Sciences Institute at the Research Triangle Park in Raleigh, North Carolina. He also served as a member of the National Committee on
Mathematics of the U.S. National Research Council, and as a US Delegate to the General Assembly and International Congress of Mathematicians in 2006. He further served as a Trustee of the Instituto
Madrileno de Estudios Avanzados-Math (IMDEA-Math) in Madrid, on the Scientific Advisory Board of the Canadian Mathematics of Information Technology and Complex Systems Centre of Excellence, on the
Scientific Advisory Board of the Mathematics of Climate Research Network (MCRN), and as the Delegate for the Mathematics Section to the Council of the AAAS.
He served on the Science Policy Committee and other Committees of the Mathematical Association of America (MAA) and of the American Mathematical Society (AMS), as well as the Joint Policy Board of
the Mathematical Sciences in the US. He currently serves on the Committee on Science Policy of SIAM.
Math Forum Discussions - Re: One other question about using Auq avg slope as a constant when computing the other two regressions
Date: Dec 5, 2012 1:36 AM
Author: Halitsky
Subject: Re: One other question about using Auq avg slope as a constant when computing the other two regressions
You asked:
> What is the constant supposed to do or be?
> How does it fit into the regression equation?
Here's how Ivo Welch describes the use of the constant:
"Most of the time, the user will provide a constant ('1') as x(0) for
each observation in order to allow the regression package to fit an intercept."
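To illustrate what the constant accomplishes (a minimal sketch, not Welch's or Gentleman's code): with x(0) = 1 supplied for every observation, ordinary least squares fits an intercept as just another coefficient, here via the 2x2 normal equations:

```python
# Fit y = b0*1 + b1*x by treating the intercept as the coefficient of
# a constant regressor x(0) = 1 on every observation.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.0, 7.1, 8.9]          # roughly y = 1 + 2x

s00 = float(len(xs))               # sum of 1*1
s01 = sum(xs)                      # sum of 1*x
s11 = sum(x * x for x in xs)       # sum of x*x
t0 = sum(ys)                       # sum of 1*y
t1 = sum(x * y for x, y in zip(xs, ys))   # sum of x*y

det = s00 * s11 - s01 * s01
b0 = (t0 * s11 - t1 * s01) / det   # fitted intercept
b1 = (s00 * t1 - s01 * t0) / det   # fitted slope
print(round(b0, 2), round(b1, 2))  # -> 1.15 1.95
```

Dropping the constant column would force the fitted line through the origin, which is what the quoted remark is warning about.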
Also, his code is an implementation of the Gentleman algorithm, if
that helps you to understand what he's doing:
W. M. Gentleman, University of Waterloo, "Basic Description for Large, Sparse or Weighted Linear Least Squares Problems (Algorithm AS 75)," Applied Statistics (1974), Vol. 23, No. 3.
Advogato: Blog for amatus
If you have a unit interval and you want to know how many ways you can divide it such that division marks are at m/2^n and the divisions are sorted from smallest to largest, use this recurrence
a[0] = 1
a[n] = 1 + sum_{i=0}^{n-1} 2^(i(n-i-1)) a[i]
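The recurrence is straightforward to evaluate with memoization; the first few terms below follow mechanically from the definition:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # a[0] = 1;  a[n] = 1 + sum_{i=0}^{n-1} 2^(i*(n-i-1)) * a[i]
    if n == 0:
        return 1
    return 1 + sum(2 ** (i * (n - i - 1)) * a(i) for i in range(n))

print([a(n) for n in range(5)])   # -> [1, 2, 4, 10, 36]
```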
Syndicated 2012-12-06 03:01:38 from David Barksdale - Google+ Posts
Numerical simulation of cooling energy consumption in connection with thermostat operation mode and comfort requirements for the Athens buildings
C. Tzivanidis, K.A. Antonopoulos and F. Gioti
Applied Energy, 2011, vol. 88, issue 8, pages 2871-2884
Abstract: A model and a corresponding numerical procedure, based on the finite-difference method, have been developed for the prediction of buildings thermal behavior under the influence of all
possible thermal loads and the "guidance" of cooling control system in conjunction with thermal comfort requirements. Using the developed procedure analyses have been conducted concerning the effects
of thermostat operation mode and cooling power in terms of the time, on the total cooling energy consumption for the ideal space cooling, as well as for various usually encountered real cases, thus
trying to find ways to reduce cooling energy consumption. The results lead to suggestions for energy savings up to 10%. Extensive comparisons between the ideal and various real cooling modes showed
small differences in the 24-h cooling energy consumption. Because of the above finding, our detailed ideal cooling mode predictions gain considerable value and can be considered as a basis for
comparison with real cases. They may also provide a good estimate of energy savings obtained if we decide to increase thermostat set point temperature. Therefore, as the extent of cooling energy
saving is a priori known, one can decide if (and how much) it is worthy to increase thermostat set point temperature at the expense of thermal comfort. All results of the study, which refer to the
Typical Athens Buildings during the typical Athens summer day, under the usual ranges of thermal loads, may be applicable to other regions with similar conditions.
Keywords: Cooling energy saving; Air-conditioning control; Thermostat operation; Thermal comfort; Athens buildings
Date: 2011
Persistent link: http://EconPapers.repec.org/RePEc:eee:appene:v:88:y:2011:i:8:p:2871-2884
how to prove this version of L'Hopital's rule
March 14th 2011, 09:41 AM #1
Please prove the following version of L'Hopital's rule.
Suppose that $f,g : (a,b) \rightarrow \mathbb{R}$
are differentiable with
$g(x)$ and $g'(x)$ never equal to zero.
Suppose also that:
$\lim_{x \rightarrow b^-} f(x) = 0$
$\lim_{x \rightarrow b^-} g(x) = 0$
$\lim_{x \rightarrow b^-} \frac{f'(x)}{g'(x)} = \infty$
Show that:
$\lim_{x \rightarrow b^-} \frac{f(x)}{g(x)} = \infty$
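One standard route is the Cauchy mean value theorem. This is only a sketch, and it leaves one detail to check: $g(x) \ne g(y)$ for distinct points, which follows from $g' \ne 0$ via Rolle's theorem.

```latex
\textbf{Sketch.} Fix $M > 0$. Since $f'/g' \to \infty$ as $x \to b^-$,
choose $\delta > 0$ so that $f'(c)/g'(c) > M$ for all
$c \in (b-\delta, b)$. For any $x < y$ in $(b-\delta, b)$, the Cauchy
mean value theorem gives some $c \in (x, y)$ with
\[
  \frac{f(x) - f(y)}{g(x) - g(y)} = \frac{f'(c)}{g'(c)} > M .
\]
Holding $x$ fixed and letting $y \to b^-$, the hypotheses
$f(y) \to 0$ and $g(y) \to 0$ yield
\[
  \frac{f(x)}{g(x)} = \lim_{y \to b^-} \frac{f(x) - f(y)}{g(x) - g(y)}
  \ge M \qquad \text{for all } x \in (b-\delta, b).
\]
Since $M$ was arbitrary, $f(x)/g(x) \to \infty$ as $x \to b^-$.
```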
A variation on Karmarkar’s algorithm for solving linear programming problems.
(English) Zbl 0626.90052
The algorithm described in the paper uses the ideas of the original Karmarkar algorithm, but differs in some respects. First, the minimum value of the objective function need not be known in advance. The algorithm solves the standard form of a linear programming problem under the requirement that the primal and dual problems have no degenerate basic feasible solutions. It starts from a feasible point $y\in {R}^{n}$ such that ${y}_{i}>0$, $i=1$, 2,..., n and produces a monotone decreasing sequence of values of the objective function. The main difference between this
algorithm and that given by R. J. Vanderbei, M. S. Meketon and B. A. Freedman [Algorithmica 1, 395-407 (1986; Zbl 0626.90056)] is that the constraint $x\ge 0$ is replaced by
$\sum _{i=1}^{n}\frac{{\left({x}_{i}-{y}_{i}\right)}^{2}}{{y}_{i}^{2}}<{R}^{2},$
where $0<R<1$. The convergence of the algorithm is proved and a numerical method for finding a starting point is shown.
Finally, in the absence of degeneracy, it is proved that the algorithm converges to an optimal basic feasible solution, with the nonbasic variables converging monotonically to zero.
[1] T.M. Cavalier and A.L. Soyster, ”Some computational experience and a modification of the Karmarkar algorithm,” The Pennsylvania State University, ISME Working Paper 85-105, 1985.
[2] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin and M.H. Wright, ”On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method,” Manuscript,
Stanford University, 1985.
[3] N. Karmarkar, ”A new polynomial-time algorithm for linear programming,” Proceedings of the 16th Annual ACM Symposium on Theory of Computing, 1984, pp. 302–311.
[4] R.J. Vanderbei, M.S. Meketon and B.A. Freedman, ”A modification of Karmarkar’s linear programming algorithm,” Manuscript, AT & T Bell Laboratories, Holmdel, New Jersey, June 1985.
Divisibility Criteria
Divisibility Criteria for Prime Numbers
When expressing a number as a product of primes, it is helpful to have criteria for deciding if a particular prime will go into the number. We will present criteria for telling if 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37 go into a number, as well as a procedure for generalizing the techniques to all the primes except 2 and 5.
2: If the last digit is 0, 2, 4, 6, or 8, then 2 goes into the number
3: If the sum of the digits is divisible by 3, then the number is divisible by 3. This also works for 9. If the sum of the digits is divisible by 9 then the number is divisible by 9. However, this
method will not work for 6. 36 is divisible by 6 even though the sum of the digits, 9, is not, and 15 is not divisible by 6 even though the sum of the digits is 6.
5: If the last digit is 0 or 5, then the number is divisible by 5.
7: Split off the last digit. Double it and subtract that from the number that is left. If the result is divisible by 7 then so is the original number. Since the result will have one fewer digits than
the original number, this will be an easier problem. You may need to repeat the process several times to get a number small enough to be able to tell easily if 7 goes into the result. This method
also works for 3. If the result is divisible by 3 then so is the original number, but the sum of the digits method is easier for 3.
11: Take the sum of every other digit, then take the sum of the remaining digits. If the difference is divisible by 11 (and 0 = 0 × 11 is divisible by 11), then the number is divisible by 11.
13: There are two methods. The first is like the method for 7: split off the last digit, multiply it by 9, and subtract the product from the rest of the number. If the result is divisible by 13, then so is the original number. This method also works for 7.
The second method uses addition instead: split off the last digit, multiply it by 4, and add the product to the rest of the number. If the result is divisible by 13, then so is the original number. The second method is probably easier than the first.
17: Split off the last digit, multiply by 5 and subtract the product from the rest of the number. If the result is divisible by 17 then so is the original number.
19: Split off the last digit, multiply it by 2 and add the product to the rest of the number. If the result is divisible by 19, then so is the original number.
23: Split off the last digit, multiply by 7 and add the product to the number that is left. If the result is divisible by 23 then so is the original number.
29: Split off the last digit, multiply by 3 and add the product to the number that is left. If the result is divisible by 29 then so is the original number.
31: Split off the last digit, multiply by 3 and subtract the product from the number that is left. If the result is divisible by 31 then so is the original number.
37: Split off the last digit, multiply by 11, and subtract the product from the number that is left. if the result is divisible by 37 then so is the original number.
While these methods can be generalized to work for any prime number except 2 and 5, at this point it becomes easier to simply divide the number by the prime in question. Moreover, at some point it will be necessary to actually divide the number by the prime to see if the quotient has gotten to be smaller than the divisor. If no prime works by the time the quotient gets to be smaller than the divisor, the number is prime.
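The split-off-the-last-digit rules above all follow one pattern: write n = 10a + b and replace n with a ± m·b, where the multiplier m is chosen so that ±10m ≡ 1 (mod p). Here is a small Python sketch; the names `MULTIPLIERS` and `divisible` are invented, and the table is transcribed from the rules above.

```python
# Multiplier table for the "split off the last digit" tests described above:
# for n = 10*a + b, replace n with a + sign*m*b; n is divisible by p exactly
# when the final small number is.
MULTIPLIERS = {
    7: (2, -1),    # double the last digit and subtract
    13: (4, +1),   # second method for 13: multiply by 4 and add
    17: (5, -1),
    19: (2, +1),
    23: (7, +1),
    29: (3, +1),
    31: (3, -1),
    37: (11, -1),
}

def divisible(n, p):
    """Test divisibility of n by p using the digit-splitting rule for p."""
    m, sign = MULTIPLIERS[p]
    n = abs(n)
    while n >= 10 * p:            # shrink until the answer is easy to see
        a, b = divmod(n, 10)      # split off the last digit b
        n = abs(a + sign * m * b)
    return n % p == 0
```

For example, `divisible(1001, 7)` and `divisible(1001, 13)` both hold, since 1001 = 7 · 11 · 13.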
Sample size for ratio of variances
May 14th 2009, 11:57 AM #1
Want to find the sample size required for alpha=0.05 and beta =0.05 (power 95%) in testing the equality of two normal population variances, when actually one variance is 25.5 times larger than
the other. The right tail of the F distribution is chosen for the rejection region.
I believe this implies
Ho: $\sigma_1^2 = \sigma_2^2$, i.e. $\sigma_1^2/\sigma_2^2 = 1$
Ha: $\sigma_1^2 > \sigma_2^2$, i.e. $\sigma_1^2/\sigma_2^2 > 1$ *Since we are told using the upper tail.
The true situation must be $\sigma_1^2 = 25.5\,\sigma_2^2$
1) Do I infer correctly that since we are using the upper tail of the F that the sign of Ha is > ?
Now, how to get the sample size I am at a loss.
Any hints?
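One way to get at the sample size without special software is simulation. The sketch below assumes equal group sizes n and a one-sided F test, and all names are invented; it estimates the critical value and the power by Monte Carlo. As an analytic cross-check: with α = β and equal degrees of freedom, F_β(ν, ν) = 1/F_{1−α}(ν, ν), so the power condition reduces to F_{1−α}(n−1, n−1) ≤ √25.5 ≈ 5.05; standard F tables give F_{0.95}(5, 5) ≈ 5.05, suggesting n = 6 per group, which is worth verifying against your own table.

```python
import random

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def f_stat(n, rng):
    # variance ratio of two independent samples of size n from N(0, 1):
    # distributed as F(n-1, n-1)
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return sample_variance(a) / sample_variance(b)

def power(n, ratio, alpha=0.05, sims=20000, seed=1):
    rng = random.Random(seed)
    # critical value: simulated (1 - alpha) quantile of F(n-1, n-1)
    null = sorted(f_stat(n, rng) for _ in range(sims))
    crit = null[int((1 - alpha) * sims)]
    # under the alternative sigma1^2 = ratio * sigma2^2, the observed
    # variance ratio is distributed as ratio * F(n-1, n-1)
    return sum(ratio * f_stat(n, rng) > crit for _ in range(sims)) / sims
```

Scanning n upward until `power(n, 25.5)` first reaches 0.95 then gives the required sample size; smaller n (say n = 4) fall clearly short, while n = 8 is comfortably above.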
Solving trig functions! urgent please help!
Number of results: 96,878
Solving trig functions! urgent please help!
1. 2sinx tanx + tanx - 2sinx - 1 = 0
2. 6tan^2x - 4sin^2x = 1
Both equations are for 0 < x < 2pi. Thank you!
Tuesday, December 20, 2011 at 11:57pm by Sally
Solving trig functions! urgent please help!
6 (sin^2/cos^2) -4 sin^2 - 1 = 0 6 s^2 - 4 s^2c^2 - c^2 = 0 6 s^2 -4s^2(1-s^2) -(1-s^2) = 0 6s^2 -4s^2 +4s^4 +s^2 - 1 = 0 4 s^4 +3 s^2 -1 = 0 (4s^2-1)(s^2+1) = 0 sin^2x = 1/4 or sin^2x = -1
(imaginary root) sin x = +1/2 or sin x = -1/2 You can take it from there.
Tuesday, December 20, 2011 at 11:57pm by Damon
Solving trig functions! urgent please help!
2 sin^2x/cosx + sinx/cosx - 2 sinx - 1 = 0 2 sin^2x + sinx -2 sinxcosx - cosx = 0 2sinx(sinx-cosx) = -(sinx-cosx) sinx = -1/2 x = 210 degrees(7pi/6) or 330degrees(11pi/6)
Tuesday, December 20, 2011 at 11:57pm by Damon
Solving Trig Functions: How Do You Solve For Sin 7.5 Using Half-Angle and Double-Angle Identities?
Wednesday, September 22, 2010 at 10:23pm by Spy[c]
Math Urgent help please
As always, draw a diagram. Now, recalling your trig functions, it is easy to see that h/(15+5) = tan 50 now, just solve for h
Monday, February 10, 2014 at 10:50am by Steve
Solving trig functions! urgent please help!
In #1, Damon lost 2 answers by just dropping the (sinx-cosx) from 2sinx(sinx-cosx) = -(sinx-cosx) 2sinx(sinx-cosx) + (sinx-cosx)= 0 (sinx-cosx)(2sinx + 1) = 0 sinx = -1/2 etc, (see above) or sinx -
cosx = 0 sinx = cosx sinx/cosx =1 tanx = 1 x = 45 or 225 (π/4 or 5π...
Tuesday, December 20, 2011 at 11:57pm by Reiny
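The roots worked out in this thread are easy to sanity-check numerically. A quick sketch follows; the angle lists spell out the solutions implied above (for equation 2, Damon's sin x = ±1/2 gives 30°, 150°, 210°, 330°), and the function names are invented.

```python
import math

# Equation 1: 2 sin x tan x + tan x - 2 sin x - 1 = 0, which factors as
# (2 sin x + 1)(tan x - 1) = 0, giving x = 45, 210, 225, 330 degrees.
def eq1(x):
    return 2 * math.sin(x) * math.tan(x) + math.tan(x) - 2 * math.sin(x) - 1

# Equation 2: 6 tan^2 x - 4 sin^2 x = 1, with sin x = +-1/2,
# giving x = 30, 150, 210, 330 degrees.
def eq2(x):
    return 6 * math.tan(x) ** 2 - 4 * math.sin(x) ** 2 - 1

roots1 = [math.radians(d) for d in (45, 210, 225, 330)]
roots2 = [math.radians(d) for d in (30, 150, 210, 330)]
```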
How do you solve cos(2arcsin 1/4) using inverse trig. functions??!! PLEASE HELP ME!
Monday, May 9, 2011 at 11:20am by Tor
I don't know to find the derivative of the following trig functions. Please help. f(x)= cotx h(x)= secx i(x)= cscx
Thursday, January 19, 2012 at 8:45am by Claire
Subtract 2 pi from 7pi/3. The trig functions of 7 pi/3 will be the same values as the trig functions of pi/3, which is 60 degrees. The cosine of pi/3 is 1/2. That makes the secant 3. The tangent is
sqrt3 You should be able to look up or figure out the other funtions of that ...
Tuesday, January 12, 2010 at 9:50pm by drwls
I don't know to find the derivative of the following trig functions. Please help. f(x)= cotx h(x0= secx i(x)= cscx
Thursday, January 19, 2012 at 8:45am by Claire
Try reviewing your trig functions and post an attempt at some of these. We'll be happy to check your work. They're all pretty straightforward applications of the definitions of trig functions and
Pythagorean Theorem.
Monday, March 5, 2012 at 1:06pm by Steve
graphing trig functions
for graphing basic trig functions such as y=2sinx or y=1/3cosx, how do you know what the points are to graph with out using a calculator?
Friday, May 18, 2012 at 12:31am by bkue
The next two steps in solving this trig identity question are ??? 1/csctheta x costheta x sintheta x sectheta = sinsqtheta arghhh need help please
Saturday, December 17, 2011 at 7:43pm by Don
urgent urgent algebra
find the composite function fog f(X) = 1/X-5,g(x)-6/x question 2 find inverse, domain range and asymatotes of each function f(x)=3+e^4-x can some one help me please been stuck on these for 2 days
need help solving thank you
Tuesday, March 19, 2013 at 11:44pm by zachary
graphing trig functions
You can pick as many x points as you want. The more you plot, the easier it is to plot a smopoth and accurate curve. For the y values that go with each x, you need a calculator, a special slide rule
or a table of trig functions. Hardly anyone uses tables or slide rules anymore.
Friday, May 18, 2012 at 12:31am by drwls
Math grade 12 Advanced Functions
3. Using proper grammar 1 point 3. What is the value of x + y? (5 points) Rubrics: 1. Writing the correct equation(s) 1 point 2. Showing steps 1 point 3. Solving for x 1 point 4. Solving for y 1
point 5. Solving for x + y 1 point
Friday, January 25, 2013 at 11:42am by Anonymous
could someone please refresh my memory of the basic trig functions? ex) cos^2 + sin^2 = 1? That is a trig identity. http://www.sosmath.com/trig/Trig5/trig5/trig5.html
Thursday, April 12, 2007 at 10:18pm by raychael
Math- Urgent
Solving Equations -7+b/15=-6 What does b equal? Please show all work.
Monday, August 12, 2013 at 8:49pm by j
Math- Urgent
Solving Equations 3=r/16+4 What does r equal? Please show all work.
Monday, August 12, 2013 at 8:59pm by j
Trig (inverse functions)
I found my mistake, please disregard.
Monday, April 21, 2008 at 3:16pm by Dennis
What are we doing? Solving? Or proving it is an identity? BTW, you have to put the x for all the trig functions; that is, cosx cotx = cscx - sinx. Let's see if it is an identity... LS = cosx (cosx/sinx) = cos^2 x/sinx. RS = 1/sinx - sinx = (1 - sin^2 x)/sinx = cos^2 x/sinx = LS ...
Tuesday, December 4, 2012 at 11:27pm by Reiny
write a product of two trig functions that equals 1.
Sunday, November 6, 2011 at 9:38pm by Dani
any calculus text will list the trig functions and their derivatives.
Thursday, January 19, 2012 at 8:45am by Steve
csc thea= -2, quadrant 4. what are the six trig functions?
Monday, May 6, 2013 at 4:17pm by Shay
how do i do this? give the signs of the six trig functions for each angle: 74 degrees?
Wednesday, December 3, 2008 at 6:59pm by Tay
graphing trig functions
I see the equation of a straight line, not a trig function
Sunday, May 20, 2012 at 10:15pm by Reiny
surely you have learned the basic definitions of the trig functions, tan = y/x = 4/3
Sunday, January 20, 2013 at 5:17pm by Reiny
Calculus is a branch of mathematics that lets you solve equatiuns that involve the rate of change of functions, or the areas under curves. In the course of solving these equations, often many new
functions are introduced.
Sunday, October 24, 2010 at 10:49pm by drwls
What kind of reflections are the following trig functions? y = 3cos(x-1) y = sin(-3x+3) y = -2sin(x)-4
Sunday, April 20, 2008 at 10:27pm by Joshua
What kind of reflections are the following trig functions? y = 3cos(x-1) y = sin(-3x+3) y = -2sin(x)-4
Monday, April 21, 2008 at 12:33am by Joshua
What is the shift along x of the following trig functions? y = (3sinx/2) + 4 y = -2sin(x)-4 y = sin(-3x + 3)
Monday, April 21, 2008 at 12:34am by Amandeep
find the exact values of the six trig functions of angle... sin(-315degrees) how do i do this?
Thursday, September 1, 2011 at 9:16pm by ALISON
Does anybody know a good website that will hlp me with finding the six trig functions of a point NOT on the unit cricle?
Wednesday, September 16, 2009 at 7:59pm by Anonymous
use the unit circle to find the exact value for all the trig functions of =2/3.
Monday, February 1, 2010 at 9:54pm by Anonymous
find the 5 other trig functions if cos(theta) = square root of 2/2 and cotangent is less than 0
Wednesday, June 9, 2010 at 7:04pm by kelly
Do you mean find the six trig functions (sin, cos, tan, csc, sec, and cot)?
Wednesday, December 3, 2008 at 6:59pm by Anonymous1
Let f and g be two invertible functions such that f^-1(x)=5/x+4 and g(x)=4(x-2). Find f(g(5)). Show your steps please so I can see how to do it. Thank you! :)
Friday, October 12, 2012 at 1:44am by Bailey
If x = sin(theta), these equations are identities, expressing other trig functions in terms of sin hey arise from the identity sin^2(theta) + cos^2(theta) = 1 As for real life and calculus, get
familiar with the trig functions, and by the time you have mastered the techniques...
Monday, February 20, 2012 at 9:22pm by Steve
The displacement (in centimeters) of a particle moving back and forth along a straight line is given by the equation of motion s= 2 sin pi t + 3 cos pi t, where t is measured in seconds. (a) Find the
average velocity during each time period: (i)[1,2] (ii) [1,1.1] (iii) [1, 1....
Tuesday, September 23, 2008 at 8:40pm by Sarah
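For the displacement question above, each average velocity is just a difference quotient of s. A quick sketch (function names invented; the later, truncated intervals in the question are read here as progressively shrinking toward t = 1):

```python
import math

def s(t):
    # displacement s(t) = 2 sin(pi t) + 3 cos(pi t), in centimeters
    return 2 * math.sin(math.pi * t) + 3 * math.cos(math.pi * t)

def avg_velocity(t0, t1):
    return (s(t1) - s(t0)) / (t1 - t0)

# As the interval [1, t1] shrinks, the average velocity approaches the
# instantaneous velocity s'(1) = 2*pi*cos(pi) - 3*pi*sin(pi) = -2*pi.
intervals = [(1, 2), (1, 1.1), (1, 1.01), (1, 1.001)]
averages = [avg_velocity(a, b) for a, b in intervals]
```

On [1, 2] the average is (s(2) − s(1))/1 = (3 − (−3)) = 6 cm/s, while the shrinking intervals home in on −2π ≈ −6.28 cm/s.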
Did you notice that trig co-functions come in pairs: sine ---cosine secant --- cosecant tangent --- cotangent so csc(4π/11) = ......
Wednesday, May 11, 2011 at 4:53am by Reiny
works for me. the co-functions are the functions of the complementary angles. So, by definition, tan(π/2-x) = cot(x). Your proof works as well, though.
Sunday, July 14, 2013 at 8:48pm by Steve
sci 241
could any one PLEASE help me get started on this??? I DO NOT UNDERSTAND IT!!!!!!! any advicer or help PLEASE URGENT Write a 1,400- to 1,750-word paper in APA format outlining your healthy eating
plan. Be sure to discuss: o Your current eating habits (as documented in the Food ...
Saturday, May 30, 2009 at 6:38pm by Brittney......URGENT PLEASE HELP
Given that sin x = - 2/2 and that cos x is negative, find the other functions of x and the value of x. can you explain how this problem can be solved please.
Sunday, January 30, 2011 at 12:49am by Anonymous
he graph of the velocity of a mass attached to a horizontal spring on a horizontal frictionless surface as a function of time is shown below. The numerical value of V is 5.48 m/s, and the numerical
value of t0 is 8.97 s. a) What is the amplitude of the motion in m? b) What is ...
Tuesday, November 20, 2012 at 11:47pm by Jalopy
Find values of all six trig functions if sin(theta)= 4/5 and theta is in the second quadrant.
Tuesday, August 23, 2011 at 2:47pm by Lucy
Monday, December 17, 2012 at 9:00pm by Gabby
find the exact values of the six trig functions of theta equals six pi divided by 8
Monday, October 17, 2011 at 9:01pm by don
Please explain inverse functions. i.e. find angle if Sin x = 2.3 divided by 8.15. Answer 16.30 degrees. How did they do that??? Tks
Thursday, December 4, 2008 at 3:50pm by John
by test time you need to know the "standard" angles with easy-to-recall trig functions 0,π/6,π/4,π/3,π/2 If you know those angles and their trig ratios, you will recall that tan π/3 = √3 Now, recall
the bit about principal values of inverse trig ...
Saturday, May 18, 2013 at 9:37am by Steve
If cos 0 = - 1/2 and tan 0 > 0, find the quadrant that contains the terminal side of 0, and then find the exact values of the other five trig functions of 0.
Saturday, May 14, 2011 at 11:40am by shawn
Well, why not follow that hint sin(-15) = sin(30 - 45) = sin30cos45 - cos30sin45 = .... You should know the trig functions of those special angles.
Tuesday, September 17, 2013 at 11:23pm by Reiny
how do i find six trig functions for cos(pi over 2-x)=3/5, cos x=4/5?
Tuesday, October 19, 2010 at 11:28pm by Anonymous
1.Find the domain f(x)=(x^(2)-9) 2. Find all six trig functions for (-3,4)
Wednesday, June 8, 2011 at 11:27pm by Mike
points do not have trig functions. Angles have trig functions. http://id.mind.net/~zona/mmts/trigonometryRealms/TrigFuncPointDef/TrigFuncPointDefinitions.html
Wednesday, September 16, 2009 at 7:59pm by bobpursley
a. Solve a 9 = 20 b. Solve b 9 > 20 c. How is solving the equation in (a) similar to solving the inequality in (b)? d. How are the solutions different? I dont know how to do this i forgot ... could u
help (Steve, or Mrs.Sue, or Writeteacher plz help)
Friday, January 18, 2013 at 5:42pm by Gabby
Advance Functions
Solve: 12+4|6x−1|<4 5−6|x+7|less than or equal to 17 ...can you please explain to me.. i know that it's like solving algebraically but one of them has no solution..
Saturday, February 16, 2013 at 5:23pm by jc
pre calc
since you are asking about trig functions, it would be logical to assume that you can find the definitions. csc A = 1/sinA if that's difficult, please review your algebra I.
Thursday, January 12, 2012 at 8:02pm by Steve
Math - Trig
Looks like you need to review the basic trig functions and draw useful diagrams. #1 h/15 = tan 46.48 #2 h/17.2 = tan 73.5
Wednesday, December 4, 2013 at 12:11am by Steve
URGENT, Please answer this question: give the value of: Tan(Arctan x+1/x-1+Arctan x-1/x)
Saturday, June 11, 2011 at 10:21am by Mike
Find the exact values of the other trig functions of theta given sec theta=7 and sin theta is less than 0.
Sunday, May 8, 2011 at 12:35am by Nick
Math Trig
Find the values of the six trigonometric functions of an angle in standard position if the point with coordinates (12, 5) lies on its terminal side? Please explain how!
Tuesday, October 16, 2012 at 9:56am by Laytin
yes, that is what I said. Cot 45 = 1/tan 45 = 1 I also did the other trig functions of 45 because I am sure you will need them.
Monday, May 19, 2008 at 6:32pm by Damon
Quadratic Functions
I need help solving this equation: y=-2x^+3 And you do need to graph this equation so can someone please help me?
Monday, April 7, 2008 at 8:10pm by Megan
surely your class materials include graphs of the six trig functions . . . besides, we can't post pictures here. do a web search on secant trig function and you will find many pictures of the graph
Monday, January 23, 2012 at 10:17am by Steve
math(Please please help!!!)
Solving log functions 1) log x + log(x-6) = log 7
Saturday, February 6, 2010 at 8:53am by Hannah
Trig Help (URGENT)
Solve over the indicated interval: 2sin^2x=sqrt2sinx , [-180,180) don't how to get x by itself please help and thank you
Wednesday, February 22, 2012 at 10:25pm by Anonymous
determine all trig functions of theta using the given info, state the answers correct to the nearest hundredth. cos (theta) = -5/13; tan (theta) < 0
Saturday, May 14, 2011 at 11:40am by anna
Math - Solving Trig Equations
Start by recalling the most important identity. My math teacher calls this "the #1 Identity." sin^2(x) + cos^2(x) = 1 We want to simplify our trig equation by writing everything in terms of sine.
Let's solve the #1 Identity for cos^2(x) because we have that in our trig ...
Wednesday, November 21, 2007 at 6:29pm by Michael
GEOGRAPHY (URGENT!!!)
I NEED HELP!!! this is urgent!!! Human-Environmental Interactions: how did the people in Khartoum modify the environment? how is the people in khartoum being influence by the physical landscape/
environment? sorry my grammar sucks i have other questions but i really extremely ...
Thursday, November 18, 2010 at 7:50pm by NEED HELP URGENT!!!
spanish!!...urgent!...please help
how did the geography influence the migration and exploration of colombia? venezuela? i need websitres for this urgent...please help asap... :(
Thursday, December 6, 2007 at 6:58pm by maggie
Which of the following functions have a derivative at x=0? I. y= absolute value(x^3-3x^2) II y= square root(x^2+.01)- absolute value (x-1) III y= e^x/cosx
Thursday, October 8, 2009 at 5:54pm by Mark
If Y1 is a continuous random variable with a uniform distribution of (0,1) And Y2 is a continuous random variable with a uniform distribution of (0,Y1) Find the joint distribution density function of
the two variables. Obviously, we know the marginal density functions of each ...
Sunday, November 15, 2009 at 10:28pm by Sean
We're learning about different kinds of functions and I don't really understand the difference between rational and algebraic functions. I know that rational functions are functions that are a ratio
of two polynomials, and algebraic functions are any functions that can be made...
Thursday, September 13, 2007 at 11:09am by Kelly
The reason they come up in trig is that any point in the plane can be located at some distance from the origin, and in some direction, given by an angle θ. All your normal trig functions can be
applied to θ.
Sunday, December 8, 2013 at 10:57pm by Steve
Use the trig Identities to find the other 5 trig functions. Problem 7.)Tan(90-x)=-3/8 8.)Csc x=-13/5 9.)Cot x=square root of 3 10.)Sin(90-x)=-.4563 11.)Sec(-x)=4 12.)Cos x=-.2351 I need HELP!
Tuesday, January 5, 2010 at 4:09pm by Jennifer
GR.10 CIVICS!!!!!URGENT
CAN SOMOENE PLEASE GIVE ME A LINK TO A PICTURE THAT HAS TO DO WITH OR IS RELATED TO ABORIGINALS GAINING THE RIGHT TO VOTE IN CANADA(1960).PLEASE! ITS URGENT.THANKS
Saturday, May 30, 2009 at 6:56pm by LALA
Math trig
Look at your trig functions d/24 = cos 25 height H/24 = sin 25 for the 2nd part, (d+2)^2 + h^2 = 24^2 ladder slid down H-h feet
Saturday, October 22, 2011 at 2:13pm by Steve
This is not trigonmetry. You may have to do some plotting of G(x). We cannot do the graphing for you. The relative extreme values can be obtained by setting the derivative equal to zero, giving 3 x^2
= 8 and solving for x Please show your work if you need further assistance.
Wednesday, November 12, 2008 at 11:12pm by drwls
sin 4x = 2sin(2x)cos(2x) = 2(2sinxcosx)(1 - 2sin^2 x) = 4sinxcosx(1 - 2sin^2 x) this is expressed in terms of trig functions of x.
Wednesday, February 17, 2010 at 4:46pm by Reiny
Most of these questions can be answered by the definition of the general trigonometric functions: y = a sin A(x-φ) a is the amplitude 2π/A = period φ = phase shift. If you can transform the
trigonometric functions into the above form and evaluate the values of a, A...
Thursday, December 17, 2009 at 12:00pm by MathMate
please show me again in trig way
Saturday, May 17, 2008 at 12:40pm by Doni --i didnt get this one can please show me the trig way a little bit more clrealy
math(please help urgent)
1) A y = x/3 line makes an angle of arctan 1/3 with the x axis. That line is in the first quadrant (and also the third quadrant). You have not specified the quadrant. tan theta = 1/3, by definition
sin theta = 1/sqrt10 (in the 1st quadrant) cos theta = 3/sqrt10 etc. 2) (a) ...
Friday, January 15, 2010 at 3:50pm by drwls
Math C30
Solving Equations Containing Circular Functions ... Solve 2cos^2=1
Wednesday, November 2, 2011 at 1:05am by don
Trigonometry is the branch of mathematics that deals with the solution of triangles through the use of the trigonometric functions sine, cosine, tangent and their reciprocals. The trig function
values derive from the ratios of the "x" and "y" values of a point on a unit circle...
Monday, March 3, 2008 at 9:37am by tchrwill
math(please help urgent)
1) Find the values (if possible) of the six trigonometric functions of o if the terminal side of o lies on the given line in the specified quadrant. y=1/3x 2)Evaluate (if possible)the sine, cosine,
and tangent of the angles without a calculator. (a) 10pie/3 (b) 17pie/3
Friday, January 15, 2010 at 3:50pm by Hannah
Trigonometry Urgent!
I am writing a paper on the 8 trigonometric identites but can't find any information on them. Please does anyone know of any websites that would have things like their history, development,
applications in ancient times, origins, etc. Please help. Urgent!!!!
Thursday, March 8, 2007 at 10:40pm by kate 316
Solving two step equations
This is rather urgent now...
Monday, September 23, 2013 at 10:44pm by Gabby
Domains of Functions
I dont understand how to do this problem, can you please help me? Given the functions f and g, determine the domain of f+g. 4. f(x)= 2x/(x-3) g(x)=3/(x+6)
Friday, September 14, 2007 at 9:43pm by Jules
Math C30
Solving Equations Containing Circular Functions : Solve 2cos sq x =1
Tuesday, November 1, 2011 at 10:00pm by don
advanced functions/precalculus
1. The function f(x) = (2x + 3)^7 is the composition of two functions, g(x) and h(x). Find at least two different pairs of functions g(x) and h(x) such that f(x) = g(h(x)). 2. Give an example of two
functions that satisfy the following conditions: - one has 2 zeros - one has ...
Wednesday, January 15, 2014 at 2:33am by Diane
trig please
the circles go to the top and there measurements i included in the pictures and there are right angles
Monday, September 14, 2009 at 6:25pm by trig please
Help with solving radical fractions.
Tuesday, September 15, 2009 at 8:05pm by Anonymous
solving radical equations... help! 5+n+1= n+4
Wednesday, December 8, 2010 at 6:35pm by bob
Trig functions
ok, except what is the difference between A and C in #1 ?
Tuesday, February 12, 2008 at 3:54pm by Reiny
Trig functions
its just the capital S in A and lowercase s in C
Tuesday, February 12, 2008 at 3:54pm by Jon
Rather Urgent math~!
Oh and we are doing "Solving Multi-Step Equations" in this lesson
Wednesday, September 25, 2013 at 2:23pm by Gabby
A car accelerates from rest at a constant rate A for some time after which it retards at a constant rate B to come to rest. If the time elapsed is T second,calculate the maximum velocity reached?
(please try solving this problem by plotting velocity time graph and if you can ...
Sunday, September 30, 2012 at 11:43am by help me out it's urgent
A car accelerates from rest at a constant rate A for some time after which it retards at a constant rate B to come to rest. If the time elapsed is T second,calculate the maximum velocity reached?
(please try solving this problem by plotting velocity time graph and if you can ...
Sunday, September 30, 2012 at 11:44am by help me out it's urgent
8.172 MIN — Minimum value of an argument list
Returns the argument with the smallest (most negative) value.
Fortran 77 and later
Elemental function
RESULT = MIN(A1, A2 [, A3, ...])
A1 The type shall be INTEGER or REAL.
A2, A3, ... An expression of the same type and kind as A1. (As a GNU extension, arguments of different kinds are permitted.)
Return value:
The return value corresponds to the minimum value among the arguments, and has the same type and kind as the first argument.
Specific names:
Name Argument Return type Standard
MIN0(A1) INTEGER(4) A1 INTEGER(4) Fortran 77 and later
AMIN0(A1) INTEGER(4) A1 REAL(4) Fortran 77 and later
MIN1(A1) REAL A1 INTEGER(4) Fortran 77 and later
AMIN1(A1) REAL(4) A1 REAL(4) Fortran 77 and later
DMIN1(A1) REAL(8) A1 REAL(8) Fortran 77 and later
See also:
MAX, MINLOC, MINVAL
Educating Mathematical Scientists: Doctoral Study and the Postdoctoral Experience in the United States

2 HISTORICAL PERSPECTIVE

The system of education evinced by mathematical sciences doctoral and postdoctoral programs in the United States has deep roots in the European system. The European emphasis on research, and, in particular, on fundamental research, has guided the development of the American system during the past half century. A brief overview of the history of the mathematical sciences in the United States suggests why this is the case. The statistical and historical data presented in this chapter and on occasion in the rest of this report are taken from the annual AMS-MAA surveys in the Notices of the American Mathematical Society (1980–1991), The Mathematical Sciences: A Report (NRC, 1968), A Century of Mathematics in America, Part I (Duren, 1988), Science and Engineering Doctorates: 1960–86 (NSF, 1988), and A Challenge of Numbers (NRC, 1990b).

THE EARLY YEARS

During the 19th century, mathematics consisted of pure mathematics, some mathematical physics (the applied mathematics of those days), and statistics. Statistics had a separate professional identity, fostered by the American Statistical Association, which was founded in 1839, and thus was the earliest domestic professional society in the mathematical sciences. However, statistics as an academic discipline was considered to be an integral part of mathematics. Europe was the center of mathematical research, and American mathematics relied mainly on European universities to train American doctoral students and on people trained abroad to staff American colleges and universities.

A base for future development was built with the establishment in 1876 of Johns Hopkins University, the first American research university, the founding of the American Mathematical Society in 1888, and the development of research programs at the University of Chicago toward the turn of the century. Respected journals were established to disseminate and stimulate mathematical research, among them the American Journal of Mathematics, in 1878, the Annals of Mathematics, in 1884, and the Transactions of the American Mathematical Society, in 1900.

During the period 1910 through 1930, the American mathematical community was small but active. Strong programs grew at Harvard and Princeton Universities, joining that at the University of Chicago. Mathematicians played a role in World War I primarily by calculating trajectory and range tables. Concern for mathematics education was growing, as attested to by the founding of the Mathematical Association of America in 1915.
With the coming of modern manufacturing methods following World War I, statistics grew into a
separate discipline of the mathematical sciences. Statistical quality control and sampling schemes, two of the major developments of the 1920s, had far-reaching consequences for industrial
development prior to World War II. However, the training ground for the statistician was still the mathematics graduate programs. In the late 1920s, the American mathematical community grew rapidly,
aided by new support from foundations and a National Research Council fellowship program funded by the Rockefeller Foundation. The top research universities established research instructorships in
mathematics, a precursor to the postdoctoral positions of a later era. These instructorships were nonrenewable term appointments for two to four years with slightly reduced teaching loads. They were
funded by the universities and increased the opportunity for postdoctoral experience at research universities. For mathematics as for other sectors of society, the Depression brought an increase in
unemployment and a decrease in salaries. The job market remained difficult during this period, and many new job seekers found only temporary employment or positions at the pre-college level. In spite
of the prevailing poor economic conditions, the 1930s were a period of growth for the mathematical sciences community: the number of doctorates awarded increased from 351 in 1920–1929 to 780 in
1930–1939. PhD production during the 1930s was fairly constant, with some 80 degrees awarded annually. About 15% of the degrees in mathematics came from three departments (at the University of
Chicago, Harvard University, and Princeton University) that were rated as “distinguished” and a further 50% from a group of about 15 “strong” departments. Opportunities for postdoctoral education
were increased with the establishment of the Institute for Advanced Study in 1932. The Institute for Mathematical Statistics, a professional society for mathematical statistics, was organized in
1935. In the United States through the 1930s, teaching was emphasized, both in the way departments conducted their business and in the way graduate students were educated. The heavy teaching load of
U.S. university and college faculty allowed little time for research. In much of Europe, however, the emphasis was on research, and faculty had a normal teaching load approximately half that of their
American counterparts. In the mid-to late 1930s, the political conditions in Nazi Germany induced a number of mathematicians at universities in German-controlled areas to emigrate to the United
States. World-renowned mathematicians such as Richard Courant, Hermann Weyl, and Hans Rademacher were among the emigrés. The U.S. mathematical community was infused with mathematical talent that
would have an effect for generations to come. The additional strain on the U.S. job market for mathematicians was noticeable but not severe. By the end of 1939, 51 mathematicians had left their posts
at German-speaking universities and come to the United States. Some were hired directly by the Institute for Advanced Study and a few universities, while others were placed in temporary positions.
Although efforts were made to avoid placing refugees in regular positions and using university funds for
their support, some domestic faculty members still reacted negatively at a time when funds were limited, teaching loads were increasing, and native-born PhDs were without jobs. THE ERA OF GROWTH
World War II provided new opportunities for mathematicians, including the newly immigrated mathematicians. World War II brought technology to weaponry. However, very few mathematicians—American or
foreign-born—had the applied skills needed for the tasks at hand. Mathematics was forced to broaden its perspective to meet war needs, and many pure mathematicians learned to do applied work.
Statistics increased in importance with the growing demand for quality control, sequential analysis, and analytical/statistical methods for solving dynamical problems such as bombing patterns. The
new disciplines of operations research and computer science were born in the war effort. By the end of World War II, the number of mathematicians who had come to the United States from countries
affected by the war totaled only 120 to 150. These immigrant mathematicians and World War II profoundly changed the culture of the American mathematical community. The immigrant mathematicians added
to the research and scholarship in the United States and became a driving force in changing the emphasis at many institutions from teaching to research. The war made new areas of research important
and brought new sources of support, greatly strengthening the ties between the government and the mathematical sciences. Government support continued after the war through the Office of Naval
Research (ONR), founded in 1947, and the National Science Foundation (NSF), founded in 1950. The new system of grants for summer research, graduate students, and conferences, as well as of peer
review for the awarding of such grants, gradually changed the atmosphere in U.S. colleges and universities. Research became an integral part of the university structure, and institutions across the
country sought to hire mathematicians capable of doing research and obtaining grants. By 1951, the number of PhDs awarded annually was already over 200 per year. The horizons of the mathematical
sciences were being widened. Applied mathematics, scientific computing, and operations research became recognized disciplines of the mathematical sciences. The Society for Industrial and Applied
Mathematics was founded in 1951, the Operations Research Society of America in 1952. By 1961, the number of PhDs awarded annually was well over 300 per year. The mathematics departments at Chicago,
Harvard, and Princeton still had distinguished programs but now were joined by departments at Columbia University, the University of Michigan, Massachusetts Institute of Technology, Stanford
University, the University of
California, Berkeley, and Yale University, which had moved up from the “strong” category. Other
programs replenished the ranks of the strong departments, bringing the number of recognized programs to about 25. Strong departments of statistics also emerged during this period, among them the
departments at the University of North Carolina, Iowa State University, and the University of California, Berkeley. The challenges facing the United States in the 1960s—to be first on the moon and
to develop the technology base for economic and military security—strongly influenced both research and PhD production in the mathematical sciences. During this period, the number of doctorates
conferred annually increased from 332 in the 1960–1961 academic year to 1070 in 1968–1969, and the number of doctoral programs went from around 200 to nearly 325. Throughout most of the 1960s,
employment opportunities were unlimited for new PhDs. This situation was fueled by the growth in the number of research departments, increased enrollment in undergraduate mathematical sciences
courses, and a new demand for mathematicians in government and industry. This was also a period of rapid growth for statistics. The number of doctoral statistics departments rose from 8 in 1950 to 17
in 1960 to 34 in 1970. The number of doctoral degrees awarded in statistics, including degrees awarded by mathematics, biology, and social science departments, increased from 110 in 1962 to 324 in
1972. THE ERA OF CONTRACTION The 1970s, however, was a time of great difficulty for many doctoral and postdoctoral programs in mathematics. During the 1970s, federal funding and employment
opportunities decreased and the service role of mathematical sciences departments increased. Enrollment in undergraduate mathematical sciences courses continued to increase while the number of
faculty positions remained level or decreased slightly. The mathematical sciences were increasingly viewed by many university and college administrators as a service discipline. Class sizes grew to
accommodate increasing undergraduate enrollment, contributing to problems in collegiate mathematics. At many institutions, more of the undergraduate teaching load was shifted to graduate teaching
assistants. This close identification of mathematics departments with teaching had serious implications as administrators looked for savings by promoting larger classes. While undergraduate
enrollment was increasing, graduate enrollment was decreasing. Graduate enrollment in the 155 PhD-granting mathematics departments declined by 17% from the fall of 1969 to the fall of 1974, while the
decrease for the top 65 departments was 25%. The number of first-year graduate students decreased by nearly 50% at the top 65 departments. During the same period, federal support for graduate studies
suffered a sharp decrease. Annual doctoral production reached a maximum of 1281 in the 1971–1972 academic year and then began to decline. The traditional mathematics programs reacted by looking
inward and restricting access to the profession. Students
were discouraged from pursuing doctoral studies in mathematics. As a result, PhD production
in the mathematical sciences dropped further and took longer to recover than in any of the other sciences or in engineering. PhD production in the mathematical sciences continued to decline through
the 1970s. By the 1979–1980 academic year, the number of new PhDs conferred had dropped to 745. However, the decline was not uniform over all areas of the mathematical sciences. The more applied
areas of the mathematical sciences reacted by creating non-academic opportunities for their new PhDs to make up for the loss of academic positions. The number of doctorates awarded in pure areas
decreased by 55%, while those in applied mathematics, statistics, and operations research remained roughly constant or increased slightly. This shift in emphasis from pure to applied areas was
reflected in the strength of statistics during this period of decline for mathematics. The number of autonomous statistics departments continued to increase, reaching 65 by 1987. In this year, there
was a total of 164 degree programs in statistics, including 47 degree programs in mathematics and mathematical sciences departments. By the 1970s, computer science had emerged as a separate
discipline with an identity distinct from that of the mathematical sciences. The growth of computer science and its establishment as a separate discipline drew students and resources from the
mathematical sciences. Mathematics departments, which originally had had to teach computer science, were often left with a relative decrease in resources and a relative increase in service teaching
when independent computer science departments were established. During the late 1970s, concerns began to surface about not only the decrease in the number of new PhDs but also the decrease in the
percentage of U.S. citizens among new PhDs and about underrepresentation of women and ethnic minorities. The percentage of mathematical sciences doctorates awarded to U.S. citizens decreased from 82%
in the 1970–1971 academic year to 74% in the 1978–1979 academic year. Growing concerns for increasing the participation of underrepresented minorities and women in the mathematical sciences were made
evident by the founding of the National Association of Mathematicians in 1969 and the founding of the Association for Women in Mathematics in 1971. Instead of increasing the employment opportunities
for mathematical sciences PhDs by making mathematicians more employable in nontraditional institutions, most of the community worked to restrict the number of PhDs to fit the small traditional
marketplace. When stability in the employment market returned at the start of the 1980s, many mathematical scientists realized that severe problems had been created by this inward-looking attitude.
Despite success in both research and advanced study, the mathematical sciences in the United States were now unable to attract sufficiently many domestic graduate students for renewal. In addition, a
serious imbalance had developed in federal funding of the mathematical sciences as compared with other sciences. One of the first reports to call attention to the imbalance in funding was the report
Renewing U.S. Mathematics: Critical Resource for the Future (NRC, 1984), which was
influential in bringing about a modest increase in federal support for the mathematical
sciences. By 1987, however, there was still nothing close to a balance with other fields. In 1987, 75% and 56% of the R&D faculty in physics and chemistry, respectively, received federal support,
whereas only 37% of the mathematical sciences R&D faculty received federal support. In that same year, 51% and 49% of the graduate students in physics and chemistry, respectively, received federal
support, whereas only 18% of the mathematical sciences graduate students received federal support. A reassessment, Renewing U.S. Mathematics: A Plan for the 1990s (NRC, 1990c), summarized the effects
of the 1984 report and argued strongly for the continued need for increasing federal funding. THE RECENT PAST In contrast to the preceding three decades of rapid change, the 1980s was a period of
stability for the mathematical sciences. In spite of some competition with computer science for students, the number of PhDs awarded annually in the mathematical sciences was roughly constant,
decreasing from 839 in the 1980–1981 academic year to a minimum of 726 in the 1984–1985 academic year and then increasing to 1061 in the 1990–1991 academic year. But the trend toward a higher
percentage of non-U.S. citizens receiving degrees accelerated sharply. In 1980–1981, 68% of the doctorates granted by U.S. institutions in the mathematical sciences went to U.S. citizens, whereas in
1990–1991, only 43% of the doctorates awarded went to U.S. citizens. Increasing undergraduate enrollments, a booming economy, and retirements provided job opportunities for almost all new PhDs in the | {"url":"http://www.nap.edu/openbook.php?record_id=1996&page=9","timestamp":"2014-04-19T18:06:31Z","content_type":null,"content_length":"54364","record_id":"<urn:uuid:3b500fbb-33be-467a-a035-df35eb5a6751>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possession, WA Math Tutor
Find a Possession, WA Math Tutor
...I graduated from Bergen Community College, NJ, in 2009 with Associate in Science degree in Engineering Science. I earned my Bachelor of Science degree in Mechanical and Aerospace Engineering
from Rutgers University (New Brunswick, NJ) in 2012. Math is my all time favorite course.
11 Subjects: including trigonometry, precalculus, algebra 1, algebra 2
Professional tutor focusing on Math/Statistics, Chemistry, Physics and Computers. Personally scored 800 on both SAT Math & SAT Math II & 787 in Chemistry prior to attending CalTech. Have
extensive IT industry experience and have been actively tutoring for 2 years.
43 Subjects: including trigonometry, linear algebra, computer science, discrete math
...Throughout high school, I took German classes and then afterwards spent a brief time in Germany. When I got back and went to Washington State University, I minored in German. My professor also
recruited me to work as her assistant for the introductory class and to teach the junior-level conversational German class.
12 Subjects: including algebra 1, algebra 2, ASVAB, prealgebra
Hi! My name is Joslynn, and I am currently a student at community college. I plan to transfer to the University of Washington in a year and double major in Bioengineering and mechanical
engineering (I plan to go into bioprinting, so that's why there's the weird combination of majors). Also, I plan to minor in math because just through prereq, I'm only 2 classes away and love
math, so why not.
12 Subjects: including calculus, physics, precalculus, SAT math
...I can help students understand the concepts behind specific problems and how those concepts fit into the big picture. And perhaps most importantly of all, I love mathematics, and I have always
enjoyed helping others learn to love it, too! I took IB History of the Americas and IB 20th Century Wo...
35 Subjects: including statistics, linear algebra, English, algebra 1 | {"url":"http://www.purplemath.com/Possession_WA_Math_tutors.php","timestamp":"2014-04-21T13:13:55Z","content_type":null,"content_length":"24202","record_id":"<urn:uuid:103f6a11-e210-4c3c-ae71-1a195a724149>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Threads in Neural Network in Train/Validation/Test
Replies: 1 Last Post: Feb 21, 2013 8:14 PM
Re: Threads in Neural Network in Train/Validation/Test
Posted: Feb 21, 2013 8:14 PM
"Subodh Paudel" <subodhpaudel@gmail.com> wrote in message <kg5upc$pgj$1@newscl01ah.mathworks.com>...
> Dear All,
> I am using MATLAB R2009a. I have a different answer for train/validation/test from two different method:
> 1) I Use:
> [net tr] = train(net,trainV.P,trainV.T,[],[],valV,testV);
> to train the network, and simulate the different train/validation/test result as:
> normTrainOutput=sim(net,trainV.P,[],[],trainV.T);
> normValidateOutput=sim(net,valV.P,[],[],valV.T);
> normTestOutput=sim(net,testV.P,[],[],testV.T);
Using norm in the output names is confusing because norm has a special meaning
(help/doc norm)
> and then i obtained MSE for training validation and test as:
> MSETrain=tr.perf(end);
> MSEValidate=tr.vperf(end);
> MSETest=tr.tperf(end);
I think if tr.stop indicates validation-minimum stopping, you should
replace end with end - tr.max_fail, or use tr.best_epoch instead.
> And from them finally R2 square value as:
> R2Train=1-NMSETrain
> R2Validate=1-NMSEValidate
> R2Test=1-NMSETest
> And the result i obtained directly from MSEtrain1=mse(normTrainOutput - tn(:XX)), that starts from training interval period i defined. And so on validation and test. Why these two values MSETrain1
and MSETrain differ?
If tr.stop indicates validation minimum stopping, then the last max_fail epochs should not be included. Find tr.best_epoch, tr.best_perf, etc.
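The distinction between the last recorded epoch and the best validation epoch can be sketched in plain Python (the lists below imitate the tr.perf/tr.vperf records with invented values; this is an illustration, not MATLAB code):

```python
# Illustrative sketch: pick performance at the best validation epoch, not
# the last recorded epoch (training continues max_fail epochs past the
# validation minimum before stopping).
vperf = [0.9, 0.5, 0.3, 0.2, 0.25, 0.3, 0.35, 0.4]    # per-epoch val MSE
perf  = [1.0, 0.6, 0.4, 0.3, 0.22, 0.18, 0.15, 0.12]  # per-epoch train MSE

best_epoch = min(range(len(vperf)), key=lambda e: vperf[e])
mse_train = perf[best_epoch]   # analogous to tr.perf(tr.best_epoch)
mse_val   = vperf[best_epoch]  # analogous to tr.vperf(tr.best_epoch)

print(best_epoch, mse_val)  # 3 0.2
```

Using the last entries instead would report the (worse) post-minimum validation error and an over-optimistic training error.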
> 2) I have R2 Train = 0.7738, R2 Validate = 0.7934 and R2 Test = 0.7926. And from the linear regression plot i obtain R train = 0.89584, R validate = 0.81805 and R Test = 0.92432. Does it mean the
R2 value of neural network is worst than linear regression model? OR the result i obtained during training = 0.89584 from regression is quite good.
If you had chosen the val minimum epoch, I would have expected
R = sqrt( R^2 )
> 3) Every times i simulate my network, my R2 values sometimes good and sometimes even worst -ve. How to make it constant, if i assume i get 27 epochs, hidden neurons =18 the best R2 value?
You get different values because of the random data division and random weight initialization. If you initialize the random number generator to the same state (e.g.,
rng(4151941)) before data division and weight initialization, you will reproduce runs.
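The same reproducibility idea in Python, as an illustrative stand-in for MATLAB's rng (the function name and sizes here are invented):

```python
import random

def init_weights(seed, n=6):
    # Fixed generator state, like MATLAB's rng(4151941), before drawing
    # the initial weights.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

w1 = init_weights(4151941)
w2 = init_weights(4151941)
print(w1 == w2)  # True: identical seeds reproduce the run exactly
```

Any source of randomness that is not re-seeded (data splits, dropout, shuffling) will still break reproducibility, so the seed must be set before all of them.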
I usually use a double loop over numH candidate values for H and Ntrials weight initialization runs to get five Ntrials X numH sized matrices for numepochs, R2trn,
R2trna, R2val and R2tst.
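The double loop over candidate H values and weight-initialization trials can be sketched as follows (a Python stand-in; net_r2 is a hypothetical placeholder for training a network and returning its test R²):

```python
import random

def net_r2(H, trial):
    # Hypothetical stand-in for: seed the generator with `trial`, train a
    # net with H hidden units, and return its test-set R^2.
    return random.Random(H * 1000 + trial).uniform(0.5, 0.95)

Hcands  = [2, 4, 8, 18]   # candidate hidden-layer sizes (numH = 4)
Ntrials = 5               # weight-initialization trials per size

# Ntrials x numH matrix of test R^2 values, one row per trial.
R2tst = [[net_r2(H, t) for H in Hcands] for t in range(Ntrials)]

# Pick the best (R^2, H) pair over all trials and candidate sizes.
best = max((R2tst[t][j], Hcands[j]) for t in range(Ntrials)
           for j in range(len(Hcands)))
print(best)
```

In practice one would collect parallel matrices for training, validation, and test R² (as in the post) and choose H from validation performance, not test.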
Search NEWSGROUP and ANSWERS for greg Ntrials (or other of my characteristic
variable names MSE00, Neq, Ntrneq, Nw, Hub, R2, R2a,...)
Hope this helps. | {"url":"http://mathforum.org/kb/message.jspa?messageID=8390307","timestamp":"2014-04-19T04:52:12Z","content_type":null,"content_length":"16947","record_id":"<urn:uuid:d5f92862-39fd-48db-b3fb-b5b4cfc100ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Machine Learning (Theory)
Unfortunately, a scheduling failure meant I missed all of AIStat and most of the learning workshop, otherwise known as Snowbird, when it’s at Snowbird.
At snowbird, the talk on Sum-Product networks by Hoifung Poon stood out to me (Pedro Domingos is a coauthor.). The basic point was that by appropriately constructing networks based on sums and
products, the normalization problem in probabilistic models is eliminated, yielding a highly tractable yet flexible representation+learning algorithm. As an algorithm, this is noticeably cleaner than
deep belief networks with a claim to being an order of magnitude faster and working better on an image completion task.
Snowbird doesn’t have real papers—just the abstract above. I look forward to seeing the paper. (added: Rodrigo points out the deep learning workshop draft.)
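The normalization claim is easy to illustrate with a toy example (my own sketch; the structure and parameters are invented, not from the abstract or paper): if every sum node mixes children over the same variables with weights summing to one, and every product node multiplies children over disjoint variables, the network's output already sums to one over all assignments.

```python
# Toy sum-product network over two binary variables (illustrative sketch).
def leaf(p, x):
    """Bernoulli leaf: returns P(X = x) for parameter p."""
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    # Product nodes: children have disjoint scopes ({X1} and {X2}).
    prod_a = leaf(0.8, x1) * leaf(0.3, x2)
    prod_b = leaf(0.2, x1) * leaf(0.6, x2)
    # Sum node: children share the same scope; weights sum to 1.
    return 0.7 * prod_a + 0.3 * prod_b

total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
print(total)  # ~1.0 up to float rounding: no partition function needed
```

This is the tractability point: evaluation, marginalization, and normalization all reduce to a single bottom-up pass.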
3 Comments to “A paper not at Snowbird”
1. What about this paper from the same authors ?
“Sum-Product Networks: A New Deep Architecture”
□ Right, thanks. I added it.
2. An extended paper and source code are now available: http://alchemy.cs.washington.edu/spn/
The source code has some nice optimization tricks not mentioned in the paper.
For example, it uses the knowledge of the specific network structure. It is then able to find the max-valued product nodes faster.
If you want to see examples of simple sum-product networks,
I put them on my blog: | {"url":"http://hunch.net/?p=1772","timestamp":"2014-04-18T10:38:57Z","content_type":null,"content_length":"31242","record_id":"<urn:uuid:60518f3b-ee06-42ed-af45-6dafc8996c63>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimum and maximum values
Let us consider a very general graphical representation of a function. The following observations can easily be made by observing the graph:
Figure 1: Graph of a function with multiple minimum and maximum values.
1: A function may have local minima (C, E, G, I) and local maxima (B, D, F, H) at more than one point.
2: It is not possible to determine the global minimum and maximum unless we know the function values for all values of x in the domain of the function. Note that the graph above can be extended beyond A.
3: A local minimum at one point (E) can be greater than local maxima at other points (B and H).
4: If a function is continuous in an interval, then minima and maxima occur alternately, as in the pairs (B,C), (C,D), (D,E), (E,F), (F,G), (G,H), (H,I).
5: A function cannot have a minimum or maximum at points where it is not defined. Consider a rational function, which is not defined at x=1:

f(x) = 1/(x − 1); x ≠ 1
Similarly, the function below is not defined at x=0:

f(x) = 1 for x > 0; f(x) = −1 for x < 0

Figure 2: The function is not defined at x=0.
A minimum or maximum of a function cannot occur at points where the function is not defined, because there is no function value corresponding to undefined points. We should understand that undefined points or intervals are not part of the domain, and thus not part of the function definition. On the other hand, minimum and maximum are considerations within the domain of the function, and as such undefined points or intervals should not be considered in the first place. Non-occurrence of minimum and maximum in this context, however, has been included here to emphasize this fact.
6: A function can have a minimum or maximum at points where it is discontinuous. Consider the fraction part function on a finite domain: the function is not continuous at x=1, but a minimum occurs at this point (recall its graph).
7: A function can have a minimum or maximum at points where it is continuous but not differentiable. In other words, a maximum or minimum can occur at a corner. For example, the modulus function |x| has its only minimum at the corner point x=0 (recall its graph).
Extreme value or extremum
Extreme value or extremum is either a minimum or a maximum value. A function f(x) has an extremum at x=e if it has either a minimum or maximum value at that point.
Critical points
Critical points are those points where a minimum or maximum of a function can occur. We see that minima and maxima of a function can occur at the following points:
(a) Points on the graph of the function where the derivative is zero. At these points, the function is continuous, the limit of the function exists, and the tangent to the curve is parallel to the x-axis.
(b) Points where the function is continuous but not differentiable. The limit of the function exists at those points and equals the function value. Consider, for example, the corner of the modulus function graph at x=0: the minimum of the function exists at the corner point, i.e., at x=0.
(c) Points where the function is discontinuous (note that discontinuous is not the same as undefined). A function has a function value at a point where it is discontinuous, but neither the limit nor the derivative exists there. Examples: piecewise defined functions like the greatest integer function, the fraction part function, etc.
We can summarize that critical points are those points where (i) the derivative of the function does not exist or (ii) the derivative of the function is equal to zero. The first condition covers the cases described at (b) and (c) above; the second covers the case described at (a). We should, however, be careful in interpreting the definition of critical points: these are points where a minimum or maximum “can” exist, not where one must exist. Consider the graph shown below, which has an inflexion point at “A”. The tangent crosses through the graph at the inflexion point. In the illustration, the tangent is also parallel to the x-axis, so the derivative of the function is zero. But “A” is neither a minimum nor a maximum.
Figure 3: “A” is neither a minimum nor a maximum.
Thus, a minimum or maximum of a function occurs necessarily at a critical point, but not all critical points correspond to a minimum or maximum.
Note: The concept of critical points as explained above is different from the concept of critical points used in drawing a sign scheme/diagram.
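As a numerical aside (a sketch added here for illustration, not part of the original exposition), interior extrema of a densely sampled smooth function can be located by comparing each sample with its neighbors; this finds only the smooth kind of critical point, not corners or discontinuities:

```python
# Locate candidate interior extrema of a sampled function by comparing
# each sample with its neighbors (numerical sketch; misses corners and
# discontinuities, and depends on the sampling density n).
def extrema(f, a, b, n=10001):
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    out = []
    for i in range(1, n - 1):
        if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]:
            out.append(('min', xs[i]))
        elif ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            out.append(('max', xs[i]))
    return out

# f(x) = x^3 - 3x has a local maximum at x = -1 and a local minimum at x = 1,
# exactly where f'(x) = 3x^2 - 3 = 0.
print(extrema(lambda x: x**3 - 3*x, -2.0, 2.0))
```

This matches the summary above: both detected points are places where the derivative vanishes.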
Graphical view
There are mathematical frameworks to describe and understand the nature of a function with respect to minimum and maximum. We can, however, consider a graphical but effective description that may help us understand the occurrence of minimum and maximum values. We need to understand one simple fact: graphs can have any shape except in two situations:
1: The function is not defined at certain points or in sub-intervals.
2: The relation is one-many; in this case, the given relation is not a function in the first place.
Clearly, there exists the possibility of a minimum or maximum at all points on the continuous portion of the function where the derivative is zero, and at points where the curve is discontinuous. This gives us a pictorial way to visualize where minima and maxima can occur. The figure here shows one such maximum value at a discontinuity.
Figure 4: Maximum value at a discontinuity.
Relative or local minimum and maximum
The idea of a local or relative minimum and maximum is clearly understood from the graphical representation. The minimum function value at a point is least in the immediate neighborhood where the minimum occurs. A function has a relative minimum at a point x=m if the function values in the immediate neighborhood on either side of the point are greater than the value at the point. To be precise, the immediate neighborhood needs to be infinitesimally close. Mathematically,
f(m) < f(m + h) and f(m) < f(m − h) as h → 0
The maximum function value at a point is greatest in the immediate neighborhood where the maximum occurs. A function has a relative maximum at a point x=m if the function values in the immediate neighborhood on either side of the point are less than the value at the point. To be precise, the immediate neighborhood needs to be infinitesimally close. Mathematically,
f(m) > f(m + h) and f(m) > f(m − h) as h → 0
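These defining inequalities can be checked numerically for simple cases (a sketch; here h is a small finite number standing in for the infinitesimal neighborhood):

```python
# Check the relative-extremum inequalities numerically (sketch: h is small
# but finite rather than infinitesimal).
h = 1e-6

f = lambda x: x**2           # relative minimum at m = 0
m = 0.0
is_rel_min = f(m) < f(m + h) and f(m) < f(m - h)

g = lambda x: -(x - 1)**2    # relative maximum at k = 1
k = 1.0
is_rel_max = g(k) > g(k + h) and g(k) > g(k - h)

print(is_rel_min, is_rel_max)  # True True
```

A finite h can of course be fooled (e.g., by oscillations finer than h), which is why the definition insists on an infinitesimally close neighborhood.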
Global minimum and maximum
Global minimum is also known as the “least value” or “absolute minimum”. A function has one global minimum in the domain [a,b]. The global minimum, f(l), is less than or equal to all function values in the domain. Thus,
f(l) ≤ f(x) for all x ∈ [a, b]
If the domain interval is open, like (a,b), then the global minimum, f(l), also needs to be less than or equal to function values infinitesimally close to the boundary values. This is because an open interval, by virtue of its inequality, does not ensure this: it does not indicate how close “x” is to the boundary values. Hence,
$f(l) \le f(x) \text{ for all } x \in (a,b)$
$f(l) \le \lim_{x \to a+0} f(x)$
$f(l) \le \lim_{x \to b-0} f(x)$
Similarly, the global maximum is also known as the “greatest value” or “absolute maximum”. A function has one global maximum in the domain [a,b]. The global maximum, f(g), is greater than or equal to all function values in the domain. Thus,
$f(g) \ge f(x) \text{ for all } x \in [a,b]$
If the domain interval is open, like (a,b), then the global maximum, f(g), also needs to be greater than or equal to the function values infinitesimally close to the boundary values. This is because the open interval, by virtue of its inequality, does not ensure this. Hence,
$f(g) \ge f(x) \text{ for all } x \in (a,b)$
$f(g) \ge \lim_{x \to a+0} f(x)$
$f(g) \ge \lim_{x \to b-0} f(x)$
Domain interval
The nature of the domain interval plays an important role in deciding the occurrence of minima and maxima and their nature. In order to understand this, we first need to understand that the notion of a very large positive value and the concept of a maximum are two different concepts. Similarly, the notion of a very large negative value and the concept of a minimum are two different concepts. The main difference is that very large negative or positive values are not finite, but extremums are finite. Consider the graph of the natural logarithm, $\log_e x$. It extends from negative infinity to positive infinity (the base is greater than 1). The function is strictly increasing in its entire domain. As such, it has not a single minimum or maximum. The extremely large values at the domain ends cannot be considered extremums, as we can always find function values greater or less than any value considered to be a maximum or minimum. This argument is valid for the behavior of functions near the end points of an open interval domain: there can always be values greater or smaller than any value considered.
Figure 5: Definite sub-interval of logarithmic function
However, the nature of the graph with respect to extremums changes immediately when we define the same logarithmic function on a closed interval, say [3,4]: then $\log_e 3$ and $\log_e 4$ are the respective local minimum and maximum. Incidentally, since the function is strictly increasing in the domain and hence in the sub-interval, these extremums are global, i.e., the end values of the function are the global minimum and maximum in the new domain of the function.
The above argument is valid for all continuous functions, which may have varying combinations of increasing and decreasing trends within the domain of the function. The function values at the end points of a closed interval are extremums (a minimum or a maximum), though they may not be the least or greatest. In the general case, there may be more minimum and maximum values apart from the ones at the ends of the closed interval. This generalization, as a matter of fact, is the basis of the “extreme value theorem”.
Extreme value theorem
The extreme value theorem for continuous functions guarantees the existence of minimum and maximum values on a closed interval. Mathematically, if f(x) is a continuous function on the closed interval [a,b], then there exist values f(l) and f(g) such that f(l) ≤ f(x) ≤ f(g) for all x in [a,b]; f(l) is the global minimum and f(g) is the global maximum of the function.
As discussed earlier, there exists at least a pair of a minimum and a maximum at the end points. There may be more extremums, depending on the nature of the graph in the interval.
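The theorem can be illustrated with a coarse grid scan. In this sketch (Python; the function x³ − 3x and the interval [−2, 3] are illustrative choices, not from the text), the global minimum is attained both at the left end point and at an interior critical point, while the global maximum sits at the right end point:

```python
# Scan a continuous function on a closed interval [a, b] to locate the
# global minimum f(l) and global maximum f(g) promised by the theorem.
def f(x):
    return x ** 3 - 3 * x  # critical points at x = -1 and x = 1

a, b = -2.0, 3.0
n = 100_000
xs = [a + (b - a) * i / n for i in range(n + 1)]
values = [f(x) for x in xs]

f_l = min(values)  # global minimum, ~ -2, attained at x = -2 AND x = 1
f_g = max(values)  # global maximum, 18, attained at the end point x = 3
print(round(f_l, 3), round(f_g, 3))  # -2.0 18.0
```

The theorem guarantees that the extreme values exist; as the example shows, it says nothing about where they occur or whether they are attained at a unique point.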
Range of function
If a function is continuous, then the least value, i.e., the global minimum “A”, and the greatest value, i.e., the global maximum “B”, in the domain of the function correspond to the end values specifying the range of the function. The range of the function is:
$[A, B]$
If the function is not continuous, or if the function cannot assume certain values, then we need to suitably analyze the function and modify the range given above. We shall discuss the application of the concept of
least and greatest values to determine range of function in a separate module. | {"url":"http://cnx.org/content/m17417/latest/?collection=col10464/latest","timestamp":"2014-04-20T00:50:20Z","content_type":null,"content_length":"190753","record_id":"<urn:uuid:e54ed38f-c751-42e1-a452-e19a3a49f2c5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00144-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Billerica Math Tutors
...I teach students that SAT Math is not like classroom math. While classroom math rewards students for doing problems the "right" way, for SAT Math, it's not about how you get there, just that
you get the right answer. While we review the important concepts for the test, I teach students more about how to tackle problems they don't know by using alternate strategies.
26 Subjects: including ACT Math, probability, linear algebra, algebra 1
...My schedule is extremely flexible and am willing to meet you wherever is most convenient for you.I graduated from the University of Connecticut with a B.S. in Physics and minor in Mathematics
before attending graduate school at Brandeis University and Northeastern University, where I received a M...
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...Unlike many music instructors, I did not learn my music fundamentals until I was a young adult in college. This means I clearly remember "not knowing this stuff" and how I discovered and
learned general music concepts. As a professional music teacher I applied this perspective with 7th and 8th graders teaching general music for over a decade.
46 Subjects: including calculus, precalculus, trigonometry, statistics
...I am an experienced high school math and computer science teacher for grades 9-12. I am experienced with Common Core Standards and MCAS preparation. My course teaching experience includes
Algebra 1, Algebra 2, PreCalculus, Computer Programming and Robotics.
22 Subjects: including trigonometry, SAT math, precalculus, prealgebra
...I have also been successfully tutoring high school and college students in chemistry, physics, computer programming, and topics in mathematics from pre-algebra to statistics and advanced
calculus, as well as SSAT, SAT, and ACT test preparation for over ten years. I can help you, too. References...
33 Subjects: including trigonometry, probability, discrete math, differential equations | {"url":"http://www.algebrahelp.com/North_Billerica_math_tutors.jsp","timestamp":"2014-04-18T18:17:30Z","content_type":null,"content_length":"25279","record_id":"<urn:uuid:5ca8a41a-04b4-48bb-872b-369249292266>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
Key Stage Wikispace
Half Termly Challenge
Key Stage Wikispace
Key Stage 3
- if you are studying years 7-9
Key Stage 4
- if you are studying GCSE
Key Stage 5
- if you are studying A level
Half Termly Challenge
Bronze Challenge
Silver Challenge
Gold Challenge
Each half term there is a challenge question for pupils to try. If you are up for the challenge please submit your completed answer to your class teacher - this must show your mathematical reasoning
behind your solution.
Bronze Challenge
Recommended for pupils in years 7-9
Half Term Problem Solution
The shape below is a part of a larger one made up of 2004 small squares
that continue this pattern.
The small squares have sides of length 1cm.
What is the length of the perimeter, in cm, of the whole shape?
Silver Challenge
Recommended for pupils in years 9-11
Half Term Problem Solution
In the above sum each letter stands for a different non-zero digit.
What is the value of a + w + a + y?
Gold Challenge
Recommended for pupils in years 11-13
Half Term Problem Solution
The British Museum has a 6000 m² glass roof that is made up from 3312 panes of glass,
which takes 2 weeks to clean. This job is undertaken once every two years.
Assuming that the panes are made up from congruent equilateral triangles,
how long is the side of each pane? | {"url":"http://overtongrangemaths.wikispaces.com/?responseToken=706a2c887d8d94ad7be51e58c565ee84","timestamp":"2014-04-24T13:38:12Z","content_type":null,"content_length":"50725","record_id":"<urn:uuid:af5c1bb5-2960-4dc9-ab94-d78945d8501d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
The tetrahedral dice are cast … and pack densely
Tetrahedra are special among the Platonic solids. They are the simplest polyhedra and the ones most unlike spheres. Surprisingly, much of our knowledge about the packing properties of tetrahedra is
very recent: the past year has witnessed a sudden proliferation of novel, and often surprising, findings. Using Monte Carlo simulations, Haji-Akbari et al. [1] found that, upon compression, systems
of hard tetrahedra spontaneously form a very dense quasicrystalline structure. Now, in a paper in Physical Review Letters, Alexander Jaoshvili, Massimo Porrati, and Paul Chaikin of New York
University and Andria Esakia at Virginia Polytechnic Institute, both in the US, report their experiments on (almost) tetrahedral dice, which shed new light on the disordered structures that result
when tetrahedra particles are poured into a (large) container [2].
Before discussing tetrahedral packing, it is useful to consider first the venerable (yet still not fully solved) problem of sphere packing. In 1611 Kepler proposed that the densest packing of spheres
could be achieved by stacking close-packed planes of spheres. In such a packing, the spheres occupy $\pi/\sqrt{18}\approx 74.05\%$ of space. The Kepler conjecture was (almost certainly) proven in 1998 by Thomas
Hales. However, that does not mean that we know all there is to know about sphere packings: in addition to regular packing, spheres (and, in fact, most hard particles) also exhibit a much less
understood packing, namely, random close packing (RCP).
The quantitative study of random close packing started with J. D. Bernal’s experiments on the packing of ball bearings [3]. His experiments (and those of many others) suggested that it is impossible
to compress disordered sphere packings beyond a volume fraction of approximately $64%$. However, this observation does not necessarily imply that there exists a well-defined density of random close
packing. It could just as well be that the rate at which the disordered hard-sphere packings can be compacted becomes very small around a volume fraction $64%$—small, but not zero. If that were the
case, RCP would not have a clear mechanical definition (that is, pouring and shaking may not lead to a well-defined RCP state). Indeed, in 2000, Torquato, Truskett, and Debenedetti [4] argued on the
basis of computer simulations that states with a density above $64%$ can always be obtained by increasing the local order in a “random” sphere packing. This observation implies that the “mechanical”
route to random close packing may be ill defined.
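For concreteness, the two benchmark densities discussed above can be reproduced in a couple of lines (Python; note that the 64% figure is an empirical estimate, not a closed-form result):

```python
import math

# Kepler's close-packed (FCC/HCP) density for equal spheres.
fcc = math.pi / math.sqrt(18)  # ~0.7405
# Empirical random-close-packing density from Bernal-type experiments.
rcp = 0.64                     # ~64%; not known in closed form

print(f"regular close packing: {fcc:.2%}")  # regular close packing: 74.05%
print(f"random close packing:  {rcp:.2%}")  # random close packing:  64.00%
```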
A different, “nonmechanical” way to view random close packing was proposed a few years ago by O’Hern et al. [5, 6]. The basic idea of their approach is the following: start with a random
configuration of $N$ particles in a volume $V$, interacting through a soft repulsion with a finite range $σ$ ($σ$ is equal to the diameter of the hard spheres that I consider later). For every random
configuration, we can now determine the nearest minimum or zero of the potential energy. At low densities, the states with zero potential energy will occupy a finite fraction of configuration space.
However, as we decrease the volume of the system, the nearest local energy minima (the “inherent structures,” to use the language of Stillinger and Weber [7]) will one-by-one take on a finite value
of the potential energy. For every set of (scaled) coordinates, there is a unique density where this first happens. If we consider the limit that the soft particles become hard spheres, then this
density is the point where this specific inherent structure is no longer allowed.
The key observation by O’Hern et al. [5] is that the simulations show that the rate at which allowed inherent structures disappear with increasing density, has a sharp maximum at a particular
density. Moreover, this peak becomes sharper as the system size becomes larger. In the thermodynamic limit, the number of allowed inherent structures therefore appears to decrease discontinuously at
a hard-sphere volume fraction that happens to be very close to existing estimates of the density of random close packing. The fact that a sharp transition appears to exist implies that the evaluation
of the density of RCP is now a mathematical problem—RCP is a property of three-dimensional space. But, and this is quite unusual, unlike regular close packing that can be exhibited by periodically
repeating a Wigner-Seitz cell containing only a single particle, RCP is “emergent,” it is NOT a property of a small system and becomes only meaningful in the thermodynamic limit.
In the case of tetrahedra, the density of regular close packing is not known, nor is the density of random close packing. In fact, until two years ago, it was not even known if tetrahedra could pack
more densely than spheres [8]. Earlier, Ulam had made the conjecture that, of all hard convex bodies, spheres occupy the smallest volume fraction at regular close packing (however, Ulam apparently
never wrote down his conjecture—it is quoted in a book by Martin Gardner [9]). But until the work of Chen [8], there existed no example of a packing of tetrahedra denser than that of spheres. During
the past year, it has become clear that not only do tetrahedra pack more densely than spheres, but also much more densely. In particular, Haji-Akbari et al. [1] used numerical simulations to study
the high-density packing of hard tetrahedra and observed a packing fraction of more than $85%$—and since then, even higher packing densities have been reported by several authors [10].
But, and this is really unexpected, some of the highest density packings found by Haji-Akbari et al. [1] are not crystalline, but quasicrystalline (a dodecagonal quasicrystal). It is still possible
that the densest packing of tetrahedra is crystalline (but then the crystal structure is nontrivial), yet the fact that hard tetrahedra spontaneously form a quasicrystal upon compression was totally unexpected.
Haji-Akbari et al. also found that the density of random close packing of tetrahedra is very high (above $78%$, which is also above the Kepler limit). For particles that are not space-filling, this
result is also surprising. As in the case of spheres, one can argue about the precise meaning of random close packing.
The experimental “Bernal” approach to random close packing of tetrahedra was followed in this new set of experiments by Jaoshvili et al. [2], who poured tetrahedral dice into various containers and
used volumetric measurements to determine the density of RCP and MRI to analyze the local structure of the resulting packing. These experiments suggest a density of random close packing around
$76±2%$, roughly in agreement with the findings of Haji-Akbari et al. [1]. Interestingly, Jaoshvili et al. find that positional and orientational correlations of randomly packed tetrahedra are very
short ranged, suggesting that there is no precursor of the quasicrystalline state in the dense fluid.
The experiments on packing of tetrahedra raise an interesting question about the “technology” of random close packing. There are many practical examples where man-made objects are designed such that
they will efficiently fill the volume into which they are poured. Examples are many pills (both oblate and prolate), candies (such as the M&Ms studied in 2003 by Donev et al. [11]) or, on a larger
scale, egg-shaped coal briquettes. In particular in the latter case, there is a clear incentive to design the shape of the object such that the density of random close packing is maximal, because
briquettes are used as fuel and the higher their RCP density, the smaller the storage requirements. Interestingly, “egg cokes” have the shape of a biaxial ellipsoid with an aspect ratio that
corresponds closely to the shape that Donev et al. found to have the highest density of random close packing of any ellipsoid (approximately $73.5%$, see Fig.1). This suggests that the coal engineers
of the 19th century had a good understanding of the effect of shape on random close packing—except that they did not make tetrahedral briquettes! Yet such objects would pack more densely than biaxial
ellipsoids. Almost certainly, the nonexistence of the tetrahedral briquette is not due to an oversight of the coal engineers: most likely tetrahedral objects would chip and fracture much more easily
than ellipsoids.
1. A. Haji-Akbari et al., Nature 462, 773 (2009).
2. A. Jaoshvili, A. Esakia, M. Porrati, and P. M. Chaikin, Phys. Rev. Lett. 104, 185501 (2010).
3. J. D. Bernal, Nature 185, 68 (1960).
4. S. Torquato, T. M. Truskett, and P. G. Debenedetti, Phys. Rev. Lett. 84, 2064 (2000).
5. C. S. O’Hern, L. E. Silbert, A. J. Liu, and S. R. Nagel, Phys. Rev. E 68, 011306 (2003).
6. L. E. Silbert, A. J. Liu, and S. R. Nagel, Phys. Rev. E 73, 041304 (2006).
7. F. H. Stillinger and T. A. Weber Science 225, 983 (1984); Phys. Rev. A 28, 2408 (1983).
8. E. R. Chen, Discrete Comput. Geom. 40, 214 (2008).
9. The Ulam conjecture is mentioned in: M. Gardner, The Colossal Book of Mathematics: Classic Puzzles, Paradoxes, and Problems ( Norton, New York, 2001)[Amazon][WorldCat].
10. Y. Kallus, V. Elser, and S. Gravel, arXiv:0910.5226v5; S. Torquato and Y. Jiao, Phys. Rev. E (to be published); E. R. Chen, M. Engel, and S. C. Glotzer, arXiv:1001.0586.
11. A. Donev et al., Science 303, 990 (2004). | {"url":"http://physics.aps.org/articles/v3/37","timestamp":"2014-04-17T01:46:44Z","content_type":null,"content_length":"27582","record_id":"<urn:uuid:7cc503c3-e787-4c02-84e6-a9e1cd20605d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math problems of the week: 4th grade Everyday Math vs. Singapore Math
I. The final decimals problem set in the 4th grade Everyday Math Student Math Journal "Decimals and Their Uses" chapter [click to enlarge]:
II. The final decimals problem set in the 4th grade Singapore Math Primary Mathematics Workbook "The Four Operations of Decimals Chapter"
[click to enlarge]:
III. Extra Credit:
What higher-level thinking opportunities do Singapore Math students lose out on by not being prompted to reflect on their feelings about decimals?
4 comments:
How about this answer for EM Reflection problem #3:
"This question is the hardest, because I'm good at doing actual math--it's easy and fun! but I suck at writing and don't enjoy it at all. Can't we do more math in math class?"
No kidding. lol SM any day.
From the rising-6th grader's summer math packet:
Find 65% of 100.
The question was so stupid, that the kid kept asking 65% of what?? Knowing that they couldn't possibly be asking for 65% of 100! That would be stupid!
That wasn't the end of the problem though, for the complete question was:
Find 65% of 100. Explain your answer.
He has no idea how to explain such a mind-bogglingly simple problem. "They give you a really easy question, and then make it impossible!" he said.
Astounding, Auntie Ann! And--"They give you a really easy question, and then make it impossible!"--very well said!
I will add this to my collection. Thanks for sharing. | {"url":"http://oilf.blogspot.com/2013/07/math-problems-of-week-4th-grade_18.html","timestamp":"2014-04-18T15:41:13Z","content_type":null,"content_length":"105939","record_id":"<urn:uuid:e20e9593-cc8e-480b-9103-228c18e6240f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/unam/asked","timestamp":"2014-04-17T10:06:13Z","content_type":null,"content_length":"104617","record_id":"<urn:uuid:ac732521-fe7e-401c-8250-bbf504a41674>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aldan, PA Precalculus Tutor
Find an Aldan, PA Precalculus Tutor
...Our tutoring sessions will start where you are. Our sessions will be based on mutual respect. In a short time we can build ourselves a partnership of learning.
10 Subjects: including precalculus, calculus, physics, geometry
My Education I have a bachelor's in Chemistry from the University of Delaware (2006) and a Master of Science in Materials Engineering from Drexel University (2011). I have a general enthusiasm
for History, Economics, and Government and keep up with current events and politics. My Experience I am c...
14 Subjects: including precalculus, chemistry, algebra 1, algebra 2
...I have been fascinated with the subject ever since. The SAT is the gold standard in college admissions testing and, in most cases, I discourage students from taking the ACT. The exception is
when a student has achieved well in high school mathematics, has taken pre-calculus, and does not score well on standardized tests.
23 Subjects: including precalculus, English, calculus, geometry
...Over the last 20 years I have given technical presentations and workshops throughout Europe and North America, to delegations from China, Russia, and to NATO (North Atlantic Treaty
Organization). I retired in 2004 to pursue my strongest interest - a career as a personal tutor. I offer (as with f...
10 Subjects: including precalculus, calculus, algebra 1, GRE
...Previously, I completed undergraduate work at North Carolina State University for a degree in Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the
chance to break down barriers and make math accessible for students that are struggling with aspects of mat...
22 Subjects: including precalculus, calculus, geometry, statistics
Related Aldan, PA Tutors
Aldan, PA Accounting Tutors
Aldan, PA ACT Tutors
Aldan, PA Algebra Tutors
Aldan, PA Algebra 2 Tutors
Aldan, PA Calculus Tutors
Aldan, PA Geometry Tutors
Aldan, PA Math Tutors
Aldan, PA Prealgebra Tutors
Aldan, PA Precalculus Tutors
Aldan, PA SAT Tutors
Aldan, PA SAT Math Tutors
Aldan, PA Science Tutors
Aldan, PA Statistics Tutors
Aldan, PA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Clifton Heights precalculus Tutors
Collingdale, PA precalculus Tutors
Colwyn, PA precalculus Tutors
Darby Township, PA precalculus Tutors
Darby, PA precalculus Tutors
East Lansdowne, PA precalculus Tutors
Folcroft precalculus Tutors
Folsom, PA precalculus Tutors
Glenolden precalculus Tutors
Holmes, PA precalculus Tutors
Lansdowne precalculus Tutors
Morton, PA precalculus Tutors
Rutledge, PA precalculus Tutors
Secane, PA precalculus Tutors
Sharon Hill precalculus Tutors | {"url":"http://www.purplemath.com/Aldan_PA_precalculus_tutors.php","timestamp":"2014-04-17T19:49:01Z","content_type":null,"content_length":"24017","record_id":"<urn:uuid:734aab4c-49d8-4091-8029-6fb71cf24ee9>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westport, CT Trigonometry Tutor
Find a Westport, CT Trigonometry Tutor
I offer tutoring for a wide range of subjects, SAT, ACT, and AP prep, and college application counseling throughout Connecticut! I was accepted to 12 universities, including Yale University, the
University of Pennsylvania, New York University, Fordham University, Boston College, Boston University, ...
44 Subjects: including trigonometry, English, reading, geometry
...I have a Master's degree in math. I have taught Algebra 1, Algebra 2, Geometry and Calculus and been the chairman of the math department at a local private school. I have extensive experience
in bringing high school students a "real world" and "hands on" educational experience and motivating t...
7 Subjects: including trigonometry, physics, geometry, algebra 1
...My students vary from Kindergarten through grade 12, as well as college students for Algebra and Writing. I tutor a variety of subjects but mainly Mathematics and English, Reading, and Writing
for elementary. For high school students, most NY State Regents examinations.
47 Subjects: including trigonometry, reading, accounting, biology
...I currently run a robotics program and can teach your child robotics and computer programming. I have created several large scale applications in the financial services industry based on Excel,
Access, Outlook! Word and PowerPoint and can teach what you need to know to quickly get up to speed in those programs.
23 Subjects: including trigonometry, reading, geometry, algebra 1
...My name is Pam. I have tutored privately for over five years, but I am new to WyzAnt. I have an MBA from Pace University and an undergraduate degree in education with a minor in psychology from
Ithaca College.
76 Subjects: including trigonometry, reading, biology, calculus | {"url":"http://www.purplemath.com/Westport_CT_Trigonometry_tutors.php","timestamp":"2014-04-21T02:11:26Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:8341ed11-ed10-4520-becb-0d3980cbf564>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post reply
I is a point in triangle ABC such that angle IBA > angle ICA and angle IBC > angle ICB. BI cuts AC at B', CI cuts AB at C'. Prove that BB' < CC'.
please help me i'm very thanks
and the second problem
a, b, c > 0. Prove the equation has exactly one positive root:
$d^3 - d(ab+bc+ca) - 2abc = 0$
With d this root and x, y, z > 0 satisfying ax + by + cz = xyz, prove the inequality:
$x + y + z \ge \frac{2}{d}\sqrt{(a+d)(b+d)(c+d)}$
C) In the set N, every number is colored white or black, so that white + black = black.
Prove: white + white = white.
Re: geometry
nobody can help me !!!!!!!!
Super Member
Re: geometry
I do complex theories, such as Chaos Theory and the Butterfly Effect.
I, however, lack certain math skills.
Boy let me tell you what:
I bet you didn't know it, but I'm a fiddle player too.
And if you'd care to take a dare, I'll make a bet with you.
Re: geometry
can you post the answer ?
Re: geometry
please i'm very very need !!!!!!!
Re: geometry
Hi math, and welcome to the forum.
These are tough questions! Where did they come from? I had a quick look and found I couldn't give an immediate answer ... other people might be the same, so you may have to wait, sorry.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: geometry
MathsIsFun wrote:
Where did they come from? I had a quick look and found I couldn't give an immediate answer
i don't understand . Why !!!!!!!!!!!!!!
are you have the answer of the problem ????
Re: geometry
No, not yet. Proofs are very difficult.
Ideas are easy, though. For "C" I can think of:
white + black=black
therefore: black-black=white, but black-black=0
therefore: white+white = 0 + 0 = 0 = white
But this is not really a proof
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: geometry
your proff is wrong . please help me in problem B,C
Re: geometry
Yes, that's right ... I said it wasn't really a proof.
I am sorry, but your questions are difficult, I am just hoping someone can spare the time to work on answer for you. I have unfortunately been a little busy recently.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: geometry
thanks . I will be waiting for you
Re: geometry
Full Member
Re: geometry
never fear!!! Flowers4Carlos is here to save the day!!!!
*looks at the problem*
eeeeek..... *falls down*
Re: geometry
Maybe ganesh could post these one at a time under "Ganesh's Puzzles".
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: geometry
math wrote:
C) In the set N, every number is colored white or black, so that white + black = black.
Prove: white + white = white.
Let white=0, black=1;
white + black = 0 + 1 = 1 = black
white + white = 0 + 0 = 0 = white, Proved!
For A), B) please give some time. Yes, Proofs aren't too easy. They are time consuming.
Character is who you are when no one is looking.
Re: geometry
very thanks to ganesh !!!!
Re: geometry
next week , i will give the proof to my teacher ,please , !!!!!!!
Re: geometry
the problem were unsolve in along time ago !!!!!!!!!
Re: geometry
Draw the triangle, and then draw it again rotated so that B is in the same corner that C was. Now since ICA < IBA and ICB < IBC, ICA + ICB < IBA + IBC. Since this is true, the angle at C is less than
the angle at B. I forget the name of the proof, but if you have such a triangle, the side with a greater angle has a smaller diagonal (BB' and CC'). So Since angle C < B, CC' > BB'.
I don't quite understand the wording in 2.
Last edited by Ricky (2005-12-05 08:19:24)
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Real Member
Re: geometry
I think c) proof isn't very right. Because you have "let" here. Here's a proof without "let":
(b means black, w means white)
b1+w1+w2=b3; b1+(w1+w2)=b3,
b1+w3=b3, so w1+w2=w3.
IPBLE: Increasing Performance By Lowering Expectations.
Post reply | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=16158","timestamp":"2014-04-21T12:33:49Z","content_type":null,"content_length":"29935","record_id":"<urn:uuid:6ef42c3d-fe06-4147-8c0f-1b5a3b9d27f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about Vectors
May 9th 2010, 03:02 AM #1
Junior Member
Sep 2008
Question about Vectors
If A(-1,3,4), B(4,6,3), C(-2,1,1) and D are the verticies of a parallelogram, find all the possible coordinates for the point D.
So I let D be (x,y,z), then I worked out vectors AB,AC,AD,BC,BD,CD and after this I don't know what to do. (am i even doing it right?) Can someone help me with this question?
Diagonals of a parallelogram bisect each other.
So, $(\frac{-1-2}{2}, \frac{3+1}{2}, \frac{4+1}{2})=(\frac{4+x}{2}, \frac{6+y}{2}, \frac{3+z}{2})$
the ans by alex is very much correct
This is what i have done so far. If vector AB is parallel to CD then x=4,y=5,z=0.
If BC is parallel to AD then x=-6,y=-1,z=2
Diagonals of a parallelogram bisect each other (BC and AC)
$<br /> (\frac{-1-1}{2},\frac{3+2}{2},\frac{4+1}{2}) = (\frac{x+4}{2},\frac{y+6}{2},\frac{z+3}{2})<br />$
and i get the solution x=-6,y=-1,z=2, which is the same as my 2nd solution set.
The problem is I need the third solution set. How do I do this?
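Not from the thread itself, but a systematic way to get all three answers at once: each candidate D is the reflection of one vertex through the midpoint of the other two (diagonals of a parallelogram bisect each other), which gives D = A + B − C, B + C − A, or A + C − B. A quick computation (Python):

```python
# The three points D that complete a parallelogram on vertices A, B, C:
# each is one vertex reflected through the midpoint of the other two.
A = (-1, 3, 4)
B = (4, 6, 3)
C = (-2, 1, 1)

def combine(P, Q, R):
    """Return P + Q - R, componentwise."""
    return tuple(p + q - r for p, q, r in zip(P, Q, R))

candidates = [combine(A, B, C), combine(B, C, A), combine(A, C, B)]
print(candidates)  # [(5, 8, 6), (3, 4, 0), (-7, -2, 2)]
```

The last candidate, (−7, −2, 2), is the one produced by the midpoint equation in the first reply; if your hand computations give different coordinates, it is worth re-checking the arithmetic.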
May 9th 2010, 03:25 AM #2
May 9th 2010, 03:35 AM #3
May 9th 2010, 05:49 AM #4
Junior Member
Sep 2008
May 15th 2010, 05:25 AM #5
Junior Member
Sep 2008 | {"url":"http://mathhelpforum.com/pre-calculus/143812-question-about-vectors.html","timestamp":"2014-04-16T05:49:42Z","content_type":null,"content_length":"39286","record_id":"<urn:uuid:5c24dc7b-d8a9-4bce-b5ed-47f1fdbbe52f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Faster-Than-Light Telegraph That Wasn't
Herbert's FLASH system—the acronym stood for "first laser-amplified superluminal hookup"—employed a source that emitted pairs of photons in opposite directions. The scheme focused on photons'
polarization—that is, the directions along which their associated electric fields oscillated. The photons could be plane-polarized, with the electric fields oscillating either horizontally (H) or
vertically (V). Or the photons could be circularly polarized, with the electric fields tracing out helical patterns in either a right-handed (R) or left-handed (L) orientation.
Physicists had long known that the two flavors of polarization—plane or circular—were intimately related. Plane-polarized light could be used to create circularly polarized light, and vice versa. For
example, a beam of H-polarized light consisted of equal parts R- and L-polarized light, in a particular combination, just as a beam of R-polarized light could be broken down into equal parts H and V.
Likewise for individual photons: a photon in state R, for example, could be represented as a special combination of states H and V. If one prepared a photon in state R but chose to measure plane
rather than circular polarization, one would have an equal probability of finding H or V: a single-particle version of Schrödinger’s cat.
In Herbert's imagined set-up, one physicist, Alice ("Detector A" in the illustration), could choose to measure either plane or circular polarization of the photon headed her way [1]. If she chose to
measure plane polarization, she would measure H and V outcomes with equal probability. If she chose to measure circular polarization, she would find R and L outcomes with equal probability.
In addition, Alice knows that because of the nature of the source of photons, each photon she measures has an entangled twin moving toward her partner, Bob. Quantum entanglement means that the two
photons behave like two sides of a coin: if one is measured to be in state R, then the other must be in state L; or if one is measured in state H, the other must be in state V. The kicker, according
to Bell's theorem, is that Alice's choice of which type of polarization to measure (plane or circular) should instantly affect the other photon, streaming toward Bob [2]. If she chose to measure
plane polarization and happened to get the result H, then the entangled photon heading toward Bob would enter the state V instantaneously. If she had chosen instead to measure circular polarization
and found the result R, then the entangled photon instantly would have entered the state L.
Next came Herbert's special twist. Before the second photon made its way to Bob's detectors, it entered a laser gain tube [3]. Lasers had been around for 20 years by that time, and as the leading
textbooks routinely touted, the output from a laser had the same polarization as the input signal. That suggested that the laser should release a burst of photons in the complementary state to
whatever Alice had found at her side. Bob could then split the beam [4], sending half toward a detector to measure plane polarization [5] and half toward a detector to measure circular polarization.
If Alice chose to measure circular polarization and happened to find L, then the entangled photon heading toward Bob would instantly go into the state R prior to entering the laser gain tube. Out of
the laser would burst a stream of R photons heading toward Bob. He could then send half the beam toward a detector to measure plane polarization and half toward a detector to measure circular
polarization. In this case, Herbert concluded, Bob would find half the photons in state R, none in state L, and a quarter each in states H and V. In an instant, Bob would know that Alice had chosen
to measure circular polarization. Alice's choice—plane or circular polarization—would function like the dots and dashes of Morse code. She could signal Bob simply by alternating her choice of what
type of polarization to measure. Bob could decode each bit of Alice’s code faster than light could have traveled between them.
As GianCarlo Ghirardi, Tullio Weber, Wojciech Zurek, Bill Wootters and Dennis Dieks each clarified, Herbert’s device would not actually allow superluminal signaling. A photon in state R, for example,
would exist as a combination of equal parts H and V. Each of those underlying states would be amplified by the laser. Hence the output would be a superposition of two states: one in which all the
photons were in state H, and the other in which all the photons were in state V, each with a probability of 50 percent. Bob would never find half in H and half in V at the same time, just as
physicists would never find Schrödinger's cat to be both half-dead and half-alive upon opening the box. Thus, Bob would receive only noise no matter what setting Alice had chosen on her end. Moment
by moment, Bob's detectors would flash H with R or V with L or H with L and so on, in random combinations. He would never find H and V with R, and hence he would have no way to determine what Alice
had been trying to tell him. Quantum entanglement and relativity could coexist after all.
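The no-signaling point can be verified with a few lines of linear algebra (an illustrative sketch, not from the article): averaged over her outcomes, the state Bob receives is the same maximally mixed state no matter which basis Alice measures, so no statistics gathered on Bob's side alone can reveal her choice.

```python
import numpy as np

# Single-photon polarization states as complex 2-vectors.
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
R = (H + 1j * V) / np.sqrt(2)   # right circular
L = (H - 1j * V) / np.sqrt(2)   # left circular

def projector(psi):
    """Density matrix |psi><psi| of a pure state."""
    return np.outer(psi, psi.conj())

# If Alice measures plane polarization, Bob's photon is H or V with
# probability 1/2 each; if she measures circular, it is R or L.
rho_plane = 0.5 * projector(H) + 0.5 * projector(V)
rho_circ = 0.5 * projector(R) + 0.5 * projector(L)

print(np.allclose(rho_plane, rho_circ))       # True: same state either way
print(np.allclose(rho_plane, np.eye(2) / 2))  # True: the maximally mixed state
```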
This discovery became known as the "no-cloning theorem": a powerful statement about the ultimate foundations of quantum theory. An arbitrary or unknown quantum state cannot be copied without
disturbing the original state. No one had ever recognized that fundamental feature of quantum theory before the cat-and-mouse game had unfolded between Nick Herbert's thought experiment and his
talented detractors. The fact that quantum theory sets an ultimate limit on the ability of anyone—including a potential eavesdropper—to seize individual quantum particles and make copies of them soon
became the bedrock for quantum encryption, and today is at the heart of the flourishing field of quantum information science. | {"url":"http://www.scientificamerican.com/article/mistakes-faster-than-light-telegraph-that-wasnt/","timestamp":"2014-04-17T19:30:49Z","content_type":null,"content_length":"63606","record_id":"<urn:uuid:027755e6-c6a1-45a5-b07c-8da9379a3303>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Trek BBS - View Single Post - Whee are the aliens?
CorporalCaptain wrote:
Edit_XYZ wrote:
CorporalCaptain wrote:
Care to explain just what 100! is a "very rough approximation" of in this context and how you know it even is an approximation?
I take it as a very rough approximation because it leaves out many factors:
-In the primordial earth, there were many environments, not just 100;
-As such, the problem of abiogenesis is more correctly stated as: 'these 100 steps should follow one after another without one or more destructive environments appearing between them, destroying the
future self-replicating molecule';
-The number of steps necessary to create a molecule replicating half-way reliably is probably larger than 100;
In essence, 100! is a simplification, only there to give a rough idea about the improbability of self-replicating molecules emerging.
So, in other words, it's just something you pulled out of your ass.
Annoyed much, CorporalCaptain?
BTW, a simplification is far from 'pulling out of one's ass'.
The issue isn't so much the 100 part, but the factorial. You seem to have settled on that, because you like the fact that 100! is astronomically huge.
You don't say?
Here are just two problems with your assumption that factorial is the correct function.
First, the steps that need to be performed in order are not all independent of each other. Certain later steps can occur only if their reactants are available. Therefore, not all of the combinations
counted by your function are equally likely to occur, since some of them are in fact impossible, namely the ones describing sequences in which steps occur before their reactants are available. For
example, if C depends on both A and B, then CAB, CBA, ACB, and BCA are four of the 6=3! sequences counted by your function, but they couldn't possibly occur, since C simply can't happen without the
products of both A and B. This is one reason why your function grossly underestimates the odds of the final product occurring randomly. The impossible combinations, being ruled out, can't muck things up.
You are confusing the environments and the chemical steps to which they give birth.
I posited 100 environments that must occur in order, in order to create in the 'warm pond' or wherever the chemical steps for the appearance of life.
These environments are independent of each other, one can occur out of order* - in which case, of course, the fledgling molecule not being available, bye bye future self-replicating molecule.
*Unless you want to posit a magical environment that creates the successive ones/a large number of the successive ones.
The overwhelming majority of the combinations you've counted are in fact impossible for this first reason alone. In fact, the number of impossible combinations is at least 99!. Just consider the
combinations where the final step occurs first, and then count all ways of reordering the remaining 99 steps. Those are all impossible combinations, unless the molecule in question only needs one
step to be produced.
Cute. See above.
A second problem is that steps that are independent of each other can be interchanged. For example, if A and B are independent of each other, but C depends on both A and B, then ABC and BAC are both valid sequences. Your function only admits one of them.
The factorial function counts the number of ways of reordering sequences. That's the wrong function to use in this case. There is no valid argument that it is even a rough approximation of the
correct value.
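The combinatorial claim is easy to check by brute force (an illustrative sketch, using the three-step A/B/C example from the post above): enumerate all permutations and keep only those in which every step comes after the steps it depends on.

```python
from itertools import permutations
from math import factorial

# Dependency graph: each step maps to the steps that must come before it.
# A and B are independent; C depends on both.
deps = {"A": set(), "B": set(), "C": {"A", "B"}}

def valid_orderings(deps):
    """All permutations in which every step follows its prerequisites."""
    out = []
    for order in permutations(deps):
        done = set()
        ok = True
        for step in order:
            if not deps[step] <= done:   # a prerequisite is missing
                ok = False
                break
            done.add(step)
        if ok:
            out.append(order)
    return out

orders = valid_orderings(deps)
print(len(orders), "valid of", factorial(len(deps)), "permutations")
```

Here only 2 of the 3! = 6 orderings (ABC and BAC) are valid, which is the sense in which a bare factorial both overcounts the possible sequences and undercounts the acceptable ones.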
The chemical steps necessary to achieve a self-replicating molecule cannot be interchanged (especially when talking about as few as 100*) - even if the 100 environments can appear independently of
each other (although, if they have no fledgling molecule, they go nowhere).
If you interchange these chemical steps you will end up with a boring chemical soup - nothing self-replicating.
*When you want to talk about a more realistic ~1000 chemical steps - sure, you can probably interchange a few. | {"url":"http://www.trekbbs.com/showpost.php?p=8562886&postcount=39","timestamp":"2014-04-21T11:38:47Z","content_type":null,"content_length":"26074","record_id":"<urn:uuid:128804ae-f1de-4c0c-9ae3-656df0c3ea03>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00485-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical analysis and significance testing of serial analysis of gene expression data using a Poisson mixture model
BMC Bioinformatics. 2007; 8: 282.
Serial analysis of gene expression (SAGE) is used to obtain quantitative snapshots of the transcriptome. These profiles are count-based and are assumed to follow a Binomial or Poisson distribution.
However, tag counts observed across multiple libraries (for example, one or more groups of biological replicates) have additional variance that cannot be accommodated by this assumption alone.
Several models have been proposed to account for this effect, all of which utilize a continuous prior distribution to explain the excess variance. Here, a Poisson mixture model, which assumes excess
variability arises from sampling a mixture of distinct components, is proposed and the merits of this model are discussed and evaluated.
The goodness of fit of the Poisson mixture model on 15 sets of biological SAGE replicates is compared to the previously proposed hierarchical gamma-Poisson (negative binomial) model, and a
substantial improvement is seen. In further support of the mixture model, two effects are observed: 1) an increase in the number of mixture components needed to fit the expression of tags representing more
than one transcript; and 2) a tendency for components to cluster libraries into the same groups. A confidence score is presented that can identify tags that are differentially expressed between
groups of SAGE libraries. Several examples where this test outperforms those previously proposed are highlighted.
The Poisson mixture model performs well as a) a method to represent SAGE data from biological replicates, and b) a basis to assign significance when testing for differential expression between
multiple groups of replicates. Code for the R statistical software package is included to assist investigators in applying this model to their own data.
Serial analysis of gene expression (SAGE) is a technique for obtaining a quantitative, global snapshot of the transcriptome [1]. The method extracts short sequence tags (containing 10, 17, or 22 bp
of information, depending on the protocol) from each messenger RNA; these are serially ligated, cloned and sequenced, and can then be counted to obtain a profile [1-3]. SAGE has been used to study
the transcriptome of a variety of tissue and cell types from a diverse set of organisms. The technique was originally conceived to study the cancer transcriptome, and has been utilized extensively to
do so.
As a counting technology, SAGE produces profiles consisting of a digital output that is quantitative in nature. For example, a statement can be made with reasonable certainty that a SAGE tag observed
30 times in a library of 100,000 tags corresponds to a transcript that comprises 0.03% of the total transcriptome; the same statement cannot be made reliably with analog values, like that obtained
from a microarray. Accordingly, a reliable statistical model should account for the discrete, count-based nature of SAGE observations. When testing for differential expression between groups, where
each group can contain multiple libraries, statistical methods that incorporate a continuous probability distribution (e.g. the Normal distribution assumed by Student's t-test) should be avoided.
Indeed, such tests require tag counts be normalized by division with the total library size; this removal of library size from the set of sufficient statistics discards an informative facet of the
The sampling of SAGE tags can be modeled by the Binomial distribution which describes the probability of observing a number of successes in a series of Bernoulli trials. Here, the library size
corresponds to the number of trials and the count of a particular tag is the number of successful trial outcomes. When the probability of an event is small, the Binomal distribution approaches the
Poisson distribution as the number of trials increases. This is the case for SAGE (since the tag counts are small relative to a large library size), so the form of the Poisson and Binomial
distribution is essentially the same. A fortunate characteristic of both of these distributions is that they are a function of a single parameter only, since the variance in observed data is directly
calculable from the mean.
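At SAGE-like scales the two distributions are numerically indistinguishable; this can be checked directly (an illustrative sketch; the library size and tag proportion below are example values, not from the paper):

```python
from math import comb, exp, log, factorial

n, p = 100_000, 0.0003   # example library size and tag proportion
mu = n * p               # Poisson mean: 30 expected tag copies

def binom_pmf(k, n, p):
    # computed in log space to avoid overflow in comb(n, k)
    return exp(log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p))

def poisson_pmf(k, mu):
    return exp(k * log(mu) - mu - log(factorial(k))) if k else exp(-mu)

# Maximum pointwise difference between the two pmfs over plausible counts.
max_diff = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, mu)) for k in range(100))
print(f"max pmf difference over counts 0-99: {max_diff:.1e}")
```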
However, in practice, the variance of SAGE data is often larger than can be explained by sampling alone. Several authors have attributed this effect, termed "overdispersion", to a latent biological
variability [4-6]. [4] refers to this as "between"-library variability, as opposed to "within"-library variability caused by sampling. Examples of factors that could contribute to this variability
are numerous, including: sample preparation or quality, artefacts intrinsic to the library construction protocol, differences in gene transcription due to environment, or the intrinsic stability or
regulatory complexity of transcription at a particular locus. This will adversely affect statistical analysis because additional variance results in an overstated significance. Procedures for using
hierarchical models which incorporate a continuous prior distribution to explain the excess variance have been presented for both the Binomial (viz. beta-binomial using logistic regression [5], t_w-test [4], or Bayes error rate [7]) and Poisson (viz. negative binomial a.k.a. hierarchical gamma-Poisson using log-linear regression [6]) distributions. Attempts to use the log-normal and
inverse-Gaussian as prior distributions (both of these have longer tails) did not show an appreciable improvement and are computationally difficult to fit (data not shown).
Here, it is argued that the excess variation is due to a mixing of two or more distinct Poisson (or Binomial) components, and this mixing is the predominant source of total variation. This assumption
corresponds to a finite mixture model; such models have found wide applicability in several fields (for a general introduction, McLachlan and Peel is a good source [8]). To illustrate, consider a tag from
ten SAGE libraries of equal size (e.g. 100,000 tags) that has observed counts where half are realizations of an expression of 0.0003 and the other half of 0.0004. As a result, the probability
distribution of observing a particular tag count will be a combination of these two components (Figure 1). Note the similarity between the shapes of the probability distributions estimated
from a fitted negative binomial (which assumes sampling variability drawn from a latent biological variability) and a Poisson mixture model (which assumes a set of independent components, each having
sampling variability only).
Probability density of several models applied to data generated from two Poisson components. 10 observations were randomly drawn from each of two Poisson distributions, one with a mean of 30, the
other 40. The values drawn from the first component were ...
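A two-component fit like the one in the figure can be reproduced with a short EM loop (an illustrative sketch, not the authors' code; the component means 30 and 40 and the 10 draws per component follow the figure's example):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
# 10 observations from each of two Poisson components (means 30 and 40).
y = np.concatenate([rng.poisson(30, 10), rng.poisson(40, 10)]).astype(float)

def em_poisson_mixture(y, k=2, iters=200):
    """Fit a k-component Poisson mixture by EM; returns (weights, means, tau)."""
    logfact = np.array([lgamma(v + 1) for v in y])
    pi = np.full(k, 1.0 / k)                       # mixing weights
    mu = np.quantile(y, np.linspace(0.2, 0.8, k))  # spread-out initial means
    for _ in range(iters):
        # E-step: posterior component memberships tau[i, j]
        logp = (y[:, None] * np.log(mu)[None, :] - mu[None, :]
                - logfact[:, None] + np.log(pi)[None, :])
        logp -= logp.max(axis=1, keepdims=True)    # stabilize before exp
        tau = np.exp(logp)
        tau /= tau.sum(axis=1, keepdims=True)
        # M-step: update weights and component means
        pi = tau.mean(axis=0)
        mu = (tau * y[:, None]).sum(axis=0) / tau.sum(axis=0)
    return pi, mu, tau

pi, mu, tau = em_poisson_mixture(y)
print("weights:", np.round(pi, 2), "means:", np.round(np.sort(mu), 1))
```

In practice the number of components k is chosen by refitting with increasing k and comparing penalized likelihoods, as is done in the paper.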
If the Poisson mixture model is an accurate foundation to explain SAGE observations, it is attractive for several reasons. First, this approach does not rely on a vague and continuous prior
distribution to explain additional variance. Rather, the model asserts that a gene's expression level will take on one of a number of distinct states. Second, overdispersed models applied to SAGE
data tend to show a wide range of excess variation; in many cases, far greater than can be attributed to counting. This is a troubling prospect for studies that utilize a limited number of libraries
(e.g. pair-wise comparisons), since the observed count may differ wildly from the underlying expression. If a mixture model provides an improved fit to SAGE data, this concern would be assuaged.
Finally, mixture models, by nature, allow for the concept of subsets (or latent classes) in the expression values of each tag. Dysregulation of genes in disease processes such as cancer are often
observed in only a proportion of profiled samples, and these will be naturally identified during model fitting. This property can also be utilized to identify sets of co-expressed genes.
Goodness of fit
In order to evaluate the efficacy of a mixture model approach, a comparison of the goodness of fit of this and previously described models on 15 sets of biological replicates from publicly available
SAGE data was performed (see Methods).
Goodness of fit was assessed for: 1) the canonical log-linear (Poisson) model, 2) negative binomial (i.e. hierarchical gamma-Poisson or overdispersed log-linear) model, and 3) k-component Poisson
mixture model (see Methods for a brief description of each). Since maximum likelihood estimation (MLE) is used to fit each of these models, the log-likelihood was the basis for assessing relative
goodness of fit. A comparison of the Akaike information criterion (AIC) [9] and Bayesian information criterion (BIC) [10] (both of which use the log-likelihood and a term to penalize a model for
estimating a larger number of parameters) was performed on each of the datasets (Table 1).
Comparison of model fits to a single group of biological replicates
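Both criteria are simple functions of the maximized log-likelihood, the number of free parameters, and the sample size (an illustrative sketch; the log-likelihood values below are hypothetical, not taken from Table 1):

```python
from math import log

def aic(loglik, n_params):
    """Akaike information criterion; lower is better."""
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion; penalizes parameters more heavily."""
    return log(n_obs) * n_params - 2 * loglik

# Hypothetical fits to N = 15 libraries: a 1-parameter Poisson model and a
# 3-parameter two-component mixture (two means plus one mixing weight).
print("Poisson: ", aic(-80.0, 1), bic(-80.0, 1, 15))
print("mixture: ", aic(-70.0, 3), bic(-70.0, 3, 15))
```

The mixture model wins here only because its likelihood gain outweighs the penalty for its two extra parameters, which is exactly the trade-off the table summarizes.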
As expected, the canonical Poisson model, which does not account for excess variance, performs poorly in all cases. The Poisson mixture model consistently outperforms the negative binomial model
regardless of the metric used. The competitiveness of the negative binomial model is perhaps not surprising since a comparison of the fit of these two models to simulated data indicates that the
negative binomial can often fit better to data generated from a two-component Poisson mixture. This becomes more problematic as the component means draw closer (data not shown, Figure Figure11 is a
good example). However, several hypotheses can be tested to further strengthen the case for the mixture model approach. These are considered in turn.
Tags with ambiguous mappings are represented by a greater number of components
Consider an idealized situation where a gene's expression can take on one of two states (and can therefore be modelled by a two-component Poisson mixture). A significant proportion of SAGE tags are
ambiguous (correspond to more than one gene) and, under the idealized model, would result in tag counts that are modelled by 2^g components (where g is the number of expressed genes the tag
corresponds to). Therefore, the number of components in the mixture should be higher for ambiguous tags.
Simply partitioning the data into ambiguous and unambiguous tags and comparing the number of components is unlikely to be informative since, for any given ambiguous tag, it is not known how many of
the possible genes are actually expressed. However, two normal brain libraries used in this study were generated using LongSAGE (GSM31931 and GSM31935), which provides 17 base pairs of information
rather than 10. The tag sequences in these libraries were shortened before inclusion in the normal brain dataset used in the previous section. However, by comparing the shortened tag list to the
original library, tags that actually correspond to two or more LongSAGE tag sequences (and presumably represent different transcripts) were identified. Tag counts of one or two were considered
artefacts of PCR amplification or sequencing and were not used in this determination.
The number of ambiguous and unambiguous tags was tallied for each estimated number of components (Table 2). Ambiguous tags are represented more highly in the set of model fits that consist
of a larger number of components. This effect, which is statistically significant, is consistent with the mixture model hypothesis.
Mean number of mixture model components
Component assignment of libraries is non-random
If the mixture model approach holds, then the Poisson components should cluster the libraries into recurring groups. Such an enrichment of certain component assignments would be expected for a number
of reasons. Two possibilities are: a) if one or more libraries are mislabelled, the tag expression in those libraries should show a preferential assignment to a separate component; and b) if the
genes corresponding to a set of tags are co-expressed, the component assignment should be similar amongst these genes. Conversely, if the negative binomial model is more appropriate then component
assignments should essentially be random, since the distribution assumed to give rise to biological variability is continuous and unconditional.
For each of the datasets, the component assignments for tags where the estimated number of components is two were tallied. The individual assignment was based on the component with the highest
posterior probability, given a tag count and library size. In all cases, there were a significant number of tags where the parent libraries were partitioned into the same two components (Table 3). For example, in the AML libraries containing the cytogenetic abnormality t(8;21), of the 225 tags that had expression that could be fit to two Poisson components, 110 were partitioned in
the form -++-- (p = 4.5E-67; Binomial test). In other words, almost half of the tags that fit to two components were assigned to a single component configuration (for 5 libraries, (2^5/2)-1 = 15 such
configurations are possible).
Top component memberships
Determining differentially expressed genes
In previously described overdispersed models, the identity of a library is included a priori as a model covariate. Significance is then determined by testing the null hypothesis that the fitted β
coefficient for this covariate is zero [5,6]. A Bayesian significance score has also been described, although this was developed using a beta-binomial model [7]. In contrast, the Poisson mixture model
does not require the identity of the libraries be included (although the addition of such covariates is possible). Rather, once a mixture model has been fit, the posterior probabilities of membership
in a particular component given the observed tag counts can be used to determine how well the components can differentiate between two or more sample types (e.g. normal versus cancer). Here, a score
is presented based on the confidence that a sample is of type ω given that it arises from component(s) k. Using Bayes Theorem, one can derive the following expression [see Additional file 1]
$P(\omega \mid k) = \sum_{j \in \omega} \tau_{jk} \Big/ \sum_{j} \tau_{jk}$
where ω is the set of libraries corresponding to some label of interest (e.g. normal or cancer) and τ_{jk} is the posterior probability of the tag count from library j arising from component(s) k.
Using this expression, one can determine which tags have a set of mixture components that are closely linked with the sample type(s) of interest.
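Given the matrix of posterior membership probabilities τ from a fitted mixture, the score is a one-liner (an illustrative sketch; the τ values and group labels below are hypothetical):

```python
import numpy as np

# tau[j, k]: posterior probability that library j's count arises from
# component k (hypothetical values for 5 libraries, 2 components).
tau = np.array([
    [0.90, 0.10],   # libraries 0-2: "normal"
    [0.80, 0.20],
    [0.95, 0.05],
    [0.10, 0.90],   # libraries 3-4: "cancer"
    [0.20, 0.80],
])
omega = [3, 4]  # indices of the cancer libraries

def confidence(tau, omega, k):
    """P(omega | k) = sum over j in omega of tau[j, k] / sum over all j."""
    return tau[omega, k].sum() / tau[:, k].sum()

print(round(confidence(tau, omega, k=1), 3))  # component 1 tracks "cancer"
```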
To illustrate, SAGE libraries from normal brain (n = 8) and ependymoma (n = 10) (a type of brain tumour) were analyzed using both the overdispersed log-linear and Poisson mixture models. In the
former case, significance was calculated using the method described in [5] (see also example R code in Methods). In the latter case, the method described above was used. A plot of the two sets of
scores shows a moderate correlation and tags that are found highly significant in one test tend to be so in the other (Figure 2).
Comparison to significance scores for a test of differential expression calculated using a negative binomial model. Using the tag counts from 8 normal brain libraries versus 10 ependymoma libraries,
differential expression between these two sample types ...
However, a number of observations are found significant using the overdispersed log-linear model and not the Poisson mixture model, and vice versa. A closer look at the most extreme examples
illustrates the superior performance of the mixture approach (Figure 3). In the first example, tag ACAACAAAGA seems clearly expressed in normal libraries, but is completely abolished in
the ependymoma libraries. However, according to the overdispersed model, the observation is not at all significant (p = 0.9998). The mixture model, however, produces a confidence score of 99.42%,
which suggests this tag is highly informative with respect to sample type. This example demonstrates the difficulty that the log-linear model has with fitting groups where tag counts are zero, a
problem that is even more pronounced when using a logistic regression model (for a more thorough discussion of this problem see [6]).
Counts for two tags assessed using a negative binomial model and the Poisson mixture model where one model shows significance and the other does not. The figure is divided to show separate plots of
the expression level of two tags observed in 8 normal ...
In the second example, tag CAGTTGTGGT clearly has increased expression in some libraries from both the normal and ependymoma groups. However, according to the overdispersed model, the observation is
highly significant (p = 8.8E-7). The mixture model, however, produces a confidence score of 59.8% which is only nominally better than chance. This example demonstrates how the log-linear model seems
to downweight the occasional extreme observation in one group, even if it is in agreement with observations in the other group. This can result in candidate lists based on the log-linear significance
containing tags that have extreme observations that occur at a higher rate in one group over another, which are typically of little interest.
Similar results were obtained when comparing to the Bayes error rate described in [7]. Again, a moderate correlation is seen and tags found highly significant in one test tend to be so in the other
(Figure 4). Overall, the Bayes error rate is in better agreement with the mixture model confidence score and appears to be more robust in assessing tags with zero counts in one group.
However, the assumption of a hierarchical model (in this case, a beta-binomial) used to calculate the Bayes error rate versus a Poisson mixture model results in differences between the two methods.
Two examples, analogous to those described above, are highlighted (Figure 5). In both cases, the Poisson mixture model appears to give confidence values that are in better agreement with
the observations.
Comparison to Bayes error rate for a test of differential expression calculated using a beta binomial model. Using the tag counts from 8 normal brain libraries versus 10 ependymoma libraries,
differential expression between these two sample types was ...
Counts for two tags assessed using a Bayes error rate and the Poisson mixture model where one model shows significance and the other does not. The figure is divided to show separate plots of the
expression level of two tags observed in 8 normal brain ...
The exploration of statistical approaches to SAGE analysis is important since the number of studies using the technology has resulted in a continuing rise in the amount of available data. The notion
of sampling variability being the predominant source of "within"-library variability and distinct components being the predominant source of "between"-library variability is reassuring for
investigators who choose the SAGE technique to obtain a comprehensive profile of gene expression in a limited number of samples. Nevertheless, there is certainly a contribution by a latent biological
variability as evidenced by the increased performance of the negative binomial as the number of libraries increases. However, this work demonstrates that a simple overdispersed model may overstate
this effect, and that there is certainly a clustering of expression into distinct components, which are then sampled. This is consistent with the view of gene transcription for any one locus
consisting of (possibly several) inactivated or activated state(s). The same idea holds for some known mechanisms of genetic disease, such as loss of heterozygosity (LOH) or amplification of a
particular locus (e.g. cancer).
For this reason, it is recommended that investigators try the mixture model approach in comparisons of groups of biological replicates. Failing this, some of the difficulties that can be encountered
with the negative binomial model can be lessened by: a) setting a tolerance for how much overdispersion (ϕ) is acceptable in a final list of candidate tags, although such a cutoff would be somewhat
arbitrary; and b) adding a small value to the tag count to avoid the problems the model has with groups consisting of many zero counts. One strategy is to assume equal odds that the next tag drawn is
the one of interest by adding 1 to the count, and 2 to the library size (i.e. (count+1)/(size+2)) (K. Baggerly, personal communication).
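The suggested smoothing is a one-line adjustment (illustrative sketch):

```python
def smoothed_proportion(count, size):
    """Rule-of-succession estimate of a tag's proportion: assume equal odds
    the next tag drawn is the one of interest, i.e. (count+1)/(size+2)."""
    return (count + 1) / (size + 2)

# A zero count no longer yields an estimated proportion of exactly zero.
print(smoothed_proportion(0, 100_000))
```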
In the future, it may be worthwhile to combine both approaches by defining a negative binomial mixture model. However, at this point, such an approach is unlikely to provide significant improvement
given the small number of libraries in a typical set of available biological replicates. In addition, applying the concept of "information sharing" between tags may provide estimates of statistically
informative variables that apply library-wide, and could be utilized to improve the power of the method described in this paper [11,12].
The Poisson mixture model appears to be a rational means to represent SAGE data that are biological replicates and as a basis to assign significance when comparing multiple groups of such replicates.
The use of a mixture model can improve the process of selecting differentially expressed genes, and provide a foundation for ab initio identification of co-expressed genes and/or
biologically-relevant sample subsets.
Test datasets
Test datasets were obtained from the Gene Expression Omnibus (GEO) [13] and reflect a range of cancer studies, including malignancies of the skin [14-16], breast [17-19], blood [20], and brain [21].
The full description, accession, and size for each library were recorded [see Additional file 2]. In the case of breast and skin data, libraries from a combination of studies were used. Datasets were
filtered to remove tags expressed at a mean less than 100 tags per million.
Model fitting
The open-source statistical software package R was used to perform all calculations in this paper [22]. R code is included with the explanation for each model. For each of the models, let Y[i] be the
observed tag count in library i, n[i] be the total number of tags in library i, and N be the total number of libraries. Also, let x[i] be the vector of explanatory variables (e.g. normal = 0 and
cancer = 1) associated with library i, and β be the vector of coefficients.
Log-linear (Poisson) regression model
The log-linear model assumes that the observed tag counts are distributed as
Y[i] ~ Poisson(μ[i])
μ[i] = n[i]p[i]
where p[i] is the actual expression in terms of the proportion of all expressed tags.
Here, the unconditional mean and variance are E(Y[i]) = Var(Y[i]) = μ[i]. Using the log link function, which linearizes the relationship between the mean of the dependent variable and the
predictor(s), we obtain the linear equation
log(μ[i]) = log(n[i]) + x[i]β
p[i] = exp(x[i]β)
Using iteratively reweighted least-squares (IRLS), the value(s) for the coefficient(s) β are estimated. The stats library included with R can fit a log-linear model using the following code:
counts <- c(9, 13, 11, 8, 9, 20, 16, 19, 18, 15)
library.sizes <- rep(100000, 10)
# first 5 observations are from sample type 0 (e.g. normal)
# last 5 observations are from sample type 1 (e.g. cancer)
classes <- c(0,0,0,0,0,1,1,1,1,1)
fit <- glm(counts ~ offset(log(library.sizes)) + classes, family=poisson(link="log"))
# get the beta coefficients
beta0 <- fit$coefficients[[1]]
beta1 <- fit$coefficients[[2]]
# get the expression (expressed as a proportion) for each group
prop0 <- exp(beta0)
prop1 <- exp(beta0+beta1)
# calculate significance score for differential expression
# i.e. null hypothesis that beta_1 = 0
t.value <- summary(fit)$coefficients [,"z value"][2]
p.value <- 2 * pt(-abs(t.value), fit$df.residual)
Overdispersed log-linear regression model
In contrast to the canonical log-linear model, we assume the actual expression is distributed as
θ[i] ~ Gamma(μ[i], 1/ϕ),
μ[i] = n[i]p[i]
where, as above, p[i] is the actual expression in terms of the proportion of all expressed tags. Here, the unconditional mean and variance are E(θ[i]) = μ[i] and Var(θ[i]) = μ[i]^2ϕ. Since we are now
sampling from this latent Gamma distribution, the observed tag counts are conditional on this underlying expression and are distributed as
Y[i] | p[i], ϕ ~ Poisson(θ[i])
Now, the unconditional mean and variance are E(Y[i]) = μ[i] and Var(Y[i]) = μ[i](1+μ[i]ϕ).
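These unconditional moments can be checked by simulation. The sketch below (Python rather than R, for illustration; the shape/scale parametrization of the gamma draw is chosen to reproduce the stated mean μ and variance μ^2ϕ of the mixing distribution) samples latent gamma means, draws Poisson counts from them, and recovers Var(Y) ≈ μ(1 + μϕ):

```python
import math
import random

random.seed(1)
mu, phi = 5.0, 0.2  # target mean and overdispersion parameter

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for small lam
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

draws = []
for _ in range(100_000):
    # Gamma with mean mu and variance mu^2 * phi: shape 1/phi, scale mu*phi
    theta = random.gammavariate(1.0 / phi, mu * phi)
    draws.append(poisson_draw(theta))

mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# theory: E(Y) = mu, Var(Y) = mu * (1 + mu * phi)
```

With mu = 5 and phi = 0.2 the marginal variance should be close to 5·(1 + 1) = 10, double the Poisson value — the extra-Poisson spread that the overdispersed model is built to absorb.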
As above, using the log link function we obtain the linear equation
log(μ[i]) = log(n[i]) + x[i]β
p[i] = exp(x[i]β)
Here, maximum likelihood estimates of the coefficient(s) β and the overdispersion parameter (ϕ) can be obtained. The MASS library [23] for R can fit an overdispersed log-linear model
using the following code:
library(MASS)
counts <- c(9, 13, 11, 8, 9, 20, 16, 19, 18, 15)
library.sizes <- rep(100000, 10)
# first 5 observations are from sample type 0 (e.g. normal)
# last 5 observations are from sample type 1 (e.g. cancer)
classes <- c(0,0,0,0,0,1,1,1,1,1)
fit <- glm.nb(counts ~ offset(log(library.sizes)) + classes)
# get the beta coefficients
beta0 <- fit$coefficients[[1]]
beta1 <- fit$coefficients[[2]]
# get the dispersion parameter
dispersion <- 1/fit$theta
# get the expression (expressed as a proportion) for each group
prop0 <- exp(beta0)
prop1 <- exp(beta0+beta1)
# calculate significance score for differential expression
# i.e. null hypothesis that beta_1 = 0
t.value <- summary(fit)$coefficients [,"z value"][2]
p.value <- 2 * pt(-abs(t.value), fit$df.residual)
A more complete discussion of this model and its application to SAGE, including significance testing, is described in [6].
Poisson mixture model
Like the canonical log-linear regression model, we assume the observed tag counts are Poisson distributed. However, the counts are conditional on the choice of a Poisson-distributed component, such
that
Y[i] | k ~ Poisson(μ[ik])
μ[ik] = n[i]p[ik]
where the component k = 1, 2, ..., K and p[ik] is the actual expression for component k in terms of the proportion of all expressed tags. The posterior probability that an observed tag count belongs
to component k is given by
P(k | Y[i], ψ) = π[k] f(Y[i] | μ[ik]) / Σ[j=1..K] π[j] f(Y[i] | μ[ij])
where ψ is the parameter vector containing the component means (θ[1],...,θ[K]) and mixing coefficients (π[1],...,π[K-1]). f(·) is the Poisson probability mass function. To fit the model, one must
estimate the values in ψ. This can be done by maximum likelihood estimation (MLE) using the EM algorithm [24]. The flexmix library for R uses the EM algorithm to fit a variety of finite mixture
models [25]. In the case of SAGE data, the following code can be used:
library(flexmix)
counts <- c(9, 13, 11, 8, 9, 20, 16, 19, 18, 15)
library.sizes <- rep(100000, 10)
# first 5 observations are from sample type 0 (e.g. normal)
# last 5 observations are from sample type 1 (e.g. cancer)
classes <- c(0,0,0,0,0,1,1,1,1,1)
# set fitting control parameters to settings that work
# well for SAGE
custom.FLXcontrol <- list(iter.max=200)
custom.FLXcontrol <- as(custom.FLXcontrol, "FLXcontrol")
# specify the maximum number of model components
maxk <- 5
fits <- list()
aic.fits <- rep(NA, maxk)
# increase number of components until AIC decreases
for(k in 1:maxk) {
# make an initial "good" guess of class membership
# using k-means – helps avoid falling into a local
# likelihood maximum
cm <- rep(1, length(counts))
if(k > 1) cm <- kmeans((counts+1)/(library.sizes+2), centers=k)$cluster
# Poisson components (FLXMRglm in current flexmix versions)
fit <- try(flexmix(counts ~ 1, model=FLXMRglm(family="poisson"),
control=custom.FLXcontrol, cluster=cm), silent=TRUE)
if("try-error" %in% class(fit)) break
# stop if there were fewer components found than
# specified
if(max(clusters(fit)) < k) break
fits[[k]] <- fit
aic.fits[k] <- AIC(fits[[k]])
if(k == 1) next
if(aic.fits[k] >= aic.fits[k-1]) break
}
# what number of components minimized AIC?
k.optimal <- which(aic.fits == min(aic.fits, na.rm=TRUE))[1]
fit <- fits[[k.optimal]]
# get the theta parameters
thetas <- array(dim=k.optimal)
for(i in 1:k.optimal) thetas[i] <- parameters(fit, component=i)$coef
# get the pi parameters
pis <- attributes(fit)$prior
# what is the confidence score that the fitted components
# differentiate between groups?
confidence <- pmm.confidence(fit, classes, use.scaled=FALSE)
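The posterior membership formula above can also be evaluated directly for intuition. A small numeric sketch (Python; the π and μ values are invented for illustration):

```python
import math

def poisson_pmf(y, mu):
    return math.exp(-mu) * mu ** y / math.factorial(y)

def posterior(y, mus, pis):
    """P(k | y) = pi_k * f(y | mu_k) / sum_j pi_j * f(y | mu_j)."""
    weights = [p * poisson_pmf(y, m) for p, m in zip(pis, mus)]
    total = sum(weights)
    return [w / total for w in weights]

# count of 10 under a two-component mixture with means 9 and 17
post = posterior(10, mus=[9.0, 17.0], pis=[0.5, 0.5])
```

A count of 10 is far more probable under the component with mean 9, so its posterior weight dominates — exactly the "responsibility" the EM algorithm recomputes at each iteration.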
The confidence score, ranging from 50–100%, is explained in the Results section and code for performing the calculation is available [see Additional file 3].
Authors' contributions
SDZ conceived of, developed, and tested the presented research.
Supplementary Material
Additional file 1:
Derivation of confidence score. A derivation of a confidence score for differential expression based on a Poisson mixture model fit.
Additional file 2:
Supplementary SAGE library information. An Excel spreadsheet containing accessions, sizes, and descriptions of the libraries included in this study.
Additional file 3:
Confidence score R function. R source code for a function to calculate the differential expression confidence score based on a Poisson mixture model fit.
SDZ is supported by the Canadian Institutes for Health Research (CIHR) and the Michael Smith Foundation for Health Research (MSFHR). Computationally intensive portions of this work were made possible
by the WestGrid computing resources http://www.westgrid.ca, which are funded in part by the Canada Foundation for Innovation, Alberta Innovation and Science, BC Advanced Education and the
participating research institutions. WestGrid equipment is provided by IBM, Hewlett Packard and SGI. The author thanks Dr. Greg Vatcher for critical reading of the manuscript, and Drs. Raymond Ng and
Victor Ling for helpful advice.
• Velculescu VE, Zhang L, Vogelstein B, Kinzler KW. Serial analysis of gene expression. Science. 1995;270:484–487. doi: 10.1126/science.270.5235.484. [PubMed] [Cross Ref]
• Saha S, Sparks AB, Rago C, Akmaev V, Wang CJ, Vogelstein B, Kinzler KW, Velculescu VE. Using the transcriptome to annotate the genome. Nature biotechnology. 2002;20:508–512. doi: 10.1038/
nbt0502-508. [PubMed] [Cross Ref]
• Matsumura H, Reich S, Ito A, Saitoh H, Kamoun S, Winter P, Kahl G, Reuter M, Kruger DH, Terauchi R. Gene expression analysis of plant host-pathogen interactions by SuperSAGE. Proceedings of the
National Academy of Sciences of the United States of America. 2003;100:15718–15723. doi: 10.1073/pnas.2536670100. [PMC free article] [PubMed] [Cross Ref]
• Baggerly KA, Deng L, Morris JS, Aldaz CM. Differential expression in SAGE: accounting for normal between-library variation. Bioinformatics. 2003;19:1477–1483. doi: 10.1093/bioinformatics/btg173.
[PubMed] [Cross Ref]
• Baggerly KA, Deng L, Morris JS, Aldaz CM. Overdispersed logistic regression for SAGE: modelling multiple groups and covariates. BMC Bioinformatics. 2004;5:144. doi: 10.1186/1471-2105-5-144. [PMC
free article] [PubMed] [Cross Ref]
• Lu J, Tomfohr JK, Kepler TB. Identifying differential expression in multiple SAGE libraries: an overdispersed log-linear model approach. BMC Bioinformatics. 2005;6:165. doi: 10.1186/
1471-2105-6-165. [PMC free article] [PubMed] [Cross Ref]
• Vencio RZ, Brentani H, Patrao DF, Pereira CA. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE) BMC Bioinformatics. 2004;5:119. doi:
10.1186/1471-2105-5-119. [PMC free article] [PubMed] [Cross Ref]
• McLachlan GJ, Peel D. Finite mixture models. New York: Wiley; 2000.
• Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19:716–723. doi: 10.1109/TAC.1974.1100705. [Cross Ref]
• Schwarz G. Estimating the dimension of a model. Annals of Statistics. 1978;6:461–464.
• Kuznetsov VA, Knott GD, Bonner RF. General statistics of stochastic process of gene expression in eukaryotic cells. Genetics. 2002;161:1321–1332. [PMC free article] [PubMed]
• Thygesen HH, Zwinderman AH. Modeling Sage data with a truncated gamma-Poisson model. BMC Bioinformatics. 2006;7:157. doi: 10.1186/1471-2105-7-157. [PMC free article] [PubMed] [Cross Ref]
• Edgar R, Domrachev M, Lash AE. Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic acids research. 2002;30:207–210. doi: 10.1093/nar/30.1.207. [PMC free
article] [PubMed] [Cross Ref]
• Cornelissen M, van der Kuyl AC, van den Burg R, Zorgdrager F, van Noesel CJ, Goudsmit J. Gene expression profile of AIDS-related Kaposi's sarcoma. BMC Cancer. 2003;3:7. doi: 10.1186/
1471-2407-3-7. [PMC free article] [PubMed] [Cross Ref]
• van Ruissen F, Jansen BJ, de Jongh GJ, Zeeuwen PL, Schalkwijk J. A partial transcriptome of human epidermis. Genomics. 2002;79:671–678. doi: 10.1006/geno.2002.6756. [PubMed] [Cross Ref]
• Weeraratna AT, Becker D, Carr KM, Duray PH, Rosenblatt KP, Yang S, Chen Y, Bittner M, Strausberg RL, Riggins GJ, et al. Generation and analysis of melanoma SAGE libraries: SAGE advice on the
melanoma transcriptome. Oncogene. 2004;23:2264–2274. doi: 10.1038/sj.onc.1207337. [PubMed] [Cross Ref]
• Porter D, Lahti-Domenici J, Keshaviah A, Bae YK, Argani P, Marks J, Richardson A, Cooper A, Strausberg R, Riggins GJ, et al. Molecular markers in ductal carcinoma in situ of the breast. Mol
Cancer Res. 2003;1:362–375. [PubMed]
• Porter D, Weremowicz S, Chin K, Seth P, Keshaviah A, Lahti-Domenici J, Bae YK, Monitto CL, Merlos-Suarez A, Chan J, et al. A neural survival factor is a candidate oncogene in breast cancer.
Proceedings of the National Academy of Sciences of the United States of America. 2003;100:10931–10936. doi: 10.1073/pnas.1932980100. [PMC free article] [PubMed] [Cross Ref]
• Porter DA, Krop IE, Nasser S, Sgroi D, Kaelin CM, Marks JR, Riggins G, Polyak K. A SAGE (serial analysis of gene expression) view of breast tumor progression. Cancer Res. 2001;61:5697–5702. [
• Lee S, Chen J, Zhou G, Shi RZ, Bouffard GG, Kocherginsky M, Ge X, Sun M, Jayathilaka N, Kim YC, et al. Gene expression profiles in acute myeloid leukemia with common translocations using SAGE.
Proceedings of the National Academy of Sciences of the United States of America. 2006;103:1030–1035. doi: 10.1073/pnas.0509878103. [PMC free article] [PubMed] [Cross Ref]
• Boon K, Edwards JB, Eberhart CG, Riggins GJ. Identification of astrocytoma associated genes including cell surface markers. BMC Cancer. 2004;4:39. doi: 10.1186/1471-2407-4-39. [PMC free article]
[PubMed] [Cross Ref]
• R Development Core Team A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. 2006.
• Venables WN, Ripley BD. Modern Applied Statistics with S. Fourth. New York: Springer; 2002.
• Dempster A, Laird N, Rubin D. Maximum Likelihood from Incomplete Data via the EM-Algorithm. Journal of the Royal Statistical Society Series B (Methodological) 1977;39:1–38.
• Leisch F. FlexMix: A general framework for finite mixture models and latent class regression in R. Journal of Statistical Software. 2004;11
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
• PubMed
PubMed citations for these articles
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2147036/?tool=pubmed","timestamp":"2014-04-19T10:23:28Z","content_type":null,"content_length":"108228","record_id":"<urn:uuid:831b404d-3c27-4cc6-8322-9f0f5cfea3df>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Find the maximum area of a parallelogram drawn in the area enclosed by the curves y=4-x^2 & y=x^2+2x
We will use geogebra! Let's see if we can do this.
1) Type in f(x) = 4 - x^2
2) Type in g(x) = x^2 + 2x
3) Use the intersection tool on the two functions and the points A and B will be created.
4) Relabel B to C.
5) Create a slider called b set the interval to -2 to 1 with a step size .001. Type (b,f(b)). Point B will be created on f(x).
6) Use the line tool to create a line from A to B.
7) Use the parallel line tool to create a line that is parallel to AB and passes through C.
8) Get the intersection of this second line and g(x) using the intersection tool. Point D and E will be created. Hide E.
9) Create a line through BC.
10) Draw a line through D that is parallel to BC.
Notice that we now have a generic parallelogram drawn between the two curves. This is all we need!
11) You can hide the lines as best as you can. Create a polygon that uses A,B,C and D as its vertices.
12) Use the slider to get the maximum area. It is not difficult to get 6.75
13) You should have something close to the drawing shown below. | {"url":"http://www.mathisfunforum.com/post.php?tid=17790&qid=220411","timestamp":"2014-04-17T04:03:03Z","content_type":null,"content_length":"16849","record_id":"<urn:uuid:0bea3da1-a602-4d41-930d-0d65d42cadd5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
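The construction can be cross-checked numerically. With A and C the intersection points of the two parabolas and B = (b, f(b)), the parallel-line construction amounts to D = A + C − B (which, one can verify, automatically lies on g). A short Python scan over the slider range reproduces the maximum area of 6.75 at b = −0.5:

```python
def f(x):  # y = 4 - x^2
    return 4 - x * x

def g(x):  # y = x^2 + 2x
    return x * x + 2 * x

# Intersections of f and g: 4 - x^2 = x^2 + 2x  ->  x^2 + x - 2 = 0
A = (-2.0, f(-2.0))  # (-2, 0)
C = (1.0, f(1.0))    # (1, 3)

def area(b):
    B = (b, f(b))
    D = (A[0] + C[0] - B[0], A[1] + C[1] - B[1])  # parallelogram ABCD
    # |cross(B - A, D - A)| is the parallelogram's area
    ux, uy = B[0] - A[0], B[1] - A[1]
    vx, vy = D[0] - A[0], D[1] - A[1]
    return abs(ux * vy - uy * vx)

# scan the slider range -2 < b < 1 in steps of 0.001, as in step 5
best_b = max((b / 1000.0 for b in range(-1999, 1000)), key=area)
best_area = area(best_b)
```

Algebraically the area works out to 3(1 − b)(b + 2), which is maximized at b = −1/2 with value 27/4 = 6.75, matching the slider experiment in step 12.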
Atkin's theorem on pseudo-squares
R. Balasubramanian and D.S. Ramana
Institute for Mathematical Sciences, Chennai, India
Abstract: We give an elementary proof of a theorem of A.O.L. Atkin on pseudo-squares. As pointed out by Atkin, from this theorem it immediately follows that there exists an infinite sequence of
positive integers, whose $j$-th term $s(j)$ satisfies $s(j)=j^2 + O(\log(j))$, such that the set of integers representable as a sum of two distinct terms of this sequence is of positive asymptotic
density.
Classification (MSC2000): 11B13
Full text of the article:
Electronic fulltext finalized on: 6 Apr 2000. This page was last modified: 16 Nov 2001.
© 2000 Mathematical Institute of the Serbian Academy of Science and Arts
© 2000--2001 ELibM for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/PIMB/077/3.html","timestamp":"2014-04-20T09:15:10Z","content_type":null,"content_length":"3475","record_id":"<urn:uuid:ba4431c3-e7fe-4265-8de5-c57de0b133f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Transformations of Logarithmic Functions
February 9th 2009, 02:03 PM
Transformations of Logarithmic Functions
Hey guys
I have a question about transformations and the laws of logarithms.
The question is:
"Given y=1/5log(9x-36)^15-13 (the log has a base of 3, not 10. I don't know how to write that in clearly, so I hope it makes sense)
Apply the laws of logarithms to change the form of the equation. Graph the function by first stating the basic function and then describe each transformation applied in order. Specifically
describe what happens to the domain, range, asymptotes, x-intercept, and vertical stretch or compression."
I am fine with graphing the transformation, I just don't understand exactly how to "apply the laws of logarithms to change the form of the equation"
Can anybody help me out a bit?
Thanks a lot
February 9th 2009, 07:54 PM
is the question?
$y=\frac{1}{5}\cdot \log_3 (9x-36)^{15}-13$
$y=\frac{1}{5}\cdot 15\log_3 (9x-36)-13$
$y=3\cdot\log_3 9(x-4)-13$
$y=3\cdot\log_3 9+\log_3(x-4)-13$
$y=3\cdot2 +\log_3(x-4)-13$
$y=\log_3(x-4)+6 -13$
$y=\log_3(x-4) -7$
lol i haven't done logs in a while.. only about 30% confident in my log abilities... i hope this isn't completely wrong and i'm steering you wrong.
February 9th 2009, 07:55 PM
Hey guys
I have a question about transformations and the laws of logarithms.
The question is:
"Given y=1/5log(9x-36)^15-13 (the log has a base of 3, not 10. I don't know how to write that in clearly, so I hope it makes sense)
Apply the laws of logarithms to change the form of the equation. Graph the function by first stating the basic function and then describe each transformation applied in order. Specifically
describe what happens to the domain, range, asymptotes, x-intercept, and vertical stretch or compression."
I am fine with graphing the transformation, I just don't understand exactly how to "apply the laws of logarithms to change the form of the equation"
Can anybody help me out a bit?
Thanks a lot
Let me make sure that I am reading your equation correctly first, is it
$y = \frac{1}{5} log_3 (9x-36)^{15} -13$
or
$y = \frac{1}{5 log_3 (9x-36)^{15}} -13$
February 9th 2009, 09:24 PM
I'm pretty sure that the exponent is not 15-13.
Really the only thing you can do to this equation, if the way I am reading it is correct is bring the 15 in front. You can't split up the (9x - 36) into two separate logs.:
$y = \frac{1}{5} log_3 (9x - 36)^{15} - 13$
$= 3 log_3 (9x - 36) - 13$
February 10th 2009, 01:39 AM
February 10th 2009, 07:42 AM
Hey Guys
You got the equation right, sorry for not being more clear...
I was able to find an example of this question that I had missed when going over my course material. So you guys did get it right =)
Thanks so much for the help!
February 10th 2009, 08:11 PM
I'm pretty sure that the exponent is not 15-13.
Really the only thing you can do to this equation, if the way I am reading it is correct is bring the 15 in front. You can't split up the (9x - 36) into two separate logs.:
$y = \frac{1}{5} log_3 (9x - 36)^{15} - 13$
$= 3 log_3 (9x - 36) - 13$
from $\log_3(9x-36)$ i'm factoring out $log_3[9(x-4)]$ and using $\log(a\cdot b)= \log(a)+ \log(b)$ to separate. Then using: If $x=b^y ,$ then $y = \log_b (x)$ to translate into $\log_3 9 = 2$
because $3^2 = 9$
Do i see a function in there? y=mx+b
Oh i see it.. i just added instead of factoring. Turns out my fundamental algebra is the prob not my log abilities. lol
Glad you got it cazury | {"url":"http://mathhelpforum.com/algebra/72711-transformations-logarithmic-functions-print.html","timestamp":"2014-04-19T18:46:54Z","content_type":null,"content_length":"16189","record_id":"<urn:uuid:2440f5cb-9170-445a-802e-d32c754daa36>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
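A quick numeric check (Python) confirms that the simplification the thread settles on — y = 3·log₃(9x − 36) − 13 — agrees with the original expression on its domain x > 4:

```python
import math

def log3(v):
    return math.log(v) / math.log(3)

def original(x):
    # y = (1/5) * log_3((9x - 36)^15) - 13, defined for x > 4
    return (1.0 / 5.0) * log3((9 * x - 36) ** 15) - 13

def simplified(x):
    # y = 3 * log_3(9x - 36) - 13
    return 3 * log3(9 * x - 36) - 13

# compare at a few sample points in the domain
checks = [abs(original(x) - simplified(x)) for x in (4.1, 5.0, 10.0, 100.0)]
```

The two functions coincide everywhere on x > 4, so the graph is just the basic log₃ curve shifted and stretched per the transformation analysis above.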
SPOJ.com - Problem PERMUT2
Submit All submissions Best solutions PS PDF Back to list
SPOJ Problem Set (classical)
379. Ambiguous Permutations
Problem code: PERMUT2
Some programming contest problems are really tricky: not only do they require a different output format from what you might have expected, but also the sample output does not show the difference.
For an example, let us look at permutations.
A permutation of the integers 1 to n is an ordering of these integers. So the natural way to represent a permutation is to list the integers in this order. With n = 5, a permutation might look
like 2, 3, 4, 5, 1.
However, there is another possibility of representing a permutation: You create a list of numbers where the i-th number is the position of the integer i in the permutation. Let us call this second
possibility an inverse permutation. The inverse permutation for the sequence above is 5, 1, 2, 3, 4.
An ambiguous permutation is a permutation which cannot be distinguished from its inverse permutation. The permutation 1, 4, 3, 2 for example is ambiguous, because its inverse permutation is the
same. To get rid of such annoying sample test cases, you have to write a program which detects if a given permutation is ambiguous or not.
Input Specification
The input contains several test cases.
The first line of each test case contains an integer n (1 ≤ n ≤ 100000). Then a permutation of the integers 1 to n follows in the next line. There is exactly one space character between
consecutive integers. You can assume that every integer between 1 and n appears exactly once in the permutation.
The last test case is followed by a zero.
Output Specification
For each test case output whether the permutation is ambiguous or not. Adhere to the format shown in the sample output.
Sample Input
5
2 3 4 5 1
0
Sample Output
not ambiguous
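A direct solution follows the definition: build the inverse permutation and compare it with the original. A Python sketch of the core check (input parsing omitted):

```python
def is_ambiguous(perm):
    """Return True if the permutation equals its own inverse.

    perm lists the integers 1..n; inv[i-1] records the 1-based
    position of the integer i within perm.
    """
    inv = [0] * len(perm)
    for pos, value in enumerate(perm, start=1):
        inv[value - 1] = pos
    return inv == perm

# the two permutations discussed in the statement
example_ambiguous = is_ambiguous([1, 4, 3, 2])  # inverse is itself
example_plain = is_ambiguous([2, 3, 4, 5, 1])   # inverse is 5 1 2 3 4
```

The check is O(n) per test case, so even n = 100000 is far inside the 10 s limit.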
Added by: Adrian Kuegel
Date: 2005-06-24
Time limit: 10s
Source limit: 50000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: All except: NODEJS PERL 6
Resource: own problem, used in University of Ulm Local Contest 2005
hide comments
2014-02-11 03:00:56 Alexandra Mirtcheva
Is it just me, or is the definition of an inverse permutation wrong as described in the problem.
Isn't the inverse permutation of 2,3,4,5,1 = 1,5,4,3,2?
Last edit: 2014-02-11 03:03:21
2014-02-10 13:15:19 do_do
10s,O(n),its good... :)
2014-01-25 22:08:07 RAJAT SINGH
very easy...... got AC in first attempt
2014-01-18 21:16:06 Petar Velièkoviæ
You need O(n) time to read the data anyway... so no.
2014-01-16 16:56:15 Anubhav Balodhi ;-D
got Ac in the first attempt... is there any better solution than O(n) ?!
2013-07-16 14:57:46 (Tjandra Satria Gunawan)(曾毅昆)
@Ouditchya Sinha: Actually 10 languages.. You can resubmit your PYTH 3.2.3 with PYTH 3.2.3n :-)
2013-07-16 14:46:03 Ouditchya Sinha
This was fun. AC in 9 languages! :)
@Tjandra : 10 now! Thank you I didn't notice that. I bet you can do it in atleast 15. :) :D
Last edit: 2013-07-16 19:21:57
2013-03-10 06:35:08 killerz
Easy one :)
2013-01-26 19:09:41 vishnu
my 50th AC code :)
half century!! | {"url":"http://www.spoj.com/problems/PERMUT2/","timestamp":"2014-04-16T04:11:17Z","content_type":null,"content_length":"23754","record_id":"<urn:uuid:2ead3a21-53f3-41f7-a9a9-7bbbc10a2710>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Aggregated Weighted Summary Statistics Using Probability Weight
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Aggregated Weighted Summary Statistics Using Probability Weights
From Steve Samuels <sjsamuels@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Aggregated Weighted Summary Statistics Using Probability Weights
Date Tue, 13 Jul 2010 22:58:32 -0400
Lindsay Newman--
You do not show us actual commands or results as requested by the FAQ,
"Say exactly what you typed and exactly what Stata typed (or did) in
response. N.B. exactly! If you can, reproduce the error with one of
Stata's provided datasets or a simple concocted dataset that you
include in your posting."
In the following example, the discrepancy between #1 and #2,#3, and
#4 is absent.
sysuse auto, clear
bys foreign: sum mpg [aw =rep78] //1
sum mpg if foreign==0 [aw=rep78] //2
sysuse auto, clear
svyset _n [pweight=rep78]
svy: mean mpg if foreign==0 //3
estat sd
di sqrt(e(N) * el(e(V_srs),1,1)) //4
On Tue, Jul 13, 2010 at 11:17 AM, Lindsay Newman <lrshorr@gmail.com> wrote:
> I am using survey data with probability weights. I want to compute
> various summary statistics, including the mean and standard deviation,
> of the data at an aggregated level. In particular, I want to use
> individual responses to certain questions to calculate the country
> year weighted mean and standard deviation of the response. For
> instance, if 200 individuals responded to a particular question, what
> is the weighted average response for that country year? What is the
> weighted standard deviation of the responses for that country year?
> When I sort by country year and use the following code:
> (1) by countryyear: summarize (response variable) [aw=weight variable]
> I get different results for the standard deviations than when I either run:
> (2) summarize (response variable) if countryyear ==x [aw=weight variable]
> or when I calculate the standard deviation manually using:
> (3) di sqrt(e(N) * el(e(V_srs),1,1))
> When I analyze the responses for just one country year (i.e. deleting
> all but responses from a single country year) using:
> (4) svy: mean (response variable) estat sd,
> the standard deviations match 2 and 3 but not 1. Why is this?
> Thank you.
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Steven Samuels
18 Cantine's Island
Saugerties NY 12477
Voice: 845-246-0774
Fax: 206-202-4783
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2010-07/msg00720.html","timestamp":"2014-04-20T06:18:47Z","content_type":null,"content_length":"10512","record_id":"<urn:uuid:5b0b5a43-5f17-49b7-b9a1-2b812a04515c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probabilistic dominated convergence theorem
December 13th 2009, 04:48 AM #1
Senior Member
Jan 2009
Probabilistic dominated convergence theorem
Dominated convergence theorem: Suppose Xn->X almost surely and |Xn| ≤ W for all n, where E(W)<∞. Then E(Xn)->E(X).
1) Suppose Xn->X in probability and |Xn| ≤ W for all n, where E(W)<∞. Show that E(Xn)->E(X).
2) Suppose Xn->X in probability and (Xn)^2 ≤ W for all n, where E(W)<∞. Show that Xn->X in mean square. (i.e. E[(Xn-X)^2]->0)
1) Let E(Xnk) be any subsequence of {E(Xn)}.
Then Xnk->X in probability => there exists a further subsequence Xnk' such that Xnk'->X almost surely.
By dominated convergence theorem, this implies that E(Xnk')->E(X) and so every subsequence of {E(Xn)} has a further subsequence of it which converges to E(X) => E(Xn)->E(X).
2) But now I am stuck with question 2. The assumption is replaced by (Xn)^2 ≤ W, and we need to prove more: Xn->X in mean square. I tried to modify the proof in question 1, but it doesn't seem to
work here. How can we modify the proof? Or do we need something new?
Any help is much appreciated!
This implication is true, but not obvious. Can you justify it? (or maybe it is from your lecture notes)
2) But now I am stuck with question 2. The assumption is replaced by (Xn)^2 ≤ W, and we need to prove more: Xn->X in mean square. I tried to modify the proof in question 1, but it doesn't seem to
work here. How can we modify the proof? Or do we need something new?
Nothing new is needed. Just prove that the sequence $Y_n=(X_n-X)^2$ satisfies the assumptions of question 1) (with 0 as the limit).
Hint: using the property about subsequences, you have $|X|^2\leq W$ as well.
Yes, it's from my notes. I am not sure why it is true though.
Nothing new is needed. Just prove that the sequence $Y_n=(X_n-X)^2$ satisfies the assumptions of question 1) (with 0 as the limit).
Hint: using the property about subsequences, you have $|X|^2\leq W$ as well.
Xn->X in probability
=>|Xn-X|->0 in probability
=>|Xn-X|^2 ->0 in probability
So the first assumption is satisfied.
But I don't know how to prove that the other assumptions are satisfied. Here is my attempt:
|(Xn-X)^2| = (Xn-X)^2 = (Xn)^2 - 2(Xn)(X) + X^2 ≤ |Xn|^2 + 2|Xn| |X| + |X|^2 ≤ 2W +2|Xn| |X|
But I have no idea how to prove E(2W +2|Xn| |X|) < ∞
Please help!
I told you you can prove $|X|^2\leq W$, so that $|X_n||X|\leq \sqrt{W}\sqrt{W}=W$. So you just have to prove what I claimed.
OK, so E(4W)< ∞, so it remains to prove that "if Xn->X in probability and (Xn)^2 ≤ W for all n, then X^2 ≤ W."
I wrote out all the given information and definitions, and searched for properties and theorems, but I don't see the connection. Which property about subsequence do I need?
Last edited by kingwinner; December 14th 2009 at 09:01 AM.
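Pulling together the hints in the replies, a sketch of the remaining steps (assuming question 1 is proved as above):

```latex
Since $X_n \to X$ in probability, there is a subsequence $X_{n_k} \to X$
almost surely. From $X_{n_k}^2 \leq W$ for all $k$, letting $k \to \infty$
gives $X^2 \leq W$ almost surely. Hence
$(X_n - X)^2 \leq 2X_n^2 + 2X^2 \leq 4W,$
where $E(4W) < \infty$, and $(X_n - X)^2 \to 0$ in probability.
Applying question 1 to $Y_n = (X_n - X)^2$ (with limit $0$) yields
$E[(X_n - X)^2] \to 0$.
```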
December 13th 2009, 06:34 AM #2
MHF Contributor
Aug 2008
Paris, France
December 14th 2009, 01:40 AM #3
Senior Member
Jan 2009
December 14th 2009, 03:25 AM #4
MHF Contributor
Aug 2008
Paris, France
December 14th 2009, 06:53 AM #5
Senior Member
Jan 2009
December 14th 2009, 09:52 AM #6
MHF Contributor
Aug 2008
Paris, France | {"url":"http://mathhelpforum.com/advanced-statistics/120186-probabilistic-dominated-convergence-theorem.html","timestamp":"2014-04-16T13:28:08Z","content_type":null,"content_length":"53418","record_id":"<urn:uuid:63cb3a04-1c47-40bf-8b04-85c4de1ad807>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
How well can hand size predict height?
This activity has been undergone anonymous peer review.
This activity was anonymously reviewed by educators with appropriate statistics background according to the CAUSE review criteria for its pedagogic collection.
This page first made public: May 17, 2007
This material is replicated on a number of sites as part of the
SERC Pedagogic Service Project
This activity uses students' own data to introduce bivariate relationships, using hand size to predict height. Students enter their data through a real-time online database. Data from different classes
are stored and accumulated in the database. This real-time database approach speeds up the data gathering process and shifts data entry and cleansing from the instructor to the students, engaging
them in the process of data production.
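As a concrete sketch of the prediction task, here is a least-squares fit on a few hypothetical hand-span/height pairs (Python; the numbers are invented for illustration and are not drawn from the activity database):

```python
# Hypothetical (hand span in cm, height in cm) pairs -- invented for
# illustration only, not data from the activity database.
hands = [18.0, 19.5, 20.0, 21.5, 22.0, 23.5]
heights = [160.0, 166.0, 170.0, 175.0, 178.0, 185.0]

n = len(hands)
mean_x = sum(hands) / n
mean_y = sum(heights) / n

# least-squares slope and intercept: minimize the sum of squared residuals
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hands, heights))
sxx = sum((x - mean_x) ** 2 for x in hands)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def predict(hand):
    return intercept + slope * hand
```

With the class's accumulated real-time data substituted in, the slope answers the activity's central question: how many centimeters of height each extra centimeter of hand size predicts.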
Learning Goals
This activity is designed to help students learn the issues related to data measurement and production, and to learn the relationship between two variables. By the end of this activity, students will be able to:
• explain the importance of data measurement and production in a given context,
• compare different measurements and distinguish which one has smaller variation when measured repeatedly,
• apply graphical and numerical techniques to describe and interpret the relationship between two variables in a given context,
• explain the least square method in a given context,
• distinguish between causation and association,
• determine if a linear model is appropriate or not using graphical residual analysis tools,
• identify outlying cases and determine the effect of the outlying cases.
Context for Use
• This activity is appropriate for introducing bivariate relationship at introductory level with high school algebra as prerequisite.
• This activity can be conducted as a group project, an individual project or a home work project.
• The activity is easy to conduct. The actual time for guiding students to collect, enter and download the data is usually less than 10 minutes for the entire class.
Description and Teaching Materials
The detailed description and materials of this activity are located at the site:
Real-Time Hands-on Activities
The following materials are used to introduce the bivariate relationship. One may choose to use a subset of the materials for her/his class.
• The PowerPoint slides: used for introducing bivariate relationships. (PowerPoint 1.4MB May16 07) This is a complete set of materials for a class lecture. You may already have your own lecture notes; feel free to take any part of the slides.
• Hand size data (20 cases): (Excel 6kB May15 07) This data set is part of the hand-size data, randomly selected from the activity database. It is used throughout the PowerPoint slides as the class demonstration to introduce the bivariate relationship.
• Activity Worksheet - Hand Size: (Microsoft Word 37kB May17 07) This is a set of questions that guides students to investigate how well hand size can predict height. It is usually used as a group activity. It is suggested starting the group activity during the class period (or lab session), completing the activity after class, and turning in the worksheet the next class period.
• The hand size data (50 cases): (Excel 11kB May15 07) This data is a subset of the hand size data. The questions in the Hand-Size worksheet are based on the analysis of this data set.
• Activity Worksheet - online applet: (Microsoft Word 26kB May15 07) This worksheet is for students to learn the effects of outliers and influential cases. It may be assigned as a group activity or
as an individual homework activity.
• Questions for assessing learning outcomes (Microsoft Word 99kB May3 07).
Teaching Notes and Tips
Features of the Real-Time online data collection that the instructor should be aware of:
• This activity requires students to collect their own data and enter it into an online database. Here is the instruction sheet for the instructor: Instruction for instructor to facilitate the data collection. (Microsoft Word 28kB May17 07)
• The equipment for conducting this activity is (a) a one-foot actual or paper-copy ruler and (b) a computer station with an Internet connection.
• The best classroom setting is a computer classroom with Internet connection. Students can also enter their own data using any computer that has Internet connection after class.
Prior to conducting this activity, the instructor needs to:
• spend half an hour to navigate the Real-Time Hands-on Activities site to get familiar with the site.
• register to request an activity code for the activity before the class.
• prepare paper rulers or actual rulers and make sure the Internet connection works in the computer lab.
During the session in which the group activity is conducted:
• Start with a discussion of how to measure hand size, and ask students to compare different ways of measuring it in terms of (a) whether the measurement actually captures 'hand size', (b) whether it is easy to take, and (c) how well it can be measured repeatedly.
• Compare hand length (from the wrist to the tip of the middle finger) and hand width (from the tip of the thumb to the tip of the little finger, with the hand stretched out). Both are valid measurements and easy to take; hand length is more repeatable.
• Ask students to go to the Real-Time Hands-on Activities and direct them to enter the data. See Instruction for Instructor (Microsoft Word 28kB May17 07) for step-by-step instruction.
• Ask students to make an 'educated' guess as to which of 'hand length' or 'hand width' is the better predictor, and to give their reasons. Then make a comparison later, after the analysis.
• Outlier cases may occur. For example, based on how hand length and hand width are measured in this activity, hand width should always be longer than hand length. If a case shows hand width shorter than hand length, this provides an opportunity to discuss measurement error and the effect of outliers.
• The hand-size data for the worksheet has a case with hand width shorter than hand length. Students are asked to analyze the data first without knowing about this case; then to investigate the data, delete this case, re-analyze the data and make a comparison.
The use of this activity beyond introductory level:
• This activity may be used to introduce models with both qualitative (gender) and quantitative (hand length) predictors.
• This activity may be used to introduce the concept of multicollinearity by using both hand length and hand width as predictors.
• This activity may be used to introduce variable selection techniques by including gender, hand width, and hand length as predictors.
Students are assessed using
• Classroom Group activity worksheet: Activity Worksheet - Hand Size: (Microsoft Word 37kB May17 07) The data set used for this activity is The hand size data (50 cases): (Excel 11kB May15 07).
This activity assesses students' overall knowledge of bivariate relationships. In this data set there is a case with a measurement error: hand width is shorter than hand length. The error occurred because the student did not stretch out the hand when taking the hand width. Students are asked to analyze the entire data set; then they are asked to locate the measurement-error case, identify at least two more possible errors that may occur, delete the case, re-analyze the data and make a comparison.
• Activity Worksheet - Using an online applet (see references for the source): Activity Worksheet - online applet: (Microsoft Word 26kB May15 07) This activity assesses students' understanding of the
effects of outliers and influential cases. It may be assigned as a group activity or as an individual homework activity.
• Questions for assessing learning outcomes (Microsoft Word 99kB May3 07). This is a set of multiple-choice questions for assessing learning outcomes at the end of the topic, or for use on a final exam. You need not use all of them at once; you may choose any subset of these questions for your class.
References and Resources
• Real-Time Hands-on Activities site: This site consists of the detailed description of this activity, and all of the materials related to this activity. In addition, there are several other
real-time online activities available on this site.
• Online Applet for Bivariate Relationships. This applet allows students to create their own scatter plots, observe and compare the patterns and correlation coefficients as well as least-squares
lines. It is especially useful for students to create different situations involving outliers and observe the effect of the outliers. | {"url":"http://serc.carleton.edu/sp/cause/cooperative/examples/18172.html","timestamp":"2014-04-19T10:22:58Z","content_type":null,"content_length":"37755","record_id":"<urn:uuid:ad515cda-c89a-44a0-a92f-09f62bcb2c34>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Matheology § 224
Date: Mar 24, 2013 9:57 AM
Author: fom
Subject: Re: Matheology § 224
On 3/24/2013 4:08 AM, WM wrote:
> On 24 Mrz., 01:41, Virgil <vir...@ligriv.com> wrote:
>> In article
>> <5c674f26-92a7-44ed-b080-692d23ec3...@g4g2000yqd.googlegroups.com>,
>> WM <mueck...@rz.fh-augsburg.de> wrote:
>>> Do you think it is not a contradiction, to have the statements:
>>> 1) 0.111... has more 1's than any finite sequence of 1's.
>>> 2) But if we remove all finite sequences of 1's, then nothing remains.
>> In proper English (1) should read
>> "the infinite sequence represented by 0.111... has more 1's in it
>> than in any finite sequence of 1's."
> You seem to have difficulties when terminology of proper mathematics
> is in question.
You are repeatedly asked for proper definitions of your
use of terms in statements. That is what proper mathematics requires.
> 0.111... is an infinite sequence that represents a
> number
Well, the not-so-finite finite reappears. WM is the sometimes
ultrafinitist, who is always assuming infinity.
>- it is not only representing an infinite sequence.
That is why proper definitions are needed.
All '0.111...' means in these discussions is that no one can take
away your crayons. No definition. No intelligible meaning.
>> And if WM wishes to prevail, he WM must explain how he intends to remove
>> all finite sequences of 1's without removing all 1's in the process.
> That is simple: All finite sequences like
> 0.1
> 0.11
> 0.111
> ...
> can be removed from 1/9 without ever removing all.
That's an assertion.
He asked for explanation.
Please provide that which had been requested.
> So, if 1/9 has a
> decimal representation, something must remain, at least the
> counterfactual belief of matheologians.
Told you before -- mathematics does not deal with
the truth conditions of counterfactuals. The only
arguing from belief here is you.
See "conceited reasoner"
>> The fact is that one cannot remove every set containing a natural from a
>> family of sets some of which contain that natural of without removing
>> that natural from the union of set of remaining sets.
> of without removing? Proper English?
Proper Mathematics?
The occasional typographic error is far more decipherable
than your theory of monotonic inclusive crayon marks.
> I proved
You have *proven* nothing.
Once again, the only thing close to *proof* with which
you may be associated is the result of WH's patient
attempts to discern anything close to rational from
your remarks. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8746761","timestamp":"2014-04-16T16:25:10Z","content_type":null,"content_length":"4448","record_id":"<urn:uuid:7a9e063b-b5d8-4c12-8e41-bb04373f95e4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bobby Grizzard's Homepage | The University of Texas
Bobby Grizzard
office: RLM 11.146
office phone: (512) 475-8809
email: rgrizzard@math.utexas.edu
Office hours for Spring 2013: MW 11:00am - 12:00pm, TTh 9:30am - 10:30am
PHOTOS OF MY BABY!
I am a Ph.D. Candidate working under Professor Jeffrey Vaaler at The University of Texas at Austin Department of Mathematics.
I am interested in the solutions of polynomial equations in one variable (the theory of algebraic numbers), and in the algebraic solutions to polynomial equations in several variables (diophantine
geometry). I am especially interested in heights, which are real-valued functions that measure the complexity of an algebraic number or an algebraic point on a variety. Most of my research has
focused on studying properties of heights over infinite extensions of the rational numbers. This is motivated by applications to diophantine equations, where a powerful tool is to compare height
estimates to prove that equations don't have too many solutions.
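For illustration (an addition here, not text from the page), the prototypical example is the naive height of a rational number:

```latex
% For a rational p/q in lowest terms (gcd(p,q) = 1), the naive
% multiplicative height is
H\!\left(\frac{p}{q}\right) \;=\; \max\bigl(\lvert p\rvert,\, \lvert q\rvert\bigr),
% so H(1/2) = 2 while H(100/201) = 201: rationals that are harder to
% write down have larger height, even if they are close together on the line.
```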
Likes: heights, diophantine geometry, algebraic number theory, Galois theory, group theory, arithmetic statistics, SAGE, GAP, MAGMA
Papers, etc.
(long) research statement
Upcoming/Recent Travel
⊕ 9/7/13-9/8/13: PANTS XX, Davidson College
⊗ 10/5/13-10/6/13: Maine-Québec Number Theory Conference, University of Maine
⊕ 11/25/13-12/4/13: Heights in diophantine geometry, group theory, and combinatorics, ESI, Vienna
⊗ 3/15/14-3/19/14: Arizona Winter School 2014: Arithmetic Statistics, University of Arizona
⊕ 7/14/14-7/25/14: Second ERC: Diophantine Geometry, Unlikely Intersections and Algebraic Dynamics, Cetraro, Italy
Current: M 427L, Vector Calculus, with Dr. Bart Goddard
Recent Courses:
⊕ M 408S, Integral Calculus, with Dr. Elizabeth Stepp, Spring 2013.
⊗ M316L Foundations of Geometry, Statistics, and Probability, Spring 2012
⊕ M316K Foundations of Arithmetic, Fall 2011
Previous Courses (full list)
UT Math Home | {"url":"http://www.ma.utexas.edu/users/rgrizzard/","timestamp":"2014-04-21T07:05:28Z","content_type":null,"content_length":"3761","record_id":"<urn:uuid:2a363412-7835-4c98-8b42-1cffc53394ce>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
How To Make a Distributed BitCoin Escrow Service
Summary: Giving BitCoin a decentralized escrow would give it an advantage over all other exchange mediums, which might increase its adoption rate. Details follow.
Escrows seem to be the norm for BitCoin today. An example:
Alice wants to buy $5 USD worth of BitCoins from Bob, but neither Alice nor Bob fully trust the other, so they go to a site they both trust--say Mt. Gox. There they deposit their respective monies
and there they have Mt. Gox make the exchange for them.
No offense to Mt. Gox (a site I like), but can we do without its escrow service?
An almost distributed alternative:
• Charlie, a trusted third-party, generates a BitCoin private key.
• Charlie then uses the Unix command split to split the private key in half--giving one half to Alice and one half to Bob.
• Bob deposits $5 USD worth of BitCoins into the split BitCoin account;
• Alice verifies the transaction using the public block;
• Alice sends $5 USD to Bob by PayPal;
• Bob verifies the PayPal transaction;
• Bob sends Alice his half of the split private key so Alice can access the BitCoins he deposited earlier.
(For simplicity I omit part of the PayPal details like who pays the transaction fee and how long you should wait to avoid chargeback fraud. I also omit any incentive for Bob to perform the final step.)
More advanced almost-distributed examples can be made if we substitute something more sophisticated for the Unix command split. For example: a Shamir's secret sharing scheme implementation like the one referenced in [1]. Such a utility allows Alice and Bob to appoint an arbiter in case they get into a disagreement.
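Tangentially: a literal half-split hands each party half the key bits, but even a simple 2-of-2 XOR scheme leaks nothing to either party alone. A hypothetical sketch (the key below is a placeholder, not a real BitCoin key):

```python
import secrets

# 2-of-2 secret sharing by XOR: share_a is uniformly random, and
# share_b = key XOR share_a. Either share alone looks like random noise;
# XOR-ing the two shares back together recovers the key exactly.
def split_secret(key: bytes):
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))
```

Note this still doesn't answer the harder question in the post: someone must hold the full key at generation time before splitting it.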
The problem with all of this, of course, is that we must trust Charlie to not abuse the full copy of the private key he creates.
The ideal solution would be for Alice and Bob to each generate half of the private key on their own. I don't fully understand the math used in modern keypairs, but I doubt this is possible with the
current algorithm.
Is there an alternative way for Alice and Bob to each acquire half of a private key without giving the whole key to any party?
[1] See: | {"url":"https://bitcointalk.org/?topic=1283.0","timestamp":"2014-04-17T01:08:41Z","content_type":null,"content_length":"96287","record_id":"<urn:uuid:a1304366-448f-4614-a002-0f087de550da>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Correlation (in statistics)
From Encyclopedia of Mathematics
A dependence between random variables not necessarily expressed by a rigorous functional relationship. Unlike functional dependence, a correlation is, as a rule, considered when one of the random
variables depends not only on the other (given) one, but also on several random factors. The dependence between two random events is manifested in the fact that the conditional probability of one of
them, given the occurrence of the other, differs from the unconditional probability. Similarly, the influence of one random variable on another is characterized by the conditional distributions of
one of them, given fixed values of the other. Let regression of
One has
if, moreover,
When studying the interdependence of several random variables one uses the correlation matrix. A measure of the linear relationship between one random variable and a collection of others is the multiple-correlation coefficient. The mutual relationship of two variables, with the influence of the remaining ones eliminated, is characterized by the partial correlation coefficient.
For measures of correlation based on rank statistics (cf. Rank statistic) see Kendall coefficient of rank correlation; Spearman coefficient of rank correlation.
Mathematical statisticians have developed methods for estimating coefficients that characterize the correlation between random variables or tests; there are also methods to test hypotheses concerning
their values, using their sampling analogues. These methods are collectively known as correlation analysis. Correlation analysis of statistical data consists of the following basic practical steps:
1) the construction of a scatter plot and the compilation of a correlation table; 2) the computation of sampling correlation ratios or correlation coefficients; 3) testing statistical hypotheses
concerning the significance of the dependence. Further investigation may consist in establishing the concrete form of the dependence between the variables (see Regression).
Among the aids to analysis of two-dimensional sample data are the scatter plot and the correlation table. The scatter plot is obtained by plotting the sample points on the coordinate plane.
Examination of the configuration formed by the points of the scatter plot yields a preliminary idea of the type of dependence between the random variables (e.g. whether one of the variables increases
or decreases on the average as the other increases). Prior to numerical processing, the results are usually grouped and presented in the form of a correlation table. In each entry of this table one
writes the number of observations whose values fall in the corresponding cell of the table.
For more accurate information about the nature and strength of the relationship than that provided by the scatter plot, one turns to the correlation coefficient and correlation ratio. The sample
correlation coefficient is defined by the formula $$\hat\rho=\frac{\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum_{i=1}^n(x_i-\bar x)^2\;\sum_{i=1}^n(y_i-\bar y)^2}}.$$
In the case of a large number of independent observations, governed by one and the same near-normal distribution, the sample correlation coefficient is close to the true correlation coefficient of the underlying distribution.
The sample correlation ratio of $Y$ on $X$ is defined by $$\hat\eta^2=\frac{\sum_x n_x(\bar y_x-\bar y)^2}{\sum_i(y_i-\bar y)^2},$$ where the numerator represents the spread of the conditional mean values $\bar y_x$ about the overall mean $\bar y$.
The testing of hypotheses concerning the significance of a relationship is based on the distributions of the sample correlation characteristics. In the case of a normal distribution, the value of the sample correlation coefficient $\hat\rho$ is judged significantly different from zero when it exceeds a critical value that depends on the sample size $n$. Even at relatively small values of $n$, the distribution of the transformed quantity $z=\frac{1}{2}\ln\frac{1+\hat\rho}{1-\hat\rho}$ is close to normal, with mean $\frac{1}{2}\ln\frac{1+\rho}{1-\rho}$ and variance $1/(n-3)$.
For the distribution of the sample correlation ratio and for tests of the linearity hypothesis for the regression, see [3].
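As a numerical illustration (an addition, not part of the original article): the sample correlation coefficient can be computed directly, together with Fisher's z-transform, which is commonly used for inference about it:

```python
import math

# Sample correlation coefficient of paired observations (x_i, y_i).
def sample_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Fisher's z-transform; for normal samples z is approximately normal
# with variance 1/(n - 3), which simplifies tests and intervals.
def fisher_z(r):
    return math.atanh(r)

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]
r = sample_r(xs, ys)
```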
[1] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)
[2] B.L. van der Waerden, "Mathematische Statistik" , Springer (1957)
[3] M.G. Kendall, A. Stuart, "The advanced theory of statistics" , 2. Inference and relationship , Griffin (1979)
[4] S.A. Aivazyan, "Statistical research on dependence" , Moscow (1968) (In Russian)
How to Cite This Entry:
Correlation (in statistics). A.V. Prokhorov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Correlation_(in_statistics)&oldid=11629
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Correlation_(in_statistics)","timestamp":"2014-04-17T06:57:22Z","content_type":null,"content_length":"36331","record_id":"<urn:uuid:91aaf2e3-fc59-4123-90de-ab5d1786ba04>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thinking outside the box
June 2009
This article is the runner-up in the university category of the Plus new writers award 2009.
"Fine. The next person who comes in without having tried the exercise will lose not only the marks for this exercise, but for any previous exercises already completed in my class."
Harsh words. I wasn't sure whether or not my maths professor had the power to do this, but he was definitely angry enough at his uncooperative class to try it. It wasn't our fault that the class was
Fast forward a week.
"So. Where's your exercise?" Obviously, I had been the one to forget. To make it worse, I was ten minutes late. The professor raised his voice, in order to better make a spectacle of me. "So what's
your excuse?" I looked at the floor, about to mutter something about forgetting before taking my average-destroying punishment. He looked up over the top of his glasses, his anger at the whole class
focused on me, and I decided that at least I would go out with a bang. "Go on, you can tell me," he urged, the calm before the storm. I took a deep breath and began to explain.
"It all began the weekend before last. Before that, I can assure you, I was a fairly normal person. But then, last weekend, I began to see things."
"Things? What kind of things?" the professor butted in, perhaps in surprise at my rather unusual response. I ad libbed.
It had been a rather ordinary Sunday afternoon, and I had been staring pleasantly into space lost in some kind of thought. Suddenly, and quite out of the blue, a sphere had appeared in the air in
front of me. It had grown considerably larger, and then proceeded to shrink back down again until it disappeared.
I looked up to see the professor's response. He was busy drawing on the board. I turned to have a look at what he had sketched.
"Did you note precisely how the radius or volume of the sphere changed with time?" the professor enquired. Startled by the question, I looked up from his drawing and immediately replied that I had
not. "That's a great pity," the professor replied. "You may have just had an encounter with a four-dimensional hypersphere." The rest of the class tittered, amused by my story and the professor's
compliance with my game. He had drawn a three-dimensional sphere passing through a two-dimensional plane. To a two-dimensional person on his two-dimensional plane the sphere would have looked like a
growing and shrinking circle. Just like my growing and shrinking sphere. So my growing and shrinking sphere was really just a three-dimensional cross section of a four-dimensional sphere moving
through some fourth dimension? I posed the question to the professor. "Of course there's no way of knowing for sure that it was a four-dimensional sphere, and not a four-dimensional ellipsoid with a
spherical cross section, or some other four-dimensional solid with a spherical cross section, but had you timed how its radius changed with time, we could have assumed it was moving with a uniform
velocity in the fourth dimension and at least had a better idea." He looked rather accusing, so I decided to continue with my story.
A sphere passing through a plane, represented here by a line. The intersection of the sphere and plane is a growing and then shrinking circle.
"I had just about convinced myself that I had imagined the whole sphere thing, when I tripped over a rather curious object on the floor. It looked like this," I quickly sketched a drawing on the board.
I explained that I had thought it was just a toy my little sister had left lying around, when it started to fold itself. I couldn't quite understand what I was watching, parts of it seemed to turn
inside out and pass through each other, until it simply disappeared again.
"Ah, the tesseract!" The professor was truly pleased. The class looked truly confused. He explained.
"In one dimension the only geometric figure you can draw is two points joined by a straight line. In two dimensions a square arises from displacing a straight line segment along a direction
perpendicular to the segment. Similarly you can form a cube in three dimensions by displacing a square. But why stop there? Why not have a fourth dimension? Now you can make a four-dimensional
hypercube by displacing a cube perpendicularly in the forth dimension. It's called a tesseract!"
"We can't visualise the four-dimensional tesseract, but we can visualise its three-dimensional net. Just as a cube can be unfolded to give a two-dimensional shape that looks like a T, so the
tesseract can be unfolded to give a three-dimensional object — that's the shape you mistook for your sister's toy."
A shadow of a tesseract rotating around a plane.
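The counts the professor is building up (4 vertices for the square, 8 for the cube, 16 for the tesseract) can be checked by brute force; a small illustrative sketch, not part of the original article:

```python
from itertools import product

# Vertices of the n-cube are all 0/1 coordinate tuples; two vertices
# share an edge exactly when they differ in one coordinate.
def cube_counts(n):
    verts = list(product((0, 1), repeat=n))
    edges = sum(
        1
        for i, u in enumerate(verts)
        for v in verts[i + 1:]
        if sum(a != b for a, b in zip(u, v)) == 1
    )
    return len(verts), edges

# square, cube, tesseract: (vertices, edges)
counts = [cube_counts(n) for n in (2, 3, 4)]  # -> [(4, 4), (8, 12), (16, 32)]
```

In general the n-cube has 2^n vertices and n·2^(n-1) edges, which matches the tesseract's 16 vertices and 32 edges.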
A student in the class piped up: "Physicists tell us that time is the fourth dimension, so is the extra dimension you've just told us about the same as time?"
"No, but you can treat time mathematically like a fourth dimension. Time happens to have some very interesting physical properties that make it useful, in physics, to think about it like a fourth
dimension: in this thinking the Universe can be described as Minkowski space-time. It doesn't mean time is just like the other spatial dimensions". We all relented, and accepted what he said, even
though we hadn't actually seen the equations.
"You haven't seen the equations yet! Why are you agreeing with me?!" He wrote on the board: d = √(x^2 + y^2 + z^2).
"This quantity d describes the distance between two points in ordinary three-dimensional space: if you impose a coordinate system on the space so that one of your points corresponds to the
coordinates (0,0,0) and the other to the coordinates (x,y,z), then Pythagoras' theorem tells you that d is exactly the distance between the two. There are three terms in this expression because there
are three dimensions of space. According to general experience, and classical physics, this way of describing distance is the same whether you are on a moving train, boat or plane, or standing beside
it watching someone on the moving object. In other words, it is conserved."
"This is no longer true in special relativity. Einstein showed theoretically that the distance d does depend on the velocity of your frame of reference, so d is no longer conserved! There is another
quantity that is conserved though, and this is given by
s^2 = x^2+y^2+z^2 - (ct)^2,
where c is the speed of light and t is time. It makes time look a little similar to the physical dimensions x, y, and z, doesn't it? Of course, there's more to it than that," he continued,
referencing a textbook.
A lightbulb is nearly spherical, so the inverse square law applies.
"Mathematically, there is no reason why you can't have extra spatial dimensions, it just so happens that physically we don't seem to have them. Take the inverse square law for electromagnetic
radiation. If a point source, for example a light bulb, is emitting light, then there are P Watts of energy per second leaving the light bulb — P Watts must go through the spherical surface of the
bulb per second. Since the surface of a sphere has area A=4πr^2, where r is the radius of the sphere, the intensity is given by I = P/(4πr^2).
Experiment shows that this is indeed the case. If the world had four physical dimensions, then the inverse square law would be the inverse cube law, with the intensity falling off as 1/r^3,
since the light would emanate in four dimensions instead of three. Of course, light might not work in the fourth dimension (in fact, we've just shown that our kind of light doesn't), and there might
be other reasons why we don't perceive a fourth dimension, but because none of our forces seem to work there, it seems likely that it would be very different from our three dimensions."
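The professor's inverse square law is easy to sketch numerically (illustrative only; the function name here is my own):

```python
import math

# Intensity of an isotropic point source through a sphere of radius r.
# In three spatial dimensions the sphere's surface area is 4*pi*r^2,
# so doubling r cuts the intensity to a quarter: the inverse square law.
def intensity_3d(power, r):
    return power / (4 * math.pi * r ** 2)

ratio = intensity_3d(100.0, 2.0) / intensity_3d(100.0, 1.0)  # -> 0.25
```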
"So I'm crazy," I sighed sadly. The professor laughed. "Just a little. Anyway, let's give a big round of applause for our impromptu storyteller." As my face went from pink to crimson, I managed to
breathe a sigh of relief, he didn't sound angry anymore, maybe I was off the hook. "We'll arrange your punishment after class." My heart sank, but at least I had gone out with the bang I had hoped
for. After the lecture was over, the professor approached me. "That was the best lecture I have had so far with this class," he told me. "In exchange for losing no marks, I expect you to have another
special topic prepared for us to talk about next week." Then he let me go. I left, amazed that I had managed to escape so lightly. I guess sometimes it doesn't hurt to think outside the box.
About the author
Sonia Buckley is a final year undergraduate at Trinity College Dublin, where she studies experimental physics (but don't worry, she loves maths too). She is about to begin a PhD in Applied Physics at
Stanford University. When the physics gets too much for her, she likes to go hiking or rock climbing (or both). | {"url":"http://plus.maths.org/content/thinking-outside-box","timestamp":"2014-04-16T07:44:25Z","content_type":null,"content_length":"33616","record_id":"<urn:uuid:e67e4378-8c9d-4ef7-9aee-e9925fc791c9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phil. 160 Symbolic Logic II
Catalog description: Further study of deductive logic. Topics include: principles of inference for quantified predicate logic; connectives; quantifiers; relations; sets; modality; properties of
formal logical systems, e.g. consistency and completeness; and interpretations of deductive systems in mathematics, science, and ordinary language. Prerequisite: MATH 031, or PHIL 060, or instructor
permission. 3 units.
Textbooks: There are two required books. Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter; and Logics, by John Nolt. In addition to the books, there will be some webpages and class
handouts that you should consider to be required reading.
The two books are available at the CSUS Hornet Bookstore, but here is a website that compares book prices at many online bookstores: http://www.bookase.com. You can get the book used at Amazon.com
for $150: http://www.amazon.com/Logics-John-Nolt/dp/0534506402. The book comes with a CD that we won't be using. We will be reading approximately half of the book Gödel, Escher, Bach; two copies of
it are on reserve in the CSUS Main Library. If you understand a little about music theory, then this book will deeply enrich your understanding; but if you don't know about music theory, then you
will be relieved to know that you won't be tested on any music knowledge, just knowledge of logic. The same goes for art and ancient Japanese poetry.
Grades: Your grade will be determined by four homework assignments (each 14%), a midterm exam (20%), and a comprehensive final exam (24%). Homework questions will be handed out a week in advance of
the due date. Class attendance is optional, but you are responsible for material covered in class that isn't in the book.
Due dates:
Homework 1: Feb. 14, 2013 (wk 3)
Homework 2: Mar. 7 (wk 6)
Midterm: Mar. 21 (wk 8)
Homework 3: Apr. 18 (wk 11)
Homework 4: May 9 (wk 14)
Final Exam: May 23, 10:15 A.M.
For homeworks, you are responsible for any announced changes to questions that are made after the homework is handed out but before it is due, even if you didn't attend class the day the change was announced.
Course Description: Our course presupposes you have had a first course in deductive logic, such as Phil. 60, or have learned this material on your own. The first month will contain a review of Phil.
60, but also will enrich that material. Our goal is to appreciate what can be done with deductive symbolic logic and what can't be done. That is, we will explore the scope and limits of deductive
logic rather than its depth in one particular area.
Deductive logic explores deductively valid reasoning, the most secure kind of reasoning. A mathematical proof is deductively valid reasoning. Inductive reasoning, by contrast, is about less secure
reasoning from the circumstantial evidence of the lawyer, the documentary evidence of the historian, the statistical evidence of the economist, and the experimental evidence of the scientist.
For a helpful metaphor, you might think of our symbolic deductive logic as a machine for detecting the presence of the most secure reasoning. In our course, we will not only use the machine but also
study what it can and cannot do, and whether it can be revised to do other things. For example, does it have the power to show that "Obama's father is working in his office" logically implies
"Someone's father is working"? Can we use the machine to demonstrate that no use of the machine will lead to a contradiction?
Our course will survey the deep results yielded by the developments in symbolic deductive logic. These results concern the surprising extent to which human knowledge cannot be freed of
contradictions, to what extent our knowledge can be expressed without loss of content inside of a formal language, and what our civilization has learned from the field of symbolic deductive logic
about the limits to what people can know and about the limits of what computers can do, the major results here being the Unsolvability of the Halting Problem, the Church-Turing Undecidability
Theorem, Tarski's Undefinability Theorem for Truth, and Gödel's Incompleteness Theorems.
We will begin our course with a review of Phil. 60 while providing a rigorous development of both propositional logic (also called statement logic and sentential logic and propositional calculus and
statement calculus and the theory of truth functions) and elementary predicate logic (also called first-order logic, relational logic, quantificational logic and predicate calculus). Then we will
learn about their applications, extensions, meta-theory, and non-classical variants. Regarding non-classical variants, this comment in 1970 by the American logician W.V.O. Quine is helpful:
Logic is in principle no less open to revision than quantum mechanics or the theory of relativity.
My role in our course will be to cut through the jargon and help you understand as quickly as possible.
A good analogy for our course is that learning symbolic logic is much like learning a computer language. The big difference is that in symbolic logic the focus is on using the formal language to
assess argument correctness rather than on getting a computing machine to follow its intended program. To continue with the analogy, in our course we will not be focusing on doing actual programming
so much as learning the capabilities of the computer.
Speaking about the second textbook, Gödel, Escher, Bach, the M.I.T. senior student Justin Curry, who gives the online lectures about it, says, "I advise everyone seriously interested to buy the book,
grab on and get ready for a mind-expanding voyage into higher dimensions of recursive thinking."
Topics and reading assignments: Click here.
Relevance of logic to other subjects: If you are curious about the relevance of deductive logic to other subjects such as philosophy, mathematics, and computer science, then click on the ticket
Student outcome goals: The hope is that by the end of the semester you will have achieved the following goals:
• Be able to reason more effectively.
• Be able to describe the scope of deductive logic, that is, what it can be used to do; and be able to describe the limits of logic, that is, what it cannot be used to do.
• Build on the abilities you learned in Phil. 60 to recognize when the quality of an English argument is capable of being analyzed with symbolic deductive techniques, to translate a symbolic
deductive argument into English and vice versa, to determine if a symbolic deductive sentence is logically true, to determine if a set of symbolic sentences is consistent, to assess the logical
correctness or incorrectness of arguments using the techniques of symbolic deductive logic, to create proofs in both predicate logic and propositional logic, and be capable of creating and
analyzing rigorous proofs using the methods of classical symbolic deductive logic.
• Understand Hilbert's program and the process of formally axiomatizing a theory.
• Be familiar with the most important meta-theoretic results such as Gödel's Theorems, the Church-Turing Undecidability Theorem, Tarski's Undefinability Theorem, and the Löwenheim-Skolem Theorem.
You will be able to appreciate why Gödel says all consistent axiomatic formulations of first-order number theory include undecidable propositions.
• Be able to say how symbolic deductive logic has deepened our knowledge of some important philosophical issues, and how it has led to new issues of its own.
• Know the extent to which human knowledge can be freed of contradictions.
• Be able to say what our civilization has learned from the field of symbolic deductive logic about the limits to what people can know and about the limits of what computers can do.
• Know that there are important extensions of classical first-order logic to non-standard logics such as modal logic, deontic logic, free logic, many-valued logic, second-order logic, many-sorted
logic, fuzzy logic, and paraconsistent logic.
Laptops, cell phones: Photographing during class is not allowed without permission of the instructor. Audio recording is OK. During class, turn off your cellphone. Your computers may be used only for
note taking, and not for browsing the web, reading emails, or other activities unrelated to the class. If you use a computer during class, then please sit in the back of the room or in a side row so
that your monitor's screen won't distract other students.
Testing protocol: For in-class tests, you may use any books and notes but not your computer or cellphone.
Disabilities: If you have a documented disability and require accommodation or assistance with assignments, tests, attendance, note taking, and so forth, then please see me early in the semester so
that appropriate arrangements can be made to ensure your full participation in class. Also, you are encouraged to contact the Services for Students with Disabilities (Lassen Hall) for additional
information regarding services that might be available to you.
Plagiarism and Academic Honesty: A student tutorial on how not to plagiarize is available online from our library. The University's policy on academic honesty is at http://www.csus.edu/umanual/
Food: Except for water, please do not eat or drink during class. You are welcome to leave class (and return) any time you wish.
Late assignments, and make-up assignments: I realize that during your college career you occasionally may be unable to complete an assignment on time. If this happens in our course, contact me as
soon as you are able. If you promptly provide me with a good reason for missing a test or homework assignment (illness, accident, ...), then I'll use your grade on the final exam as your missing
grade. There will be no make-up tests nor make-up homework. I do accept late homework with a grade penalty of one-third of a letter grade per 24-hour period beginning at the class time the assignment
is due. Here are some examples of how this works. If you turn in the assignment a few hours after it is due, then your A becomes an A-. Instead, if you turn in the same assignment 30 hours late, then
your A becomes a B+. Weekends count, so scan your late, but finished work on the weekend and email it as an attachment. No late work will be accepted after the answer sheet has been handed out
(normally this will be at the next class meeting), nor after the answers are discussed in class, even if you weren't in class that day.
Add-Drop: To add the course, try to do so by using the CMS system. If the course is full, then see me about signing up on the waiting list. To drop the course during the first two weeks, use the CMS
system. No paperwork is required. After the first two weeks, it is harder to drop, and a departmental form is required, the "Petition to Add/Drop After Deadline." As with any university course, make
sure you are dropped officially (by CMS or by the instructor or department secretary); don't simply walk away into the ozone or else you will get a "WU" grade for the course, which is counted as an
"F" in computing your GPA (grade point average).
Professor: My office is in Mendocino Hall 3022, and my weekly office hours are TuTh 9:30-10:30 and 12:00-12:30. Feel free to stop by at any of those times, or to call. If those hours are inconvenient
for you, then I can arrange an appointment for an alternative time. You may send me e-mail at dowden@csus.edu or call my office at 278-7384 or the Philosophy Department Office at 278-6424. The
fastest way to contact me is by email. My personal web page is at http://www.csus.edu/indiv/d/dowdenb/index.htm
Prof. Dowden
Study tips: As you read an assignment, it is helpful first to skim the assignment to get some sense of what’s ahead. Look at how it is organized and how the author signifies main ideas (section
titles, bold face, italics, full capitals, and so forth). Make your own notes as you read. Stop every twenty minutes to look back over what you’ve read and try to summarize the key ideas for
yourself. This periodic pausing and reviewing will help you maintain your concentration, process the information more deeply, and retain it longer. Notice connections between one section and another.
You’ll be given sample questions now and then to help guide your studying for future assignments, but the homework and test questions in our course will usually require you to apply your knowledge to
new questions not specifically discussed in class nor in the book. This ability to use your knowledge in new situations requires study activities different from memorizing. Your goal is to improve
your skills, rather than to memorize information. Think of the textbook more as a math book than a novel, so re-reading is important.
Contact me at dowden@csus.edu if you'd like more information about our course.
PHILOSOPHY DEPARTMENT / PROF. DOWDEN / CSUS
Updated: May 4, 2013
ISC Board Question Paper Physics Class XII - 2009
ISC Board Question Paper Class XII – 2009
Physics (Theory)
Paper - 1
(Candidates are allowed additional 15 minutes for only reading the paper. They must NOT start writing during this time)
General Instruction:
(i) Answer all questions in Part I and six questions from Part II, choosing two questions from each of the Sections A, B and C.
(ii) All working, including rough work, should be done on the same sheet as, and adjacent to, the rest of the answer.
(iii) The intended marks for questions or parts of questions are given in brackets []. (Material to be supplied: Log tables including Trigonometric functions)
(iv) A list of useful physical constants is given at the end of this paper.
PART I (Compulsory)
Answer all questions briefly and to the point:
(i) Explain the statement ‘Relative permittivity of water is 81’.
(ii) Draw (at least three) electric lines of force due to an electric dipole.
(iii) Find the value of resistance X in the circuit below so that the junction M and N are at the same potential.
(iv) When the cold junction of a certain thermocouple was maintained at 20 °C, its neutral temperature was found to be 180 °C. Find its temperature of inversion.
(v) State how the magnetic susceptibility of a ferromagnetic substance changes when it is heated.
(vi) Write an equation of Lorentz force F acting on a charged particle having charge q moving in a magnetic field B with a velocity v in vector form.
(vii) What is the value of power factor in a series LCR circuit at resonance?
(viii) An a.c. generator generates an emf 'e' given by: e = 311 sin(100πt) volt. Find the rms value of the emf generated by the generator.
(ix) A ray LM of monochromatic light incident normally on one refracting surface AB of a regular glass prism ABC emerges in air from the adjacent surface AC as shown in Figure. Calculate the
refractive index of the material of the prism.
(x) Describe the absorption spectrum of Sodium.
(xi) A thin converging lens of focal length 15 cm is kept in contact with a thin diverging lens of focal length 20 cm. Find the focal length of this combination.
(xii) Can two sodium vapour lamps act as coherent sources? Explain in brief.
(xiii) Why all over the world, giant telescopes are of reflecting type? State any one reason.
(xiv) A ray of ordinary light is incident on a rectangular block of glass at Brewster's angle. What is the angle between the reflected ray and the refracted ray of light?
(xv) Find the momentum of a photon of energy 3.0 eV.
(xvi) The half-life of a certain radioactive element is 8 hours. If a pupil starts with 32 g of this element, how much of the sample will be left behind at the end of one day?
(xvii) If a hydrogen atom goes from the III excited state to the II excited state, what kind of radiation (visible light, ultraviolet, infrared, etc.) is emitted?
(xviii) Where in our universe is the thermo-nuclear energy being released naturally?
(xix) In which of the solids (semi-conductors, conductors or insulators) do conduction band and valence band overlap?
(xx) What is the symbol of a NOR gate?
PART II
Answer six questions in this part, choosing two questions
from each of the Sections A, B and C
SECTION A
(Answer any two questions)
(a) With the help of a labeled diagram, obtain an expression for the electric field intensity ‘E’ at a point P in broad side position (i.e. equatorial plane) of an electric dipole.
(b) Find the electric charge Q1 on plate of capacitor C1, shown in Figure below:
(c) (i) What is meant by:
(1) Drift velocity and
(2) Relaxation time?
(ii) A metallic plug AB is carrying a current I (see Figure below). State how the drift velocity of free electrons varies, if at all, from end A to end B.
(a) Figure below shows a uniform manganin wire XY of length 100 cm and resistance 9 Ω, connected to an accumulator D of emf 4 V and internal resistance 1 Ω through a variable resistance R. E is a
cell of emf 1.8 V connected to the wire XY via a jockey J and a central zero galvanometer G. It is found that the galvanometer shows no deflection when XJ = 80 cm. Find the value of R.
(b) Obtain an expression for magnetic flux density ‘B’ at the center of a circular coil of radius R and having N turns, when a current I is flowing through it.
(c) (i) State any two differences between a moving coil galvanometer and a tangent galvanometer.
(ii) What is the use of a Cyclotron?
(a) What is meant by the time constant of an LR circuit? When the current flowing through a coil P decreases from 5 A to 0 in 0.2 seconds, an emf of 60 V is induced across the terminals of an adjacent
coil Q. Calculate the coefficient of mutual inductance of the two coils P and Q.
(b) When an alternating emf e = 300 sin(100πt + π/6) volt is applied to a circuit, the current I through it is I = 5.0 sin(100πt + π/6) ampere. Find the:
(i) Phase difference between the emf and the current.
(ii) Average power consumed by the circuit.
SECTION –B
(Answer any two questions)
(a) In which part of' the electromagnetic spectrum, do the following radiations lie:
(i) Having wavelength of 20 nm
(ii) Having frequency of 10 MHz
(b) In Young's double slit experiment, what is meant by ‘fringe width’ or ‘fringe separation’? State two ways of increasing the fringe width, without changing the source of light.
(c) A thin convex lens which is made of glass (refractive index 1.5) has a focal length of 20 cm. It is now completely immersed in a transparent liquid having refractive index 1.75. Find the new
focal length of the lens.
(a) Draw a labeled graph showing the variation in intensity of light with distance in a single slit Fraunhofer diffraction experiment.
(b) Give any two methods by which (ordinary) light can be polarised.
(c) A point source of monochromatic light ‘S’ is kept at the center C of the bottom of a cylinder. Radius of the circular base of the cylinder is 50.0 cm. The cylinder contains water (refractive
index=4/3) to a height of 7.0 cm. (see Figure below):
Find the area of the water surface through which light emerges into air. (Take π = 22/7)
(a) An astronomical telescope consists of two convex lenses having focal length 80 cm and 4 cm. When it is in normal adjustment, what is its:
(i) Length,
(ii) Magnifying power?
(b) A convex lens of focal length 5 cm is to be used as a simple microscope. Where should an object be kept so that the image formed by the lens lies at the least distance of distinct vision D (D = 25 cm)?
Also calculate the magnifying power of this instrument in this set up.
(c) What is meant by 'Chromatic aberration'? A thin convex lens of focal length 30 cm and made of flint glass (dispersive power = 0.03) is kept in contact with a thin concave lens of focal length 20
cm and made of crown glass. Calculate the dispersive power of crown glass if the above said combination acts as an achromatic doublet.
SECTION C
(Answer any two questions)
(a) Electrons, initially at rest, are passed through a potential difference of 2 kV.
Calculate their:
(i) Final velocity and
(ii) de Broglie wavelength
(b) What are characteristic X-rays? How are they different from continuous X-rays? Give any one difference.
(c) The wavelength of the first line (Hα) of the Balmer series of hydrogen is 656.3 nm. Find the wavelength of its second line (Hβ).
(a) Plot a labeled graph of |Vs| where Vs is stopping potential of photoelectrons versus frequency ‘f’ of incident radiation. How will you use this graph to determine the value of Planck’s constant?
(b) (i) Define ‘unified atomic mass unit’.
(ii) Find the minimum energy which a gamma ray photon should possess so that it is capable of producing an electron positron pair.
(c) Fission of U - 235 nucleus releases 200 MeV of energy. Calculate the fission rate (i.e. no. of fissions per second) in order to produce a power of 320 MW.
(a) Draw a neatly labelled circuit diagram of a Full Wave rectifier using two Junction diodes.
(b) A sinusoidal voltage e = e₀ sin(ωt) is fed to a common emitter amplifier. Draw neatly labelled diagrams to show:
(i) Signal voltage
(ii) Output voltage of the amplifier.
(c) Make a truth table showing inputs at A and B and outputs at X, Y and Z for the combination of gates shown in Figure below.
Using Elementary Linear Algebra to Solve Data Alignment for Arrays with Linear or Quadratic References
January 2004 (vol. 15 no. 1)
pp. 28-39
Weng-Long Chang, Jih-Woei Huang, Chih-Ping Chu, "Using Elementary Linear Algebra to Solve Data Alignment for Arrays with Linear or Quadratic References," IEEE Transactions on Parallel and
Distributed Systems, vol. 15, no. 1, pp. 28-39, January, 2004.
Abstract—Data alignment, which facilitates data locality so that data access communication costs can be minimized, helps distributed-memory parallel machines improve their throughput. Most data
alignment methods are devised mainly to align arrays referenced using linear subscripts or quadratic subscripts with few (one or two) loop index variables. In this paper, we propose two
communication-free alignment techniques to align arrays referenced using linear subscripts or quadratic subscripts with multiple loop index variables. The experimental results from our techniques
on Vector Loop and TRFD of the Perfect Benchmarks reveal that our techniques can improve the execution times of the subroutines in these benchmarks.
Index Terms:
Parallel compiler, communication-free alignment, parallel computing, loop optimization, data dependence analysis, load balancing.
Few confusion in concept regarding inequalities.
June 28th 2013, 06:57 PM #1
Mar 2012
Few confusion in concept regarding inequalities.
Hi Friends,
I am preparing for some competitive exams that involve the topic of inequalities. I have some doubts which I would like to clarify here. Please help me out to understand some of the concepts.
|x+5| > 5
What does the above inequality mean? Please explain it to me.
How -7<x<3 and |x+2| <5 are the same please explain it to me.
also how |x+2|>=5 is equal to |x|<=3
This much for today thanks.
An early reply will be very much helpful.
Thanks in advance,
Re: Few confusion in concept regarding inequalities.
Okay, so I would hope you know what absolute value is (|x|); if not, absolute value is the "distance from zero" on the number line. So basically, the absolute value of any positive number is itself, and the absolute value of a negative number would be the positive counterpart of that number. Using the || notation, you can see stuff like |-5| = 5.

So back to the inequality: |x+5| > 5 just means that x+5's distance from zero is greater than 5. You know that if x+5 is positive, then x+5 must be greater than 5, but if x+5 is negative, then it must be less than -5, because then |x+5| will be greater than 5. Putting the words into an inequality, x+5 > 5 or x+5 < -5, which becomes x > 0 or x < -10.

Note that if you have |x| and it is < or <= some value, then we say "and" for the two inequalities, not "or". The reason is that "and" states that there is a single section of the number line that x comes from; when |x| is greater than some value, we say "or" for the two inequalities because they are two separate parts of the number line, and x cannot come from two sections at once. When you have an "and" set of two or more inequalities, you can string them into one inequality with several < or > signs.

Also a tip: in math competitions, they will usually ask you to graph the inequality on a number line rather than write out the inequality algebraically, so be sure you understand how to graph inequalities. Remember that when graphing inequalities, have a solid dot if there is an = sign, but have it hollow if there is none. May I ask which competition this is? Hope this helps.
Last edited by ShadowKnight8702; June 28th 2013 at 07:27 PM.
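A worked version of the poster's second question, using the "and" case described in the reply above: |x+2| < 5 bounds the distance of x+2 from zero, so

```latex
|x+2| < 5 \;\Longleftrightarrow\; -5 < x+2 < 5 \;\Longleftrightarrow\; -7 < x < 3
```

Subtracting 2 throughout the middle inequality gives exactly -7 < x < 3, which is why the two statements describe the same set of values.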
June 28th 2013, 07:23 PM #2
Sep 2012
United States
Quadratic Equation using Structures
Create a structure named quad that has three data members of type double for the a, b, and c coefficients of a quadratic polynomial: f(x) = ax^2 + bx + c
Write a main() function that will declare and initialize a quad structure and display the values of the coefficients. Now write and test the following functions:
1) Function readValue() that takes a single reference argument to a quad structure and a void return type. It should ask the user to enter values for the a, b, and c.
2) Function showValue() that takes a single reference argument to a quad structure and a void return type. It should display the quadratic on the terminal in this form: -2x^2 - 4x + 35
3) Function addQuad() that takes references to two quad structures as parameters and returns a quad structure that is the sum of the two input quad structures. For quadratic functions f(x) = ax^2 + bx + c and g(x) = dx^2 + ex + f, the sum is given by f(x) + g(x) = (a+d)x^2 + (b+e)x + (c+f)
Demonstrate these functions in a main() program by doing the following:
1) Enter the following 2 quadratic functions using the readValue() function: f(x) = 5x^2 - 2x + 3 and g(x) = 4x^2 + 3x + 7
2) Determine the sum of f(x) + g(x) using the addQuad() function.
3) Display the resulting polynomial using the showValue() function.
I started it, but can you explain how I would do part 2? What would the addQuad() function look like, and how do I call these functions in the main program?
Last edited on
A quick once-over and it looks fine to me. The parameters to showValue() and to addQuad() should be const references though. Is it working the way you intend it to?
I don't have a compiler or program to run it on my laptop so I have no way of knowing until I can manage to go to the computer labs but it's due in two days so I wanted someone's opinion to see if I
was on track.
Topic archived. No new replies allowed.
How to Calculate Debt Service Payments
Edited by IngeborgK, Teresa, Kobe27, Jeff and 5 others
Debt Service Payments is a term used to describe a variable factor within the debt service coverage ratio (DSCR) formula. The DSCR formula is used by investors and lenders to evaluate the potential
of an investment property or commercial enterprise by determining its ability to service the debt on a loan given the terms. The Debt Service Payments factor of the equation is simply the amount of
the monthly payments made on the interest and principal of a loan. This article provides instructions on how to calculate the Debt Service Payments factor and insert it into the DSCR equation to
evaluate the viability of an investment.
Method 1 of 3: Calculate the Debt Service Payments
1. Determine the variables for use in the amortization formula: p (mortgage principal), i (interest rate) and n (number of months).
□ Use the standard formula to calculate amortization. Once you have determined the amounts of each of the 3 variables (i, p and n) you can solve for the monthly payment (M). The formula for calculating the amortization of a loan is: M = P × i(1 + i)^n / ((1 + i)^n - 1), where i here is the monthly interest rate (the annual rate divided by 12).
Method 2 of 3: Test the Amortization Formula
1. Calculate the monthly payment for a $300,000 loan at an interest rate of 6% for a term of 30 years (360 months): M = 300,000 × (0.06/12)(1 + 0.06/12)^360 / ((1 + 0.06/12)^360 - 1) ≈ 1,798.65. The debt service payments on a $300,000 loan at an interest rate of 6% for a term of 30 years will be $1,798.65.
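As a quick check of that arithmetic, the formula can be evaluated directly (the function and variable names here are my own, not wikiHow's):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortization formula: M = P*i*(1+i)^n / ((1+i)^n - 1),
    where i is the *monthly* interest rate."""
    i = annual_rate / 12          # convert the annual rate to a monthly rate
    growth = (1 + i) ** months
    return principal * i * growth / (growth - 1)

print(round(monthly_payment(300_000, 0.06, 360), 2))  # 1798.65
```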
Method 3 of 3: Evaluate the Viability of an Investment
1. Determine the variables for use in the DSCR formula: m (Monthly Rent Payment), v (Vacancy Rate), d (Debt Service Payments) and t (Total Annual Expenses). Once you have determined the amounts of each of the 4 variables, they can be used to determine the debt coverage ratio.
2. Test the DSCR formula. Assign the variables for the equation: Let m = $2,000, v = 2%, d = $1,798.65 and t = $4,000.
□ Calculate the gross annual income (G): G = m (12) = 2,000 (12) = 24,000. G = 24,000.
□ Calculate the annual mortgage payment (A). A = d (12) = 1,798.65(12) = 21583.8. A = 21583.8.
□ Calculate the Vacancy and Credit Loss (V). With v = 2%, V = G(v/100) = 24,000(2/100) = 480. V = 480.
□ Determine the gross operating income (I). I = G - V = 24,000 - 480 = 23,520. I = 23,520.
□ Determine the net operating income (N). N = I - t = 23,520 - 4,000 = 19,520. N = 19,520.
□ Calculate the debt service coverage ratio (D). D = N/A = 19,520/21,583.8 ≈ 0.90. The debt service payment for the given example is $1,798.65, and the debt service coverage ratio is about 0.90: the net operating income does not quite cover the annual debt service.
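Scripting the same walk-through makes it easy to re-run with other inputs. Note that the DSCR is conventionally net operating income divided by annual debt service, which is the convention used here (function and variable names are mine):

```python
def dscr(monthly_rent, vacancy_rate, monthly_debt_payment, annual_expenses):
    """Debt service coverage ratio = net operating income / annual debt service."""
    gross_income = monthly_rent * 12                   # G: gross annual income
    annual_debt = monthly_debt_payment * 12            # A: annual mortgage payment
    vacancy_loss = gross_income * vacancy_rate         # V (vacancy_rate as a decimal)
    gross_operating = gross_income - vacancy_loss      # I: gross operating income
    net_operating = gross_operating - annual_expenses  # N: net operating income
    return net_operating / annual_debt

print(round(dscr(2000, 0.02, 1798.65, 4000), 2))  # 0.9
```

A ratio below 1.0, as here, means the property's net operating income does not fully cover the loan payments.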
247 kilos to tons
You asked:
247 kilos to tons
0.272270893798324 short tons
the mass 0.272270893798324 short tons
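For reference, the conversion behind that answer is a single division by the size of a US short ton (907.18474 kg, i.e. 2000 lb at 0.45359237 kg per pound):

```python
KG_PER_SHORT_TON = 2000 * 0.45359237  # 907.18474 kg per short (US) ton

def kg_to_short_tons(kg):
    """Convert a mass in kilograms to US short tons."""
    return kg / KG_PER_SHORT_TON

print(kg_to_short_tons(247))  # ≈ 0.27227 short tons
```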
VIX and More Subscriber Newsletter Blog
Aggregate Market Sentiment Indicator – a VIX and More proprietary sentiment indicator that incorporates components of volatility, put to call data, market breadth, volume and other factors
Aggressive Trader Model Portfolio – a mechanical, long-only, aggressive growth portfolio of 5 stocks that was launched on 3/30/08 and is evaluated for rebalancing every weekend. The portfolio is typically 100% invested in U.S. equities and ADRs. It is non-diversified and has extremely high turnover, generally >1000% per year.
Backwardation – a downward sloping term structure curve in a futures product (e.g., VXX, VXZ) in which front month futures are priced higher than back month futures
Contango – an upward sloping term structure curve in a futures product (e.g., VIX, crude oil, natural gas, etc.) in which front month futures are priced lower than back month futures
Contango Index – a VIX and More proprietary index that evaluates the degree of roll yield across all outstanding VIX futures contracts on a scale of 0-100. A high number means a high degree of negative roll yield across the full term structure and a low number means a positive roll yield across the full term structure. A value of 50 is considered the median reading and actually indicates some small amount of negative roll yield, as the full VIX term structure is typically in contango.
CVOL – ticker for the C-Tracks ETN on CVOL, which targets VIX futures with three to four months of maturity, utilizes 2x leverage, and also includes a dynamic short position in the S&P 500 index
DJIA – ticker/abbreviation for the Dow Jones Industrial Average: the U.S. equity index that is most widely tracked by the media and the general public
EAFE – the MSCI EAFE Index of developed countries from Europe, Australasia and the Far East (excludes the U.S. and Canada) and basis for the popular EFA ETF
EEM – ticker for an ETF that tracks the MSCI Emerging Markets Index
EFA – ticker for an ETF that tracks the MSCI EAFE Index of developed countries from Europe, Australasia and the Far East, which excludes stocks from the U.S. and Canada
EMA – exponential moving average: applies an exponential weighting so that the most recent data points in a series are given greater weight in the calculation of a moving average
ETF – exchange-traded fund: a group of stocks that often resembles a mutual fund in composition, but can be traded much like a stock during the trading day
ETN – exchange-traded note: similar in most respects to an ETF, except that ETNs are technically a debt security of the issuer
ETP – exchange-traded product: in an effort to simplify nomenclature and gloss over the distinctions between ETFs and ETNs, I am using the ETP name to describe a superset of exchange-traded products consisting of both ETFs and ETNs
FOMC – Federal Open Market Committee: plays a lead role in establishing U.S. monetary policy by setting target Fed Funds rates
Focus Foreign Growth Model Portfolio
– a mechanical, long-only, aggressive growth portfolio of 5 stocks that was launched on 3/30/08 and is evaluated for rebalancing every weekend. The portfolio is typically 100% invested in ADRs. It is
non-diversified and has high turnover, generally >1000% per year.
Focus Growth 2 Model Portfolio
– a mechanical, long-only, aggressive growth portfolio of 5 stocks that was launched on 8/31/08 and is evaluated for rebalancing every weekend. The portfolio is typically 100% invested in U.S.
equities and ADRs. It is non-diversified and has high turnover, generally >500% per year.
Global Volatility Index – a VIX and More proprietary index which is derived from a weighted average of the implied volatility in options for equities in the 15 largest global economies
HV – historical volatility: a measure of actual volatility in the price of a security over a specified period of time, typically calculated in terms of standard deviations from the mean of a data series
IV – implied volatility: a measure of estimated future volatility in the price of a security as derived from options prices
LW – last week
McClellan Summation Index – a running total of the difference between the 19-day and 39-day exponential moving averages of the net difference between NYSE advancing issues and declining issues
Mean Reversion Index – a VIX and More proprietary index that evaluates the likelihood that the VIX will decline due to the effect of mean reversion, on a scale of 0-100. The calculations in this index incorporate short-term, medium-term and long-term VIX moving averages in order to handicap the likelihood that the current level of the VIX will return to a prior trading range. A high number means that the VIX is above most or all of its moving averages and is expected to decline going forward; a low number means that the VIX is below most or all of its moving averages and is likely to rise going forward.
NDX – ticker/abbreviation for the NASDAQ-100 Index: an index of the largest domestic and international non-financial securities listed on The NASDAQ based on market capitalization
Retro VIX
– calculates a backward-looking "VIX" based on realized volatility in the SPX over the course of the last 21 trading sessions
Roll Yield Index – a VIX and More proprietary index that evaluates the degree of negative roll yield between the front month VIX futures and the second month VIX futures on a scale of 0-100. A high number means a high degree of negative roll yield (second month much higher than front month) and a low number means a positive roll yield (second month lower than front month). A value of 50 is considered the median reading and actually indicates some small amount of negative roll yield, as these two months are typically in contango.
RUT – ticker/abbreviation for the Russell 2000 Index: measures the performance of the small-cap segment of the U.S. equity universe
SMA – simple moving average: unweighted mean of a data series
SOTW – stock of the week
SPX – ticker/abbreviation for the Standard and Poor's (S&P) 500 Index: de facto standard of U.S. equity indices for investors
SPX hv20
– the 20-day historical (aka statistical, realized or actual) volatility for the SPX
USO – ticker for the U.S. Oil Fund, an ETF that tracks the movements of light, sweet crude oil, a.k.a. West Texas Intermediate
VIX Futures Contango Index [often shortened to Contango Index] – a VIX and More proprietary index that evaluates the degree of roll yield across all outstanding VIX futures contracts on a scale of 0-100. A high number means a high degree of negative roll yield across the full term structure and a low number means a positive roll yield across the full term structure. A value of 50 is considered the median reading and actually indicates some small amount of negative roll yield, as the full VIX term structure is typically in contango.
VIX sma10
– a 10-day simple moving average for the VIX (CBOE Volatility Index), which measures the expected 30 day volatility that is implied by options in the SPX
volatility crush
– a dramatic decrease in implied volatility, often associated with the passing of a major news event such as earnings or an FDA decision on a drug application
vs. 10d/20d/50d/200d
– current price relative to 10/20/50/200 day simple moving average
vs. LW
– percentage change since last week (for non-holiday weeks, this is equal to current price relative to the 5 day simple moving average)
VXV – ticker for the CBOE S&P 500 3-Month Volatility Index, which measures the expected 93 day volatility that is implied by options in the SPX
VXX – ticker for the iPath S&P 500 VIX Short-Term Futures ETN, which targets VIX futures with one month to maturity
VXX roll yield
– net differential between VIX front month futures and VIX second month futures
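To illustrate the roll yield and contango entries, here is a tiny sketch; the futures prices are invented for illustration, and the percentage convention (front minus second, divided by second) is just one common way to express the differential:

```python
def roll_yield(front_month, second_month):
    """Percentage roll yield between the first two VIX futures months.
    Negative when the term structure is in contango (second above front)."""
    return (front_month - second_month) / second_month * 100

ry = roll_yield(16.50, 17.40)  # hypothetical front and second month prices
print(f"{ry:.2f}% -> {'contango' if ry < 0 else 'backwardation'}")
```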
VXZ – ticker for the iPath S&P 500 VIX Mid-Term Futures ETN, which targets VIX futures with five months to maturity
XHB – ticker for the homebuilders sector SPDR, an ETF
+XIV Index – a VIX and More proprietary composite index which incorporates a dynamic weighted average of the Roll Yield Index, the Contango Index and the Mean Reversion Index in an effort to determine the attractiveness of a long XIV and/or short VXX position from a risk-reward perspective. The weights change each week, but generally the Roll Yield Index has the highest weighting, followed by the Mean Reversion Index and the Contango Index. While the index has theoretical values of 0-100, most readings cluster around the 40-60 range. Additionally, while 50 is considered a median reading, note that this should be interpreted as a long XIV and/or short VXX position having 'median attractiveness.'
XLF – ticker for the financial sector SPDR, an ETF
XLY – ticker for the consumer discretionary sector SPDR, an ETF
Chapter 2
Part 1: Home and School Investigation
Send the Letter to Family (PDF file) home with each child. You can begin using the groups of objects after Lesson 3. Choose groups of six or less objects anytime after Lesson 3. Choose groups of
seven or eight objects anytime after Lesson 6. Each day choose several groups of objects and have the child who brought them in present them to the class. Ask questions such as the following example
of 4 blue pencils and 1 red pencil.
• What did Maria bring? (red and blue pencils)
• How many blue pencils are there? (4)
• How many red pencils are there? (1)
• How many pencils are there in all? (5)
• How can we show that 4 blue pencils and 1 red pencil make 5 pencils in all? (4 + 1 = 5)
Part 2: Be an Investigator
A good time to do this investigation is after Lesson 6 on ways to make 7 and 8.
Introducing the Investigation
Divide the class into groups of four children. Give each group a slip of paper with the number 4, 5, 6, 7, or 8 on it.
Doing the Investigation
Ask the groups to find as many ways as they can to make their number by adding two numbers together. Tell them that they cannot use zero. Let children use counters to help them find the solutions.
Here are all the ways to make each number:
4: 2 and 2; 3 and 1
5: 2 and 3; 4 and 1
6: 3 and 3; 2 and 4; 5 and 1
7: 6 and 1; 2 and 5; 3 and 4
8: 4 and 4; 5 and 3; 6 and 2; 1 and 7
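For teachers who want to double-check lists like the one above, or generate them for larger targets, a short script can enumerate the unordered pairs of nonzero whole numbers that sum to each number:

```python
def ways_to_make(n):
    """Unordered pairs of positive whole numbers (no zero) that sum to n."""
    return [(a, n - a) for a in range(1, n // 2 + 1)]

for n in range(4, 9):
    print(n, ways_to_make(n))
```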
When the groups have found all of the ways to make their numbers, tell them that they are going to make a funny animal for one of the ways. They will use the color black to draw the animal and then use two colors on the animal to show one of the ways to make that number. For example, for the number 6 they might draw an animal with 4 red legs and 2 blue legs. They can label their animal 4 + 2 = 6.
Display the drawings in the classroom or make a book of the drawings.
Extending the Investigation
Assign the activity with numbers greater than 8.
Cupertino Algebra 1 Tutor
Find a Cupertino Algebra 1 Tutor
...Before I begin tutoring, I like to get to know my students and how they would like to learn. I learn best during lectures, when the teacher writes on the board and explains the material. But
many of my students learn better through analogies, fun stories, and hands-on activities, to name a few.
24 Subjects: including algebra 1, reading, writing, ESL/ESOL
...History is always more interesting when it's brought to life. There is nothing more important than having a clear understanding of the basics when learn math. I've been volunteer tutoring
basic mathematics for over 4 years now.
17 Subjects: including algebra 1, reading, English, biology
...I am a clear communicator and a patient teacher. I have a Masters degree in Biological Sciences as well as undergraduate degrees the related fields of exercise physiology and psychology with
an emphasis in biology. I have also taught high school Biology or Biology Honors for 9 years as well as tutoring in Biology, Biology Honors and AP Biology.
16 Subjects: including algebra 1, chemistry, geometry, biology
...I recently moved to Sunnyvale after spending a year working for Partners In Health in rural Mexico and Guatemala, and 4 months before then working for Floating Doctors in Panama. I am
bilingual in Spanish, and have six years of tutoring and teaching experience. I have done private tutoring for ...
27 Subjects: including algebra 1, chemistry, reading, English
...My methods are individual to each student. I will go through several approaches to attempt to identify how best you or your child learn. For some, they just need me to go over the "pure
numbers" slower so they can work through the understanding at their own pace.
11 Subjects: including algebra 1, calculus, geometry, accounting
Nearby Cities With algebra 1 Tutor
Belmont, CA algebra 1 Tutors
Campbell, CA algebra 1 Tutors
East Palo Alto, CA algebra 1 Tutors
Los Altos algebra 1 Tutors
Los Altos Hills, CA algebra 1 Tutors
Los Gatos algebra 1 Tutors
Milpitas algebra 1 Tutors
Mountain View, CA algebra 1 Tutors
Newark, CA algebra 1 Tutors
San Carlos, CA algebra 1 Tutors
San Jose, CA algebra 1 Tutors
Santa Clara, CA algebra 1 Tutors
Saratoga, CA algebra 1 Tutors
Stanford, CA algebra 1 Tutors
Sunnyvale, CA algebra 1 Tutors
Prob with an exercise
Hi, I'm having a very strange problem. I have written code for an exercise that calculates the square root of a number, either by asking the user for input or by taking it as an argument when running the program, using Newton's method (a method I must create) and Math.sqrt, and printing both results to see if they are the same. That is done. The next exercise is to extend the code with a method that fills an array of X random integers (X is declared at the top) so that another method I have to create can again use Math.sqrt and Newton's method, measure the milliseconds of execution, and print which method is faster. The problem: when I use start=System.currentTimeMillis(), the value stored in start looks like this: 1.193772081281E12, and it is the same in both the start and end variables even though I have a for() loop in between. As a result it reports 0.00 ms.
Here is my code:
Java Code:
package sroot2;
import java.io.*;
import java.util.Random;
public class SRoot2 {
public static int i;
public static Random random = new Random();
public final static int X=1000;
public static double[] numbers=new double[X];
public static double startNewton=0, endNewton=0, startMath=0, endMath=0, timeNewton=0, timeMath=0;
//Here we declare the two variables that will help us get the input from the user
//and parse it into an integer.
static public InputStreamReader input = new InputStreamReader(System.in);
static public BufferedReader keyboardInput = new BufferedReader(input);
public static void main(String[] args) throws IOException {
//If the string array is empty(meaning that no arguments were given upon launch of the program),
//the inputdatainteger variable calls the readline function to get an input from the user and
//return an integer value.Then the program runs the same way as if args wasn't empty but only for
//one value at a time.Also the program asks repeatedly for more inputs from the user until the latter
//gives 0 for input.
        if (args.length == 0) {
            double inputdatainteger = readLine();
            while (inputdatainteger != 0) {
                double NewtonResUs = Newtonsqrt(inputdatainteger, 1);
                double MathResUs = Math.sqrt(inputdatainteger);
                double VariationUs = Math.abs(MathResUs - NewtonResUs);
                System.out.println("SRoot(Java Math):" + Math.round(MathResUs));
                inputdatainteger = readLine();
            }
        } else {
            //If the string array is not empty it runs as SRoot.
            double[] argumentsdouble = new double[args.length];
            double[] NewtonResults = new double[args.length];
            double[] MathResults = new double[args.length];
            double[] Variation = new double[args.length];
            for (i = 0; i < args.length; i++) {
                argumentsdouble[i] = Double.parseDouble(args[i]);
                if (argumentsdouble[i] <= 0) {
                    System.out.println(argumentsdouble[i] + " ERROR: Non-positive number");
                } else {
                    NewtonResults[i] = Newtonsqrt(argumentsdouble[i], 1);
                    MathResults[i] = Math.sqrt(argumentsdouble[i]);
                    Variation[i] = Math.abs(MathResults[i] - NewtonResults[i]);
                    System.out.println("SRoot(Java Math):" + Math.round(MathResults[i]));
                }
            }
        }
    }
//The function that calculates the square root of a given number using Newton's method.
//It runs by using the guess and the number recursively until the difference between the
//x and newx is smaller than the threshold.Which means that we have more or less found the square root.
public static double Newtonsqrt(double B,double x){
double newX;
double threshold = 0.1;
newX = 0.5 * (x + B / x);
        if (Math.abs(x - newX) > threshold) {
            newX = Newtonsqrt(B, newX);
        }
        return newX;
    }
//This function reads the input from the keyboard and parses it(as it is a string) into an integer
//before returning it for storage.
public static double readLine() throws IOException{
System.out.print("Input an integer: ");
String ss = keyboardInput.readLine();
double inputdatainteger = Double.parseDouble(ss);
        return inputdatainteger;
    }
    public static void fillRandom(double[] numbers){
        for (i = 0; i < numbers.length; i++) {
            int rn = Math.abs(random.nextInt(300));
            numbers[i] = rn;
        }
    }
    public static void executionTimeStatistics(double[] numbers){
        startNewton = System.currentTimeMillis();
        for (i = 0; i < numbers.length; i++) {
            Newtonsqrt(numbers[i], 1);
        }
        endNewton = System.currentTimeMillis();
        System.out.println(startNewton+" "+endNewton);
        startMath = System.currentTimeMillis();
        for (i = 0; i < numbers.length; i++) {
            Math.sqrt(numbers[i]);
        }
        endMath = System.currentTimeMillis();
        timeNewton = endNewton - startNewton;
        timeMath = endMath - startMath;
        System.out.print("Statistics for "+X+" computations.\nExecution time using Newton: "+timeNewton+" ms.\nExecution time using Math: "+timeMath+" ms.\n");
        if (timeMath < timeNewton) {
            System.out.print("The winner is Java.Math.\n");
        } else if (timeMath == timeNewton) {
            System.out.print("The winner is ................ none.This is a draw.\n");
        } else {
            System.out.print("The winner is Newton.\n");
        }
    }
}
PS:sorry if some things are a little sloppy,I'm trying several things to debug this.
Last edited by JavaBean; 10-30-2007 at 09:11 PM. Reason: Code placed inside [code] tag.
Windows does not count/report every millisecond to you. There is a granularity in reporting which depends on the underlying OS.
If it reports the same number, that means your procedure does not run long enough to capture the difference. You can try your code with a dummy for loop which takes more time.
And here is an article that will tell you more details about this problem:
Bring Java's System.currentTimeMillis() Back into the Fold for Transaction Monitoring
I have to say that I followed the form that my professor gave in PowerPoint slides, and it has to work because his produces results. Now, if I understand correctly, you are implying that if I use it on Linux it should work, or at least has a better chance than on Windows?
PS: If that isn't what you are saying, please forgive my noobity; I only recently began this course.
I saw the same problem on Windows before, but I did not test it on Linux. That tutorial says:
"The resolution or granularity of a clock is the smallest time difference that can be measured between two consecutive calls. For example, on many Microsoft Windows systems, the resolution of
the call to currentTimeMillis is 10 ms. This is because the native implementation makes an operating-system call that has a resolution of 10 ms. Most other operating systems claim to have a
resolution of 1 ms."
So it looks like you have a better chance.
But you should check your method with inputs which may take more time to compute. Your prof. may use inputs which take more time! (Note that I am not thinking about the Newton method now or checking your code. Based on your input, I told you my quick informed guesses.)
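Two things are worth adding to the granularity point. First, System.currentTimeMillis() returns a long; the 1.193772081281E12 in the original post is simply what that value looks like after being stored in the double fields declared at the top of the class; stored in a long it would print as 1193772081281. Second, for short measurements System.nanoTime() is the better tool. A hedged sketch (class and method names are mine, not from the assignment):

```java
public class TimingSketch {

    // Time `reps` passes of Math.sqrt over the array, in nanoseconds.
    // Note the longs: timer values should never be stored in doubles.
    public static long timeMathSqrt(double[] numbers, int reps) {
        double sink = 0;                    // accumulate results so the work is not optimized away
        long start = System.nanoTime();
        for (int r = 0; r < reps; r++) {
            for (double n : numbers) {
                sink += Math.sqrt(n);
            }
        }
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(sink);  // use `sink` so it is not dead code
        return elapsed;
    }

    public static void main(String[] args) {
        double[] numbers = new double[1000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i + 1;

        System.out.println("millis as long: " + System.currentTimeMillis());
        long ns = timeMathSqrt(numbers, 1000);
        System.out.println("elapsed: " + (ns / 1_000_000.0) + " ms");
    }
}
```

Repeating the work many times (the reps loop) also sidesteps the 10 ms clock granularity the article describes, because the total run is long enough to measure.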
Homework Help
Posted by keith on Saturday, June 26, 2010 at 1:51pm.
Equation for a line that passes through (6,26) and has a slope of 3
Equation for a line that passes through the points (5,5) and (10,20)
Equation for a line that passes through (9,25) and has a slope of -3.
Equation for a line that passes through (1,) and is perpendicular to the line 4x+6y=18.
Equation for a circle with a radius of 4 and a midpoint at (2,2).
• Try them first - Damon, Saturday, June 26, 2010 at 2:21pm
I already know how to do these. Try them yourself first and show me where you got stuck.
The first three can all be done substituting in
y = m x + b
where m is the slope.
For the fourth one, put the equation in form y = m x + b first
Then find m' = -1/m
That is the slope of a line perpendicular to the original one
then do y = m'x + b and substitute in your point, which you forgot to type the y coordinate for.
In the last one a circle equation is of form
(x-k)^2 + (y-h)^2 = r^2
where (k,h) is the center and r is the radius.
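Following Damon's y = mx + b recipe, the first three can be cross-checked with a few lines of Python (the helper names are mine; this is a check, not a substitute for working them by hand):

```python
def line_through(point, slope):
    """Return (m, b) for y = m*x + b given one point and the slope."""
    x, y = point
    return slope, y - slope * x

def slope_between(p1, p2):
    """Slope of the line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(line_through((6, 26), 3))         # (3, 8)   -> y = 3x + 8
m = slope_between((5, 5), (10, 20))
print(line_through((5, 5), m))          # (3.0, -10.0) -> y = 3x - 10
print(line_through((9, 25), -3))        # (-3, 52) -> y = -3x + 52
```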
Recreational mathematics
Recreational mathematics is mathematics done for the fun of it. It includes many mathematical games, and can be extended to cover such areas as logic and other puzzles of deductive reasoning. Some of the most interesting problems in this field do not require a knowledge of advanced mathematics.
The subject can include other topics such as the aesthetics of mathematics, and peculiar or amusing stories and coincidences about mathematics and mathematicians. Its greatest contribution is its
ability to pique curiosity and inspire the further study of mathematics.
The Journal of Recreational Mathematics[?] is the biggest publication on this topic.
The foremost advocates of recreational mathematics have included
All Wikipedia text is available under the terms of the GNU Free Documentation License
I have doubts about the gravitation law.
...which is an immense amount of force. Earth is about 1.3E+14 m2 in cross-sectional area, so spreading out the force means an average of ~46 GPa pressure applied over the whole surface pointing in
that direction. Just a bit high at ~460,000 bars.
Gravity acts on every particle on earth, it is a force applied in the whole volume.
And you are right, as surface force it would be extremely high.
For moving the earth, I think asteroids are the way to go. The system is chaotic and thousands of years would be no issue, therefore a good description of the objects and enough computing time would
reduce the required momentum changes a lot.
The gravitational force between the earth and the sun is ~3.5*10^22 N, while the force between the earth and the moon is only ~1.8*10^20 N.
The forces between the earth and other planets are some orders of magnitude smaller.
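Those figures follow directly from Newton's law F = G·m1·m2/r²; plugging in rough textbook values (the masses and mean distances below are approximate) reproduces them:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """Newton's law of universal gravitation, in newtons."""
    return G * m1 * m2 / r**2

M_SUN, M_EARTH, M_MOON = 1.989e30, 5.972e24, 7.342e22  # kg (approx.)
AU, EARTH_MOON = 1.496e11, 3.844e8                     # m (mean distances)

print(f"sun-earth:  {gravity(M_SUN, M_EARTH, AU):.2e} N")           # ~3.5e22
print(f"earth-moon: {gravity(M_EARTH, M_MOON, EARTH_MOON):.2e} N")  # ~1.9e20, close to the figure above; the exact value depends on the distance used
```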
Say that you wanted to find out the height of a pyramid in Egypt. Impressed by the height to which it was built, and by all the human labour that went into its construction, this particular pyramid seems most impressive.
You ask your tour guide about its height. You are given a response, but given your state of awe, you want to calculate the height yourself. Would you be able to do so?
Absolutely, yes! How so?
Assuming that you have verified the size of the pyramid's base area and all the implications thereof, then with the calculated distance you are from it and the angle from your eye at which the top of the pyramid appears, you may calculate the actual height of the pyramid.
Such a possibility lies within the gamut of trigonometry.
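As a preview of the machinery developed below (the tangent ratio): if you know your horizontal distance from the point directly beneath the apex and the angle of elevation from eye level to the top, the height is that distance times the tangent of the angle. The distance and angle here are invented for illustration:

```python
import math

def height_from_angle(distance_m, elevation_deg):
    """Height = horizontal distance x tan(angle of elevation)."""
    return distance_m * math.tan(math.radians(elevation_deg))

# e.g. standing 230 m out, sighting the apex at 32 degrees above horizontal:
print(round(height_from_angle(230, 32), 1))  # 143.7
```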
Trigonometry involves the relationship between the angles and the lengths of the sides of a triangle. In other words, the lengths of the sides of a triangle have a bearing on each angle within the triangle. To understand more fully, we examine first the simplest case ─ the right-angled triangle:
The Right-angled Triangles in Trigonometry
The longest side of a right-angled triangle is called the hypotenuse. Unlike in Pythagoras' Theorem, the other two sides are neither horizontal nor vertical here. They actually have names: adjacent and opposite.
The next identifiable side of a right-angled triangle in trigonometry is called the opposite. This side faces the acute angle in question; the acute angle being considered opens up to it. By virtue of having identified the opposite, we unwittingly identify the adjacent. To illustrate, take a right-angled triangle ABC with an acute angle θ at C (figure omitted):
The opposite of θ is AB.
Notice that the angle at C equals θ. Since this acute angle is the one given, AB becomes the opposite: AB faces the angle θ, and the angle θ opens up to AB. Naturally then, BC becomes the adjacent.
We further illustrate with triangles JKL and CDE (figures omitted). In triangle CDE, the acute angle being highlighted is the one at C, so the side which is called the opposite is DE. In triangle JKL, the highlighted acute angle is the one at L; hence the opposite is the side JK.
Trigonometrical Ratios
Trig ratios (the shortened form of trigonometrical ratios) only apply to right-angled triangles.
These ratios tell that there is a direct bearing of any two given sides of the triangle and its acute
angle. As hinted on the outset, the 90° angle in this triangle ensures that those ratios are
established, just about the same as it does for Pythagoras Theorem.
There are three such ratios: Sine (shortened as sin), cosine (shortened as cos) and tangent
(shortened as tan).
Sine (sin) is the ratio of the opposite divided by the hypotenuse. Sine of an angle (denoted sin
), is the ratio that corresponds with .
Sine = opposite (SOH is the acronym for sin= opp/hyp)
The sine of each acute angle of a r.a.t is specific and conversely, the ratio that corresponds to
each acute angle is always specific. For instance:
Sin 30° = 0.5 Sin 35° ≈ 0.573
Sin 40° ≈ 0.643 Sin 45° ≈ 0.707
So, no matter what the size of the right-angled triangle, once the angle at one of the vertices is θ,
the ratio of the opposite divided by the hypotenuse is always going to be the same. Thus, for any θ:
sin θ = opposite/hypotenuse
Naturally, once the ratio for an angle is given, we can find the angle itself. In the case of sine,
the angle is equal to the inverse sine of the calculated ratio (i.e. the opposite divided by the
hypotenuse). That is:
θ = sin⁻¹(opposite/hypotenuse)
No doubt, since the sine ratio involves an acute angle, its opposite and the hypotenuse, we use
sine ratios only with respect to these measurements. Even so, a typical math problem leaves one of
these three pieces of information missing.
QRS is a triangle with S = 90°, QR = 35 cm and RS = 18cm. Determine the angle at Q.
[Figure: right-angled triangle QRS, right angle at S; RS = 18 cm, QR = 35 cm]
sin Q = RS/QR = 18/35 = 0.514 (3 dp)
Q = sin⁻¹ 0.514 = 30.9497° (4 dp) ≈ 31°
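As a quick sanity check of the arithmetic above (a Python sketch; the variable names are ours, not part of the text):

```python
import math

# Sine-ratio example: right angle at S, hypotenuse QR = 35 cm, opposite RS = 18 cm
ratio = 18 / 35                      # opposite / hypotenuse
Q = math.degrees(math.asin(ratio))   # inverse sine recovers the angle

print(round(ratio, 3))  # 0.514
print(round(Q, 4))      # 30.9497, i.e. roughly 31 degrees
```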
Cosine represents the ratio of the adjacent divided by the hypotenuse of a r.a.t. Like sine, the
cosine ratio is specific for each angle. Therefore, the size of any r.a.t. is irrelevant to the ratio
that each angle produces – the adjacent vis-à-vis the hypotenuse. Ideally:
Cosine = adjacent/hypotenuse (cos = adj/hyp – CAH)
Naturally, cosine of an angle is the corresponding ratio of its adjacent divided by the hypotenuse.
Therefore, for any θ:
cos θ = adjacent/hypotenuse
Once a math problem calls for the use of the adjacent and the hypotenuse, it calls for the
application of the cosine ratio. Ideally, when using this ratio in any math problem, one of three
things (the angle, the size of the adjacent, or the size of the hypotenuse) is missing. Our aim
thereafter is to ascertain the value of the missing measurement.
We now look at one example.
FGH is a triangle such that G = 90°, F = 40° and FG = 23cm. Evaluate the length of FH.
[Figure: right-angled triangle FGH, right angle at G; F = 40°, FG = 23 cm]
Let FG = h, FH = g and GH = f.
cos F = h/g, so g = h/cos F
g = 23/cos 40° = 23/0.766 = 30.024 (3 dp)
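The same kind of check applies to the cosine example (again a Python sketch, not part of the original worked solution):

```python
import math

# Cosine-ratio example: right angle at G, F = 40 degrees, adjacent FG = 23 cm
g = 23 / math.cos(math.radians(40))  # hypotenuse FH = adjacent / cos F

print(round(g, 3))  # 30.024
```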
In considering sine and cosine, we noted that their ratios of the opposite and adjacent were
relative to the hypotenuse, respectively. With regards to tangent, however, the ratio represents
the value of the opposite divided by the adjacent. That is:
Tangent = opposite/adjacent (tan = opp/adj – TOA)
Like sine and cosine, the tangent ratio is specific for each angle. Thus, for any acute angle θ of
a right-angled triangle:
tan θ = opposite/adjacent
Of course, to use this ratio, we consider only the opposite and the adjacent of a right-angled
triangle, with one of these measurements (or the angle itself) being outstanding. We now look at a
problem:
The triangle DGN is such that DG = 2cm, GN = 3.3cm and G marks the angle of 90°.
(i) Evaluate the size of D
(ii) Hence, or otherwise, calculate the size of N to the nearest whole degree
[Figure: right-angled triangle DGN, right angle at G; DG = 2 cm, GN = 3.3 cm]
tan D = GN/DG = 3.3/2 = 1.65
D = tan⁻¹ 1.65 = 58.78°
N = 180° - 90° - 58.78° = 31.22° ≈ 31°
An electronic compass is placed on the ground, some 40m away from a vertical building. The
angle of elevation (the angular measurement from the compass to the height of the building, in
this instance) is computed at 80°. Given that the compass and the building are at the same
ground level, calculate the height of the building.
(To get a gist of the nature of the question, we frame its components in a diagram.)
Let us assume that the point at which the compass sits is C, and that the height of the building is GB.
[Figure: compass at C, building base at G and top at B; CG = 40 m, angle of elevation at C = 80°]
GB/CG = tan 80°, so GB = 40 × tan 80° ≈ 40 × 5.671
GB ≈ 226.84m
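A quick numerical check of this angle-of-elevation calculation (a Python sketch; tan 80° ≈ 5.671, so the height is on the order of 227 m):

```python
import math

# Height of the building: opposite = adjacent * tan(angle of elevation)
height = 40 * math.tan(math.radians(80))

print(round(height, 2))  # 226.85; the hand calculation's 226.84 uses the rounded 5.671
```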
Reliability of 2 coats of paint
February 25th 2011, 02:16 AM #1
Feb 2011
Reliability of 2 coats of paint
Hi guys. This might seem simple but I can't get my head around it.
A paint coating is applied twice at a particular location. The reliability of a coating is 0.85. Assuming the coatings are statistically dependent so that if the first fails, the probability that
the second will also fail is 0.2. However if the first application is successful, the reliability of the second application is unchanged at 0.85. What is the probability of at least one coating
being successful?
Assuming A and B are the failure events
P(A) = 0.15 and P(B|A) = 0.2
P(A) × P(B|A) = 0.15 × 0.2 = 0.03 is the probability of failure of both layers
But how do we deal with the second part of the question ie. if 'the first application is successful, the reliability of the second application is unchanged at 0.85'?
Well... Suppose you were told that if the first application is successful, the reliability of the second application is zero. How would that change your answer?
Well I wouldn't change my answer, because the probability of failure is not really linked to the probability of success of the second coat given the success of the first. In other words, maybe the
information given at the end of the question is not relevant and the correct answer is 1 - 0.03 = 0.97?
If the first coat succeeds, the success or failure of the second coat is irrelevant.
It was the additional info at the end of the question that threw me!! Thanks very much for your help.
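The arithmetic in this thread can be sanity-checked directly and by simulation (a Python sketch; the model and probabilities are those stated in the question):

```python
import random

# Direct calculation: P(at least one success) = 1 - P(both fail)
p_both_fail = 0.15 * 0.2   # P(first fails) * P(second fails | first failed)
print(1 - p_both_fail)     # 0.97

# Monte Carlo check of the same dependence structure
random.seed(1)
trials = 200_000
success = 0
for _ in range(trials):
    first_ok = random.random() < 0.85
    # second coat: reliability 0.85 if the first succeeded, 0.8 if it failed
    second_ok = random.random() < (0.85 if first_ok else 0.8)
    if first_ok or second_ok:
        success += 1
print(round(success / trials, 2))  # approximately 0.97
```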
Prime Number Theorem
February 10th 2007, 05:59 AM #1
Senior Member
Apr 2006
Prime Number Theorem
Definition: Pi(x) is the number of primes less than or equal to x.
lim(x->infinity) Pi(x)/[x/ln(x)] = 1
Then, obviously for large x, Pi(x) ~ x/ln(x)
This one stumped me.
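The limit can be illustrated numerically with a short sieve (a sketch added for illustration, not part of the original thread):

```python
import math

def prime_pi(n):
    """Count primes <= n with a Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

x = 10 ** 6
print(prime_pi(x))                                # 78498
print(round(x / math.log(x), 1))                  # 72382.4
print(round(prime_pi(x) / (x / math.log(x)), 4))  # 1.0845 -- slowly approaching 1
```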
My mathematics advisor wrote a popular and successful book on Complex Analysis. At the end of the book he shows where complex variables can be applied; one of the problems solved is the prime number
theorem. But I do not think you will understand it, it is a graduate textbook. Thus, I will agree with CaptainBlank that Hardy and Wright offer a more elementary proof; I have never seen it, but I know
they have an elementary proof there.
I believe it is Selberg's "Elementary Proof" of ca. 1948, which he got into a dispute with Erdős over.
I think you are wrong.
I will quote my number theory textbook.
Elementary Number Theory by David Burton
Until recent times, the opinion prevailed that the Prime Number Theorem could not be proved without the help of the properties of the zeta function, and without recourse to complex function
theory. It came as a great surprise when in 1949 the Norwegian mathematician Atle Selberg discovered a purely arithmetical proof. His paper Elementary Proof of the Prime Number Theorem is
"elementary" in the technical sense of avoiding the methods of modern analysis; indeed, its content is exceedingly difficult. Selberg was awarded the Fields Medal at the 1950 International
Congress of Mathematicians for his work in this area.
Why did he fight with Erdős?
D3DBook:(Lighting) Cook-Torrance
A common problem with the Phong and Blinn-Phong models discussed in the previous chapter is that they look too artificial. Whilst they are general purpose in design, a number of simplifications mean
that many of the subtleties of real materials cannot be represented. The model discussed throughout this chapter was published by Robert Cook and Kenneth Torrance in 1982 ([Cook&Torrance82]) – a few
years after James Blinn published his modifications to Phong's original model.
The Cook-Torrance lighting model is a general model for rough surfaces and is targeted at metals and plastics – although it is still possible to represent many other materials. The model is still
split into two distinct terms – diffuse and specular – as was true of the previous model (and for those discussed in later chapters); the key difference is the computation of the specular term.
Phong’s model uses simple mathematical principles of reflection to determine how much, if any, of a specular highlight is visible. Whilst the colour of this highlight can be varied as a per-pixel
input into the equation it is a constant colour from all angles regardless of the viewing direction. The Cook-Torrance model details a more complex and accurate way of computing the colour of the
specular term – an approach based more on physics than pure mathematics.
The Cook-Torrance Equation
This equation is much more involved than the simpler Phong-based equations of the previous chapter. Despite this the equation can be neatly summarised as follows:
$R = I \times (Normal \bullet Light) \times (Specular \times R_s + Diffuse)$
$R_s = \frac{Fresnel \times Roughness \times Geometric}{(Normal \bullet View) \times (Normal \bullet Light)}$
As previously discussed the key differentiator in the Cook-Torrance equation is the specular term, Rs. The remainder of this section will focus on this term and specifically work through the three
main components making up its numerator – Fresnel, roughness and geometric.
Modelling a rough surface might seem possible by using a high resolution per-pixel normal map (as discussed in chapter 3) but this doesn’t tend to yield the desired results. Cook-Torrance adopted the
“Microfacet” method for describing the roughness of a given surface:
Diagram 5.1
The microfacets are intended to be smaller than can be displayed on the screen; rather it is the combined effect across the entire surface that shows the roughness. Constants can be used to control
the depth and size of these facets and thus control the perceived roughness of the surface. The implementation in this chapter sets these constants on a per-material basis but it is perfectly
possible to store per-pixel roughness coefficients so as to allow for a much higher degree of variation.
Half vectors introduced as part of the Blinn-Phong model make an appearance in the Cook-Torrance model. In much the same way as with Blinn-Phong, the reflected specular energy is defined about this
vector rather than the reflection vector as in the original Phong equation.
The Geometric Term
In the original paper ‘Geometric Attenuation’ is a product of two effects – shadowing and masking. Fundamentally they are the same effect, but one caters for incoming energy and the other for
outgoing energy.
Shadowing can be referred to as ‘self shadowing’ – a term quite commonly used for several currently popular algorithms. Diagram 5.2 shows how incoming light energy can be blocked by its own, or a
neighbouring, facet:
Diagram 5.2
The other factor, masking, is very similar but covers the possibility that the reflected light is blocked by the facet. Diagram 5.3 illustrates this:
Diagram 5.3
Given that these microfacets are supposed to be extremely small it might seem irrelevant that they block light energy, but it is a very important factor in the final image. Shadowing and masking
increases with the depth of the microfacets leading to very rough surfaces being quite dull or matt in appearance.
The following equation can be used to compute the geometric term:
$Geometric = min \begin{pmatrix} {1,} \\ {\frac{2 \times (Normal \bullet Half) \times (Normal \bullet View)}{View \bullet Half},} \\ {\frac{2 \times (Normal \bullet Half) \times (Normal \bullet Light)}{View \bullet Half}} \end{pmatrix}$
The Roughness Term
This factor is a distribution function to determine the proportion of facets that are oriented in the same direction as the half vector. Only facets facing in this direction will contribute to
reflected energy and a simple observation of modelling a rough surface is that not all of the surface will be facing in the same direction. As the surface becomes rougher it is more likely that parts
of the surface will be oriented away from the half vector.
Various distributions have been experimented with for this component of the Cook-Torrance model. Many models aim to reproduce the wavelength scattering measured from specific real-world rough
materials and tend to be very complicated albeit much more physically accurate.
When graphed, these distribution functions appear to be similar to the specular distribution used in the Phong model; light energy is scattered within a volume about the reflected vector. For a very
smooth surface this distribution is concentrated about the reflected vector and yields a very sharp highlight, for a rough surface the distribution is much wider due to the roughness scattering light
energy in many different directions. The smooth-surface case is very similar to that modelled by Phong.
Diagram 5.4
Beckmann’s distribution ([Beckmann&Spizzichino63]) covers a wide range of materials of varying roughness with considerable accuracy; the main trade-off is that it is a complex distribution to
evaluate and thus not so suitable for performance-intensive usage.
$Roughness = \frac{1}{m^2 \times \cos^4 \alpha} \times e^{-(\frac{\tan \alpha}{m})^2}$
The above equation can be converted to vector form so as to eliminate the two trigonometric functions. α is defined as being the angle between the normal vector and the half vector. This makes the
equation much more suitable for implementation in a pixel shader:
$\tan^2 \alpha \equiv \frac{\sin^2 \alpha}{\cos^2 \alpha}$
$1 \equiv \sin^2 \alpha + \cos^2 \alpha$
$\sin^2 \alpha \equiv 1 - \cos^2 \alpha$
$\tan^2 \alpha \equiv \frac{1 - \cos^2 \alpha}{\cos^2 \alpha}$
$-(\frac{\tan^2 \alpha}{m^2}) \equiv -(\frac{\frac{1 - \cos^2 \alpha}{\cos^2 \alpha}}{m^2}) \equiv -(\frac{1 - \cos^2 \alpha}{m^2 \times \cos^2 \alpha}) \equiv \frac{\cos^2 \alpha - 1}{m^2 \times \cos^2 \alpha}$
$\equiv \frac{(Normal \bullet Half)^2 - 1}{m^2 \times (Normal \bullet Half)^2}$
$Roughness = \frac{1}{m^2 \times (Normal \bullet Half)^4} \times e^{\frac{(Normal \bullet Half)^2 - 1}{m^2 \times (Normal \bullet Half)^2}}$
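The trigonometric and vector forms of the distribution can be confirmed to agree numerically (a small Python check with arbitrary sample values for m and α; added here for illustration only):

```python
import math

m, alpha = 0.35, 0.6        # roughness and angle in radians, arbitrary test values
c = math.cos(alpha)         # cos(alpha) stands in for Normal . Half

trig_form   = 1.0 / (m * m * c ** 4) * math.exp(-(math.tan(alpha) / m) ** 2)
vector_form = 1.0 / (m * m * c ** 4) * math.exp((c * c - 1.0) / (m * m * c * c))

print(abs(trig_form - vector_form) < 1e-12)  # True: the two forms are identical
```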
It is also worth considering that this distribution could be stored in a texture as a look-up table (a method suggested in the first chapter of this section). Using <Normal•Half, m> as a 2D texture
coordinate will allow a single texture sample to replace the entire evaluation of this distribution. The following code can be used to create the look-up texture:
HRESULT CreateRoughnessLookupTexture( ID3D10Device* pDevice )
{
    HRESULT hr = S_OK;

    // The dimension value becomes a trade-off between
    // quality and storage requirements
    const UINT LOOKUP_DIMENSION = 512;

    // Describe the texture
    D3D10_TEXTURE2D_DESC texDesc;
    texDesc.ArraySize          = 1;
    texDesc.BindFlags          = D3D10_BIND_SHADER_RESOURCE;
    texDesc.CPUAccessFlags     = 0;
    texDesc.Format             = DXGI_FORMAT_R32_FLOAT;
    texDesc.Height             = LOOKUP_DIMENSION;
    texDesc.Width              = LOOKUP_DIMENSION;
    texDesc.MipLevels          = 1;
    texDesc.MiscFlags          = 0;
    texDesc.SampleDesc.Count   = 1;
    texDesc.SampleDesc.Quality = 0;
    texDesc.Usage              = D3D10_USAGE_IMMUTABLE;

    // Generate the initial data
    float* fLookup = new float[ LOOKUP_DIMENSION * LOOKUP_DIMENSION ];
    for( UINT x = 0; x < LOOKUP_DIMENSION; ++x )
    {
        for( UINT y = 0; y < LOOKUP_DIMENSION; ++y )
        {
            // The following fragment is a direct conversion of
            // the code that appears in the HLSL shader
            float NdotH = static_cast< float >( x )
                          / static_cast< float >( LOOKUP_DIMENSION );
            float Roughness = static_cast< float >( y )
                              / static_cast< float >( LOOKUP_DIMENSION );

            // Convert the 0.0..1.0 range to be -1.0..+1.0
            NdotH *= 2.0f;
            NdotH -= 1.0f;

            // Evaluate a Beckmann distribution for this element
            // of the look-up table:
            float r_sq = Roughness * Roughness;
            float r_a  = 1.0f / ( 4.0f * r_sq * pow( NdotH, 4 ) );
            float r_b  = NdotH * NdotH - 1.0f;
            float r_c  = r_sq * NdotH * NdotH;

            fLookup[ x + y * LOOKUP_DIMENSION ] = r_a * expf( r_b / r_c );
        }
    }

    D3D10_SUBRESOURCE_DATA initialData;
    initialData.pSysMem          = fLookup;
    initialData.SysMemPitch      = sizeof( float ) * LOOKUP_DIMENSION;
    initialData.SysMemSlicePitch = 0;

    // Create the actual texture
    hr = pDevice->CreateTexture2D( &texDesc, &initialData, &g_pRoughnessLookUpTex );
    if( FAILED( hr ) )
    {
        ERR_OUT( L"Failed to create look-up texture" );
        SAFE_DELETE_ARRAY( fLookup );
        return hr;
    }

    // Create a view onto the texture
    ID3D10ShaderResourceView* pLookupRV = NULL;
    hr = pDevice->CreateShaderResourceView( g_pRoughnessLookUpTex, NULL, &pLookupRV );
    if( FAILED( hr ) )
    {
        SAFE_RELEASE( pLookupRV );
        SAFE_RELEASE( g_pRoughnessLookUpTex );
        SAFE_DELETE_ARRAY( fLookup );
        return hr;
    }

    // Bind it to the effect variable
    ID3D10EffectShaderResourceVariable* pFXVar
        = g_pEffect->GetVariableByName( "texRoughness" )->AsShaderResource( );
    if( !pFXVar->IsValid() )
    {
        hr = E_FAIL; // report the failure rather than returning S_OK
        SAFE_RELEASE( pLookupRV );
        SAFE_RELEASE( g_pRoughnessLookUpTex );
        SAFE_DELETE_ARRAY( fLookup );
        return hr;
    }
    pFXVar->SetResource( pLookupRV );

    // Clear up any intermediary resources
    SAFE_RELEASE( pLookupRV );
    SAFE_DELETE_ARRAY( fLookup );

    return hr;
}
James Blinn proposed the use of a standard Gaussian distribution:
$Roughness = c \times e^{-(\frac{\alpha}{m^2})}$
This distribution is not as physically accurate as Beckmann’s distribution but in a lot of scenarios the approximation will be perfectly acceptable. Evaluation of the Gaussian distribution is much
easier than Beckmann’s thus making the approximation a compelling option for real-time use. The only potential problem of the Gaussian distribution is that it relies on an arbitrary constant that
needs to be empirically determined for best results.
The Fresnel Term
An important part of the Cook-Torrance model is that the reflected specular light is not a constant colour; rather it is view-dependent. The Fresnel term, introduced in the first chapter of this
section, can be used to model this characteristic.
The full Fresnel equation is overly complicated for this purpose and relies on expensive and complex evaluation. The original publication contains the full equations along with useful derivations
regarding source data. However it is worth noting that any measurement used in Fresnel evaluations is dependent on numerous physical properties such as the wavelength of light and temperature. It
depends on the intended usage whether modelling these characteristics is worth the additional complexity. For real-time computer graphics it is generally more suitable to use an approximation
presented by Christophe Schlick ([Schlick94]).
$Fresnel = F_0 + (1 - (Half \bullet View))^5 \times (1 - F_0)$
In the above approximation F0 is the Fresnel reflectance as measured at normal incidence. Available Fresnel information is typically expressed in this form which makes it a convenient parameter to
work with.
Schlick’s approximation is simply a nonlinear interpolation between the refraction at normal incidence and total reflection at grazing angles.
The previous section covered the components of the Cook-Torrance lighting model. The following list summarises each of these parts in the order in which they must be evaluated:
1. Compute geometric term
$Geometric = min \begin{pmatrix} {1,} \\ {\frac{2 \times (Normal \bullet Half) \times (Normal \bullet View)}{View \bullet Half},} \\ {\frac{2 \times (Normal \bullet Half) \times (Normal \bullet Light)}{View \bullet Half}} \end{pmatrix}$
2. Compute roughness term using one of the following options:
2a Sample from a texture generated by the application – distribution becomes unimportant. [Texture2D look-up and roughness value required]
2b Evaluate with the Beckmann Distribution [roughness value required]
$Roughness = \frac{1}{m^2 \times (Normal \bullet Half)^4} \times e^{\frac{(Normal \bullet Half)^2 - 1}{m^2 \times (Normal \bullet Half)^2}}$
2c Evaluate with a Gaussian Distribution [roughness value and arbitrary constant required]
$Roughness = c \times e^{-(\frac{\alpha}{m^2})}, \quad \alpha = \cos^{-1}(Normal \bullet Half)$
3 Compute the Fresnel term
3a Evaluate Christophe Schlick’s approximation [Requires index of refraction]
$Fresnel = F_0 + (1 - (Half \bullet View))^5 \times (1 - F_0)$
4 Evaluate Rs term of the full equation:
$R_s = \frac{Fresnel \times Roughness \times Geometric}{(Normal \bullet View) \times (Normal \bullet Light)}$
5 Evaluate complete equation
$R = I \times (Normal \bullet Light) \times (Specular \times R_s + Diffuse)$
The above series of steps when implemented in HLSL is as follows, note that this is a single HLSL function that relies on conditional compilation to split into different techniques; this is a
valuable technique for code re-use and maintenance but arguably it can make the code less intuitive to read:
float4 cook_torrance
        (
            in float3 normal,
            in float3 viewer,
            in float3 light,
            uniform int roughness_mode
        )
{
    // Compute any aliases and intermediary values
    // -------------------------------------------
    float3 half_vector = normalize( light + viewer );
    float NdotL        = saturate( dot( normal, light ) );
    float NdotH        = saturate( dot( normal, half_vector ) );
    float NdotV        = saturate( dot( normal, viewer ) );
    float VdotH        = saturate( dot( viewer, half_vector ) );
    float r_sq         = roughness_value * roughness_value;

    // Evaluate the geometric term
    // --------------------------------
    float geo_numerator   = 2.0f * NdotH;
    float geo_denominator = VdotH;

    float geo_b = ( geo_numerator * NdotV ) / geo_denominator;
    float geo_c = ( geo_numerator * NdotL ) / geo_denominator;
    float geo   = min( 1.0f, min( geo_b, geo_c ) );

    // Now evaluate the roughness term
    // -------------------------------
    float roughness;

    if( ROUGHNESS_LOOK_UP == roughness_mode )
    {
        // texture coordinate is:
        float2 tc = { NdotH, roughness_value };

        // Remap the NdotH value to be 0.0-1.0
        // instead of -1.0..+1.0
        tc.x += 1.0f;
        tc.x /= 2.0f;

        // look up the coefficient from the texture:
        roughness = texRoughness.Sample( sampRoughness, tc );
    }

    if( ROUGHNESS_BECKMANN == roughness_mode )
    {
        float roughness_a = 1.0f / ( 4.0f * r_sq * pow( NdotH, 4 ) );
        float roughness_b = NdotH * NdotH - 1.0f;
        float roughness_c = r_sq * NdotH * NdotH;

        roughness = roughness_a * exp( roughness_b / roughness_c );
    }

    if( ROUGHNESS_GAUSSIAN == roughness_mode )
    {
        // This variable could be exposed as a variable
        // for the application to control:
        float c = 1.0f;

        float alpha = acos( dot( normal, half_vector ) );
        roughness = c * exp( -( alpha / r_sq ) );
    }

    // Next evaluate the Fresnel value
    // -------------------------------
    float fresnel = pow( 1.0f - VdotH, 5.0f );
    fresnel *= ( 1.0f - ref_at_norm_incidence );
    fresnel += ref_at_norm_incidence;

    // Put all the terms together to compute
    // the specular term in the equation
    // -------------------------------------
    float3 Rs_numerator   = ( fresnel * geo * roughness );
    float  Rs_denominator = NdotV * NdotL;
    float3 Rs             = Rs_numerator / Rs_denominator;

    // Put all the parts together to generate
    // the final colour
    // --------------------------------------
    float3 final = max( 0.0f, NdotL ) * ( cSpecular * Rs + cDiffuse );

    // Return the result
    // -----------------
    return float4( final, 1.0f );
}
Image 5.5
The above image shows the Beckmann distribution (top) and Gaussian distribution (bottom) with roughness values of 0.2 (left), 0.4, 0.6, 0.8 and 1.0 (right). Whilst the Gaussian form is noticeably
different to the Beckmann form it is worth noting that there is an arbitrary constant controlling the Gaussian distribution. Experimentation with this coefficient can generate closer or more visually
acceptable results.
Image 5.6
The two key inputs into the Cook-Torrance model are reflectance at normal incidence and roughness. Image 5.6 serves to show their relationship and the types of results that can be achieved.
It should come as no surprise that low roughness values, implying a smooth surface, generate the shiniest results with specular highlights similar to that seen with the Phong model. Experimentation
for achieving other materials is necessary, but from Image 5.6 it should be apparent that the Cook-Torrance model spans a large number of possibilities; the range shown in the image is representative
but does not display the extreme values – lower or higher inputs are allowed.
The assembly code emitted by the HLSL compiler for the Cook-Torrance model yields some surprising results. In particular the Gaussian roughness distribution compiles to more instructions than the
Beckmann distribution – given that the motivation behind the Gaussian distribution was its simple implementation!
Using a look-up texture shaves around 16% off the instruction count, which is a worthwhile saving provided there aren't any problems with using an extra 1-2 MB of video memory.
[Cook&Torrance82] “A Reflectance Model for Computer Graphics” ACM Transactions on Graphics, volume 1, number 1, January 1982 pages 7-24
[Schlick94] “An inexpensive BRDF model for physically-based rendering.” Christophe Schlick, Computer Graphics Forum, 13(3):233—246, 1994.
[Beckmann&Spizzichino63] "The scattering of electromagnetic waves from rough surfaces." P. Beckmann and A. Spizzichino, MacMillan, New York, 1963, pages 1-33 and 70-98.
Re: on the relations of traditional CS theory to modern programming practice — Rob Warnock Lisp usenet archive
Subject: Re: on the relations of traditional CS theory to modern programming practice
From: rpw3@rpw3.org (Rob Warnock)
Date: Sat, 20 Oct 2007 20:07:12 -0500
Newsgroups: comp.lang.lisp,comp.lang.functional
Message-ID: <reSdndm0kZndO4fanZ2dnUVZ_smnnZ2d@speakeasy.net>
Chris F Clark <cfc@shell01.TheWorld.com> wrote:
| Although I hate to participate in this thread, I have a simple
| question. Is it possible to prove that a machine which visits the
| tape only in sequential order is as powerful as a machine that can
| visit the tape non-sequentially, perhaps by appealing to multi-tape
| TMs in the argument?
I don't think so. In fact, I think I can present a rather trivial counterexample:
Let M be a multi-tape TM with N internal states and R rules
for which each of T tapes can only be visited in sequential
order [e.g., the forward direction]. Then it is not possible
for M to compute [that is, write onto (one or more of) the
tape(s) being used as the "output device"] a result which is
greater than N*R*T (and the actual limit might be much, much
smaller -- I'm just picking a "safe" value). For example, if
each of the T tapes starts out with a number of 1's representing
unary integers followed by infinite 0's, then it is not possible
for M to compute the product of those numbers if that result
would be greater than N*R*T [or whatever the actual limit is].
Whereas if as few as *one* of tapes is writable and reversable
[can both read & write and step both forwards & backwards],
then a TM with a fixed finite number of states & rules can
compute the product of the numbers on the tapes [or the square
of the number, say, if there's only one tape] no matter *how*
big that product might be!!
Rob Warnock <rpw3@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
IMA Newsletter #341
Elena Dimitrova (Virginia Tech) A graph-theoretic method for the discretization of gene expression measurements
Abstract: The paper introduces a method for the discretization of experimental data into a finite number of states. While it is of interest in various fields, this method is particularly useful in
bioinformatics for reverse engineering of gene regulatory networks built from gene expression data. Many of these applications require discrete data, but gene expression measurements are continuous.
Statistical methods for discretization are not applicable due to the prohibitive cost of obtaining sample sets of sufficient size. We have developed a new method of discretizing the variables of a
network into the same optimal number of states while at the same time maintaining high information content. We employ a graph-theoretic clustering method to affect the discretization of gene
expression measurements. Our C++ program takes as an input one or more time series of gene expression data and discretizes these values into a number of states that best fits the data. The method is
being validated by incorporating it into the recently published computational algebra approach to the reverse engineering of gene regulatory networks by Laubenbacher and Stigler.
Qiang Du (Pennsylvania State University) Phase field modeling and simulation of cell membranes
Abstract: Recently, we have produced a series of works on the phase field modeling and simulation of vesicle bio-membranes formed by lipid bilayers. We have considered both the shape deformation of
vesicles minimizing the elastic bending energy with volume and surface area constraints and those moving in an incompressible viscous fluid. Rigorous mathematical analysis have been carried out along
with extensive numerical experiments. We have also developed useful computational techniques for detecting the topological changes within a broad phase field framework. References: 1. A Phase Field
Approach in the Numerical Study of the Elastic Bending Energy for Vesicle Membranes, Q. Du, C. Liu and X. Wang, J. Computational Physics, 198, pp. 450-468, 2004 2.Retrieving topological information
for phase field models, Q. Du, C. Liu and X. Wang, 2004, to appear in SIAM J. Appl. Math 3. Phase field modeling of the spontaneous curvature effect in cell membranes, Q. Du C. Liu, R. Ryham and X.
Wang, 2005, to appear in CPAA 4. A phase field formulation of the Willmore problem. Q. Du, C. Liu, R. Ryham and X. Wang, 2005, to appear in Nonlinearity
Weinan E (Princeton University) Lecture 1: Overview of Multiscale Methods
Abstract: We will begin by reviewing the basic issues and concepts in multiscale modeling, including the various models of multi-physics, serial and concurrent coupling strategies, and the essential
features of the kind of multiscale problems that we would like to deal with. We then discuss some representative examples of successful multiscale methods, including the Car-Parrinello method and the
quasi-continuum method. Finally we discuss several general methodologies for multiscale, multi-physics modeling, such as the domain decomposition methods, adaptive model refinement and heterogeneous
multiscale methods. These different methodologies are illustrated on one example, the contact line problem. Throughout this presentation, we will emphasize the interplay between physical models and
numerical methods, which is the most important theme in modern multiscale modeling.
Weinan E (Princeton University) Lecture 2: Problems with multiple time scales
Abstract: We will discuss the mathematical background and numerical techniques for three types of problems with multiple time scales: stiff ODEs, Markov chains with disparate rates and rare events.
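The gap between explicit and implicit methods on a stiff problem can be shown in a few lines. The sketch below is an illustration only (not material from the lecture): it integrates the stiff test equation y' = -1000y with a step size suited to the slow O(1) time scale, so forward Euler is unstable while backward Euler damps the solution as the exact flow does.

```python
# Stiff test equation y' = lam*y with lam = -1000: the solution decays on a
# 1/1000 time scale, but we integrate with a step h = 0.01 suited to O(1) scales.
lam, h, steps = -1000.0, 0.01, 100
y_explicit, y_implicit = 1.0, 1.0
for _ in range(steps):
    # forward Euler: amplification factor |1 + h*lam| = 9 > 1, so it blows up
    y_explicit = y_explicit * (1.0 + h * lam)
    # backward Euler: amplification factor |1/(1 - h*lam)| = 1/11 < 1, so it decays
    y_implicit = y_implicit / (1.0 - h * lam)
```

After 100 steps the explicit iterate has exploded by many orders of magnitude, while the implicit iterate has decayed toward zero like the true solution.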
Ryan S. Elliott (University of Michigan) Bifurcation and stability of multilattices with applications to martensitic transformations in shape memory alloys
Abstract: Some of the most interesting and technologically important solid--solid transformations are the first order diffusionless transformations that occur in certain ordered multi-atomic
crystals. These include the reconstructive martensitic transformations (where no group--subgroup symmetry relationship exists between the phases) found in steel and ionic compounds such as CsCl, as
well as the thermally-induced, reversible, proper (group--subgroup relationships exist) martensitic transformations that occur in shape memory alloys such as NiTi. Shape memory alloys are especially
interesting, for engineering applications, due to their strong thermomechanical (multi-physics) coupling. The mechanism responsible for these temperature-induced transformations is a change in
stability of the crystal's lattice structure as the temperature is varied. To model these changes in lattice stability, a continuum-level thermoelastic energy density for a bi-atomic multilattice is
derived from a set of temperature-dependent atomic potentials. The Cauchy-Born kinematic assumption is employed to ensure, by the introduction of internal atomic shifts, that each atom is in
equilibrium with its neighbors. Stress-free equilibrium paths as a function of temperature are numerically investigated, and an asymptotic analysis is used to identify the paths emerging from
"multiple bifurcation" points that are encountered. The stability of each path against all possible bounded perturbations is determined by calculating the phonon spectra of the crystal. The advantage
of this approach is that the stability criterion includes perturbations of all wavelengths instead of only the long wavelength information that is available from the stability investigation of
homogenized continuum models. The above methods will be reviewed, and results corresponding to both reconstructive and proper martensitic transformations will be presented. Of particular interest is
the prediction of a transformation that has been experimentally observed in CuAlNi, AuCd, and other shape memory alloys.
Leslie F. Greengard (New York University) Lecture 1: Fast multipole methods and their applications
Abstract: In these lectures, we will describe the analytic and computational foundations of fast multipole methods (FMMs), as well as some of their applications. They are most easily understood,
perhaps, in the case of particle simulations, where they reduce the cost of computing all pairwise interactions in a system of N particles from O(N²) to O(N) or O(N log N) operations. FMMs are
equally useful, however, in solving partial differential equations by first recasting them as integral equations. We will present examples from electromagnetics, elasticity, and fluid mechanics.
Thomas C. Hales (University of Pittsburgh) IMA Public Lecture: Computers and the Future of Mathematical Proof
Abstract: Computers crash, hang, succumb to viruses, run buggy programs, and harbor spyware. By contrast, mathematics is free of all imperfection. Why are imperfect computational devices so vital for
the future of mathematics?
Viet Ha Hoang (Cambridge University) High-dimensional finite elements for elliptic problems with multiple scales
Abstract: Joint work with Christoph Schwab. Elliptic homogenization problems in a d-dimensional domain
Jesus A. Izaguirre (University of Notre Dame) Multiscale approaches to molecular dynamics and sampling
Abstract: In the first part of this talk, I will survey some approaches for producing multiscale models for molecular dynamics (MD) and sampling. I will consider two parts of the problem: finding
coarsened variables, and then integrating or propagating the coarsened model. I will discuss the approach of Brandt and collaborators to semi-automatically determine the coarsened variables, and the
more ad hoc approach of Gear and collaborators, who assume a reaction coordinate is known which produces a natural separation of scales. Both methods attempt to sample the fast scales, and then to do
an accurate integration of the slow scales. Related approaches will be mentioned, such as Leimkuhler's and Reich's reversible integrators.
Brian Laird (University of Kansas) Direct calculation of crystal-melt interfacial free energies from molecular simulation
Abstract: The crystal-melt interfacial free energy, the work required to create a unit area of interface between a crystal and its own melt, is a controlling property in the kinetics and morphology
of crystal growth and nucleation, especially in the case of dendritic growth. Despite the technological importance of this quantity, accurate experimental data is difficult to obtain. The paucity of
experimental measurements has motivated the development of a variety of novel computational methods to determine the interfacial free energy via molecular simulation. After a short tutorial on
thermodynamic integration techniques for free energy calculation, I will introduce our method of cleaving walls for the calculation of the crystal-melt interfacial free energy, and a competing method
based on fluctuation spectra. Results for a variety of simple systems will be presented to give a broad picture of the interaction and crystal structure dependence of the interfacial free energy. The
results will be discussed in relation to popular empirical theories of the interfacial free energy.
Melvin Leok (University of Michigan) Generalized Galerkin variational integrators: Lie group, multiscale and spectral methods
Abstract: Geometric mechanics involves the study of Lagrangian and Hamiltonian mechanics using geometric and symmetry techniques. Computational algorithms obtained from a discrete Hamilton's
principle yield a discrete analogue of Lagrangian mechanics, and they exhibit excellent structure-preserving properties that can be ascribed to their variational derivation. We propose a natural
generalization of discrete variational mechanics, whereby the discrete action, as opposed to the discrete Lagrangian, is the fundamental object. This is achieved by appropriately choosing a finite
dimensional function space to approximate sections of the configuration bundle and numerical quadrature techniques to approximate the action integral. We will discuss how this general framework
allows us to recover high-order Galerkin variational integrators, asynchronous variational integrators, and symplectic-energy-momentum integrators. In addition, we will also introduce generalizations
such as high-order symplectic-energy-momentum integrators, Lie group integrators, high-order Euler-Poincare integrators, multiscale variational integrators, and pseudospectral variational
integrators. This framework will be illustrated by an application of Lie group variational integrators to rigid body dynamics wherein the discrete trajectory evolves in the space of 3x3 matrices,
while automatically staying on the rotation group, without the use of local coordinates, constraints, or reprojection. This is joint work with Taeyoung Lee and Harris McClamroch.
Hailiang Liu (Iowa State University) Critical intensities for phase transitions in a 3D Smoluchowski equation
Abstract: We study the structure of equilibrium solutions to a Smoluchowski equation on a sphere, which arises in the modelling of rigid rod-like molecules of polymers. A complete classification of
intensities for phase transitions to equilibrium solutions is obtained. It is shown that the number of equilibrium solutions hinges on whether the potential intensity crosses two critical values
alpha_1 approximately 6.731393 and alpha_2 = 7.5. Furthermore, we present explicit formulas for all equilibrium solutions. These solutions consist of a set of axially symmetric functions and all
those which are obtained from this set by rotation. In this joint work with Hui Zhang and Pingwen Zhang, we solve Onsager's 1949 conjecture on phase transitions in rigid rodlike polymers.
Stefan Mueller (Max Planck Institute for Math in the Sciences) A variational model of dislocations in the line tension limit
Abstract: We study the (Gamma) limit of a dislocation model proposed by Ortiz et al., in which slip occurs only on one plane. Mathematically the core is an extension of the
Alberti-Bouchitte-Seppecher results for a 1/eps nonconvex two-well energy + H^{1/2} norm squared to a periodic array of wells (hence no naive coercivity). From the analysis point of view, H^{1/2} is
interesting since it leads to a logarithmic rescaling.
Xiaochuan Pan (University of Chicago) Volumetric computed tomography and its applications
Abstract: Computed tomography (CT) is one of the most widely used imaging modalities in medicine and other areas. In this lecture, following the introduction of the basic principle of CT, I will
describe what physical quantity is measured and how an image is reconstructed from the measured data in CT. Based upon such knowledge about CT, I will tour recent advances in CT technology and their
new biomedical applications. One such important advance is the advent of helical cone-beam CT and the breakthroughs in imaging theory associated with it. These technological and theoretical
advances in CT have brought immediate important impact on medical and other applications of CT, offering tremendous opportunities to design innovative imaging protocols and applications that are
otherwise impossible. One of the important trends in CT imaging is the so-called targeted imaging of a region of interest (ROI) within the subject from truncated data. Such a strategy for targeted
imaging would substantially reduce the radiation dose delivered to the subject and scanning effort. I will discuss the theory and algorithms that we have developed recently for exact reconstruction
of ROI images. Finally, I will touch upon the implications of these new developments in CT imaging theory for other tomographic imaging modalities.
Paul R Schrater (University of Minnesota) Natural cost functions for contact point selection in grasping
Abstract: When reaching to touch or lift an object, how are contact points visually selected? In this talk I will formulate the issue as a statistical decision theory problem that requires minimizing
the expectation over a suitable loss function. However, it is the nature of this loss function that is the heart of the presentation. In the first part of the talk, I will show how contact points for
two fingered grasp can be optimally chosen, given a plan for the grasped object's motion. The basic assumption is that the minimum control framework used to predict hand trajectories should also
apply to the control of the grasped object. The cost function on the object's motion can then be rewritten in terms of finger placement and contact, inducing a cost function on finger contact points.
I will present human reaching data that supports this idea. In the second part of the talk, I will present evidence for a decomposition of the natural cost function for reaching into task completion
and motor control components. The issue can be framed as follows: In many reaching tasks there are a set of contact points that are equivalent in terms of task completion cost -- touching a line, for
example. In generating a path, the ambiguity is broken by motor control cost, which distinguishes the minimum control point of the set (e.g. the closest point on the line). This unique target point
could be selected to generate a simple feedback control strategy of minimizing distance to the target. Alternatively, a feedback control strategy could be based directly on a lumped cost function.
These two strategies behave differently under a perturbing force field mid-reach: the first corrects the perturbations, while the second "goes with the flow" to contact the new minimum control point
within the task completion set. I will present data that supports the idea that reaches "go with the flow", adapting to external perturbations. This suggests that the brain visually encodes and adaptively
uses the set of viable contact points. Finally, I will discuss why the contact point selection problem is important for understanding the sensory demands made by the motor control system during
Joerg Schumacher (Philipps University Marburg) Stretching of polymers on sub-Kolmogorov scales in a turbulent flow
Abstract: First results on numerical studies of the stretching of Hookean dumbbells on scales below the viscous length of the advecting turbulent flow are presented. Direct numerical simulations of
the Navier-Stokes turbulence are combined with Brownian dynamics simulations for simple polymer chains. The role of extreme stretching events on the overall statistics is discussed. Our findings are
compared with recent analytical models for the polymer advection in Gaussian random flow without time-correlation.
James A. Sethian (University of California) Lecture 1: Advances in advancing interfaces: Level set methods, fast marching methods, and beyond
Abstract: Propagating interfaces occur in a variety of settings, including semiconductor manufacturing in chip production, the fluid mechanics of ink jet plotters, segmentation in cardiac medical
imaging, computer-aided-design, optimal navigation in robotic assembly, and geophysical wave propagation. Over the past 25 years, a collection of numerical techniques has come together, including
Level Set Methods and Fast Marching Methods for computing such problems in interface phenomena in which topological change, geometry-driven physics, and three-dimensional complexities play important
roles. These algorithms, based on the interplay between schemes for hyperbolic conservation laws and their connection to the underlying theory of curve and surface evolution, offer a unified approach
to computing a host of interface problems. In this tutorial, the author will cover (i) the development of these methods, (ii) the fundamentals of Level Set Methods and Fast Marching Methods, including
efficient, adaptive versions, and the coupling of these schemes to complex physics, and (iii) new approaches to tackling more demanding interface problems. The emphasis in this tutorial will be on a
practical, "hands-on" view, and the methods and algorithms will be discussed in the context of on-going collaborative projects, including work on semiconductor processing, industrial ink jet design,
and medical and bio-medical imaging.
Jan Vandenbrande (Boeing) Solid modeling: Math at work in design
Abstract: Design is the art of creating something new and predicting how it will perform before it is ever built. One of the major breakthroughs in the last 25 years is the ability to describe a
design as a virtual artifact in a computer, and simulate its physical characteristics accurately to enable designers to make better decisions. The core technology that underlies these mechanical
Computer Aided Design and Manufacturing (CAD/CAM) systems is solid modeling, whose theoretical underpinnings are grounded in mathematics. This talk will cover some of these mathematical concepts,
including point set topology, regularized set operations, Constructive Solid Geometry (CSG), representation schemes, algorithms and geometry. We will cover the impact of solid modeling in industry,
and discuss some of the remaining open issues such as the ambiguity between the topological representation and the computed geometric boundary.
Epifanio G. Virga (Universita di Pavia) Mathematical models for biaxial liquid crystal phases
Abstract: The search for thermotropic biaxial phases has recently found some firm evidence of their existence. It has rightly been remarked that this "announcement has created considerable
excitement, for it opens up new areas of both fundamental and applied research. It seems that a Holy Grail of liquid-crystal science has at last been found" (see G.R. Luckhurst, Nature 430, 413
(2004)). In this lecture, I shall present a mean-field model that has the potential to describe such an evanescent phase of matter. More specifically, I show the outcomes of a bifurcation analysis of
the equilibrium equations and I illuminate the complete phase diagram, which exhibits two tricritical points. The predictions of this analysis are also qualitatively confirmed by a Monte Carlo
simulation study. One of the main conclusions is that two order parameters suffice to label all equilibrium phases, though they exhibit different bifurcation patterns.
Rebecca Willett (University of Minnesota) Multiscale photon-limited image analysis
Abstract: Many critical scientific and engineering applications rely upon the accurate reconstruction of spatially or temporally distributed phenomena from photon-limited data. However, a number of
information processing challenges arise routinely in these problems: Sensing is often indirect in nature, such as tomographic projections in medical imaging, resulting in complicated inverse
reconstruction problems. Limited system resources, such as data acquisition time and image storage requirements, lead to complex tradeoffs between communications, sensing and processing. Furthermore,
the measurements are often "noisy" due to low photon counts. In addition, the behavior of the underlying photon intensity functions can be very rich and complex, and consequently difficult to model a
priori. All of these issues combine to make accurate reconstruction a complicated task, involving a myriad of system-level and algorithm tradeoffs. In this talk, I will demonstrate that nonparametric
multiscale reconstruction methods can overcome all the challenges above and provide a theoretical framework for assessing tradeoffs between reconstruction accuracy and system resources. First, the
theory supporting these methods facilitates characterization of fundamental performance limits. Examples include lower bounds on the best achievable error performance in photon-limited image
reconstruction and upper bounds on the data acquisition time required to achieve a target reconstruction accuracy. Second, existing reconstruction methods can often be enhanced with multiscale
techniques, resulting in significant improvements in a number of application domains. Underlying these methods are ideas drawn from the theory of multiscale analysis, statistical learning, nonlinear
approximation theory, and iterative reconstruction algorithms. I will demonstrate the effectiveness of the theory and methods in several important applications, including superresolution imaging and
medical image reconstruction.
Doug Wright (University of Minnesota) Higher order corrections to the KdV approximation for water waves
Abstract: In order to investigate corrections to the common KdV approximation to long waves, we derive modulation equations for the evolution of long wavelength initial data for the water wave and
Boussinesq equations. The equations governing the corrections to the KdV approximation are identical for both systems and are explicitly solvable. We prove estimates showing that they do indeed give
a significantly better approximation than the KdV equation alone. We also present the results of numerical experiments which show that the error estimates we derive for the correction to the
Boussinesq equation are essentially optimal.
Chenyang Xu (Siemens Corporate Research) Medical image segmentation using deformable models
Abstract: In the past four decades, computerized image segmentation has played an increasingly important role in medical imaging. Segmented images are now used routinely in a multitude of different
applications, such as the quantification of tissue volumes, diagnosis, localization of pathology, study of anatomical structure, treatment planning, partial volume correction of functional imaging
data, and computer-assisted surgery. Image segmentation remains a difficult task, however, due to both the tremendous variability of object shapes and the variation in image quality. In particular,
medical images are often corrupted by noise and sampling artifacts, which can cause considerable difficulties when applying classical segmentation techniques such as edge detection and thresholding.
As a result, these techniques either fail completely or require some kind of postprocessing step to remove invalid object boundaries in the segmentation results. To address these difficulties,
deformable models have been extensively studied and widely used in medical image segmentation, with promising results. Deformable models are curves or surfaces defined within an image domain that can
move under the influence of internal forces, which are defined within the curve or surface itself, and external forces, which are computed from the image data. By constraining extracted boundaries to
be smooth and incorporating other prior information about the object shape, deformable models offer robustness to both image noise and boundary gaps and allow integrating boundary elements into a
coherent and consistent mathematical description. Such a boundary description can then be readily used by subsequent applications. Since their introduction 15 years ago, deformable models have grown to
be one of the most active and successful research areas in image segmentation. There are basically two types of deformable models: parametric deformable models and geometric deformable models.
Parametric deformable models represent curves and surfaces explicitly in their parametric forms during deformation. This representation allows direct interaction with the model and can lead to a
compact representation for fast real-time implementation. Adaptation of the model topology, however, such as splitting or merging parts during the deformation, can be difficult using parametric
models. Geometric deformable models, on the other hand, can handle topological changes naturally. These models, based on the theory of curve evolution and the level set method, represent curves and
surfaces implicitly as a level set of a higher-dimensional scalar function. Their parameterizations are computed only after complete deformation, thereby allowing topological adaptivity to be easily
accommodated. Despite this fundamental difference, the underlying principles of both methods are very similar. In this talk, I will present an overall description of the development in deformable
models research and their applications in medical imaging. I will first introduce parametric deformable models, and then describe geometric deformable models. Next, I will present an explicit
mathematical relationship between parametric deformable models and geometric deformable models. Finally, I will present several extensions to these deformable models by various researchers and point
out future research directions.
Structured Estimation in High-Dimensions
Sahand N Negahban
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2012-110
May 11, 2012
High-dimensional statistical inference deals with models in which the number of parameters $p$ is comparable to or larger than the sample size $n$. Since it is usually impossible to obtain
consistent procedures unless $p/n \to 0$, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank
matrices, and combinations thereof. Such structure arises in problems found in compressed sensing, sparse graphical model estimation, and matrix completion. In such settings, a general approach to
estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed
structure. We will present a unified framework for establishing consistency and convergence rates for such regularized $M$-estimators under high-dimensional scaling. We will then show how this
framework can be utilized to re-derive a few existing results and also to obtain a number of new results on consistency and convergence rates, in both $\ell_2$-error and related norms. An equally
important consideration is the computational efficiency in performing inference in the high-dimensional setting. This high-dimensional structure precludes the usual global assumptions---namely,
strong convexity and smoothness conditions---that underlie much of classical optimization analysis. We will discuss ties between the statistical inference problem itself and efficient computational
methods for performing the estimation. In particular, we will show that the same underlying statistical structure can be exploited to prove global geometric convergence of the gradient descent
procedure up to \emph{statistical accuracy}. This analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.
Advisor: Martin Wainwright
BibTeX citation:
@phdthesis{Negahban:EECS-2012-110,
    Author = {Negahban, Sahand N},
    Title = {Structured Estimation in High-Dimensions},
    School = {EECS Department, University of California, Berkeley},
    Year = {2012},
    Month = {May},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-110.html},
    Number = {UCB/EECS-2012-110},
    Abstract = {High-dimensional statistical inference deals with models in which the number of parameters $p$ is comparable to or larger than the sample size $n$. Since it is usually impossible to obtain consistent procedures unless $p/n \to 0$, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices, and combinations thereof. Such structure arises in problems found in compressed sensing, sparse graphical model estimation, and matrix completion. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. We will present a unified framework for establishing consistency and convergence rates for such regularized $M$-estimators under high-dimensional scaling. We will then show how this framework can be utilized to re-derive a few existing results and also to obtain a number of new results on consistency and convergence rates, in both $\ell_2$-error and related norms.
An equally important consideration is the computational efficiency in performing inference in the high-dimensional setting. This high-dimensional structure precludes the usual global assumptions---namely, strong convexity and smoothness conditions---that underlie much of classical optimization analysis. We will discuss ties between the statistical inference problem itself and efficient computational methods for performing the estimation. In particular, we will show that the same underlying statistical structure can be exploited to prove global geometric convergence of the gradient descent procedure up to \emph{statistical accuracy}. This analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.}
}
EndNote citation:
%0 Thesis
%A Negahban, Sahand N
%T Structured Estimation in High-Dimensions
%I EECS Department, University of California, Berkeley
%D 2012
%8 May 11
%@ UCB/EECS-2012-110
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-110.html
%F Negahban:EECS-2012-110
Recovery of Missing Samples with Sparse Approximations
ISRN Signal Processing
Volume 2013 (2013), Article ID 830723, 5 pages
Research Article
School of Engineering, Ruppin Academic Center, 40250 Emek Hefer, Israel
Received 16 July 2013; Accepted 12 September 2013
Academic Editors: S. K. Bhatia and W. Liu
Copyright © 2013 Benjamin G. Salomon. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
In most missing samples problems, the signals are assumed to be bandlimited. That is, the signals are assumed to be sparsely approximated by a known subset of the discrete Fourier transform basis
vectors. We discuss the recovery of missing samples when the signals can be sparsely approximated by an unknown subset of certain unitary basis vectors. We propose the use of the orthogonal matching
pursuit to recover missing samples by sparse approximations.
1. Introduction
Discrete signals are usually represented by their samples taken on a uniform sampling grid. However, in many applications, it may happen that some samples are lost or unavailable. In such cases, it
is required to convert the irregularly sampled signal to a regularly sampled one, that is, to restore the missing samples. One approach for the recovery of missing data in discrete signals is based
on the assumption that the underlying continuous-time signals are bandlimited. The celebrated sampling theorem by Whittaker [1], Kotel'nikov [2], and Shannon [3] implies that any continuous-time
bandlimited signal can be reconstructed from its regularly spaced samples if the sampling frequency is higher than twice the maximum frequency component of the signal. The solution of the
nonuniform sampling problem poses more difficulties and there exists a vast literature dealing with necessary and sufficient conditions for unique reconstruction and methods for reconstructing a
function from its samples, for example, [4–6]. The numerical reconstruction methods, however, have to operate in a finite-dimensional model, whereas the theoretical results are usually derived for
the continuous-time bandlimited functions (an infinite-dimensional subspace). The use of a truncated sampling series results with a finite-dimensional model but may lead, however, to severely
ill-posed numerical problems [7].
Another approach is to address this problem in a finite-dimensional model of discrete bandlimited signals. A discrete bandlimited (DBL) signal has a sparse representation in terms of a certain
unitary basis (e.g., the discrete Fourier transform). That is, the signal can be represented by only a known subset of the unitary basis vectors. The recovery of missing samples is, in this case,
equivalent to solving a linear system of equations [8]. In this paper, we focus on the recovery problem, when the a priori knowledge is that the signal is sparsely represented by an unknown subset of
certain unitary basis vectors. This problem is much harder to solve, and there is an infinite number of possible solutions. Our approach is to choose the sparsest solution. That is, we are interested
in approximating the known samples with a minimum number of basis vectors. Standard methods for solving linear systems of equations cannot provide the sparsest solution. We suggest the use of the
orthogonal matching pursuit algorithm [9–11] for determining the sparsest approximation to the given samples from which the unknown samples can be determined.
2. Recovery of Missing Samples of Discrete Bandlimited Signals
Let be a discrete signal with samples. That is, can be described by an -dimensional real vector whose elements are denoted by . These elements correspond to the samples of the signal. Let be an
orthonormal transform matrix; that is, , where is the identity matrix and is the transpose of . The orthonormal matrix defines an orthonormal transform of the vector , denoted by , which is by
definition the signal . The inverse transform of is by definition the signal . In the complex case, the vectors belong to the space (space of -dimensional complex vectors), and the conjugate
transpose of the unitary matrix has to be taken when computing the inverse transform.
The columns of define an orthonormal basis of . That is, each signal can be represented by a linear combination of the columns of , where the coefficients in this linear combination are given by the
transform coefficients . Let () be the rows of (i.e., the columns of the matrix ). It follows that for each
Let be a -size proper subset of and assume that only the samples , , are known. Consequently, the available signal samples define systems of equations for the transform coefficients: The recovery of
missing samples from an arbitrary signal is of course impossible: there is an infinite number of solutions to the underdetermined system of equations (2). The signals that occur in applications,
however, often satisfy known constraints. The constraint used in this paper is called discrete band-limitedness: the signal can be represented by only a known subset of the columns of . That is, the
signal is a linear combination of a certain subset of the orthonormal basis vectors of .
Let be a -size proper subset of , where . A discrete bandlimited signal approximation to the signal is obtained by That is, we assume that all transform coefficients with indices are equal to zero.
For example, let be the discrete Fourier transform (DFT) matrix. That is, is the unitary matrix with elements ( is the imaginary unit) A signal is bandlimited if its DFT vanishes on some fixed
nonempty proper subset of . The complement of is the subset . When the subset is the set of elements (i.e., ) or equivalently (by periodicity) the signal is called a low-pass signal in DFT domain.
Another example is the discrete cosine transform. Let be the discrete cosine transform (DCT) matrix. That is, is the orthonormal matrix with elements A signal is bandlimited if its DCT vanishes on
some fixed nonempty proper subset of . When the complement of (i.e., subset ) is the set of elements the signal is called a low-pass signal in DCT domain.
Assuming that is a discrete bandlimited signal (i.e., , ), the available signal samples, , , can now be expressed as Equation (9) is a linear system consisting of equations with unknowns. The
existence and uniqueness of the solution depend, for any transform, on the subsets and (see, e.g., [8] for a discussion on the recovery of samples when the signals are bandlimited in the discrete
Fourier transform domain). When the system of equations (9) has a full column rank and , we can determine the unique exact solution. When the system has a full column rank and (overdetermined system
of equations), we will be interested in the least squares solution.
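A short numerical sketch of this known-band recovery (illustrative code, not the paper's implementation; the DCT band K and the missing-sample range below are chosen to mirror the example in Section 3, not taken from the paper's data):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix; its rows are the transform vectors."""
    k = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    Phi = np.cos(np.pi * (2 * m + 1) * k / (2 * N))
    Phi[0, :] *= np.sqrt(1.0 / N)
    Phi[1:, :] *= np.sqrt(2.0 / N)
    return Phi

def recover(x_known, S, K, N):
    """Least-squares fit of the in-band coefficients to the known samples."""
    B = dct_matrix(N).T[:, K]                 # in-band basis vectors (columns)
    aK, *_ = np.linalg.lstsq(B[S, :], x_known, rcond=None)
    return B @ aK                             # full-length reconstruction

# Demo: a 128-sample signal built from DCT basis vectors 3 and 17,
# with 30 contiguous samples (9..38) missing.
N, K = 128, [3, 17]
x = dct_matrix(N).T[:, K] @ np.array([1.0, -0.5])
S = np.r_[0:9, 39:N]                          # indices of the known samples
x_hat = recover(x[S], S, K, N)
print(np.max(np.abs(x_hat - x)))              # essentially zero: exact recovery
```

Because the restricted system has full column rank here, the least-squares solve returns the exact in-band coefficients and the missing samples are recovered exactly.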
3. Sparse Approximations
In the previous section, we discussed the recovery of missing samples, when the band-limitedness was known a priori. That is, we had the a priori knowledge of which transform coefficients are equal
to zero. The recovery of the missing samples is, in this case, equivalent to solving a linear system of equations for which many algorithms can be used. If we know that the signal is a low-pass
bandlimited signal (e.g., in DFT or DCT domain), but the bandwidth is not given, we may start with a low bandwidth model and solve the resulting linear system of equations for increasing bandwidths
until a sufficient accuracy in approximating the given samples is obtained. If the only a priori knowledge is that the signal is sparse in terms of the basis vectors, that is, the signal can be
represented (or accurately approximated) with a few unknown basis vectors, the situation is very different. In this case, we also have to determine which basis vectors sparsely approximate the given samples. We have to solve the underdetermined system of equations (2): where is the -size subset of indices of known samples. The underdetermined system of equations (2) has an infinite number of solutions.
Since we know that the signal is sparsely approximated by the basis vectors, one reasonable approach is to choose the sparsest solution among the infinite number of solutions. That is, the optimal
approximation is defined as either the sparsest approximation (i.e., the one with the fewest basis vectors) that yields an approximation error smaller than a prespecified threshold, or the approximation using a fixed number of basis vectors that minimizes the approximation error. Finding these approximations is an NP-hard problem [12, 13].
We will use the following notations. Let be a signal of samples that contains the given samples; that is, , , . Let , , be the columns of the matrix . For each , let be a unit-norm vector of samples,
where , , , and the norm is the standard norm in : . The set of unit-norm vectors will be called a dictionary. The problem is to determine a sparse approximation to the vector with the dictionary
vectors , .
After determining a sparse solution using dictionary vectors: where the original signal will be approximated by , from which the missing samples can be determined.
Several algorithms have been proposed for reducing the computational complexity by searching for efficient but nonoptimal approximations. The matching pursuit (MP) [10, 11] is a popular iterative
greedy algorithm for approximate decomposition that addresses the sparsity issue directly. Vectors are selected one by one from the dictionary, picking at each iteration the vector that best
correlates with the present residual, and thus optimizing the signal approximation (in terms of energy) at each iteration. An intrinsic feature of this algorithm is that when stopped after a few
steps, it yields an approximation using only a few dictionary vectors.
We consider the space of real-valued signals of size . Let , , be a dictionary of vectors, each having unit norm. Let be the residual of an term approximation of a given signal . MP decomposes the residue by projecting it on the dictionary vector that matches it best. Starting from an initial approximation and residue , the MP algorithm builds up a sequence of sparse approximations stepwise. MP begins by projecting on a vector and computing the residue : where is the residual vector after best approximating ( norm sense) with . Since is orthogonal to , it follows that to minimize , we must choose such that is maximum. The process is iterated on the residual. Suppose that for is already computed. The next iteration chooses such that For unit-norm vectors, the last condition is equivalent to That is, the optimal vector is the one which best correlates with the residual. The MP projects on the chosen vector: The orthogonality of and implies Summing (18) for between 0 and (for any integer ) yields Similarly, summation of (19) for between 0 and gives When dealing with finite-dimensional signals (as in our case), converges exponentially to 0 as tends to infinity [10].
The approximations of MP are improved by orthogonalizing the directions of projection with a Gram-Schmidt procedure [9]. The resulting orthogonal MP (OMP) converges with a finite number of steps,
which is not the case for a standard MP. The vector selected by the MP is a priori not orthogonal to the previously selected vectors , . After subtracting the projection of over , new components are
introduced in the direction of , . This is avoided by projecting the residues on an orthonormal set of vectors , , that span the same subspace that is spanned by , . That is, in each iteration, we
determine the best approximation of the residual in the subspace spanned by , . When an approximation of sufficient accuracy is obtained, to expand over the original dictionary vectors , , we perform
a change of basis by expanding in , .
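A minimal sketch of the selection-plus-reprojection loop (illustrative code, not the paper's implementation; the dictionary here is a random orthonormal basis restricted to the known rows, and the indices and amplitudes are made up):

```python
import numpy as np

def omp(D, y, n_atoms, tol=1e-10):
    """Greedy OMP: pick the dictionary column best correlated with the
    residual, then re-fit all selected columns by least squares."""
    support, coef, r = [], np.zeros(0), y.copy()
    for _ in range(n_atoms):
        if np.linalg.norm(r) < tol:
            break
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    return support, coef

# Demo: a 2-sparse signal in a random orthonormal basis, observed on
# 98 of 128 coordinates (samples 9..38 missing, echoing the text).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(128, 128)))
S = np.r_[0:9, 39:128]                        # indices of known samples
a = np.zeros(128); a[[3, 17]] = [1.0, -0.5]   # unknown sparse coefficients
norms = np.linalg.norm(Q[S, :], axis=0)
D = Q[S, :] / norms                           # unit-norm dictionary vectors
support, coef = omp(D, Q[S, :] @ a, n_atoms=2)
print(sorted(support))                        # the two active basis indices
```

Stopping after the right number of atoms, the selected support identifies the active basis vectors, and the least-squares coefficients then determine the missing samples.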
We demonstrate the applicability of the approach by example. The original 128-sample signal is a linear combination of two basis vectors of the DCT with randomly chosen coefficients: . The
given signal is a noisy version of the original signal with additive white Gaussian noise with standard deviation equal to 0.05. We assume that 30 contiguous samples (9 to 38) of the noisy signal are
missing. The results of the OMP are depicted in Figure 1, and we can see that the missing samples were approximately restored. The norm of the missing samples is 0.4339, while the norm of the error
is 0.0315. If the samples of the original exact DBL signal are given, the missing 30 samples are completely recovered by the OMP procedure.
4. Conclusions
In this paper, we addressed the recovery of missing samples of discrete bandlimited signals. We have suggested the use of the orthogonal matching pursuit to recover missing samples by sparse
approximations, when the signal can be sparsely represented by an unknown subset of basis vectors of a certain orthonormal transform.
1. E. T. Whittaker, “On the functions which are represented by the expansions of the interpolation theory,” in Proceedings of the Royal Society of Edinburgh, vol. 35, pp. 181–194, 1915.
2. V. A. Kotel’nikov, “On the carrying capacity of “Ether” and wire in telecommunications,” in Material for the First All-Union Conference on Questions of Communications, Svyazi RKKA, Moscow, Russia, 1933.
3. C. E. Shannon, “Communication in the presence of noise,” in Proceedings of the Institution of Radio Engineers, vol. 37, no. 1, pp. 10–21, 1949.
4. H. G. Feichtinger and K. H. Grochenig, “Theory and practice of irregular sampling,” in Wavelets: Mathematics and Applications, J. Benedetto and M. Frazier, Eds., vol. 1994, pp. 305–363, CRC
Press, Boca Raton, Fla, USA.
5. H. J. Landau, “Necessary density conditions for sampling and interpolation of certain entire functions,” Acta Mathematica, vol. 117, no. 1, pp. 37–52, 1967.
6. J. L. Yen, “On nonuniform sampling of bandwidth-limited signals,” IRE Transactions on Circuit Theory, vol. 3, no. 4, pp. 251–257, 1956.
7. T. Strohmer, “Numerical analysis of the non-uniform sampling problem,” Journal of Computational and Applied Mathematics, vol. 122, no. 1, pp. 297–316, 2000.
8. P. J. S. G. Ferreira, “Iterative and noniterative recovery of missing samples for 1-D band-limited signals,” in Nonuniform Sampling: Theory and Practice, F. Marvasti, Ed., pp. 235–278, Kluwer
Academic/Plenum, New York, NY, USA, 2001.
9. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proceedings of the 27th Asilomar Conference on Signals, Systems & Computers, pp. 40–44, Pacific Grove, Calif, USA, November 1993.
10. S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415, 1993.
11. S. Mallat, A Wavelet Tour of Signal Processing, 2nd Edition, Academic Press, New York, NY, USA, 1999.
12. G. Davis, S. Mallat, and M. Avellaneda, “Adaptive greedy approximations,” Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.
13. B. K. Natarajan, “Sparse approximate solutions to linear systems,” SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
Graphing Complex Functions
June 15, 2009 by richbeveridge
In graphing real valued functions, each x value chosen is a real number, and each corresponding y value is also a real number.
Because both the x and y values are one-dimensional real numbers, the relationship can be graphed on a plane, showing the x and y values together only requires TWO dimensions.
We can use the graph of the function relationship between x and y values to solve equations.
To solve the quadratic equation
0= x²+5x+3
we can graph the function
and look to see what value(s) of x make the y be zero.
When we graph the function we look for the x values where the graph crosses the x-axis. This is because the value of y is zero along the x-axis.
So the values of x that make y be zero in the graph of (y=x²+5x+3) are approximately x≈-0.70 and x≈-4.30
Frequently, we see quadratic (parabolic) graphs that don’t intersect the x-axis at all:
In these cases, we don’t see any values of x that make y be zero. In the case of the graph above (y=x²+5x+9) all the x values we see along the x-axis make y a positive number.
Does this mean that there aren’t any x values that make y be zero?
No. If we use the quadratic formula, we can find that there are complex-valued roots that are solutions of the equation
In this case, (x≈-2.5±1.658i) are the complex values of x that make y zero.
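A quick numerical check of both quadratics (numpy's root finder returns the complex pair automatically):

```python
import numpy as np

print(np.roots([1, 5, 3]))   # real roots of x^2+5x+3: about -4.303 and -0.697
print(np.roots([1, 5, 9]))   # complex roots of x^2+5x+9: -2.5 ± 1.658i
```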
If we could see x values on the Complex Plane, then we would see these roots on the graph, but, as I mentioned in a previous post, this creates some difficulties.
The picture from Wikipedia that I posted recently is a graph of a different function from the ones above.
Even though it’s not the same graph, it does show one method to try to get around the difficulties of graphing relationships in the Complex Plane.
The picture
shows the Complex Plane of all x values. The y values are interpreted by the color and intensity of each x value on the Complex Plane. The roots (and asymptotes) for the function in this picture
are indicated by the points or holes or peaks (however you want to think of them) that we see in the picture.
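One common recipe for such pictures ("domain coloring") maps the argument of y to hue and its magnitude to brightness, so roots appear as dark points where all the hues meet. Here is a generic sketch of that idea (not necessarily the exact scheme used in the Wikipedia image):

```python
import numpy as np

def domain_color(f, extent=3.0, n=400):
    """Domain coloring: hue encodes arg(f(z)), brightness encodes |f(z)|,
    so zeros show up as dark spots where all hues meet."""
    t = np.linspace(-extent, extent, n)
    z = t[None, :] + 1j * t[:, None]            # grid of complex x values
    w = f(z)
    hue = (np.angle(w) / (2 * np.pi)) % 1.0     # phase -> [0, 1)
    val = 1.0 - 1.0 / (1.0 + np.abs(w) ** 0.3)  # magnitude -> [0, 1)
    return np.dstack([hue, np.ones_like(hue), val])  # HSV image

hsv = domain_color(lambda z: z ** 2 + 5 * z + 9)
# matplotlib.colors.hsv_to_rgb(hsv) turns this into a displayable image
```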
I haven’t done enough research with this method of graphing to know all the details of how it is colored and how to tell the roots from the asymptotes, but I find it both visually and mathematically appealing.
International Olympiad, Day 1
The problems for Day 1 of the IOI have been posted (in many languages). Exciting! Here are the executive summaries. (Though I am on the International Scientific Committee (ISC), I do not know all the problems, so my apologies if I'm misrepresenting some of them. In particular, the running times may easily be wrong, since I'm trying to solve the problems as I write this post.)
The usual rules apply: if you're the first to post a solution in the comments, you get eternal glory. (If you post anonymously, claim your glory some time during the next eternity.)
Problem Fountain. Consider an undirected graph with costs (all costs are distinct) and the following traversal procedure. If we arrived at node v on edge e, continue on the cheapest edge different
from e. If the node has degree 1, go back on e. A walk following these rules is called a "valid walk."
You also have a prescribed target node, and an integer k. Count the number of valid walks of exactly k edges which end at the prescribed target node. Running time O~(n+m) ["O~" means "drop logs"]
Problem Race. You have a tree with integer weights on the edges. Find a path with as few edges as possible and total length exactly X. Running time O(n lg n) [log^2 may be easier.]
Problem RiceHub. N rice fields are placed at integer coordinates on the line. The rice must be gathered at some hub, which must also be at an integer coordinate on the line (to be determined). Each
field produces one truck-load of rice, and driving the truck a distance d costs exactly d. You have a hard budget constraint of B. Find the best location for the hub, maximizing the amount of rice
that can be gathered in the budget constraint.
Running time: O~(N).
8 comments:
Take some edge e, with endpoints a and b; the desired path either passes through that edge (in which case we need one path rooted at a and one rooted at b such that their length is exactly X -
len(e) (*)) or doesn't pass through e (in which case we can solve the problem recursively for the 2 trees that are left, when e is removed).
(*) This is essentially checking whether there are indices i and j, such that A[i] + B[j] = const, which can be solved in O(N) with hashtable.
Choosing such an edge e that separates the the tree into two trees of almost equal sizes leads to O(N log N) divide and conquer solution.
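The step marked (*) — checking whether some A[i] + B[j] equals a constant for two lists of path lengths — is the classic one-pass hash-set test (a sketch, with illustrative data):

```python
def has_pair_with_sum(A, B, X):
    """True iff some a in A and b in B satisfy a + b == X.
    One hash-set pass: O(len(A) + len(B)) expected time."""
    seen = set(A)
    return any(X - b in seen for b in B)

print(has_pair_with_sum([1, 4, 6], [2, 3, 9], 7))   # True: 4 + 3
```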
Eternal glory goes to Anton Anastasov. Congrats! :)
I just want to add a minor detail, that is one can not necessarily partition the tree into two balanced trees by removing an edge (imagine a tree with a vertex of degree n-1 and all other nodes
of degree 1). You can do this by removing a vertex, but then you may have consider more than 2 subtrees where your two paths may be coming from.
Anon, thanks for pointing this out. I still think Anton gets the kudos since the idea is essentially the same.
I just wish to point out, as an addendum to what Anton said, that no hashtable is necessary, just counting sort and merging.
I agree.
Now the first problem. For each directed edge e compute the next edge in the walk after edge e, Next[0][e].
The super simple solution:
For each i between 1 and the greatest power of 2 less than or equal to k, compute Next[i][e] = Next[i-1][Next[i-1][e]]. Now it's easy to compute in O(log k) time the kth edge in the walk assuming
we start at an edge e. We compute this for the first edge in the walk that starts at every given vertex and verify if the final vertex is the target node. Running time O((n+m) log k).
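The doubling table here is standard binary lifting; a compact sketch of following k steps in a functional graph (names illustrative):

```python
def kth_successor(next0, starts, k):
    """next0[v] = the successor of node v.  Returns, for each node in
    starts, the node reached after exactly k steps (O(n log k) total)."""
    n = len(next0)
    jump = list(next0)        # jump[v] = node reached from v in 2^i steps
    pos = list(starts)
    while k:
        if k & 1:
            pos = [jump[v] for v in pos]
        jump = [jump[jump[v]] for v in range(n)]  # double the step size
        k >>= 1
    return pos

nxt = [1, 2, 0, 0]                  # a 3-cycle 0->1->2->0 with a tail 3->0
print(kth_successor(nxt, [3], 7))   # [0]
```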
The better solution:
Look at the directed graph where vertices are the edges of the initial graph, and edges consist of pairs (Next[0][e], e). Since all "vertices" have in-degree 1, our graph looks like a set of
cycles with trees attached to them.
Each starting vertex corresponds to a single "vertex" in our new graph since that's the first edge on the walk starting at that vertex. For each possible ending vertex (a directed edge uv in the
initial graph, where v is the target node) we want to see which starting vertices can be reached in k-1 steps. Due to the structure of our graph, this can only be done by first covering for a
number of times the cycle that this ending vertex is part of (which may consist of 0 edges), and then going straight to the desired starting vertex. This will then require
x*length_of_cycle_containing_ending_vertex + length_of_path_from_ending_vertex_to_starting_vertex, for any nonnegative integer x. We can compute the length of the cycle and the distances to all
other vertices in O(m) time with a breadth first search.
So the problem reduces to solving an equation of the form ax+b=k-1 for every possible starting vertex.
But there are several possible ending vertices. Again, looking at the structure, we see that there can be at most a constant number of such vertices per cycle, so the total running time is
Very nice, Anon!
A small variation on your Solution #1 gets O(nlg k + m), which is enough for full score in the IOI.
The third problem:
For a given set of k points, the point that minimises the sum of distances to it is the median, at position k/2 or k/2 + 1.
So you can do a solution that involves 2 moving pointers, where the first pointer i goes through each point and the second pointer is the rightmost point j for which the x[i..j] has a valid cost.
BTW I've heard the problem of finding a downwards path in a tree of sum S from a Microsoft interview last year, so at least that subproblem is pretty well known.
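That two-pointer idea can be sketched with prefix sums (illustrative code; field coordinates assumed sorted, and the window cost is measured to the window's median):

```python
def max_rice(xs, B):
    """xs: sorted field coordinates, B: budget.  Returns the largest
    number of fields in a window whose total haul cost to the window's
    median fits in B (two moving pointers + prefix sums)."""
    pre = [0]
    for x in xs:
        pre.append(pre[-1] + x)

    def cost(i, j):               # cost of hauling xs[i..j] to xs[(i+j)//2]
        m = (i + j) // 2
        left = xs[m] * (m - i + 1) - (pre[m + 1] - pre[i])
        right = (pre[j + 1] - pre[m + 1]) - xs[m] * (j - m)
        return left + right

    best, i = 0, 0
    for j in range(len(xs)):
        while cost(i, j) > B:     # shrink window from the left
            i += 1
        best = max(best, j - i + 1)
    return best

print(max_rice([1, 2, 10, 12, 14], 6))   # 3: fields at 10, 12, 14 cost 4
```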
Queries in the first problem can be answered in O( log N ).
After making the cycle graph there are at most two important nodes ( let's call them P and P' ). For each node you can easily calculate the distance from P and P', these being d( P ) and d( P' )
Now let count( n, k ) equal the number of nodes leading to n in exactly k steps. Our solution becomes count( P, k ) + count( P', k ).
We can assume in count( n, k ) that n is a part of a cycle, as it is trivial to calculate if it were not. So let len( n ) be the length of the cycle containing n.
Now you can precompute ( for P and P' ) some vectors... Let v[0][x] be the vector of distances from P that give x modulo len( P ), and v[1][x] the same vector for P'.
Now let's say we're given a distance k, and that m0 = ( k mod len( P ) ).
We're looking for all the nodes whose distances give m0 modulo len( P ), but are also greater than or equal to k.
This is easily done by a binary search on the vector v[0][m0]...
I coded this during the contest cause I didn't see that the number of queries is equal to 2000 in the largest subtask...
I should also mention that a similar complexity can be achieved for queries where P changes too, so the problem had a much bigger potential :)
Expected number of bounces
November 29th 2009, 10:22 AM #1
Say we have the following scenario:
A ball is bouncing back and forth between two parallel walls, placed at such a distance from each other that the ball (which always moves parallel to the normal of the walls) has to travel one meter to get from one wall to the other. In each bounce the ball gets a new speed from the wall it is bouncing on, uniformly distributed between 0 and 1 meter per second. The velocity is of course directed toward the other wall from the one where the ball is located.
At time $t_0$ the ball bounces against one of the walls. What is the expected number of bounces the ball will make after $t_0$ if the ball is stopped after ten seconds?
Let N be the number of bounces.
Then N is such that $X_1+\dots+X_{N-1}<10$ and $X_1+\dots+X_N>10$, where $X_i$ denotes the speed at which the ball travelled before the i-th bounce.
(In particular, $X_i$ are iid and following a uniform distribution over (0,1))
Does it help ? I must say I don't have much time, and haven't finished the problem yet, so I hope you can do some things with that.
Last edited by Moo; November 30th 2009 at 01:29 PM.
Sorry, I edited. I hope it's clearer...
I have made some work with this problem, and this is what I have found.
We will first define the function E(t), as the expected number of bounces after t seconds. Of course, $E(t) = 0,\text{ if }t \leq 1,$ since 1 second is the least amount of time the ball can take
to travel from one wall to the other. But what about if t > 1?
Suppose that we call the speed of the ball s. The time $\Delta t$ it takes for the ball to travel between the two walls is 1/s. If $\Delta t < t$, then the ball will reach the other wall and
bounce on it. $\Delta t < t$ in turn yields that s > 1/t. If s < 1/t, the ball will not bounce even once. But if s is between 1/t and 1, it will bounce, and the expected number of bounces after
that bounce will be $E(t-\Delta t)$ (since $t-\Delta t$ is the time the ball has left), making the expected total number bounces $1+E(t-\Delta t)=1+E(t-1/s)$.
Let's define the functin E(t, s) to be the expected number of bounces after t seconds, if we know that the initial speed will be s. We then have:
$E(t,s)=\left\{\begin{array}{ll} 0,&\text{if }t<1\text{ or }s < 1/t\\ 1+E(t-1/s),&\text{otherwise} \end{array}\right.$
Also, since s is uniformly distributed between 0 and 1, we can conclude that $E(t)=\int_0^1 E(t,s)ds,$ which gives (by substitution):
$E(t)=\left\{\begin{array}{ll} 0,&\text{if }t<1\\ \displaystyle{\int_{1/t}^1 (1+E(t-1/s))ds},&\text{otherwise} \end{array}\right.$
which can be rewritten as
$E(t)=\left\{\begin{array}{ll} 0,&\text{if }t<1\\ \displaystyle{1-\frac{1}{t}+\int_{1/t}^1 E(t-1/s)ds},&\text{otherwise} \end{array}\right.$
This is not a very nice function. Although E will be continuous, its derivative won't be continuous after t=1, its second derivative won't be continuous after t=2, and so on. The best shot is probably to approximate the function numerically. It may also be possible to solve it separately for $n < t \leq n+1$, and iterate n over the natural numbers. I doubt it works very well for large n though, since each expression builds on all n-1 previous expressions for E, and will probably look very bad after just a few iterations.
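A direct Monte Carlo simulation of the process is also a useful cross-check on any numerical approximation of E(t) (a sketch; the trial count and seed are arbitrary):

```python
import random

def mean_bounces(t=10.0, trials=50_000, seed=1):
    """Simulate the bouncing ball: draw a speed per bounce, count
    crossings completed within t seconds, average over many trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        remaining, bounces = t, 0
        while True:
            s = 1.0 - rng.random()     # speed ~ Uniform(0, 1], avoids s == 0
            if 1.0 / s > remaining:    # each crossing takes 1/s seconds
                break
            remaining -= 1.0 / s
            bounces += 1
        total += bounces
    return total / trials

print(mean_bounces())   # Monte Carlo estimate of E(10)
```

Since every crossing takes at least one second, the count per trial can never exceed ten.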
One of the first things you should check when you have put up an equation, and you are not really sure whether it holds or not, is whether the two sides have matching units. In your case, $X_i$ represents a speed [m/s] while the number 10 represents a time interval [s]; you therefore have a unit mismatch and you can't compare the two sides.
ITSIM Simulations
Submitted by admin on Wed, 05/06/2009 - 7:27pm
1. Equation for Inner and Outer electrodes
Makarov, Anal. Chem., 2000, 72, 1156
subscripts 1 and 2 mean inner and outer electrodes, respectively
R1 = 4 mm = max. radius for inner electrode
R2 = 10 mm = max. radius for outer electrode
Rm = characteristic radius; radial trapping for r < Rm
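The electrode equation from [Makarov 2000] is commonly quoted as the equipotential z²(r) = r²/2 − R²/2 + R_m² ln(R/r), with R = R1 or R2; the sketch below just evaluates that profile. Note the R_m value here is an assumed placeholder (it is not listed above), and the formula should be checked against the paper:

```python
import math

def z_profile(r, R, Rm):
    """Half-height z(r) of the electrode passing through (R, 0), taken as
    an equipotential of the quadro-logarithmic field:
    z^2 = r^2/2 - R^2/2 + Rm^2 * ln(R/r)."""
    return math.sqrt(r ** 2 / 2 - R ** 2 / 2 + Rm ** 2 * math.log(R / r))

R1, R2 = 4.0, 10.0    # mm, from the parameters above
Rm = 12.0             # mm -- guessed placeholder, not from this page
print(z_profile(3.5, R1, Rm))   # inner-electrode half-height at r = 3.5 mm
print(z_profile(9.0, R2, Rm))   # outer-electrode half-height at r = 9.0 mm
```

By construction z(R) = 0 at the equator, and z grows toward the spindle tip (inner electrode) and barrel ends (outer electrode).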
2. Generate 2D solution using AutoCAD (Autodesk)
• rotational symmetry: 2D solution in r-z plane (r > 0)
• electrodes truncated at z = ±11 mm (exact position proprietary, unknown to us)
• outer electrode split at equator (z = 0 plane)
• ion injection slot not modeled in this study
G. Wu, R. J. Noll, W. R. Plass, Q. Hu, R. H. Perry, R. G. Cooks, Int. J. Mass Spectrom., 254, 1-2, 2006, 53-62.
3. Solve for Electric Field in COMSOL
4. Validate against analytical solution
The Electric field was solved a second time for an electrode geometry without truncations or a gap in the outer electrode, for the purpose of validating against the analytical solution (above).
For this numerical field, field values along the section Er(z = 0) deviated from the analytical solution by no more than 0.5%. Along the section Ez(r= 7 mm), field values deviated by no more than
5. Import into ITSIM
• Convert to field array file
• Sample along rectangular r,z grid
• 106 points
• Import into ITSIM 6.0, using
• home-written program CreatPot
Reply to comment
Submitted by Anonymous on February 21, 2011.
Though I am not a good mathematician, to me "to be a mathematician is the best feeling, like we feel the beauty of nature but can't explain it to the extent which nature deserves". So being a mathematician you are as close to nature as mathematics is, because "nature speaks the language of mathematics". The connections between the different ideas and the mathematical ideas are so close that it's very difficult to categorize the field.
Posts about fun on SuperFly Physics
Category Archives: fun
About a month ago, I had an extraordinary experience: It was Bill Nye standing on me while I laid on a nail bed. Lots of fun, for sure, and I pointed out to the audience that it was the one …
Continue reading
Rhett Allain’s post about a human running around a loop has really got me (and him!) thinking (click through to see the video). I wondered if there was a more sophisticated way to do the calculation
for the minimum speed … Continue reading
A long time ago, Dan Meyer took to the twitter-sphere with a question: "When do the three clock hands form three 120-degree angles?" Fun problem from Bowen Kerins. bit.ly/iKCZSz— Dan Meyer (@ddmeyer)
June 30, 2011 At the time I thought … Continue reading
This week I was inspired by this intriguing post by my friend Ian Beatty. He talks about what it might be like to use a test-driven development process for teaching. Here’s the short version of what
I got from that: … Continue reading
This week will mark what I think is the 10th anniversary of the Hamline University Physics Talent Show. This is held after our Malmstrom Speaker dinner where we host a big name physics researcher who
gives a public talk the … Continue reading
There have been some interesting things on the interwebs, lately, about rotation, gyroscopes, precession, and helicopters (all of it brought to my attention, or literally done by, Derek Muller of Veritasium fame). It got me thinking about the modeling I’ve done … Continue reading
My son recently bought a 20-sided “spin down” die. Here’s a similar one: It’s really useful in the game “Magic The Gathering” because you use it to keep track of your life points. My other son and I,
however, were … Continue reading
Friday’s XKCD comic was about Pink Floyd’s Dark Side of the Moon album cover: I just want to pick on the optics part. Mind you, I think the humor is great! So what’s wrong? It looks like the lens
would … Continue reading
Last night a tweep of mine posted this: How many ways can I rearrange the letters ABCDEFGH so that no letter is in its original position? What a deceptively simple question!— R. Wright (@r_w_wright)
September 01, 2011 I thought about … Continue reading
I don’t like nuts. I really don’t like them in brownies. What I want is a way to tell if a particular brownie has nuts in it just by looking at it. That’s what this post is all about. If … Continue reading
Access Violation?
Hey guys, so I'm trying to write a program that will calculate all the primes up to n using the Sieve of Eratosthenes algorithm. I've written the code, and it compiles with no errors or warnings. However, when I try to run it the program closes. After doing some debugging I found that the source of the problem is a method call (I'll state which method later), and a message in Visual Studio 2012 pops up saying Access Violation. However, the method I'm calling is set to public, so I don't really understand what's happening. Any guidance would be appreciated!
I believe the error is occurring within the int sieveEratosthenes(LList* p, int lstSize) function, and it's specifically occurring at the node->setPrime(false); call.
//Class Declaration
class ListNode
{
    friend class LList;
public:
    ListNode(int item = 0, bool pVal = false, ListNode* link = nullptr);
    int getItem() { return item_; }
    bool getPrime() { return prime_; }
    void setPrime(bool p) { prime_ = p; }
    ListNode* getNext() { return link_; }
private:
    int item_;
    bool prime_;
    ListNode *link_;
};

int sieveEratosthenes(LList* p, int lstSize)
{
    ListNode* node;
    bool pTest = false;

    //Populate the List
    for(int j = 2; j < lstSize; j++) //Starts filling the list at 2
    {
        p->append(j, true); //Values are not considered prime until tested
        cout << j << " has been added to the list." << endl;
    }

    for(int i = 2; i < lstSize; i++)
    {
        node = p->_find(i - 2); //i - 2 sets position in list equal to value being evaluated
        pTest = node->getPrime();

        if(pTest) //if node value is prime
        {
            for(int j = 2 * i; j <= lstSize; j += i)
            {
                node = p->_find(j - 2);
                node->setPrime(false); //the reported crash site
            }
        }
    }
    return 0;
} | {"url":"http://www.dreamincode.net/forums/topic/309802-access-violation/","timestamp":"2014-04-21T09:26:14Z","content_type":null,"content_length":"130681","record_id":"<urn:uuid:67269419-8d32-4cbe-864c-7289de597d4d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
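Nothing in the thread shows _find, but a likely culprit (assuming _find(pos) returns nullptr for an out-of-range position): the list holds the values 2 .. lstSize-1 at positions 0 .. lstSize-3, so when the inner loop reaches j == lstSize the call p->_find(j - 2) asks for position lstSize - 2, one past the last node, and setPrime is then called through a bad pointer. Tightening the bound to j < lstSize avoids that. A standalone sketch of the same sieve over a plain vector (the names are mine, not from the poster's LList class):

```cpp
#include <vector>

// Standalone sieve over the values 2 .. lstSize-1, mirroring the list layout
// in the post (value v sits at position v - 2). Note the inner bound
// j < lstSize: with j <= lstSize the final pass would touch one element
// past the end, which is the same off-by-one the linked-list version hits.
std::vector<int> sievePrimes(int lstSize) {
    if (lstSize < 3) return {};
    std::vector<bool> prime(lstSize, true);   // prime[v] flags the value v
    for (int i = 2; i < lstSize; ++i) {
        if (!prime[i]) continue;              // already crossed out: composite
        for (int j = 2 * i; j < lstSize; j += i)
            prime[j] = false;                 // cross out the multiples of i
    }
    std::vector<int> out;
    for (int v = 2; v < lstSize; ++v)
        if (prime[v]) out.push_back(v);
    return out;
}
```

With lstSize = 30 this produces 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.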
Renton Trigonometry Tutor
Find a Renton Trigonometry Tutor
...I taught an English immersion course for Korean 6th graders who came to the US for one month. They were able to do short plays and read their compositions in class by the end of their course.
I also taught English Composition overseas to students who had English as a second language.
39 Subjects: including trigonometry, English, reading, writing
...I do stress proper warmups and warmdowns as well as voice strengthening exercises. I also want my students to know how to read notes. In addition, we work on ear training to ensure pitch is
right where it should be.
46 Subjects: including trigonometry, English, reading, chemistry
...I have significant experience teaching both the old and the new GRE. I've worked with students on Verbal, Quantitative, and the Writing assessment. My students have ranged from those who
haven't done math in years to those who are shooting for a near perfect score.
32 Subjects: including trigonometry, English, reading, writing
...Differential equations are one of the multitude of basic tools that engineers, scientists use. I've had formal training in my formal education as well as practical experience in the many
techniques that are employed to solve today's real-life problems. I've been using Macintosh computers for my engineering/science work for the past 25 years.
45 Subjects: including trigonometry, chemistry, physics, calculus
...I have been tutoring on campus for nearly two years as well, in both mathematics and logic. At the college level, I have taken both a symbolic logic course and a mathematical proofs course. I
have been a tutor for symbolic logic for nearly two years on the college campus.
10 Subjects: including trigonometry, physics, calculus, algebra 2
Nearby Cities With trigonometry Tutor
Auburn, WA trigonometry Tutors
Bellevue, WA trigonometry Tutors
Burien, WA trigonometry Tutors
Des Moines, WA trigonometry Tutors
Federal Way trigonometry Tutors
Issaquah trigonometry Tutors
Kent, WA trigonometry Tutors
Kirkland, WA trigonometry Tutors
Newcastle, WA trigonometry Tutors
Puyallup trigonometry Tutors
Redmond, WA trigonometry Tutors
Seatac, WA trigonometry Tutors
Seattle trigonometry Tutors
Tacoma trigonometry Tutors
Tukwila, WA trigonometry Tutors | {"url":"http://www.purplemath.com/Renton_trigonometry_tutors.php","timestamp":"2014-04-18T00:39:41Z","content_type":null,"content_length":"23818","record_id":"<urn:uuid:3f9db57d-1300-442a-9ce5-2a70dfaaf757>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
HowStuffWorks "Newton's First Law (Law of Inertia)"
Newton's First Law (Law of Inertia)
May the Force Be with You
The F, the m and the a in Newton's formula are very important concepts in mechanics. The F is force, a push or pull exerted on an object. The m is mass, a measure of how much matter is in an object.
And the a is acceleration, which describes how an object's velocity changes over time. Velocity, which is similar to speed, is the distance an object travels in a certain amount of time.
Let's restate Newton's first law in everyday terms:
An object at rest will stay at rest, forever, as long as nothing pushes or pulls on it. An object in motion will stay in motion, traveling in a straight line, forever, until something pushes or
pulls on it.
The "forever" part is difficult to swallow sometimes. But imagine that you have three ramps set up as shown below. Also imagine that the ramps are infinitely long and infinitely smooth. You let a
marble roll down the first ramp, which is set at a slight incline. The marble speeds up on its way down the ramp. Now, you give a gentle push to the marble going uphill on the second ramp. It slows
down as it goes up. Finally, you push a marble on a ramp that represents the middle state between the first two -- in other words, a ramp that is perfectly horizontal. In this case, the marble will
neither slow down nor speed up. In fact, it should keep rolling. Forever.
Physicists use the term inertia to describe this tendency of an object to resist a change in its motion. The Latin root for inertia is the same root for "inert," which means lacking the ability to
move. So you can see how scientists came up with the word. What's more amazing is that they came up with the concept. Inertia isn't an immediately apparent physical property, such as length or
volume. It is, however, related to an object's mass. To understand how, consider the sumo wrestler and the boy shown below.
Let's say the wrestler on the left has a mass of 136 kilograms, and the boy on the right has a mass of 30 kilograms (scientists measure mass in kilograms). Remember the object of sumo wrestling is to
move your opponent from his position. Which person in our example would be easier to move? Common sense tells you that the boy would be easier to move, since he has less inertia.
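To put rough numbers on that comparison (the force value here is mine, not from the article), Newton's formula rearranges to a = F / m, so the same push accelerates the lighter boy far more than the heavier wrestler:

```cpp
// Newton's second law rearranged: acceleration = force / mass.
// The same force on a smaller mass yields a larger acceleration.
double acceleration(double forceNewtons, double massKg) {
    return forceNewtons / massKg;
}
```

A hypothetical 300 N push gives the 30 kg boy an acceleration of 10 m/s², but the 136 kg wrestler only about 2.2 m/s².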
You experience inertia in a moving car all the time. In fact, seatbelts exist in cars specifically to counteract the effects of inertia. Imagine for a moment that a car at a test track is traveling
at a speed of 55 mph. Now imagine that a crash test dummy is inside that car, riding in the front seat. If the car slams into a wall, the dummy flies forward into the dashboard. Why? Because,
according to Newton's first law, an object in motion will remain in motion until an outside force acts on it. When the car hits the wall, the dummy keeps moving in a straight line and at a constant
speed until the dashboard applies a force. Seatbelts hold dummies (and passengers) down, protecting them from their own inertia.
Interestingly, Newton wasn't the first scientist to come up with the law of inertia. That honor goes to Galileo and to René Descartes. In fact, the marble-and-ramp thought experiment described
previously is credited to Galileo. Newton owed much to events and people who preceded him. Before we continue with his other two laws, let's review some of the important history that informed them. | {"url":"http://science.howstuffworks.com/innovation/scientific-experiments/newton-law-of-motion1.htm","timestamp":"2014-04-17T13:10:53Z","content_type":null,"content_length":"123906","record_id":"<urn:uuid:587b05a1-fdfe-4531-a349-6b01ca672867>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
What is the solution to the equation log 2 x – log 2 4 = 2 ?
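The page doesn't show an answer; assuming the equation means base-2 logarithms, the quotient rule for logs gives:

```latex
\log_2 x - \log_2 4 = 2
\;\Longrightarrow\; \log_2\frac{x}{4} = 2
\;\Longrightarrow\; \frac{x}{4} = 2^2
\;\Longrightarrow\; x = 16.
```

Check: $\log_2 16 - \log_2 4 = 4 - 2 = 2$.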
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ee2d48ae4b0a50f5c565d1d","timestamp":"2014-04-16T16:20:54Z","content_type":null,"content_length":"44255","record_id":"<urn:uuid:55739e95-a2e1-4fb0-ace0-f14482a9a919>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about fun on SuperFly Physics
Category Archives: fun
About a month ago, I had an extraordinary experience: It was Bill Nye standing on me while I laid on a nail bed. Lots of fun, for sure, and I pointed out to the audience that it was the one …
Continue reading
Rhett Allain’s post about a human running around a loop has really got me (and him!) thinking (click through to see the video). I wondered if there was a more sophisticated way to do the calculation
for the minimum speed … Continue reading
A long time ago, Dan Meyer took to the twitter-sphere with a question: "When do the three clock hands form three 120-degree angles?" Fun problem from Bowen Kerins. bit.ly/iKCZSz— Dan Meyer (@ddmeyer)
June 30, 2011 At the time I thought … Continue reading
This week I was inspired by this intriguing post by my friend Ian Beatty. He talks about what it might be like to use a test-driven development process for teaching. Here’s the short version of what
I got from that: … Continue reading
This week will mark what I think is the 10th anniversary of the Hamline University Physics Talent Show. This is held after our Malmstrom Speaker dinner where we host a big name physics researcher who
gives a public talk the … Continue reading
There have been some interesting things on the interwebs, lately, about rotation, gyroscopes, precession, and helicopters (all of it brought to my attention, or literally done by, Derek Muller of
Veritasium fame). It got me thinking about the modeling I’ve done … Continue reading
My son recently bought a 20-sided “spin down” die. Here’s a similar one: It’s really useful in the game “Magic The Gathering” because you use it to keep track of your life points. My other son and I,
however, were … Continue reading
Friday’s XKCD comic was about Pink Floyd’s Dark Side of the Moon album cover: I just want to pick on the optics part. Mind you, I think the humor is great! So what’s wrong? It looks like the lens
would … Continue reading
Last night a tweep of mine posted this: How many ways can I rearrange the letters ABCDEFGH so that no letter is in its original position? What a deceptively simple question!— R. Wright (@r_w_wright)
September 01, 2011 I thought about … Continue reading
I don’t like nuts. I really don’t like them in brownies. What I want is a way to tell if a particular brownie has nuts in it just by looking at it. That’s what this post is all about. If … Continue | {"url":"http://arundquist.wordpress.com/category/fun/","timestamp":"2014-04-21T04:55:50Z","content_type":null,"content_length":"46585","record_id":"<urn:uuid:9505ad0f-df2f-470e-a41d-5a6afe15afd3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Foy, CA Precalculus Tutor
Find a Foy, CA Precalculus Tutor
...One thing I always practice in my classroom is starting from a level where the students can understand, and moving forward from there. I believe that building a good foundation is the key to
success in any subject. I have come up with many different tricks for helping students remember key idea...
11 Subjects: including precalculus, physics, geometry, algebra 1
...I am capable of tutoring in Prealgebra. I understand that many students have a certain anxiety when it comes to math. I try to reduce their anxiety by breaking down problems into smaller...
31 Subjects: including precalculus, reading, chemistry, physics
...I am very passionate about teaching Geometry in a pedagogically sound way. Though I get few requests for this subject (I have helped only 3-5 students with this material in the last year, so
please bring your textbook along when coming to tutoring for Geometry), I believe in teaching with discov...
18 Subjects: including precalculus, English, calculus, physics
I have my Bachelor and Master of Science in Mechanical Engineering from University of Southern California. I have 8+ years of experience tutoring students from the Beverly Hills school district
in most subject areas of math and science (pre-algebra, algebra, geometry, trigonometry, precalculus, calculus, and chemistry). I'm passionate about making sure my students succeed in school.
21 Subjects: including precalculus, reading, chemistry, calculus
...Since I finished my undergrad education, I have been tutoring high school students independently in algebra 2, pre-calculus and chemistry. I always enjoyed my math and science classes because
they featured the most objective learning. I think the key to understanding these subjects is to slow t...
18 Subjects: including precalculus, chemistry, Spanish, statistics
Related Foy, CA Tutors
Foy, CA Accounting Tutors
Foy, CA ACT Tutors
Foy, CA Algebra Tutors
Foy, CA Algebra 2 Tutors
Foy, CA Calculus Tutors
Foy, CA Geometry Tutors
Foy, CA Math Tutors
Foy, CA Prealgebra Tutors
Foy, CA Precalculus Tutors
Foy, CA SAT Tutors
Foy, CA SAT Math Tutors
Foy, CA Science Tutors
Foy, CA Statistics Tutors
Foy, CA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Cimarron, CA precalculus Tutors
Dockweiler, CA precalculus Tutors
Dowtown Carrier Annex, CA precalculus Tutors
Green, CA precalculus Tutors
Griffith, CA precalculus Tutors
La Tijera, CA precalculus Tutors
Oakwood, CA precalculus Tutors
Pico Heights, CA precalculus Tutors
Rimpau, CA precalculus Tutors
Sanford, CA precalculus Tutors
Santa Western, CA precalculus Tutors
Vermont, CA precalculus Tutors
Westvern, CA precalculus Tutors
Wilcox, CA precalculus Tutors
Wilshire Park, LA precalculus Tutors | {"url":"http://www.purplemath.com/foy_ca_precalculus_tutors.php","timestamp":"2014-04-20T10:50:04Z","content_type":null,"content_length":"24100","record_id":"<urn:uuid:6c919474-4825-43c1-964c-1360333b8502>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
You might like to look at Fencing in Lambs before looking at this challenge.
What we're about this time is using some fences that will give a good rectangular space to keep things in. The fences are quite thick and stand up on their own. You might like to use some Multilink
cubes or other cuboids (perhaps Cuisenaire rods) and pretend that they're the fences. Ask your teacher at school and they will be able to find some blocks to help.
Suppose we start off with four of these fences and put them together like this:-
We're looking at it from above. Notice that the Pink fences are $4$ long and $1$ wide. On my computer screen they happen to be exactly centimetres. [But that DOESN'T matter.]
The space I've made is $8$ squares [centimetres in my case].
Notice that the corners are good strong corners like this one:-
We do need strong corners and not weak ones like :-
The next one I could do uses five fence panels instead of four. So I get:-
Now my space is bigger - it is now $12$!
I know I've wasted a bit of two of the fences but it was the only way I could think of using five!
Using six is much easier:-
And using seven turns out like this when I try it:-
The area of the space has now increased to $24$!
My challenge starts off with asking you to have a go with blocks that are a bit like mine [$4$ long and $1$ wide].
Do you know you could possibly even try using some house building bricks as some of these are about four times as long as wide! BUT I'M PARTICULARLY ASKING YOU TO ARRANGE YOUR FENCES SO AS TO MAKE
THE SPACE THE LARGEST RECTANGULAR SPACE IT CAN BE, first with FOUR, then FIVE, then SIX etc.
By the way, the way I've arranged my fences has not always given the largest rectangular space inside.
Now I guess you might like to find a good way of recording what you managed to do.
Now my question "I wonder what would happen if ...?" just has to be: "I WONDER WHAT WOULD HAPPEN IF THE FENCES WERE $5$ LONG INSTEAD OF $4$?"
Well I'll start you off on this challenge
Here's the first few ... but not necessarily ones that give the biggest space, that's for you to discover.
Now you try to sort your own out!
When you think about how you can put your results down, it may be good to link them with the ones when the fences were $4$ long.
Now of course you can go to lengths of $6$, $7$, $8$, $9$ etc. etc.
What a lot to explore!
How about looking at your results and seeing what patterns you notice and what predictions you can make?
Photographs of your fence models would be good to have to put with the solutions. | {"url":"http://nrich.maths.org/68/index?nomenu=1","timestamp":"2014-04-18T08:26:49Z","content_type":null,"content_length":"7072","record_id":"<urn:uuid:4fadd12a-bdd3-4a6b-9fd0-5aa7f51488cf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
discrete stochastic process: exponentially correlated Bernoulli?
There is a question that was asked on stackoverflow that at first sounds simple but I think it's a lot harder than it sounds.
Suppose we have a stationary random process that generates a sequence of random variables x[i] where each individual random variable has a Bernoulli distribution with probability p, but the
correlation between any two of the random variables x[m] and x[n] is α^|m-n|.
How is it possible to generate such a process? The textbook examples of a Bernoulli process (right distribution, but independent variables) and a discrete-time IID Gaussian process passed through a
low-pass filter (right correlation, but wrong distribution) are very simple by themselves, but cannot be combined in this way... can they? Or am I missing something obvious? If you take a Bernoulli
process and pass it through a low-pass filter, you no longer have a discrete-valued process.
(I can't create tags, so please retag as appropriate... stochastic-process?)
you could try to take a process $x(t)$ like an Ornstein-Uhlenbeck, that has a correlation structure that decreases exponentially, and then define $B_n = 1_{x(n) > \alpha}$ where $\alpha$ is a
well-chosen threshold - I have not done the computations, but I have the feeling that the correlation between these Bernoulli random variables also decreases exponentially. Do you really need the
correlation to be equal to $\alpha^{|m-n|}$ ? Would an exponentially decreasing correlation be enough for your particular purpose ? – Alekk Mar 15 '10 at 14:59
thx for the suggestion... I'm posting this on behalf of someone else (see the link in the 1st sentence) so I do not know the stringency of their requirements. The problem seemed simple enough to
state that I felt I could translate into a "proper" problem statement for mathoverflow. – Jason S Mar 15 '10 at 15:20
...and I had kind of the same hunch (make a continuous-value process, then use a threshold to produce a binary-value output) but don't quite know how to go about characterizing the output process w
/r/t correlation, other than an empirical calculation on the computer. – Jason S Mar 15 '10 at 15:56
By the way, the SO problem is not $\alpha^{|m-n|}$, but $c|m-n|^{-\alpha}$. – Douglas Zare Mar 15 '10 at 18:09
Yes, that was pointed out to me... but I am suspicious + wondering if the OP meant alpha ^ |m-n|. Using the c |m-n| ^ (-alpha) formula, correlation is undefined for m=n. – Jason S Mar 15 '10 at
4 Answers
Here is a construction.
• Let $\{Y_i\}$ be independent Bernoulli random variables with probability $p$.
• Let $N(t)$ be a Poisson process chosen so that $P(N(1)=0)=\alpha$.
• Let $X_i = Y_{N(i)}$.
In words, we have some radioactive decay which tells us when to flip a new (biased) coin. $X_n$ is the last coin flipped at time $n$. The correlation between $X_m$ and $X_n$ comes from the possibility that there are no decays between time $m$ and time $n$, which happens with probability $\alpha^{|m-n|}$.
The conditional correlation between $X_m$ and $X_n$ is $1$ if $N(m) = N(n)$, and $0$ if $N(m)\ne N(n)$, so $\text{Cor}(X_n,X_m) = P(N(m)=N(n)) = \alpha^{|m-n|}.$
You can simplify this by saying that $N(i) = \sum_{t=1}^i B_t$ where $\{B_i\}$ are independent Bernoulli random variables which are $0$ with probability $\alpha$.
fascinating! I think I understand... thanks! – Jason S Mar 15 '10 at 19:13
Brilliant answer – David Bar Moshe Mar 16 '10 at 9:44
Phrasing it in terms of a Poisson process seems overly complicated; the properties of Poisson processes aren't actually used. Couldn't one just phrase it as follows? Let $$X_{i+1}
= \begin{cases} X_i & \text{with probability }\alpha; \\ \text{a new Bernoulli trial independent of }X_i & \text{with probability }1-\alpha. \end{cases} $$ – Michael Hardy Jun 2
'10 at 20:36
In other words:
Start with a random variable $X_0$ Bernoulli with parameter $p$, random variables $Y_n$ Bernoulli with parameter $\alpha$, random variables $Z_n$ Bernoulli with parameter $p$, and assume
that all these are independent. Define recursively the sequence $(X_n)_{n\ge0}$ by setting $X_{n+1}=Y_nX_n+(1-Y_n)Z_n$ for every $n\ge0$.
Then $X_n$ and $X_{n+k}$ are conditionally correlated if and only if $Y_i=1$ for every $i$ from $n$ to $n+k-1$. This happens with probability $\alpha^k$, hence you are done.
This is Douglas Zare's idea, but with no Poisson process.
Interesting variation, thanks! – Jason S Mar 16 '10 at 14:06
The last line of my answer gave the same construction. My $B_i$ is your $1-Y_i$. – Douglas Zare Mar 16 '10 at 14:33
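The keep-or-redraw recursion above ($X_{n+1}=Y_nX_n+(1-Y_n)Z_n$: keep the previous value with probability $\alpha$, otherwise draw a fresh Bernoulli($p$)) is easy to check by simulation. A sketch, with all names and parameter values my own, estimating the sample correlation of $X_0$ and $X_k$, which should come out near $\alpha^k$:

```cpp
#include <cmath>
#include <random>

// Monte-Carlo estimate of Cor(X_0, X_k) for the keep-or-redraw chain:
// X_{n+1} = X_n with probability alpha, otherwise a fresh Bernoulli(p) draw.
double lagCorrelation(double p, double alpha, int k, int trials, unsigned seed) {
    std::mt19937 gen(seed);
    std::bernoulli_distribution keep(alpha), coin(p);
    double s0 = 0.0, sk = 0.0, s0k = 0.0;
    for (int t = 0; t < trials; ++t) {
        int x0 = coin(gen) ? 1 : 0;        // X_0 ~ Bernoulli(p)
        int x = x0;
        for (int i = 0; i < k; ++i)        // evolve the chain k steps
            if (!keep(gen)) x = coin(gen) ? 1 : 0;
        s0 += x0; sk += x; s0k += x0 * x;
    }
    double m0 = s0 / trials, mk = sk / trials, m0k = s0k / trials;
    // sample correlation of the two Bernoulli endpoints
    return (m0k - m0 * mk) / std::sqrt(m0 * (1.0 - m0) * mk * (1.0 - mk));
}
```

With p = 0.3 and α = 0.6, a few hundred thousand trials put the lag-1 estimate near 0.6 and the lag-3 estimate near 0.6³ = 0.216, matching $\alpha^{|m-n|}$.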
I suggest also to look at the paper: Generating spike-trains with specified correlations, by Jakob Macke, Philipp Berens, et al. (Max Planck Institute for Biological Cybernetics).
Generating spike-trains with specified correlations
They also offer a Matlab Package for 'Sampling from multivariate correlated binary and poisson random variables' ... also available at Matlab central:
Sampling from multivariate correlated binary and poisson random variables
Also look at the page link
The above solution is very nice, but relies on the very special structure of the desired process. In a much more general framework, I think that one could use a perfect simulation
algorithm as described in:
Processes with long memory: Regenerative construction and perfect simulation, Francis Comets, Roberto Fernández, and Pablo A. Ferrari, Ann. Appl. Probab. 12, Number 3 (2002), 921-943.
{"url":"http://mathoverflow.net/questions/18268/discrete-stochastic-process-exponentially-correlated-bernoulli/23441","timestamp":"2014-04-21T12:38:00Z","content_type":null,"content_length":"74376","record_id":"<urn:uuid:7be19500-6ad8-4db5-831a-683b40f22fe0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Cartersville, GA Algebra Tutor
Find a Cartersville, GA Algebra Tutor
...The subjects that I teach specifically are: pre-algebra, algebra, biology, anatomy/physiology, chemistry, physics, and forensic science. I enjoy helping students find their niche and helping
students to overcome their challenges in math and science. I try to help students find a LOVE for math or science; but if they still don't LOVE it after working with me, that's okay.
17 Subjects: including algebra 1, algebra 2, chemistry, physics
During my last year of college I worked as a Biology tutor for my university and thoroughly enjoyed every minute of it. Since graduating, I have missed the opportunity to help out other students
and would like to continue my own learning in the subject (you can never master a science!). Teaching st...
14 Subjects: including algebra 1, biology, anatomy, Microsoft Excel
...I am a hands on tutor so you will have a project to work on through which you will learn what you need to know.I have been sewing, knitting, crocheting, and crafting for about 9 years.
Needlework covers a variety of different crafts. My fortes in needlework include knitting, crocheting, and cross stitch.
4 Subjects: including algebra 1, algebra 2, sewing, needlework
...I specialize in the areas of Reading and Writing, as a certified ESL teacher and Reading Specialist with many years of experience. Let me know if I can help! To prepare for the PSAT, students
need to gain confidence by becoming familiar with the test content and structure, as well as brushing up on test-taking strategies, such as time management and knowing when to guess or pass.
26 Subjects: including algebra 1, reading, English, ASVAB
...I'm patient and detailed with my explanations, helping students understand not just how to work a problem, but why the problem works that way. From my days as a governor's honors high school
student in Cobb County, to my perfect 800 on the math SAT, to my days as a highest honor graduate at Geor...
13 Subjects: including algebra 2, algebra 1, calculus, SAT math | {"url":"http://www.purplemath.com/cartersville_ga_algebra_tutors.php","timestamp":"2014-04-19T14:59:06Z","content_type":null,"content_length":"24284","record_id":"<urn:uuid:f903ae19-73f1-4549-93d7-89f170da479d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
Green, CA Geometry Tutor
Find a Green, CA Geometry Tutor
...I am a math major at Caltech currently doing research in graph theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving
hard problems in discrete math with a friend. I began programming in high school, so the first advanced ma...
28 Subjects: including geometry, Spanish, French, chemistry
...I have tutored students in math and biology for more than a year, and I have been a teaching assistant for Calculus I at a high school. I am a patient and friendly tutor, and I will help you
build and solidify your understanding of the topics of your interest from the fundamentals. Or I can simply help you with your homework.
14 Subjects: including geometry, calculus, biology, algebra 1
...Students I worked with have scored higher on their finals and other placement tests. I am very flexible and available weekdays and weekends. I will be a great help for students who require
science classes in their majors or for those who are looking to score high on their entry exams.
11 Subjects: including geometry, chemistry, algebra 1, algebra 2
...In addition, I have performed extensive Genetics research during my undergraduate education, so I understand the subject in an in-depth level. Genetics has also always been a personal passion
of mine, so I constantly keep up-to-date on new developments! I look forward to helping you enjoy Genetics!
62 Subjects: including geometry, chemistry, reading, English
...Finally, as a teacher I always felt that if a student did not understand something then I did not present the material in a clear enough manner. It is not always an issue with the student if
they do not understand something they were taught at school.I am qualified to tutor in Algebra 1. I first alleviate any anxiety students may have over math.
31 Subjects: including geometry, reading, chemistry, physics
Related Green, CA Tutors
Green, CA Accounting Tutors
Green, CA ACT Tutors
Green, CA Algebra Tutors
Green, CA Algebra 2 Tutors
Green, CA Calculus Tutors
Green, CA Geometry Tutors
Green, CA Math Tutors
Green, CA Prealgebra Tutors
Green, CA Precalculus Tutors
Green, CA SAT Tutors
Green, CA SAT Math Tutors
Green, CA Science Tutors
Green, CA Statistics Tutors
Green, CA Trigonometry Tutors
Nearby Cities With geometry Tutor
Broadway Manchester, CA geometry Tutors
Cimarron, CA geometry Tutors
Dockweiler, CA geometry Tutors
Dowtown Carrier Annex, CA geometry Tutors
Firestone Park, CA geometry Tutors
Foy, CA geometry Tutors
La Tijera, CA geometry Tutors
Lafayette Square, LA geometry Tutors
Miracle Mile, CA geometry Tutors
Pico Heights, CA geometry Tutors
Rimpau, CA geometry Tutors
Wagner, CA geometry Tutors
Westvern, CA geometry Tutors
Wilcox, CA geometry Tutors
Wilshire Park, LA geometry Tutors | {"url":"http://www.purplemath.com/green_ca_geometry_tutors.php","timestamp":"2014-04-16T10:11:27Z","content_type":null,"content_length":"24151","record_id":"<urn:uuid:7227ca84-9f41-44aa-8375-05088957c9d7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Model Of A Helicopter Rotor Has Four Blades, ... | Chegg.com
A model of a helicopter rotor has four blades, each of length 4.00 m from the central shaft to the blade tip. The model is rotated in a wind tunnel at a rotational speed of 500 rev/min. What is the
radial acceleration of the blade tip expressed as a multiple of the acceleration of gravity, g? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/model-helicopter-rotor-four-blades-length-400-m-central-shaft-blade-tip-model-rotated-wind-q931118","timestamp":"2014-04-17T15:02:45Z","content_type":null,"content_length":"20545","record_id":"<urn:uuid:af51723d-7192-4412-a366-0fc1ac0532ea>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
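A back-of-envelope check of the rotor problem above (my own working, not from the Chegg page): convert the spin rate to rad/s, apply a_rad = ω²r, and divide by g ≈ 9.80 m/s².

```cpp
#include <cmath>

// Radial (centripetal) acceleration of the blade tip, as a multiple of g:
// convert rev/min to rad/s, then a_rad = omega^2 * r.
double radialAccelInG(double revPerMin, double radiusMeters) {
    const double pi = std::acos(-1.0);
    const double g = 9.80;                       // m/s^2
    double omega = revPerMin * 2.0 * pi / 60.0;  // angular speed in rad/s
    return omega * omega * radiusMeters / g;
}
```

For 500 rev/min and r = 4.00 m this comes out to roughly 1.12 × 10³ g (about 1119 g with g = 9.80 m/s²).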
4 4's Puzzler
September 7th 2005, 01:44 PM #1
4 4's Problem [Help QUICK!]
Yeah Okay, I got this question in class and we have to do the four fours question thing from 1-100.
You use 4 fours, and multiplication, subtraction, division, addition, squareroots and powers[exponents].
I've gotten to 42, but I can't use the number 44 or 444, just a 4 by itself. I can use as many operations as I want, but only 4 fours.
Example: (4+4-4)/4=1
Can I do: [4exp3+(4exp2]+4)/sqrt4=42? AND
the [] and () are different brackets.
And if so, I need help for 43 =/
[exp = exponent/power]
[sqrt = squareroot]
I'm a bit confused
I don't really know where to put the brackets; I just want it to go in order, and not in Order of Operations [OOO]. Help me figure that out too, please x]
Here's a brain teaser! Can you (with the help of your calculator, as needed) "build" all the whole numbers between 1 and 100 using only four 4's? Use only the + - X / ( ) . ^2 = and 4 keys on
your calculator. 4!=4X3X2X1 is allowed, along with repeating decimal 4 (.4~=.4444…), also the number 44 is allowed.
so 4^3 is not allowed.
37: (4!/4)^2+(4/4)
42: (4!/.4~)-4^2+4
43: 44-(4/4)
Many things to look for
you should really search in google about the 4 4's problem:
you'll find many ways to solve your problem instead of having just one solution. It will maybe give you ideas...
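As a quick numeric check of the three solutions quoted above (my own sketch; it reads .4~, the repeating decimal 0.444…, as the exact fraction 4/9, and writes 4! out as 24):

```cpp
#include <cmath>

// The three quoted four-fours identities, written out as arithmetic.
double fourFours37() { return std::pow(24.0 / 4.0, 2.0) + 4.0 / 4.0; }         // (4!/4)^2+(4/4)
double fourFours42() { return 24.0 / (4.0 / 9.0) - std::pow(4.0, 2.0) + 4.0; } // (4!/.4~)-4^2+4
double fourFours43() { return 44.0 - 4.0 / 4.0; }                              // 44-(4/4)
```

All three evaluate to 37, 42 and 43 up to floating-point rounding.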
September 7th 2005, 08:10 PM #2
September 8th 2005, 12:21 PM #3
Junior Member
Aug 2005 | {"url":"http://mathhelpforum.com/math-challenge-problems/845-4-4-s-puzzler.html","timestamp":"2014-04-17T06:59:28Z","content_type":null,"content_length":"33192","record_id":"<urn:uuid:4481da93-79ad-4f58-84aa-b3e670d087e5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cartersville, GA Algebra Tutor
Find a Cartersville, GA Algebra Tutor
...The subjects that I teach specifically are: pre-algebra, algebra, biology, anatomy/physiology, chemistry, physics, and forensic science. I enjoy helping students find their niche and helping
students to overcome their challenges in math and science. I try to help students find a LOVE for math or science; but if they still don't LOVE it after working with me, that's okay.
17 Subjects: including algebra 1, algebra 2, chemistry, physics
During my last year of college I worked as a Biology tutor for my university and thoroughly enjoyed every minute of it. Since graduating, I have missed the opportunity to help out other students
and would like to continue my own learning in the subject (you can never master a science!). Teaching st...
14 Subjects: including algebra 1, biology, anatomy, Microsoft Excel
...I am a hands on tutor so you will have a project to work on through which you will learn what you need to know.I have been sewing, knitting, crocheting, and crafting for about 9 years.
Needlework covers a variety of different crafts. My fortes in needlework include knitting, crocheting, and cross stitch.
4 Subjects: including algebra 1, algebra 2, sewing, needlework
...I specialize in the areas of Reading and Writing, as a certified ESL teacher and Reading Specialist with many years of experience. Let me know if I can help! To prepare for the PSAT, students
need to gain confidence by becoming familiar with the test content and structure, as well as brushing up on test-taking strategies, such as time management and knowing when to guess or pass.
26 Subjects: including algebra 1, reading, English, ASVAB
...I'm patient and detailed with my explanations, helping students understand not just how to work a problem, but why the problem works that way. From my days as a governor's honors high school
student in Cobb County, to my perfect 800 on the math SAT, to my days as a highest honor graduate at Geor...
13 Subjects: including algebra 2, algebra 1, calculus, SAT math | {"url":"http://www.purplemath.com/cartersville_ga_algebra_tutors.php","timestamp":"2014-04-19T14:59:06Z","content_type":null,"content_length":"24284","record_id":"<urn:uuid:f903ae19-73f1-4549-93d7-89f170da479d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum » Discussions » Policy and News » geometry.announcements
Topic: Leonardo Da Vinci's Mathematical Codes of "THE LAST SUPPER, and "MONA LISA" (Part 15)
Replies: 0
Leonardo Da Vinci's Mathematical Codes of "THE LAST SUPPER, and "MONA LISA" (Part 15)
Posted: Apr 12, 2010 12:43 AM
Copyrights 2010 by Ion Vulcanescu.
WELCOME to the newest branch of the science:
This article shall extend the presentation of
Leonardo Da Vinci's Mathematical Codes of the paintings
"THE LAST SUPPER", and "MONA LISA", but
as to Mathematically Sustain this "extension of the presentation"
I have to present firstly a Hidden Effect that takes place in
the present Universitar and Academic Worlds where
Physical and/or Mathematical Effects that cannot be explained
are simply IGNORED, or through spiritual mediocrity
are Covered-Up as NONSENSE !
that needs to be explained for the "extension of the presentation"
of Leonardo Da Vinci's paintings to be understood
of a Circle of radius 2 inscribed in a Square of side 4 where:
(a) the Circle's Area equals the Circle's Circumference, and
(b) the Square Area equals the Square's Perimeter.
As this MATHEMATICAL EFFECT
belongs not to the presented science of mathematics,
but to the now lost Ancients World
the Understanding of this Mathematical Effect
requires that the researcher to look to it
not through the presented PI Value "3.141592654...", but
through the ancients' PI Value "3.146264369...", a PI Value now suppressed
by the French Academy of Sciences, a Suppression now
Politically Sustained by the US National Academy of Sciences, and
by the present US House of Representatives through HR - 224.
See the article:
herein indicated I have called it by the name of
and you can see it in the article:
Please, be advised that this article contains not only the Research
Data, but also my Opinions of philosopher,
both fully protected by the Federal Laws of the United States,
and by the International Copyrights Laws. If you intend to use this
Research Data in part or in total, you can use it by the Fair Use Rights
of the Copyrights Laws, or by the Grant I have indicated in
the Copyrights Notice of
that can be found at the end of the article:
"The Condemnation by the Paris Livre
of the French Academy of Sciences"
or ask the Mathforum for Permission in writing.
The Understanding of the Research Data of this article
requires familiarization with Ion Vulcanescu's
prior Research Data of the articles of
that can be found on Mathforum at:
As there may be typing or other errors here and there,
this article shall be continuously corrected and re-edited
until it remains faithful to the intended form.
Due to this Effect
the interested researchers who shall reach this article
either as subscribers to Mathforum, or through an Internet
Search Engine shall always return to
and consider only the last corrected edition.
As the Research Data is ample and requires large writing space
it shall be ONLY PARTIAL DECODED in this article
and for Copyright Security Reason
it shall be presented only with few explanations, and with,
or without THE SPIRITUAL PLUGS.
The Real Values of the Words
as they may be published in a future possible book,
or I shall return to mathforum in future years,
and fully present them.
Considering that you were already Warned that the understanding
of this Research Data requires familiarization
with the prior published articles, and
as the majority of the Numerical Codes herein seen
were already explained in the before published articles
this article shall not explain them again.
To understand this
you must follow me step by step !
Step 1
Draw a Cube of edge 4.
Starting from the rightside back corner (A) draw its Base Diagonal,
and mark it as AB, and from the same corner (A)
draw the Volume Diagonal and mark it as AC.
Step 2
Next to the Cube draw a Square of side 4 and inscribe in it
a Circle of Radius 2.
Step 3
By the ancient civilizations PI Value "3.146264369..."
you have now into the Cube AB + AC = 12.58505748.
Step 4
Look now to the Square:
Area = Perimeter = 16
Step 5
Look now to the Circle:
Area = Circumference = 12.58505748..
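An aside, not part of the original article: the numbers in Steps 3–5 are easy to verify. The quoted "PI Value" 3.146264369... is numerically √2 + √3, which is exactly why the cube's base and volume diagonals (4√2 + 4√3) sum to the same 12.58505748... as the circle value; and a circle of radius 2 has Area = Circumference under any value of π, since both equal 4π. A quick numerical check:

```python
import math

ancient_pi = math.sqrt(2) + math.sqrt(3)   # 3.1462643699... = sqrt(2) + sqrt(3)
diag_base = 4 * math.sqrt(2)               # base diagonal of a cube of edge 4
diag_vol = 4 * math.sqrt(3)                # volume diagonal of the same cube
circle_val = 4 * ancient_pi                # area = circumference of a circle of radius 2
print(ancient_pi, diag_base + diag_vol, circle_val)
```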
Step 6
Now look to this "Expression":
SQUARE : .548696845 : SUMERIAN GRAIN = 648
CIRCLE: 1.271348981 : SUMERIAN FINGER = 648.
Note: Observe the EQUALITY through "648"!
Step 7
".548696845" has its GEOMETRICAL FREQUENCY = 8910
"1.271348981" has its GEOMETRICAL FREQUENCY = 4344
(a) The Meaning of ".548696845", as it requires large explanations,
is not presented now, but
(b) the Meaning of "1.271348981" it was presented: It is the Ratio
between the Square and Circle as 16 : 12.58505748 = 1.271348981.
(c) Please, keep your eyes on these Geometrical Frequencies !
Step 1
SQUARE (8910 x .548696845) : 8 : 1.111... = 550
CIRCLE (4344 x 1.271348981) : 8 : 1.111... = 621.3082474
(a) Observe here the "SIGNALS":
- "8" are the EIGHT PARTS OF THE ANCIENTS' GEOMETRIC POINT
I have indicated many times here on Mathforum that
it appears to have been THE PERIMETER OF A SQUARE
separated in 8 parts.
(b) Observe here also
They did not need any Calculus, and it appears that
their "Calculus" did not have the Flaw that today's Calculus has,
which we call by the name of THE ASSUMING CONCEPT.
See my years back finding of this
as taking such "...0001" of Calculus Assuming Concept,
and through Decimal Steps carrying it back until it became seen as
"1.111..." in the article:
and see that here in this Step 1 you have it as A PROOF.
Let's go further !
Step 2
"621.3082474 : 550 = 1.849655715 x (1.001187323 x
x 1.001187323) : 10 + 1.111..."
(a) Observe here HOW HARMONIOUS appears this "1.111..."
in Full Coordination with:
(b) The Decimal System "10"
(c) The Squared Ratio between the Septenary and Geometric Cubit :
"1.001187323 x 1.00118732", and
(d) This "1.849655715". Do you remember what it is this "1.849655715" ?
Let's see it again:
- "(1.849655715 x 15) : 100 = THE MYCENEAN FOOT
- "((1.849655715 x 15)x 1.111... )) : 100 = GREEK FOOT
- ( 1.849655715 x 16) : 100 = ROMAN FOOT, and
- (1.849655715 : 8 ) x 7 = GOLDEN SECTION.
I have explained this ancient civilizations' ratio "1.849655715",
and these FEET, as also the GOLDEN RATIO in the articles:
As in Part 14 I have explained THE MASTER FREQUENCIES of
- LEONARDO DA VINCI = 41320
- THE LAST SUPPER = 51272
- ION VULCANESCU = 64492
now I have to let you see also THE MASTER FREQUENCY of
- MONA LISA = 7636
...and having them again under your eyes, let's see their
with the above explained now MATHEMATICAL EFFECT :
...and here it is the view of this MATHEMATICAL EFFECT
Step 1
SQUARE VIEW in NUMBERS !
"(16 : .548696845 : .045 - 1 + 162) x 8 + 10,000 x 10 =
= 41320 + 51272 + 7636 + 64492"
SQUARE VIEW in WORDS!
"(SQUARE Perimeter, or Area : .548696845 : SUMERIAN GRAIN - ONE + THE NATURAL GEOMETRIC PI VALUE (in its 162 Grains or Fingers) x 8 + SWASTIKA x DECIMAL SYSTEM =
= LEONARDO DA VINCI + THE LAST SUPPER +
+ MONA LISA + ION VULCANESCU"
Step 2
CIRCLE VIEW in NUMBERS !
"(12.58505748 : 1.271348981 : .015276203 - 1 + 162) x 8 + 10,000 x 10 = 41320 + 51272 + 7636+ 64492"
CIRCLE VIEW in WORDS !
" ( CIRCLE Circumference, or Area : Square/Circle : SUMERIAN FINGER - ONE +
+ THE NATURAL GEOMETRIC PI VALUE (in its 162 Grains or Fingers) x
x 8 + SWASTIKA x DECIMAL SYSTEM =
= LEONARDO DA VINCI + THE LAST SUPPER +
+ MONA LISA + ION VULCANESCU"
Step 3
(16 : .548696845 : SUMERIAN GRAIN - 1 + 162) = 809
(12.58505748 : 1.271348981 : SUMERIAN FINGER - 1 + 162) = 809
GEOMETRICAL FREQUENCY OF "809" = 2788
Step 4
"5772 - 1- 162 - 809 - 2788 = 2012 AD "
Look HOW STRANGE it is this Time-Message !
"The Old Hebrews Calendar Year 5772 - One - The Natural Geometric
PI Value - 809 - 809's Geometrical Frequency 2788 = 2012 AD !"
Step 5
...and now we can see the appearance of
THE DESTRUCTIVE EFFECT seen as follows:
"((162 x 1.111...) + 740 - 809)) x 8 = 888"
Let's see the Time-Message !
"(( THE NATURAL GEOMETRIC PI VALUE x 1.111...) +
+THE BLACK STONE OF MECCA ( AL HAJAR UL ASWAD -
- 809)) x 8 = 888"
The Code "888" was explained in the article:
Let's see WHAT IT IS IN HERE !
(a) We see that The Natural Geometric PI Value "162" it is
"mathematically extended: through :"1.111...", then
(b) We see that THE BLACK STONE OF MECCA it is ADDED, then
(c) this "809" it is subtracted, then
(d) "x8" symbolizes THE SQUARE PERIMETER EIGHT PARTS
thus THE GEOMETRIC POINT it is created in
that we see a Signal indicating that by them it shall take place
in 2012 AD which by the Old Hebrews Calendar it is their year 5772
seen in Step 4 of this Section.
...and so the ancients have left after them in time their Spirit!
...and so ends the presentation of Leonardo Da Vinci's
Mathematical Codes of "THE LAST SUPPER", and "MONA LISA"
Ion Vulcanescu - Philosopher
Independent Researcher in Geometry
Author, Editor and Publisher of
April 11 2010
Sullivan County
State of New York | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2062295","timestamp":"2014-04-17T02:12:14Z","content_type":null,"content_length":"25986","record_id":"<urn:uuid:222a4120-7545-49f5-be45-daf26b7ffc34>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analysis of Convergence Rates of Some Gibbs Samplers on Continuous State Spaces
ABSTRACT We use a non-Markovian coupling and small modifications of techniques from the
theory of finite Markov chains to analyze some Markov chains on continuous state
spaces. The first is a Gibbs sampler on narrow contingency tables, the second a
generalization of a sampler introduced by Randall and Winkler.
ABSTRACT: It is shown that the "hit-and-run" algorithm for sampling from a convex body K (introduced by R.L. Smith) mixes in time O*(n^2 R^2/r^2), where R and r are the radii of the inscribed and circumscribed balls of K. Thus after appropriate preprocessing, hit-and-run produces an approximately uniformly distributed sample point in time O*(n^3), which matches the best known bound for other sampling algorithms. We show that the bound is best possible in terms of R, r and n.
Mathematical Programming 11/1999; 86(3):443-461. · 2.09 Impact Factor
ABSTRACT: The main technique used in algorithm design for approximating heart of the method is the study of the convergence (mixing) rates of particular Markov chains of interest. In this paper
we illustrate a new approach to the coupling technique, which we call path coupling, for bounding mixing rates. Previous applications of coupling have required detailed insights into the
combinatorics of the problem at hand, and this complexity can make the technique extremely difficult to apply successfully. Path coupling helps to minimize the combinatorial difficulty and in all
cases provides simpler convergence proofs than does the standard coupling method. However the true power of the method is that the simplification obtained may allow coupling proofs which were
previously unknown, or provide significantly better bounds than those obtained using the standard method. We apply the path coupling method to several hard combinatorial problems, obtaining new
or improved results. We examine combinatorial problems such as graph colouring and TWICE-SAT, and problems from statistical physics, such as the antiferromagnetic Potts model and the hard-core
lattice gas model. In each case we provide either a proof of rapid mixing where none was known previously, or substantial simplification of existing proofs with consequent gains in the
performance of the resulting algorithms
Foundations of Computer Science, 1997. Proceedings., 38th Annual Symposium on; 11/1997
ABSTRACT: We determine, up to a log factor, the mixing time of a Markov chain whose state space consists of the successive distances between n labeled “dots” on a circle, in which one dot is
selected uniformly at random and moved to a uniformly random point between its two neighbors. The method involves novel use of auxiliary discrete Markov chains to keep track of a vector of
quadratic parameters.
Approximation, Randomization and Combinatorial Optimization, Algorithms and Techniques, 8th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2005
and 9th InternationalWorkshop on Randomization and Computation, RANDOM 2005, Berkeley, CA, USA, August 22-24, 2005, Proceedings; 01/2005
Available from | {"url":"http://www.researchgate.net/publication/51933316_Analysis_of_Convergence_Rates_of_Some_Gibbs_Samplers_on_Continuous_StateSpaces","timestamp":"2014-04-21T01:02:24Z","content_type":null,"content_length":"171438","record_id":"<urn:uuid:77fe3b48-f4fb-42d7-9ff0-186b621eaf76>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
geometrical staircase
geometrical staircase
From The Collaborative International Dictionary of English v.0.48:
Geometric \Ge`o*met"ric\, Geometrical \Ge`o*met"ric*al\, a. [L.
geometricus; Gr. ?: cf. F. g['e]om['e]trique.]
1. Pertaining to, or according to the rules or principles of,
geometry; determined by geometry; as, a geometrical
solution of a problem.
[1913 Webster]
2. (Art) characterized by simple geometric forms in design
and decoration; as, a buffalo hide painted with red and
black geometrical designs.
Syn: geometric.
[WordNet 1.5]
Note: Geometric is often used, as opposed to algebraic, to
include processes or solutions in which the
propositions or principles of geometry are made use of
rather than those of algebra.
[1913 Webster]
Note: Geometrical is often used in a limited or strictly
technical sense, as opposed to mechanical; thus, a
construction or solution is geometrical which can be
made by ruler and compasses, i. e., by means of right
lines and circles. Every construction or solution which
requires any other curve, or such motion of a line or
circle as would generate any other curve, is not
geometrical, but mechanical. By another distinction, a
geometrical solution is one obtained by the rules of
geometry, or processes of analysis, and hence is exact;
while a mechanical solution is one obtained by trial,
by actual measurements, with instruments, etc., and is
only approximate and empirical.
[1913 Webster]
Geometrical curve. Same as Algebraic curve; -- so called
because their different points may be constructed by the
operations of elementary geometry.
Geometric lathe, an instrument for engraving bank notes,
etc., with complicated patterns of interlacing lines; --
called also cycloidal engine.
Geometrical pace, a measure of five feet.
Geometric pen, an instrument for drawing geometric curves,
in which the movements of a pen or pencil attached to a
revolving arm of adjustable length may be indefinitely
varied by changing the toothed wheels which give motion to
the arm.
Geometrical plane (Persp.), the same as Ground plane .
Geometrical progression, proportion, ratio. See under
Progression, Proportion and Ratio.
Geometrical radius, in gearing, the radius of the pitch
circle of a cogwheel. --Knight.
Geometric spider (Zool.), one of many species of spiders,
which spin a geometrical web. They mostly belong to
Epeira and allied genera, as the garden spider. See
Garden spider.
Geometric square, a portable instrument in the form of a
square frame for ascertaining distances and heights by
measuring angles.
Geometrical staircase, one in which the stairs are
supported by the wall at one end only.
Geometrical tracery, in architecture and decoration,
tracery arranged in geometrical figures.
[1913 Webster] | {"url":"http://www.crosswordpuzzlehelp.net/old/dictionary.php?q=Geometrical%20staircase","timestamp":"2014-04-16T04:18:52Z","content_type":null,"content_length":"8745","record_id":"<urn:uuid:4cdf8115-681d-44b5-97e9-b13240014327>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: GENERATING ROOTED TRIANGULATIONS WITHOUT
David Avis*
School of Computer Science
McGill University
3480 University
Montr’eal, Qu’ebec, Canada
H3A 2A7
We use the reverse search technique to give algorithms for generating all
graphs on n points that are two and three connected planar triangulations with r
points on the outer face. The triangulations are rooted, which means the outer
face has a fixed labelling. The triangulations are produced without duplications
in O(n^2) time per triangulation. The algorithms use O(n) space. A program for
generating all 3connected rooted triangulations based on this algorithm is avail
able by ftp.
1. Introduction
Let G = (V, E) be a planar graph with vertex set V = {v_1, ..., v_n}, and let 3 <= r <= n be an
integer. G is an r-rooted triangulation if it can be embedded in the plane such that the outer
face has labels {v_1, ..., v_r} in clockwise order, and all interior faces are triangles. A vertex ( or
Math Websites for Interactive
Interactive Math Websites for Interactive Whiteboards Manipulatives National Library of Virtual Manipulatives
Virtual manipulatives related to the NCTM standards. This site has a lot of resources.
Base block addition and subtraction is great for teaching regrouping.
Big Online Calculator
Use to teach students how to use a calculator.
Base ten blocks, counters, number lines, etc...
Lots of resources for math lessons.
Numbers Wash Line
Put numbers in the correct order.
Number Recognition Launch the Spaceship
Students must click on the correct number to launch a spaceship.
Number Sequence
Students pick the correct number to complete a sequence.
Two Digit Numbers
Students match names of two digit numbers.
Students guess patterns on the numberline.
Odd or Even
Students sort numbers as odd or even.
Number Track
Place numbers in correct order.
Caterpillar Ordering and Sequencing
Students put number in the correct order to put a caterpillar back together.
Thinking of a Number
Students use clues to guess a mystery number.
Students can learn patterns counting by different numbers.
Spooky Sequences Count by 2
10 Addition Break the Wall
Students solve addition problems to destroy a wall.
Base Blocks Addition
Use base ten blocks to show regrouping in addition.
Subtraction Break the Wall
Students solve subtraction problems to destroy a wall.
Base Blocks Subtraction
Use base ten blocks to show regrouping in subtraction.
Multiplication Break the Wall
Students solve multiplication problems to destroy a wall.
Division Break the Wall
Students solve division problems to destroy a wall.
Time Bang on Time
Students stop the clock when it gets to the correct time
Stop the Clock Half Hour 15 minutes 5 minutes 1 minutes
Students match times to the clock.
Time for Time
Students set the clock to the time displayed
Money Money Powerpoints
Powerpoints that teach about money.
Shapes Polygon Sort Symmetry Sort Triangle Sort Virtual Geoboard Virtual Geoboard 2 Shapes Tangrams Puzzles Interactive Math Sites With Multiple Resources Math Playground
Grades 1-12 Free
This site has insructional material, flash cards, and games.
Grades 1-6 Free
This site has a lot of resources. Each topic has lessons teach the skill, flash cards, and games to reinforce and practice skills.
Grades 6-12 Free
Companion site to CoolMath4Kids but geared for grades 6-12. Each topic has lessons to teach the skill, flash cards, and games to reinforce and practice skills.
Grades 1-12 Free
Another companion site to CoolMath4Kids. This site has a lot of educational games to play.
Mr. Nussbaum's Math Lab
Grades 1-12 Free
This is part of a larger site with a lot of games in every subject. It has math games, worksheet generators, calculators, and more.
Grades K-12 Free
102 online activities for grades k-12.
Count Us In
Grades prek-2 Free
This site has games to help children with basic number concepts. You can also find lesson plans to teach each concept. It also has the option to download games to run on your computer.
Math Advantage
Grades K-8 Free
This site is from Harcourt and supports their math series. You can find games to practice math skills from K-8.
K-8 Free
This site has teaching tools, tests, and games to reinforce math skills.
FunBrain Math Arcade
Grades K-2 Free
Arcade-style games to reinforce math skills.
Grades K-2 Free
Interactive games for younger kids to practice basic math skills.
K-2 Free
BBC site that focuses on numbers, shapes, measurements, and data. It has games to reinforce skills.
National Library of Virtual Manipulatives for Interactive Mathematics
Grades 1-12 Free
Interactive manipulatives to help you teach a skill. Resources are intended to be used with or after direct instruction from a teacher.
Grades 6-12 Free
Website is more appropriate for older grades. Resources are intended to be used with direct instruction from teacher. It does contain a lot of activities.
Crickweb Interactive Math Resources
Lots of great interactive resources to teach and practice math skills. | {"url":"http://www.theteachersguide.com/InteractiveSitesMathSmartBoard.htm","timestamp":"2014-04-17T21:41:43Z","content_type":null,"content_length":"28026","record_id":"<urn:uuid:e929ee90-5b0e-4270-a7bc-b460a29ebe21>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: 2 stage estimation with missing values
st: 2 stage estimation with missing values
From "Paulo Regis" <pauloregis.ar@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject st: 2 stage estimation with missing values
Date Wed, 28 May 2008 20:30:11 +0100
Dear all,
I am working with nonlinear least squares and forecasting. My boss wants
to program everything to be able to use the procedure in a command
style (i.e. I need to program it so he can use it by typing into the
command line). However, I have some problems because of nonlinear
least squares and missing values. I will simplify the problem as much
as possible. Let's say my procedure has 2 steps.
1st step:
To obtain the forecast for variable x. From the complete sample (size
N), I use the first N1 observations to obtain the forecast at N1+1, and
then update the sample for the following observations. At the end, I
have forecasts for N1+1 to N.
2nd step:
I use the forecast of x(t) one period ahead x^(t+1) in a regression
to explain y(t). The relationship is nonlinear, so that
y(t) = f[x(t), x^(t+1)]+e
and I use nonlinear least squares to obtain the coefficients.
The problem is that in the second step I need to drop the missing values,
but I need them back afterwards to work with the full sample once more
to perform new estimations.
Summing up, what I need is to create a temporary database without the
missing values for the second step and be able to recover the original
database at the end of the routine. Is there any useful trick for
programmers to do this?
Thank you in advance
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-05/msg01156.html","timestamp":"2014-04-17T01:30:32Z","content_type":null,"content_length":"6385","record_id":"<urn:uuid:a8936998-0465-408d-87f8-77e193082ce5>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applications of Imaginary Numbers in the Real World
IMAGINARY NUMBERS
Welcome to our fun (and admittedly geeky) web site that is totally dedicated to imaginary numbers. Here at picomonster, we explore typical imaginary number questions such as, “Can you show me
examples of imaginary numbers in the real world?” or “Are there any practical applications of imaginary numbers?”
In general, imaginary numbers are used in combination with a real number to form something called a complex number, a+bi where a is the real part (real number), and bi is the imaginary part (real
number times the imaginary unit i). This complex number is useful for representing two dimensional variables where both dimensions are physically significant. Think of it as the difference between a
variable for the length of a stick (one dimension only), and a variable for the size of a photograph (2 dimensions, one for length, one for width). For the photograph, we could use a complex number
to describe it where the real part would quantify one dimension, and the imaginary part would quantify the other.
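The "two dimensions in one number" idea is easy to see in code. A minimal sketch (Python, for illustration; not part of the original page):

```python
z = 3 + 4j        # a complex number: 3 units along the real axis, 4 along the imaginary
print(abs(z))     # its magnitude (distance from the origin): 5.0

# Multiplying by i rotates the point 90 degrees counter-clockwise --
# the property that makes complex numbers natural for 2-D quantities
# such as AC voltages and currents (phasors).
print(z * 1j)     # (-4+3j)
```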
The key point to remember is that imaginary numbers are often used to represent a second physical dimension. Remember, a purely imaginary voltage in an AC circuit will shock you as badly as a real
voltage – that’s proof enough of its physical existence!
Imaginary Motion
Let’s look quickly at a fun concept – imaginary motion. Hit the “Imaginary” button in the figure left to see an engineer’s definition of Imaginary motion. Clearly, this motion is every bit as
physical as “Real” motion (hit “Real” button for comparison). Imaginary doesn’t directly imply non-existent as some may believe.
But if Imaginary motion is physical, what makes it “Imaginary”?
We use some cool interactive gadgets to answer this question and more, taking you from the dry math to a solid grasp of how imaginary numbers are used in real world applications. Each of our three
pico-lessons takes only two minutes. Have fun!
PS. Thanks everybody so far for your comments. It’s been a lot of fun reading them all.
Bradley Chung | {"url":"http://www.picomonster.com/","timestamp":"2014-04-21T07:04:08Z","content_type":null,"content_length":"11104","record_id":"<urn:uuid:814a08df-a094-4de3-9f02-aeea99d1a104>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
ESPIRiT Reconstruction with L1-Wavelet Demo
This is a demo on how to generate ESPIRiT maps and use them to perform ESPIRiT reconstruction for parallel imaging. It is based on the paper Uecker et. al, MRM 2013 DOI 10.1002/mrm.24751. ESPIRiT is
a method that finds the subspace of multi-coil data from a calibration region in k-space using a series of eigen-value decompositions in k-space and image space. Here we also use the "soft" sense
idea (Uecker et. al, "ESPIRiT Reconstruction using Soft-SENSE", Proceedings of the ISMRM 2013, pp-127) by using the eigen values to weight the eigen-vectors.
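To make the "series of eigen-value decompositions" concrete, here is a rough NumPy sketch of the first step: building a calibration matrix from sliding kernel windows over the calibration region and thresholding its singular values. Sizes and data are toy values, and this only illustrates the idea behind `dat2Kernel`; it is not the toolbox code itself.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nc = 12, 12, 4        # toy calibration-region size and coil count
kx, ky = 6, 6                 # kernel window size, as in the demo

calib = rng.standard_normal((nx, ny, nc)) + 1j * rng.standard_normal((nx, ny, nc))

# Each position of the sliding window contributes one row of the calibration matrix.
rows = [calib[i:i + kx, j:j + ky, :].ravel()
        for i in range(nx - kx + 1)
        for j in range(ny - ky + 1)]
A = np.array(rows)            # shape: (49, kx*ky*nc) = (49, 144)

U, S, Vh = np.linalg.svd(A, full_matrices=False)
idx = int(np.nonzero(S >= S[0] * 0.02)[0].max()) + 1   # analogous to eigThresh_k
print(A.shape, idx)
```

Each retained right-singular vector, reshaped to (kx, ky, nc), becomes one k-space kernel; the second eigen-decomposition then happens per pixel in image space.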
Prepare DATA
Here we perform ESPIRiT calibration on data which has strong aliasing in the phase-encode direction. SENSE often fails with this type of data.
load brain_8ch
DATA = DATA/max(max(max(abs(ifft2c(DATA))))) + eps;
ksize = [6,6]; % ESPIRiT kernel-window-size
eigThresh_k = 0.02; % threshold of eigenvectors in k-space
eigThresh_im = 0.9; % threshold of eigenvectors in image space
% parameters for L1-reconstruction with splitting
nIterCG = 5; % number of CG iterations for the PI part
nIterSplit = 15; % number of splitting iterations for CS part
splitWeight = 0.4; % reasonable value
lambda = 0.0025; % L1-Wavelet threshold
[sx,sy,Nc] = size(DATA);
% create a sampling mask to simulate x2 undersampling with autocalibration
% lines
mask = mask_randm_x4;
mask = repmat(mask,[1,1,8]);
ncalib = getCalibSize(mask_randm_x4);
DATAc = DATA.*mask;
calib = crop(DATAc,[ncalib,Nc]);
Display coil images:
im = ifft2c(DATAc);
figure, imshow3(abs(im),[],[1,Nc]);
title('magnitude of physical coil images');
colormap((gray(256))); colorbar;
figure, imshow3(angle(im),[],[1,Nc]);
title('phase of physical coil images');
colormap('default'); colorbar;
Compute Eigen-Value Maps
Maps are computed in two steps.
% compute Calibration matrix, perform 1st SVD and convert singular vectors
% into k-space kernels
[k,S] = dat2Kernel(calib,ksize);
idx = max(find(S >= S(1)*eigThresh_k));
Display the singular vectors and values of the calibration matrix
kdisp = reshape(k,[ksize(1)*ksize(2)*Nc,ksize(1)*ksize(2)*Nc]);
figure, subplot(211), plot([1:ksize(1)*ksize(2)*Nc],S,'LineWidth',2);
hold on, plot([1,ksize(1)*ksize(2)*Nc],[S(1)*eigThresh_k,S(1)*eigThresh_k],'r--','LineWidth',2); % threshold used to pick kernels
legend('singular value','threshold')
title('Singular Values')
subplot(212), imagesc(abs(kdisp)), colormap(gray(256));
xlabel('Singular value #');
title('Singular vectors')
crop kernels and compute eigen-value decomposition in image space to get maps
[M,W] = kernelEig(k(:,:,:,1:idx),[sx,sy]);
show eigenvalues and eigenvectors. The last set of eigenvectors, corresponding to eigenvalues close to 1, looks like sensitivity maps
figure, imshow3(abs(W),[],[1,Nc]);
title('Eigen Values in Image space');
colormap((gray(256))); colorbar;
figure, imshow3(abs(M),[],[Nc,Nc]);
title('Magnitude of Eigen Vectors');
colormap(gray(256)); colorbar;
figure, imshow3(angle(M),[],[Nc,Nc]);
title('Phase of Eigen Vectors');
colormap(jet(256)); colorbar;
Warning: Image is too big to fit on screen; displaying at 67%
Warning: Image is too big to fit on screen; displaying at 67%
Compute Soft-SENSE ESPIRiT Maps
crop sensitivity maps according to eigenvalues==1. Note that we have to use 2 sets of maps. Here we weight the 2 maps with the eigen-values
maps = M(:,:,:,end-1:end);
% Weight the eigenvectors with the soft-SENSE eigenvalues
weights = W(:,:,end-1:end) ;
weights = (weights - eigThresh_im)./(1-eigThresh_im).* (W(:,:,end-1:end) > eigThresh_im);
weights = -cos(pi*weights)/2 + 1/2;
% create an ESPIRiT operator
ESP = ESPIRiT(maps,weights);
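As a sanity check on the soft weighting just applied, the same threshold-then-raised-cosine ramp can be evaluated on a few sample eigenvalues in NumPy (an illustrative aside, separate from the MATLAB session): eigenvalues at or below eigThresh_im get weight 0, and an eigenvalue of 1 gets weight 1.

```python
import numpy as np

eigThresh_im = 0.9
W = np.array([0.5, 0.9, 0.95, 1.0])                    # made-up sample eigenvalues
# Clip-and-rescale to [0, 1] above the threshold, then a raised-cosine ramp.
w = (W - eigThresh_im) / (1 - eigThresh_im) * (W > eigThresh_im)
w = -np.cos(np.pi * w) / 2 + 0.5
print(w)   # 0, 0, 0.5, 1
```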
ESPIRiT CG reconstruction with soft-SENSE and 2 sets of maps
XOP = Wavelet('Daubechies_TI',4,6);
FT = p2DFT(mask,[sx,sy,Nc]);
disp('Performing ESPIRiT reconstruction from 2 maps')
tic; [reskESPIRiT, resESPIRiT] = cgESPIRiT(DATAc,ESP, nIterCG*3, 0.01,DATAc*0); toc
disp('Performing L1-ESPIRiT reconstruction from 2 maps')
[resL1ESPIRiT] = cgL1ESPIRiT(DATAc, resESPIRiT*0, FT, ESP, nIterCG,XOP,lambda,splitWeight,nIterSplit);
% GRAPPA reconstruction
disp('Performing GRAPPA reconstruction ... slow in Matlab! ')
tic; reskGRAPPA = GRAPPA(DATAc,calib,[5,5],0.01);toc
resGRAPPA = ifft2c(reskGRAPPA);
Performing ESPIRiT reconstruction from 2 maps
Elapsed time is 3.460917 seconds.
Performing L1-ESPIRiT reconstruction from 2 maps
Iteration: 1, consistency: 18.537199
Iteration: 2, consistency: 6.907292
Iteration: 3, consistency: 4.212248
Iteration: 4, consistency: 3.600387
Iteration: 5, consistency: 3.386763
Iteration: 6, consistency: 3.279046
Iteration: 7, consistency: 3.215016
Iteration: 8, consistency: 3.173626
Iteration: 9, consistency: 3.145386
Iteration: 10, consistency: 3.125352
Iteration: 11, consistency: 3.110704
Iteration: 12, consistency: 3.099735
Iteration: 13, consistency: 3.091360
Iteration: 14, consistency: 3.084862
Iteration: 15, consistency: 3.079751
Elapsed time is 22.371217 seconds.
Performing GRAPPA reconstruction ... slow in Matlab!
reconstructing coil 1
reconstructing coil 2
reconstructing coil 3
reconstructing coil 4
reconstructing coil 5
reconstructing coil 6
reconstructing coil 7
reconstructing coil 8
Elapsed time is 225.601446 seconds.
Note the typical center FOV aliasing in SENSE. Also, note that ESPIRiT has (very slightly) less error than GRAPPA
figure, imshow(cat(2,sos(resESPIRiT), sos(resL1ESPIRiT),sos(resGRAPPA)),[0,1]);
title('ESPIRiT reconstruction vs L1-ESPIRiT vs GRAPPA')
figure, imshow(cat(2,sos(ifft2c(reskESPIRiT-DATA)),sos(ifft2c(fft2c(ESP*resL1ESPIRiT)-DATA)),sos(ifft2c(reskGRAPPA-DATA))).^(1/2),[])
title('ESPIRiT reconstruction error vs L1-ESPIRiT vs GRAPPA') | {"url":"http://www.eecs.berkeley.edu/~mlustig/software/ESPIRiT_demo/demo_ESPIRiT_L1_recon.html","timestamp":"2014-04-19T09:27:05Z","content_type":null,"content_length":"17385","record_id":"<urn:uuid:ec9e5e55-29ba-4ceb-8f33-7e5ee92d717b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Purplemath Forums
Picture Problem -- completely lost!
I'm having a hard time figuring this problem out, using this image: http://i49.tinypic.com/35hlbm8.png The distances AD and BC are 18 feet. The distance DC is 12 feet. And for clarification, the 8 degrees is the angle between the vertical at AD and the actual AD pole, if that makes any sense. I need to ...
An expression on my homework I just CAN'T figure out is sin(arccos (x/2)+(pi/3))
I'm not sure if I should do the arccos of both x/2 and pi/3...or if I should do the arccos of x/2, add pi/3, then take the sin of that. I've tried both ways and get answers that make sense either | {"url":"http://www.purplemath.com/learning/search.php?author_id=33285&sr=posts","timestamp":"2014-04-17T22:06:47Z","content_type":null,"content_length":"14882","record_id":"<urn:uuid:83c093d5-cb01-4424-ae8d-8450b6a9cacf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
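For what it's worth, the second reading (take arccos of x/2, add pi/3, then take the sin of that) is the standard way to parse the expression, and the angle-addition formula expands it to (1/2)*sqrt(1 - x^2/4) + (sqrt(3)/4)*x. A quick numeric check of that expansion:

```python
import math

def expanded(x):
    # sin(a + pi/3) = sin(a)cos(pi/3) + cos(a)sin(pi/3), with a = arccos(x/2),
    # so sin(a) = sqrt(1 - x^2/4) and cos(a) = x/2.
    return 0.5 * math.sqrt(1 - x * x / 4) + (math.sqrt(3) / 4) * x

for x in (-1.5, 0.0, 0.7, 1.9):
    direct = math.sin(math.acos(x / 2) + math.pi / 3)
    assert abs(direct - expanded(x)) < 1e-12
print("identity holds on samples")
```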
Corona, NY Trigonometry Tutor
Find a Corona, NY Trigonometry Tutor
...I provide all and or most of the practice materials for each and every subject. I can assist with Global History, Earth Science, Physics and English, of which I have 100% pass. For Geometry the
rate is 99% pass.
47 Subjects: including trigonometry, reading, accounting, biology
...While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with
students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum.
26 Subjects: including trigonometry, calculus, physics, GRE
...I've also taught and tutored organic chemistry courses for the past 8 years at both the high school and college levels. I also have experience teaching MCAT prep courses with focus in Organic
Chemistry, Chemistry, Physics, and Biology. I've been a certified teacher in NJ for 14 years.
83 Subjects: including trigonometry, chemistry, physics, calculus
...This leads to the topic of formulas and equations. In particular, proportions are solved and linear and quadratic equations are solved and graphed. Along the way, factoring polynomials and
properties of square roots are introduced.
6 Subjects: including trigonometry, calculus, algebra 2, geometry
...My previous experience includes: Public and private tutoring, public and private school teaching, prestigious fellowship teaching math and science in middle school classrooms. The subjects I
can tutor are as follows: ALL K-12 mathematics courses including AP courses. College level mathematics ...
31 Subjects: including trigonometry, English, statistics, geometry | {"url":"http://www.purplemath.com/Corona_NY_trigonometry_tutors.php","timestamp":"2014-04-17T11:09:55Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:8e2ca3ff-f93d-4e2d-bb7b-21de85deaa85>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00398-ip-10-147-4-33.ec2.internal.warc.gz"} |
You can get some information from the answers to this question.
posted Jan 27 '12
John Palmieri
I was sent here from StackOverflow.
I have a vector space with given basis (it is also a Hopf algebra, but this is not part of the problem). How do I make it into a graded vector space? E. g., I know that in order to make it into an
algebra, I have to define a function called product_on_basis somewhere in its definition, and that in order to make it into a coalgebra, I have to define a function called coproduct_on_basis; but
what function do I have to define in order to make it into a graded vector space? How can I find out the name of this function? (It is not given in http://www.sagemath.org/doc/reference/sage/
categories/graded_modules_with_basis.html . I know the names of the functions for the multiplication and the comultiplication from python2.6/site-packages/sage/categories/examples/
hopf_algebras_with_basis.py , but I don't see such a .py file for graded vector spaces.)
Once this is done, I would like to do linear algebra on the graded components. They are each finite-dimensional, with basis a part of the combinatorial basis of the big space, so there shouldn't be
any problem. I have defined two maps and want to know, e. g., whether the image of one lies inside the image of the other. Is there an abstract way to do this in Sage or do I have to translate these
maps into matrices?
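On the linear-algebra question: once each map is written as a matrix over the basis of a graded component, "is the image of one inside the image of the other" is a rank comparison. A hedged NumPy sketch (plain Python, not Sage; the example matrices are made up):

```python
import numpy as np

def image_contained(A, B, tol=1e-10):
    """True iff the column space of A lies inside the column space of B:
    adjoining A's columns to B then cannot raise the rank."""
    return np.linalg.matrix_rank(np.hstack([B, A]), tol) == np.linalg.matrix_rank(B, tol)

B = np.array([[1., 0.], [0., 1.], [0., 0.]])            # image = the xy-plane
print(image_contained(np.array([[2.], [3.], [0.]]), B))  # contained
print(image_contained(np.array([[0.], [0.], [1.]]), B))  # not contained
```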
Context (not important): I have (successfully, albeit stupidly) implemented the Malvenuto-Reutenauer Hopf algebra of permutations:
html version resp. sws file
Now I want to check some of its properties. This checking cannot be automated on the whole space, but it is a finite problem on each of its graded components, so I would like to check it, say, on the
fifth one.
Okay, John, thanks - I did it at last. I didn't expect that writing everything as a matrix would be as simple as it turned out to be.
darijgrinberg (Feb 27 '12)
I think CombinatorialFreeModule and CombinatorialAlgebra are the places to start.
The most sophisticated graded Hopf algebras implemented in Sage are, I think, the Steenrod Algebras (Also see the online source code), so that implementation may give you some ideas too.
Good luck -- I am personally interested in computations with differential graded algebras so I would love to see how you implement your classes.
posted Jan 27 '12
Thanks for the link to the Steenrod algebra! I got the function, it's unsurprisingly called def homogeneous_component(self, n). I still would like to know how to find out such names in general,
rather than by analyzing the source code of Sage. And I still would like to know how to do linear algebra.
darijgrinberg (Jan 27 '12)
BTW version 2 is up (at the same addresses), but I doubt you can pick up any good programming habits from me...
darijgrinberg (Jan 27 '12)
For this issue, I think you're going to have to look at the source code. Graded objects are not well-developed in Sage. If you follow the link in my answer, you will also find a patch which
implements an example of a graded algebra, whose explicit purpose is to be a model on which other people can base their work.
John Palmieri (Jan 27 '12)
As far as doing linear algebra, I think that CombinatorialFreeModule is not that great, unfortunately. So I think converting to matrices is the right way to go.
John Palmieri (Jan 27 '12) | {"url":"http://ask.sagemath.org/question/1103/how-to-make-a-vector-space-with-basis-a-graded-one","timestamp":"2014-04-18T10:34:27Z","content_type":null,"content_length":"32106","record_id":"<urn:uuid:dd04d299-2a32-4d17-94f5-dd1b3a327b9b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving a QM problem numerically
2) (which is much more important) How do I cancel out the solutions that "blow up" at infinity? When the computer solves the equations, it gives the most general solution, and doesn't care that the
solution blows up at infinity. I DO care ;), and would like to only get the solution which doesn't blow up... and that would give me the right answer... how do I do it?
A numerical solution to a (system of) differential equation(s), is not a general solution, it is (a close approximation to) the *unique* solution for the initial values that you specify.
If your numerical solution is blowing up, possibilities include:
1) you have specified the wrong initial conditions;
2) you haven't coded things correctly;
3) the solution depends critically on initial values, with very small variations in initial conditions producing wildly different solutions. | {"url":"http://www.physicsforums.com/showthread.php?t=206733","timestamp":"2014-04-20T05:53:36Z","content_type":null,"content_length":"37403","record_id":"<urn:uuid:22206ef9-6372-4e31-a02f-ff1a3250c49e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
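A standard remedy for the blow-up described in the question is the shooting method: treat the energy as an unknown, integrate from one boundary, and bisect on the energy until the far boundary condition is satisfied — the solutions that blow up are exactly the wrong-energy ones. A minimal sketch for the infinite square well on [0, 1] with hbar = m = 1 (the grid size and energy bracket are arbitrary choices of mine):

```python
def psi_end(E, n=2000):
    """March psi'' = -2*E*psi across [0, 1] with psi(0) = 0 using a
    central-difference recurrence, and return psi at x = 1."""
    h = 1.0 / n
    prev, cur = 0.0, h                     # psi(0) = 0, initial slope ~ 1
    for _ in range(n - 1):
        prev, cur = cur, 2 * cur - prev - h * h * 2 * E * cur
    return cur

# Bisect on the energy until the boundary condition psi(1) = 0 holds.
lo, hi = 4.0, 6.0                          # brackets the ground state pi^2/2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_end(lo) * psi_end(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(mid)   # ~4.9348, i.e. pi^2 / 2
```

The same adjust-and-reintegrate idea carries over to problems on an infinite domain: integrate inward from both sides and tune the energy until the two pieces match, so the exponentially growing branch is eliminated.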
Segre class of cones and Base change of projective cones
I'm trying to work out a result in Fulton's intersection theory and I think I need the following basic result about base change of projective cones (whose support may not be the entire base scheme).
Let $X$ be a scheme of finite type over a field, and $S^\cdot$ a sheaf of graded $\mathcal{O}_X$ algebras on $X$ generated by $S^1$ over $S^0$, with the natural map $\mathcal{O}_X \rightarrow S^0$
surjective. $P = \textrm{Proj}(S^\cdot)$, along with its natural projection $\pi$ to $X$, is a projective cone on $X$ with support defined by the ideal sheaf $\ker( \mathcal{O}_X \rightarrow S^0)$.
Let $i: Z \rightarrow X$ be a closed immersion. Base change $\pi$ along $i$. Then, do we have:
1.) The fiber product is the cone corresponding to the pullback $i^*(S^\cdot)$ sheaf of algebras on $Z$.
2.) The induced morphism on cones pulls back $O_P(1)$ to $O_Z(1)$ where these sheaves denote the canonical sheaves on the projective cones over $X$ and $Z$ respectively.
In his intersection theory book, Fulton only states (1,2) as true in a more restrictive setting ($i$ can be replaced by an arbitrary proper morphism but $S^\cdot$ must correspond to a vector bundle).
He states (1) not for projective cones, but for cones (replace $\textrm{Proj}$ above with $\textrm{Spec}$) that moreover have the natural map $\mathcal{O}_X \rightarrow S^0$ an isomorphism (even
though he gives the more general definition above). To work out the details of something in his book I'd like this stronger result above. I'm asking for a reference in case such a result is actually
false in this stronger setting.
$$---$$ Just to be concrete, here is what I'm trying to work out.
(Fulton Intersection Theory, Example 4.1.6b) Let $X$ be a scheme of finite type over a field, and $C$ be a cone on $X\times \mathbb{A}^1$ flat over $\mathbb{A}^1$. Let $i_t: X \rightarrow X \times \
mathbb{A}^1$ be the inclusion of $X$ into the product at $t$. Then, $i_t^*s(C) = s(C_t)$. (Note that flatness would be automatic if the cone had support equal to all of $X \times \mathbb{A}^1$, so
Fulton isn't assuming full support.) Here, $C_t$ denotes the restriction of the cone to $X_t = X \times \{t\}$.
Let $i_{C_t}: P(C_t \oplus 1) \rightarrow P(C \oplus 1)$ be the closed immersion predicted by (1).
After standard manipulations, we need to show that (for fixed integer $n$):
$$i_{C_t}^*\left( c_1( \mathcal{O}(1))^n \cap [P(C\oplus 1)]) \right) = c_1( \mathcal{O}(1))^n \cap [P(C_t\oplus 1)]$$
where the LHS and RHS $\mathcal{O}(1)$ denote the corresponding sheaves on $P(C\oplus 1)$ and $P(C_t \oplus 1)$ respectively.
Now, if we had (2) above, this would be true by Prop 2.6(e) (How gysin map acts on Chern classes).
ag.algebraic-geometry schemes intersection-theory reference-request
What I'm trying to prove is a fairly basic result. Can someone tell me if this is the right strategy, or how they would prove it? I've been hunting for a reference in EGA II.8 with no luck. In fact, it
doesn't even appear that EGA defines cones with $S^0 \ne \mathcal{O}_X$ – LMN Oct 26 '12 at 2:32
Is there a problem with base-changing to ${\bf Spec}\ S^0$ first? – Allen Knutson Oct 30 '12 at 4:31
| {"url":"http://mathoverflow.net/questions/110676/segre-class-of-cones-and-base-change-of-projective-cones","timestamp":"2014-04-16T19:51:43Z","content_type":null,"content_length":"50759","record_id":"<urn:uuid:3863bc25-3aa0-4fcc-9efb-94f680de9e61>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Phase-A-Matic, Inc. Rotary Phase Converter Application Notes
Rotary Converter Application Notes
On this page:
Motor Load Types
Resistive Loads
Computer, Rectifier, & Transformer Loads
Multiple Motor Loads
4 Year Warranty. 220V and 460V models available. For 60Hz and 50Hz use.
The Rotary Converter is designed to supply full running current to a three-phase motor normally providing it with full running torque. However, most motors will draw five times their running current
during start-up. When used at its maximum HP rating the Rotary Converter cannot deliver the full (5 times) starting current to the motor and therefore cannot provide full starting torque. For heavy
start-up loads a larger converter should be used.
NOTE: You can always use a larger Rotary Converter than the HP of the motor. There is no minimum load requirement for the Rotary Converters. Some customers will install a Rotary Converter larger than
they need to accommodate any future additions to their equipment. Below are the minimum size recommendations for various applications.
Sizing For Load Types
1. Motor Loads
A. TYPE 1 Motor Load:
May be used up to the HP rating of the converter.* For instant reversing (as for rigid tapping), size according to TYPE 3 LOADS.
*Many restrictions apply. Most applications require sizing the converter a minimum of 50% larger (see all load types). Contact Phase-A-Matic, Inc. to verify load type.
B. TYPE 2 Motor Load:
These include domestic & European lathes without a clutch, some pumps, wheel balancers, paper cutters, flywheel driven equipment, air conditioners, blowers, woodworking band saws, dough
mixers, meat grinders, motors rated below 1000 RPM, etc. Use a converter with HP rating of at least 50% larger than HP of the motor.
C. TYPE 3 Motor Load:
These include Design "E" motors, Taiwanese, Chinese, Brazilian, Mexican motors, pumps starting under load, etc. Use a converter with twice the HP rating of the motor.
D. TYPE 4 Motor Load:
These include laundry extractors, hoists, elevators, etc. For these start-up loads use a converter with three times the HP rating of the motor.
E. TYPE 5 Motor Load:
Often hydraulic pumps, which come under a momentary load during use will be loaded well beyond their rated HP for the brief period of maximum PSI. Examples includes bailers, compactors,
paper cutters, shears, pumps, etc. The HP of the converter must be at least as high as the actual HP developed by the motor. To calculate the HP developed, you must first find the actual
amperage drawn during maximum PSI. This is different from the rated amps of the motor. Next you would divide the maximum amperage by 2.8 to find the actual HP being developed by the motor.
That figure is the minimum size of converter to be used. Example: A 10 HP compactor with a motor rated at 28 amps but draws a peak of 40 amps momentarily at maximum compression. Divide 40 by
2.8 = 14.3 HP being developed, use model R-15 Rotary Converter.
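The arithmetic in the Type 5 example boils down to the rule of thumb HP = peak amps / 2.8, which is easy to script (the helper name is mine, not a Phase-A-Matic tool):

```python
def developed_hp(peak_amps):
    """Horsepower actually developed, per the rule of thumb: amps / 2.8."""
    return peak_amps / 2.8

# The example above: a nominal 10 HP compactor motor peaking at 40 A.
print(round(developed_hp(40), 1))   # 14.3 -> use a 15 HP rotary converter
```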
2. Resistive Loads
Resistive loads must use the Rotary type converter, the Static type should never be used because it would be damaged. There are two methods to determine the HP of the converter to be used. One
method is to take the amperage rating of the equipment and divide by 2.8 to find the equivalent HP. The other method is to take the KW rating and multiply times 1.34 or divide by .75 to find the
equivalent HP of the equipment.
3. Computer, Rectifier & Transformer Loads
Transformers and electric equipment (welders, lasers, EDM machines, CNC equipment, computers, plating rectifiers, power supplies, etc.) can operate on the Rotary Converter. Use the same formula
as for resistive loads to determine the proper size converter to use.
If a 4-wire wye input is required (all lines equal voltage to ground), a three phase delta-to-wye isolation transformer must be installed between the converter and the equipment to change the
delta power to wye power.
4. Multiple Motor Applications
Due to the high in-rush current required to start a motor (5 to 10 times the normal running current), most applications require sizing the HP of the Rotary Converter 50% larger, or more than the
horsepower of the largest motor, or any combination of motors started at exactly the same time. The first motor started, if not running heavily loaded, generates additional 3-phase power back
into the circuit. You can then run additional motors, provided they are not running heavily loaded and not all started at the same time. A maximum of up to 3 times the HP rating of the Rotary
Converter can run at the same time, if not heavily loaded, and not started simultaneously. For example, a 30 HP Rotary Converter potentially could run motors totaling up to 90 HP. Contact factory
for verification of sizing.
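The two multiple-motor rules of thumb above — size the converter at least 50% above the largest motor (or group started together), and run at most 3 times the converter's HP at once — can be encoded as a quick check (illustrative helpers, names are mine):

```python
def min_converter_hp(largest_start_hp):
    # At least 50% larger than the largest motor (or group started together).
    return 1.5 * largest_start_hp

def max_total_running_hp(converter_hp):
    # Up to 3x the converter HP may run at once, lightly loaded, staggered starts.
    return 3 * converter_hp

print(min_converter_hp(20), max_total_running_hp(30))   # 30.0 90
```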
| {"url":"http://www.phase-a-matic.com/RotaryApplicationNotes.htm","timestamp":"2014-04-18T00:34:49Z","content_type":null,"content_length":"48080","record_id":"<urn:uuid:8beb27c2-e083-472c-b07b-0acf59c27724>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Can Someone Pass the SAT, or Any Other Fill-in-the-Bubble Test, by Randomly Answering Each Question?
The What If? blog recently asked
What if everyone who took the SAT guessed on every multiple-choice question? How many perfect scores would there be?
This got me thinking. Since the Monte Carlo method is one of my favorite ways to solve problems, this seemed like a perfect problem for it.
I greatly simplified things relative to the real SAT scoring system, but I wrote a program that creates a random answer key with a specified number of choices per question, then guesses its way through a specified number of questions for a specified number of tests.
I ran this program with 3-, 4-, and 5-choice questions, with 100 questions per test, for 1 billion tests each (just about the maximum my computer can reasonably handle). I used a simple scoring system where the number of right answers on each test was its percentage score (hence the 100 questions).
What I found was that the averages were 33.33%, 25%, and 20% respectively. No test, out of 3 billion total runs, had a perfect 100%.
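The simulation is easy to reproduce at a smaller scale. The absence of perfect scores is no surprise: with 4 choices the chance of a perfect 100-question test is (1/4)^100, so even a billion runs expect only about 6 x 10^-52 perfect scores. A sketch of the same experiment in Python (20,000 tests rather than 1 billion):

```python
import random

random.seed(0)
CHOICES, QUESTIONS, TESTS = 4, 100, 20_000

perfect, total = 0, 0
for _ in range(TESTS):
    # Each guess is right with probability 1/CHOICES.
    score = sum(random.randrange(CHOICES) == 0 for _ in range(QUESTIONS))
    total += score
    perfect += (score == QUESTIONS)

print(total / TESTS, perfect)   # mean ~25.0 percent, 0 perfect scores
```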
Here are the Standard Distribution Curves:
Here is a thought. I had a teacher point out that if a student gets a score outside the average +/- 1 standard deviation, they had to work for that score. If it's a high score, no problem, but if it's a low score, then the student had to actually work harder than just guessing to get a